repo_name | path | license | content
|---|---|---|---|
mapattacker/cheatsheets | python/Basics of Algorithms.ipynb | mit | from IPython.display import Image
Image("../img/big_o1.png", width=600)
"""
Explanation: Basics of Algorithms & Coding Tests
this notebook shows some essential and practical python code to help in coding tests like hackerrank or codility
Two most important things
- remove all duplicates before any iterative processing
- in a loop, when using if-else, set conditions that allow quick elimination without iterating over the entire array
Prep
- open empty jupyter notebook to test
- have your cheatsheet by your side
- remember all the useful functions in python
- prepare to use regex
During the Test
- After building your function, test it by passing in your own input arguments
- Hackerrank should be fine as it gives a number of scenarios, but codility sometimes only gives 1
- hence the need to test a few more to check for bugs
Psychology
- do not give up on a question and switch to & fro; that only wastes more time
- prepare for long, extensive coding for each question
- keep calm & analyse step by step
Next Step
- learn about the various algorithms, of course!
- dynamic programming, greedy algorithm, etc.
- Codility gives a good guide
Big-O Notation
This can be applied to both space & time complexity. It is considered a measure of CPU-bound performance.
O(1): Constant Time
O(log n): Logarithmic
O(n): Linear Time
O(n log n): Loglinear
O(n^2): Quadratic
Big-O Complexity
End of explanation
"""
Image("../img/big_o2.png", width=800)
"""
Explanation: Data Structure Operations
End of explanation
"""
Image("../img/big_o3.png", width=500)
"""
Explanation: Array Sorting
End of explanation
"""
counter = 0
for item in query:
for item2 in query:
counter += 1
"""
Explanation: Example 1
* time complexity = O(n^2)
* space complexity = O(1)
End of explanation
"""
counter = 0
list1 = []
for item in query:
list1.append(item)
for item2 in query:
for item3 in query:
counter += 1
"""
Explanation: Example 2
* time complexity = O(n^3)
* space complexity = O(n)
End of explanation
"""
import cProfile
cProfile.run('print(10)')
"""
Explanation: cProfile
how much time was spent in various levels of your application
End of explanation
"""
unique = set([1,1,2,2,4,5,6])  # avoid naming the variable `set`, which shadows the built-in
unique
# convert to list
list(unique)
"""
Explanation: Remove Duplicates
End of explanation
"""
sort = sorted([4,-1,23,5,6,7,1,4,5])
sort
print([4,-1,23,5,6,7,1,4,5].sort())  # list.sort() sorts in place and returns None
"""
Explanation: Sort
End of explanation
"""
# reverse
sort = sorted([4,1,23,5,6,7,1,4,5],reverse=True)
print(sort)
# OR
print(sort[::-1])
"""
Explanation: Reverse Sort
End of explanation
"""
list1 = [1,2,3,4,5]
# last number
list1[-1]
# get every 2nd feature
list1[::2]
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(array[0])
print(array[-1])
print(array[0:2])
print(array[-3:-1])
# filling an empty array
empty = []
for i in range(10):
empty.append(i)
empty
# remove item
empty.remove(1)
empty
# sum
sum(empty)
"""
Explanation: Basic List
End of explanation
"""
import math
list1 = [1,2,3,4,5]
print('max: ',max(list1))
print('min: ',min(list1))
"""
Explanation: Max & Min
End of explanation
"""
abs(-10.1)
"""
Explanation: Absolute
End of explanation
"""
'-'.join('abcdef')
"""
Explanation: Filling
End of explanation
"""
# individual split
[i for i in 'ABCDEFG']
import textwrap
textwrap.wrap('ABCDEFG',2)
import re
re.findall('.{1,2}', 'ABCDEFG')
"""
Explanation: Splitting
End of explanation
"""
from itertools import permutations
# permutations: order matters, so both ('1','2') and ('2','1') appear
list(permutations(['1','2','3'],2))
# combinations: order does not matter, so only (1, 2) appears
from itertools import combinations
list(combinations([1,2,3], 2))
"""
Explanation: Permutations
End of explanation
"""
test = 'a'
if test.isupper():
print('Upper')
elif test.islower():
print('Lower')
"""
Explanation: If Else
End of explanation
"""
for i in range(5):
if i==2:
break
print(i)
for i in range(5):
if i==2:
continue
print(i)
for i in range(5):
if i==2:
pass
print(i)
"""
Explanation: Loops
Break, Continue, Pass
break exits the loop immediately
continue skips the rest of the code in the current iteration and moves to the next one
pass does nothing; it is just a placeholder inside the condition
End of explanation
"""
i = 1
while i < 6:
print(i)
if i == 3:
break
i += 1
"""
Explanation: While Loop
End of explanation
"""
|
kostovhg/SoftUni | MathConceptsForDevelopers-Sep17/04_Hight-SchoolMaths-E/High-School Maths Exercise/Solutions.ipynb | gpl-3.0 | # IMPORTANT: you should first run the second cell with the import statements,
# or the current cell should contain
# import sympy
x, a, b, c = sympy.symbols('x a b c') # Define symbols for parameters
sympy.init_printing() # LaTeX-formatted result for printing
sympy.solve(a * x**2 + b * x + c, x) # solve parametric equation
import math
def solve_quadratic_equation(a, b, c):
"""
Returns the real solutions of the quadratic equation ax^2 + bx + c = 0
"""
# Check if we have linear equation, a = 0
if a == 0: # if we do
return [ -(c/b)] # return the single root
# if not, we continue with the quadratic equation
# determine the value of b**2 - 4ac
d = float(b * b - 4.0 * a * c)
if d < 0: # there are no real roots
return [] # return empty array
else: # we have some roots
if d == 0: # only one root
return [ (-b)/(2.0*a) ]
else: # or two roots
return [(-b - (math.sqrt(d)))/(2.0*a), (-b + (math.sqrt(d)))/(2.0*a)]
# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests
print(solve_quadratic_equation(1, -2, -1.25)) # [-0.5, 2.5] <== parameters were changed to apply to the result in the comment
print(solve_quadratic_equation(1, -1, -2)) # original parameters
print(solve_quadratic_equation(1, -8, 16)) # [4.0]
print(solve_quadratic_equation(1, 1, 1)) # []
print(solve_quadratic_equation(0, 2, 5))
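The function above returns only real roots. A hypothetical extension using the standard library's cmath handles a negative discriminant as well, since cmath.sqrt returns complex values:

```python
import cmath

def solve_quadratic_complex(a, b, c):
    # cmath.sqrt works for negative discriminants too
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b - d) / (2 * a), (-b + d) / (2 * a)]

print(solve_quadratic_complex(1, -2, 5))  # [(1-2j), (1+2j)]
```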
"""
Explanation: High-School Maths Exercise - Solutions
Basic techniques and concepts for working with Jupyter Notebook, Python and Python libraries
<div style="text-align: right"><h6><i>Hristo Kostov</i></h6>
<a href="https://github.com/kostovhg/SoftUni/blob/master/MathConceptsForDevelopers-Sep17/04_Hight-SchoolMaths-E/High-School%20Maths%20Exercise/Solutions.ipynb">github link to this solution</a></div>
Solution 1. Markdown
Let's try a sorted numbered list:
1. this is the first point
    1. this is an indented point
    2. this is a second indented point
2. This is the second main point

Now let's try an unsorted list:
- this is a point
    - and this is an indented point
    - second indented point
- Again some point
    - this will have a sublist
        - this is a sublist
        - of a few elements
        - ending here
One table
| SoftUni | Google |
|------------------------------|---------------------------:|
| 1. | and if you click this |
| 2. | * it will take you * |
| 3. | to SoftUni or Google |
Solution 2. Quadratic Equations - formulas and LaTex
We start with the equation
$$ ax^2 + bx + c = 0 $$
We can try to get something like the sum of squares formula. Recall that
$$ (px+q)^2 = p^2x^2 + 2pqx + q^2 $$
Let's take the first two terms ($ax^2 + bx$). We can see that they fit the formula above almost perfectly. We have
$$ a = p^2, b = 2pq $$
$$ \Rightarrow p = \sqrt{a}, q = \frac{b}{2p} = \frac{b}{2\sqrt{a}}$$.
We also need to add $q^2=\frac{b^2}{4a}$. Since this is an equation, we have to add it to both sides of the original equation. We get:
$$ p^2x^2 + 2pqx + q^2 + c = q^2$$
Now, express the equation above in terms of $a, b, c$:
<p style="color: #0000FF">Result:</p>
$$ {\sqrt{a}}^2x^2 + 2\sqrt{a}\frac{b}{2\sqrt{a}}x + \frac{b^2}{4a} + c = \frac{b^2}{4a} $$
Place $c$ on the right-hand side. We now have our squared formula on the left:
<p style="color: #0000FF">Result:</p>
$$ \sqrt{a}^2x^2 + 2\sqrt{a}\frac{b}{2\sqrt{a}}x + \frac{b^2}{4a} = \frac{b^2}{4a} - c $$
$$ \left(\sqrt{a}\right)^2x^2 + 2\left(\sqrt{a}\right)\left(\frac{b}{2\sqrt{a}}\right)x + \left(\frac{b}{2\sqrt{a}}\right)^2 = \frac{b^2}{4a} - c $$
$$ \left(\sqrt{a}x + \frac{b}{2\sqrt{a}}\right)^2 = \frac{b^2}{4a} - c $$
Take the square root of both sides. Note that this means just removing the second power on the left-hand side (because the number is positive) but the right-hand side can be positive or negative: $\pm$:
<p style="color: #0000FF">Result:</p>
$$ \sqrt{a}x + \frac{b}{2\sqrt{a}} = \pm\sqrt{\frac{b^2}{4a} - \frac{4ac}{4a}} $$
$$ \sqrt{a}x + \frac{b}{2\sqrt{a}} = \pm\sqrt{\frac{b^2 - 4ac}{4a}} $$
You should get something like $\alpha x + \beta = \pm\sqrt{\frac{\gamma}{\delta}}$, where $\alpha, \beta, \gamma, \delta$ are all expressions. Now there's only one term containing $x$. Leave it to the left and transfer everything else to the right:
$$ \sqrt{a}x = \pm\sqrt{\frac{b^2 - 4ac}{4a}} - \frac{b}{2\sqrt{a}}$$
To get $x$, divide both sides of the equation by the coefficient $\alpha$. Simplify the expression:
$$ x = \left(\frac{\pm\sqrt{b^2 - 4ac}}{2\sqrt{a}} - \frac{b}{2\sqrt{a}}\right)\frac{1}{\sqrt{a}} \quad \Rightarrow \quad x = \frac{\pm\sqrt{b^2 - 4ac} - b}{2a} $$
If everything went OK, you should have got the familiar expression for the roots of the quadratic equation:
$$ x = \frac{-b \pm\sqrt{b^2 - 4ac}}{2a} $$
Let's play around some more. Remember Vieta's formulas? Let's very quickly calculate them.
Express the sum and product of roots in terms of $a, b, c$. Substitute $x_1$ and $x_2$ for the two roots we just got. Simplify the result and you'll get that :)
<p style="color: #0000FF">Result:</p>
$$ x_1 + x_2 = \frac{-b + \sqrt{b^2 - 4ac}}{2a} + \frac{-b - \sqrt{b^2 - 4ac}}{2a} $$
$$ x_1 + x_2 = \frac{-b + \sqrt{b^2 - 4ac} -b - \sqrt{b^2 - 4ac}}{2a} = \frac{-2b}{2a}$$
$$ x_1 + x_2 = -\frac{b}{a} $$
$$ x_1x_2= \frac{-b + \sqrt{b^2 - 4ac}}{2a}\cdot\frac{-b - \sqrt{b^2 - 4ac}}{2a} = \frac{\left(-b\right)^2 + b\sqrt{b^2 - 4ac} - b\sqrt{b^2 - 4ac} - \left(\sqrt{b^2 - 4ac}\right)^2}{4a^2} = \frac{b^2 - b^2 + 4ac}{4a^2} $$
$$ x_1x_2= \frac{c}{a} $$
If you worked correctly, you should have got the formulas
$$x_1 + x_2 = -\frac{b}{a}, x_1x_2 = \frac{c}{a}$$
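Before moving on, you can sanity-check these formulas numerically for sample coefficients (the values below are just an example):

```python
import math

a, b, c = 2.0, -3.0, -5.0  # arbitrary coefficients with real roots
d = math.sqrt(b * b - 4 * a * c)
x1, x2 = (-b + d) / (2 * a), (-b - d) / (2 * a)
print(x1 + x2, -b / a)  # both 1.5
print(x1 * x2, c / a)   # both -2.5
```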
Now let's do something else. Let's factor the quadratic equation. This means, we'll just rearrange the terms so that they're more useful.
Start again with the basic equation:
$$ ax^2 + bx + c = 0 $$
Divide both sides of the equation by $a$:
<p style="color: #0000FF">Result:</p>
$$ x^2 + \frac{b}{a}x + \frac{c}{a} = 0 $$
Now you get $b/a$ and $c/a$. Replace them with the sum and product of roots. Be very careful about the signs!
<p style="color: #0000FF">Result:</p>
$$ x^2 -\left(x_1 + x_2\right)x + x_1x_2 = 0 $$
You should have some braces. Expand them:
<p style="color: #0000FF">Result:</p>
$$ x^2 - x_1 x - x_2 x + x_1x_2 = 0 $$
$$ \left(x - x_1\right)\left(x - x_2\right) = 0 $$
You should now get an expression containing $x$ (our variable) and $x_1, x_2$ (the roots). Please bear in mind those are different.
Find a way to group them and rearrange the symbols a bit. If you do this, you can arrive at the expression
$$ (x - x_1)(x - x_2) = 0 $$
AHA! How is this formula useful? We can now "generate" a quadratic function by only knowing the roots. For example, generate a quadratic function which has roots -1 and 3. Write it in the form $ ax^2 + bx + c = 0 $:
<p style="color: #0000FF">Result:</p>
$$ \left(x - \left(-1\right)\right)\left(x - 3\right) = 0 $$
$$ x^2 - 3x + x -3 = 0 $$
$$ x^2 - 2x - 3 = 0 $$
Solution 3. Solving with Python
End of explanation
"""
x = np.linspace(-3, 5, 1000)
y = 2 * x + 3
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
yticks = ax.yaxis.get_major_ticks()
yticks[2].label1.set_visible(False)
plt.plot(x, y)
plt.show()
# source: gist.github.com/joferkington/3845684
def arrowed_spines(ax=None, arrow_length=20, labels=('',''), arrowprops=None):
xlabel, ylabel = labels
if ax is None:
ax = plt.gca()
if arrowprops is None:
arrowprops = dict(arrowstyle='<|-', facecolor='black')
for i, spine in enumerate(['left', 'bottom']):
# Set up the annotation parameters
t = ax.spines[spine].get_transform()
xy, xycoords = [1, 0], ('axes fraction', t)
xytext, textcoords = [arrow_length, 0], ('offset points', t)
ha, va = 'left', 'bottom'
# if axis is reversed, draw the arrow the other way
top, bottom = ax.spines[spine].axis.get_view_interval()
if top < bottom:
xy[0] = 0
xytext[0] *= -1
ha, va = 'right', 'top'
if spine == 'bottom':  # 'is' checks identity; use == for string equality
xarrow = ax.annotate(xlabel, xy, xycoords=xycoords, xytext=xytext,
textcoords=textcoords, ha=ha, va='center',
arrowprops=arrowprops)
else:
yarrow = ax.annotate(ylabel, xy[::-1], xycoords=xycoords[::-1],
xytext=xytext[::-1], textcoords=textcoords[::-1],
ha='center', va=va, arrowprops=arrowprops)
return xarrow, yarrow
"""
Explanation: Solution 4. Equation of a Line
End of explanation
"""
import sympy  # needed for the sympy.* calls below; the star import alone does not bind the module name
from sympy import *
sympy.init_printing()
x, a, b = sympy.symbols('x, a, b', extended_real=True)
#y = sympy.Function('y')
y = sympy.symbols('y')
eqs = sympy.Eq(y, a * exp(b * x))
lne = sympy.Eq(log(eqs.lhs), expand_log(log(eqs.rhs), force=True))
lne
"""
Explanation: Solution 5. Linearizing function
Try to linearize
$$ y = ae^{bx} $$
End of explanation
"""
def plot_math_function(f, min_x, max_x, num_points):
"""
This function plots a graphic of given function 'f' in 2D Cartesian coordinate system
use:
plot_math_function(function 'f', minimum x, maximum x, number of points between min and max x)
"""
# specify the range of points and transfer them to variable 'x'
x = np.linspace(min_x, max_x, num_points)
# vectorize function 'f' with numpy so it can be evaluated over the whole array
# this allows us to use different functions 'f' to get the values for y
f_vectorized = np.vectorize(f)
# assign corresponding values for 'y'
y = f_vectorized(x)
# create a matplotlib figure and axes object for plotting
fig, ax = plt.subplots()
# manipulate the plot object properties
# move the bottom and left axes to zero
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
# hide the top and right frame
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# plot here, to populate ticks objects
plt.plot(x, y)
#plt.legend(loc='upper left')
# extract ticks and set them to plt.subplots (ax)
ax.set_yticklabels(ax.get_yticks())
ax.set_xticklabels(ax.get_xticks())
# create list of ticks labels
ylabels = ax.get_yticklabels()
xlabels = ax.get_xticklabels()
# create variables for indexes from labels list where we are going to hide the labels
xzero = 0
yzero = 0
# loop through labels and extract the index of the text '0.0'
for num, xlabel in enumerate(xlabels, start=0):
if xlabel.get_text() == '0.0':
xzero = num
for num, ylabel in enumerate(ylabels, start=0):
if ylabel.get_text() == '0.0':
yzero = num
# another tuning of tick labels
def fine_tunning():
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_fontsize(10)
label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65))
plt.xlim(x.min() - 1, x.max() + 1)
plt.ylim(y.min() * 1.1, y.max() * 1.1)
# take all Y major ticks
yticks = ax.yaxis.get_major_ticks()
xticks = ax.xaxis.get_major_ticks()
ax.set_xlabel('x')
ax.set_ylabel('y').set_rotation(0)
# offset label for zero of X
xticks[xzero].label1.set_horizontalalignment('right')
# hide label1 on index with zero of Y
yticks[yzero].label1.set_visible(False)
# fine positioning of x label
ax.xaxis.set_label_coords(1.05, yzero / len(ylabels))
# positioning of Y label
ax.yaxis.set_label_coords((1 * abs(min_x))/((max_x) - (min_x)), 1.02)
fine_tunning()
plt.show()
plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)
plot_math_function(lambda x: -x + 8, -1, 10, 1000)
plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)
plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)
plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)
"""
Explanation: Solution 6. Generalizing the plotting Function
Write a Python function which takes another function, an x range and a number of points, and plots the function graph by evaluating it at every point.
End of explanation
"""
def plot_math_functions(functions, min_x, max_x, num_points):
# Write your code here
x = np.linspace(min_x, max_x, num_points)
vectorized_fs = [np.vectorize(f) for f in functions]
ys = [vectorized_f(x) for vectorized_f in vectorized_fs]
fig, ax = plt.subplots() # create a matplotlib figure and axes object for plotting
# manipulate the plot object properties
ax.spines["bottom"].set_position("zero") # insert the axis to zero
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False) # hide top frame
ax.spines["right"].set_visible(False)
for y in ys:
plt.plot(x, y)
ax.set_yticklabels(ax.get_yticks()) # extract ticks and set them to plt.subplots (ax)
ax.set_xticklabels(ax.get_xticks())
ylabels = ax.get_yticklabels() # create list of ticks labels
xlabels = ax.get_xticklabels()
# create variables for indexes from labels list where we are going to hide the labels
xzero = 0
yzero = 0
# loop through labels and extract the index of the text '0.0'
for num, xlabel in enumerate(xlabels, start=0):
if xlabel.get_text() == '0.0':
xzero = num
for num, ylabel in enumerate(ylabels, start=0):
if ylabel.get_text() == '0.0':
yzero = num
yticks = ax.yaxis.get_major_ticks() # take all Y major ticks
xticks = ax.xaxis.get_major_ticks()
ax.set_xlabel('x') # we can have set_xlabel('name', fontsize = 12)
ax.set_ylabel('y').set_rotation(0)
# offset label for zero of X
xticks[xzero].label1.set_horizontalalignment('right')
# hide label1 on index with zero of Y
yticks[yzero].label1.set_visible(False)
ax.xaxis.set_label_coords(1.02, yzero / len(ylabels)) # fine positioning of x label
ax.yaxis.set_label_coords((1 * abs(min_x))/((max_x) - (min_x)), 1.05) # positioning of Y label
plt.show()
plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)
plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)
plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)
"""
Explanation: Solution 7. Solving Equations Graphically
Plot multiple functions on one graph
End of explanation
"""
plot_math_functions([lambda x: np.arcsin(x), lambda x: np.arccos(x), lambda x: np.arctan(x), lambda x: np.arctan(1/x)], -1.0, 1.0, 1000)
"""
Explanation: Solution 8. Trigonometric functions
Use the plotting function you wrote above to plot the inverse trigonometric functions
End of explanation
"""
def plot_circle(x_c, y_c, r):
"""
Plots the circle with center C(x_c; y_c) and radius r.
This corresponds to plotting the equation (x - x_c)^2 + (y - y_c)^2 = r^2
"""
# Write your code here
plt.gca().set_aspect('equal')
y = np.linspace(y_c - r - 1, y_c + r + 1, 30)
x = np.linspace(x_c - r - 1, x_c + r + 1, 30)
x, y = np.meshgrid(x, y)
circle = (x - x_c) ** 2 + (y - y_c) ** 2 - r ** 2  # use the center, as the docstring promises
plt.contour(x, y, circle, [0])
plt.show()
plot_circle(0, 0, 2)
"""
Explanation: Solution 9. Equation of a Circle
Equation of a Circle
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb | apache-2.0 | import os
import tensorflow as tf
import numpy as np
from google.cloud import bigquery
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.1'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: Create Datasets for the Content-based Filter
This notebook builds the data you will use for creating our content based model. You'll collect the data via a collection of SQL queries from the publicly available Kurier.at dataset in BigQuery.
Kurier.at is an Austrian news site. The goal of these labs is to recommend an article for a visitor to the site. In this notebook, you collect the data for training; in the subsequent notebook you train the recommender model.
This notebook illustrates:
* How to pull data from a BigQuery table and write to local files.
* How to make reproducible train and test splits.
End of explanation
"""
def write_list_to_disk(my_list, filename):
with open(filename, 'w') as f:
for item in my_list:
line = "%s\n" % item
f.write(line)
"""
Explanation: You will use this helper function to write lists containing article ids, categories, and authors for each article in our database to a local file.
End of explanation
"""
sql="""
#standardSQL
SELECT
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
content_id
"""
content_ids_list = bigquery.Client().query(sql).to_dataframe()['content_id'].tolist()
write_list_to_disk(content_ids_list, "content_ids.txt")
print("Some sample content IDs {}".format(content_ids_list[:3]))
print("The total number of articles is {}".format(len(content_ids_list)))
"""
Explanation: Pull data from BigQuery
The cell below creates a local text file containing all the article ids (i.e. 'content ids') in the dataset.
Have a look at the original dataset in BigQuery. Then read through the query below and make sure you understand what it is doing.
End of explanation
"""
sql="""
#standardSQL
SELECT
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
category
"""
categories_list = bigquery.Client().query(sql).to_dataframe()['category'].tolist()
write_list_to_disk(categories_list, "categories.txt")
print(categories_list)
"""
Explanation: There should be 15,634 articles in the database.
Next, you'll create a local file which contains a list of article categories and a list of article authors.
Note the change in the index when pulling the article category or author information. Also, you are using the first author of the article to create our author list.
Refer back to the original dataset, use the hits.customDimensions.index field to verify the correct index.
End of explanation
"""
sql="""
#standardSQL
SELECT
REGEXP_EXTRACT((SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)), r"^[^,]+") AS first_author
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
first_author
"""
authors_list = bigquery.Client().query(sql).to_dataframe()['first_author'].tolist()
write_list_to_disk(authors_list, "authors.txt")
print("Some sample authors {}".format(authors_list[:10]))
print("The total number of authors is {}".format(len(authors_list)))
"""
Explanation: The categories are 'News', 'Stars & Kultur', and 'Lifestyle'.
When creating the author list, you'll only use the first author information for each article.
End of explanation
"""
sql="""
WITH site_history as (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") as title,
REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) < 9
"""
training_set_df = bigquery.Client().query(sql).to_dataframe()
training_set_df.to_csv('training_set.csv', header=False, index=False, encoding='utf-8')
training_set_df.head()
sql="""
WITH site_history as (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") as title,
REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) >= 9
"""
test_set_df = bigquery.Client().query(sql).to_dataframe()
test_set_df.to_csv('test_set.csv', header=False, index=False, encoding='utf-8')
test_set_df.head()
"""
Explanation: There should be 385 authors in the database.
Create train and test sets
In this section, you will create the train/test split of our data for training our model. You use the concatenated values for visitor id and content id to create a farm fingerprint, taking approximately 90% of the data for the training set and 10% for the test set.
End of explanation
"""
%%bash
wc -l *_set.csv
!head *_set.csv
"""
Explanation: Let's have a look at the two csv files you just created containing the training and test set. You'll also do a line count of both files to confirm that you have achieved an approximate 90/10 train/test split.
In the next notebook, Content Based Filtering you will build a model to recommend an article given information about the current article being read, such as the category, title, author, and publish date.
End of explanation
"""
|
NLP-Deeplearning-Club/Classic-ML-Methods-Algo | ipynbs/supervised/Perceptron.ipynb | mit | import requests
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
"""
Explanation: Perceptron
The perceptron is a binary linear classifier and the simplest feed-forward artificial neural network. It was proposed by Rosenblatt at the Cornell Aeronautical Laboratory in 1957. Inspired by the mathematical model of artificial neurons by the psychologist McCulloch and the mathematical logician Walter Pitts, it is a trial-and-error, self-adjusting machine learning method that mimics human perception.
Algorithm
There are several perceptron algorithms, such as the basic perceptron algorithm, the margin perceptron algorithm, and the multilayer perceptron. Here we introduce the basic perceptron algorithm.
Given a linearly separable binary d-dimensional dataset $X = \{(\vec{x}_i, y_i) : y_i \in \{-1, 1\}, i \in [1, n]\}$ as the training set, we look for a separating hyperplane $\vec{w}^* \cdot \vec{x} = 0$.
1. Initialize $\vec{w}_0 = \vec{0}$ and set $t = 0$
2. Take a data point $\vec{x}_i$ from $X$; if $\vec{w}_t \cdot \vec{x}_i > 0$, predict $\hat{y}_i = 1$, otherwise $\hat{y}_i = -1$
3. If $y_i \neq \hat{y}_i$, update once: $\vec{w}_{t+1} = \vec{w}_t + y_i \vec{x}_i$, $t = t + 1$
4. Repeat 2 and 3 until all of $X$ has been traversed
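The steps above can be sketched in plain Python (the toy dataset and names below are illustrative; the last column of each point is a constant 1 acting as the bias term):

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def perceptron_train(X, y, epochs=10):
    w = [0.0] * len(X[0])                 # step 1: w_0 = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # step 2: predict for each point
            y_hat = 1 if dot(w, xi) > 0 else -1
            if yi != y_hat:               # step 3: update on mistakes
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
    return w

X = [[1, 2, 1], [2, 1, 1], [-1, -2, 1], [-2, -1, 1]]
y = [1, 1, -1, -1]
w = perceptron_train(X, y)
print([1 if dot(w, xi) > 0 else -1 for xi in X])  # [1, 1, -1, -1]
```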
Convergence and complexity
By Novikoff's theorem, the algorithm converges after at most $\frac{R^2}{\gamma^2}$ iterations, where $R$ and $\gamma$ are, respectively, the maximum norm of the data points and the minimum geometric distance to the separating hyperplane.
The time complexity of the perceptron algorithm is $O(n)$
Pros and cons
The perceptron's greatest strength is its simplicity, with a controllable error bound. However, the basic perceptron cannot handle datasets that are not linearly separable, such as the XOR problem: since it only draws a line in the plane, it cannot separate $\{(-1,-1), (1,1)\}$ from $\{(-1,1), (1,-1)\}$.
Development
Borrowing from neuroscience, we can view the perceptron as a neural network with only an input layer and an output layer.
Rosenblatt and others realized that introducing hidden layers, that is, adding new layers with activation functions between the input and output layers, can solve problems that are not linearly separable. Together with the backpropagation algorithm proposed in the 1980s, this drove the study of neural networks, leading to today's flourishing research on deep learning.
Related interfaces in sklearn
The perceptron-related interfaces for supervised learning in sklearn are:
Single-node linear perceptron
sklearn.linear_model.Perceptron: a single-node perceptron
sklearn.linear_model.SGDClassifier: a stochastic gradient descent classifier with the same underlying implementation as the single-node perceptron, except that besides the perceptron it can also describe several other algorithms
Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
The single-node perceptron is a simple algorithm suited to large-scale learning. Its advantages are that
it does not require setting a learning rate,
it does not require regularization, and
it updates the model only on misclassified samples.
As a result, the perceptron is slightly faster to train than SGD with hinge loss, and the resulting models are sparser.
Multilayer perceptron (fully connected neural network)
neural_network.MLPClassifier([…]): a multilayer perceptron classifier
neural_network.MLPRegressor([…]): a multilayer perceptron regressor
After all, sklearn is not a dedicated neural network tool. Given the heavy computation involved, if you want to use neural networks it is better to use dedicated frameworks such as tensorflow, theano, or keras together with a GPU. This article will not go deeply into neural networks either.
Example: training a model on the iris dataset
iris is a well-known dataset with 4 continuous features and three label classes.
End of explanation
"""
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text
row_name = ['sepal_length','sepal_width','petal_length','petal_width','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
dataset[:10]
"""
Explanation: Data acquisition
This dataset is so classic that many machine learning frameworks provide an interface for it, sklearn included. More often, though, we have to handle data from all kinds of sources, so here we fetch the data in the most traditional way.
End of explanation
"""
encs = {}
encs["feature"] = StandardScaler()
encs["feature"].fit(dataset[row_name[:-1]])
table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1])
encs["label"]=LabelEncoder()
encs["label"].fit(dataset["label"])
table["label"] = encs["label"].transform(dataset["label"])
table[:10]
table.groupby("label").count()
"""
Explanation: Data preprocessing
Since the features are floats while the labels are categorical, the labels need to be encoded and the features standardized. We use the z-score for normalization.
End of explanation
"""
train_set,validation_set = train_test_split(table)
train_set.groupby("label").count()
validation_set.groupby("label").count()
"""
Explanation: Splitting the dataset
End of explanation
"""
mlp = MLPClassifier(
hidden_layer_sizes=(100,50),
activation='relu',
solver='adam',
alpha=0.0001,
batch_size='auto',
learning_rate='constant',
learning_rate_init=0.001)
mlp.fit(train_set[row_name[:-1]], train_set["label"])
pre = mlp.predict(validation_set[row_name[:-1]])
"""
Explanation: Training the model
End of explanation
"""
print(classification_report(validation_set["label"],pre))
"""
Explanation: Model evaluation
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | blogs/lightning/3_convnet.ipynb | apache-2.0 | %pip install cloudml-hypertune
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%load_ext autoreload
%aimport ltgpred
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Convolutional Neural Network on pixel neighborhoods
This notebook reads the pixel-neighborhood data written out by the Dataflow program of 1_explore.ipynb and trains a simple convnet model on Cloud ML Engine.
End of explanation
"""
!mkdir -p preproc/tfrecord
!gsutil cp gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord/*-00000-* preproc/tfrecord
%%bash
export PYTHONPATH=${PWD}/ltgpred/
OUTDIR=${PWD}/cnn_trained
DATADIR=${PWD}/preproc/tfrecord
rm -rf $OUTDIR
mkdir -p $OUTDIR
python3 -m trainer.train_cnn \
--train_steps=10 --num_eval_records=512 --train_batch_size=16 --num_cores=1 --nlayers=5 --arch=convnet \
--job-dir=$OUTDIR --train_data_path=${DATADIR}/train* --eval_data_path=${DATADIR}/eval*
!find /home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/
%%bash
saved_model_cli show --all --dir $(ls -d -1 /home/jupyter/training-data-analyst/blogs/lightning/cnn_trained/export/exporter/* | tail -1)
%%bash
export CLOUDSDK_PYTHON=$(which python3)
OUTDIR=${PWD}/cnn_trained
DATADIR=${PWD}/preproc/tfrecord
rm -rf $OUTDIR
gcloud ml-engine local train \
--module-name=trainer.train_cnn --package-path=${PWD}/ltgpred/trainer \
-- \
--train_steps=10 --num_eval_records=512 --train_batch_size=16 --num_cores=1 --nlayers=5 \
--job-dir=$OUTDIR --train_data_path=${DATADIR}/train* --eval_data_path=${DATADIR}/eval*
"""
Explanation: Train CNN model locally
End of explanation
"""
%%writefile largemachine.yaml
trainingInput:
scaleTier: CUSTOM
masterType: complex_model_m_p100
%%bash
#DATADIR=gs://$BUCKET/lightning/preproc/tfrecord
DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord
#for ARCH in feateng convnet dnn resnet; do
for ARCH in convnet; do
JOBNAME=ltgpred_${ARCH}_$(date -u +%y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/lightning/${ARCH}_trained_gpu
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--module-name trainer.train_cnn --package-path ${PWD}/ltgpred/trainer --job-dir=$OUTDIR \
--region=${REGION} --scale-tier CUSTOM --config largemachine.yaml \
--python-version 3.5 --runtime-version 1.10 \
-- \
--train_data_path ${DATADIR}/train-* --eval_data_path ${DATADIR}/eval-* \
--train_steps 5000 --train_batch_size 256 --num_cores 4 --arch $ARCH \
--num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01
done
"""
Explanation: Training the lightning prediction model on CMLE using GPUs
complex_model_m_p100 (specified in largemachine.yaml above) is a machine with four NVIDIA Tesla P100 GPUs.
End of explanation
"""
%%writefile hyperparam_gpu.yaml
trainingInput:
scaleTier: CUSTOM
masterType: complex_model_m_p100
hyperparameters:
goal: MAXIMIZE
maxTrials: 30
maxParallelTrials: 2
hyperparameterMetricTag: val_acc
params:
- parameterName: learning_rate
type: DOUBLE
minValue: 0.01
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nfil
type: INTEGER
minValue: 5
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nlayers
type: INTEGER
minValue: 1
maxValue: 5
scaleType: UNIT_LINEAR_SCALE
- parameterName: train_batch_size # has to be multiple of 128
type: DISCRETE
discreteValues: [128, 256, 512, 1024, 2048, 4096]
# - parameterName: arch
# type: CATEGORICAL
# categoricalValues: ["convnet", "feateng", "resnet", "dnn"]
%%bash
OUTDIR=gs://${BUCKET}/lightning/convnet_trained_gpu_hparam
DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord
JOBNAME=ltgpred_hparam_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--module-name=ltgpred.trainer.train_cnn --package-path=${PWD}/ltgpred --job-dir=$OUTDIR \
--region=${REGION} --scale-tier=CUSTOM --config=hyperparam_gpu.yaml \
--python-version=3.5 --runtime-version=1.10 \
-- \
--train_data_path=${DATADIR}/train-* --eval_data_path=${DATADIR}/eval-* \
--train_steps=5000 --train_batch_size=256 --num_cores=4 --arch=convnet \
--num_eval_records=1024000 --nlayers=5 --dprob=0 --ksize=3 --nfil=10 --learning_rate=0.01 --skipexport
"""
Explanation: Results (Dropout=0)
| Architecture | Training time | Validation RMSE | Validation Accuracy |
| --- | --- | --- | --- |
| feateng | 23 min | 0.2620 | 0.8233 |
| dnn | 62 min | 0.2752 | 0.8272 |
| convnet | 24 min | 0.2261 | 0.8462 |
| resnet | 63 min | 0.3088 | 0.7142 |
Results (Dropout=0.05)
| Architecture | Training time | Validation RMSE | Validation Accuracy |
| --- | --- | --- | --- |
| feateng | 20 min | 0.2641 | 0.8258 |
| dnn | 58 min | 0.2284 | 0.8412 |
| convnet | 23 min | 0.2268 | 0.8459 |
| resnet | 80 min | 0.3005 | 0.6887 |
Other than for the dnn, dropout doesn't seem to help. Based on these results, let's train a <b> convnet with no dropout </b>.
All the results above are for
<pre>
--train_steps=5000 --train_batch_size=256 --num_cores=4 --arch=$ARCH \
--num_eval_records=1024000 --nlayers=5 --dprob=... --ksize=3 --nfil=10 --learning_rate=0.01
</pre>
Hyperparameter tuning on GPU
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/lightning/cnn_trained_tpu
DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord
JOBNAME=ltgpred_cnn_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--module-name ltgpred.trainer.train_cnn --package-path ${PWD}/ltgpred --job-dir=$OUTDIR \
--region ${REGION} --scale-tier BASIC_TPU \
--python-version 3.5 --runtime-version 1.12 \
-- \
--train_data_path ${DATADIR}/train* --eval_data_path ${DATADIR}/eval* \
--train_steps 1250 --train_batch_size 1024 --num_cores 8 --use_tpu \
--num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01
"""
Explanation: The hyperparameter training took 7.5 hours for me, cost 215 ML units (about 110 USD list price) and had this as the best set of parameters:
<pre>
{
"trialId": "2",
"hyperparameters": {
"nfil": "10",
"learning_rate": "0.02735530997243607",
"train_batch_size": "1024",
"nlayers": "3"
},
"finalMetric": {
"trainingStep": "1",
"objectiveValue": 0.846787109375
}
},
</pre>
Training lightning prediction model on CMLE using TPUs
Next, let's train on the TPU. Because our batch size is 4x, we can train for 4x fewer steps
End of explanation
"""
%%bash
#DATADIR=gs://$BUCKET/lightning/preproc/tfrecord
#DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/tfrecord
DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_1/tfrecord # also 5-min validity
#for ARCH in feateng convnet dnn resnet; do
for ARCH in feateng; do
JOBNAME=ltgpred_${ARCH}_$(date -u +%y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/lightning/${ARCH}_trained_gpu
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--module-name ltgpred.trainer.train_cnn --package-path ${PWD}/ltgpred --job-dir=$OUTDIR \
--region=${REGION} --scale-tier CUSTOM --config largemachine.yaml \
--python-version 3.5 --runtime-version 1.12 \
-- \
--train_data_path ${DATADIR}/train-* --eval_data_path ${DATADIR}/eval-* \
--train_steps 5000 --train_batch_size 256 --num_cores 4 --arch $ARCH \
--num_eval_records 1024000 --nlayers 5 --dprob 0 --ksize 3 --nfil 10 --learning_rate 0.01
done
"""
Explanation: When I ran it, training finished with accuracy=0.82 (no change)
More data, harder problem
End of explanation
"""
|
rsterbentz/phys202-2015-work | days/day11/Interpolation.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
"""
Explanation: Interpolation
Learning Objective: Learn to interpolate 1d and 2d datasets of structured and unstructured points using SciPy.
End of explanation
"""
x = np.linspace(0,4*np.pi,10)
x
"""
Explanation: Overview
We have already seen how to evaluate a Python function at a set of numerical points:
$$ f(x) \rightarrow f_i = f(x_i) $$
Here is an array of points:
End of explanation
"""
f = np.sin(x)
f
plt.plot(x, f, marker='o')
plt.xlabel('x')
plt.ylabel('f(x)');
"""
Explanation: This creates a new array of points that are the values of $\sin(x_i)$ at each point $x_i$:
End of explanation
"""
from scipy.interpolate import interp1d
"""
Explanation: This plot shows that the points in this numerical array are an approximation to the actual function as they don't have the function's value at all possible points. In this case we know the actual function ($\sin(x)$). What if we only know the value of the function at a limited set of points, and don't know the analytical form of the function itself? This is common when the data points come from a set of measurements.
Interpolation is a numerical technique that enables you to construct an approximation of the actual function from a set of points:
$$ {x_i,f_i} \rightarrow f(x) $$
It is important to note that unlike curve fitting or regression, interpolation doesn't allow you to incorporate a statistical model into the approximation. Because of this, interpolation has limitations:
It cannot accurately construct the function's approximation outside the limits of the original points.
It cannot tell you the analytical form of the underlying function.
Once you have performed interpolation you can:
Evaluate the function at other points not in the original dataset.
Use the function in other calculations that require an actual function.
Compute numerical derivatives or integrals.
Plot the approximate function on a finer grid than the original dataset.
Warning:
The different functions in SciPy work with a range of different 1d and 2d arrays. To help you keep all of that straight, I will use lowercase variables for 1d arrays (x, y) and uppercase variables (X,Y) for 2d arrays.
1d data
We begin with a 1d interpolation example with regularly spaced data. The function we will use is interp1d:
End of explanation
"""
x = np.linspace(0,4*np.pi,10) # only use 10 points to emphasize this is an approx
f = np.sin(x)
"""
Explanation: Let's create the numerical data we will use to build our interpolation.
End of explanation
"""
sin_approx = interp1d(x, f, kind='cubic')
"""
Explanation: To create our approximate function, we call interp1d as follows, with the numerical data. Options for the kind argument include:
linear: draw a straight line between initial points.
nearest: return the value of the function at the nearest point.
slinear, quadratic, cubic: use a spline (particular kinds of piecewise polynomial) of a given order.
The most common case you will want is a cubic spline (try the other options too):
End of explanation
"""
newx = np.linspace(0,4*np.pi,100)
newf = sin_approx(newx)
"""
Explanation: The sin_approx variable that interp1d returns is a callable object that can be used to compute the approximate function at other points. Compute the approximate function on a fine grid:
End of explanation
"""
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
"""
Explanation: Plot the original data points, along with the approximate interpolated values. It is quite amazing to see how the interpolation has done a good job of reconstructing the actual function with relatively few points.
End of explanation
"""
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
"""
Explanation: Let's look at the absolute error between the actual function and the approximate interpolated function:
End of explanation
"""
x = 4*np.pi*np.random.rand(15)
f = np.sin(x)
sin_approx = interp1d(x, f, kind='cubic')
# We have to be careful about not interpolating outside the range
newx = np.linspace(np.min(x), np.max(x),100)
newf = sin_approx(newx)
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
"""
Explanation: 1d non-regular data
It is also possible to use interp1d when the x data is not regularly spaced. To show this, let's repeat the above analysis with randomly distributed data in the range $[0,4\pi]$. Everything else is the same.
End of explanation
"""
from scipy.interpolate import interp2d
"""
Explanation: Notice how the absolute error is larger in the intervals where there are no points.
2d structured
For the 2d case we want to construct a scalar function of two variables, given
$$ {x_i, y_i, f_i} \rightarrow f(x,y) $$
For now, we will assume that the points ${x_i,y_i}$ are on a structured grid of points. This case is covered by the interp2d function:
End of explanation
"""
def wave2d(x, y):
return np.sin(2*np.pi*x)*np.sin(3*np.pi*y)
"""
Explanation: Here is the actual function we will use the generate our original dataset:
End of explanation
"""
x = np.linspace(0.0, 1.0, 10)
y = np.linspace(0.0, 1.0, 10)
"""
Explanation: Build 1d arrays to use as the structured grid:
End of explanation
"""
X, Y = np.meshgrid(x, y)
Z = wave2d(X, Y)
"""
Explanation: Build 2d arrays to use in computing the function on the grid points:
End of explanation
"""
plt.pcolor(X, Y, Z)
plt.colorbar();
plt.scatter(X, Y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: Here is a scatter plot of the points overlayed with the value of the function at those points:
End of explanation
"""
wave2d_approx = interp2d(X, Y, Z, kind='cubic')
"""
Explanation: You can see in this plot that the function is not smooth as we don't have its value on a fine grid.
Now let's compute the interpolated function using interp2d. Notice how we are passing 2d arrays to this function:
End of explanation
"""
xnew = np.linspace(0.0, 1.0, 40)
ynew = np.linspace(0.0, 1.0, 40)
Xnew, Ynew = np.meshgrid(xnew, ynew) # We will use these in the scatter plot below
Fnew = wave2d_approx(xnew, ynew) # The interpolating function automatically creates the meshgrid!
Fnew.shape
"""
Explanation: Compute the interpolated function on a fine grid:
End of explanation
"""
plt.pcolor(xnew, ynew, Fnew);
plt.colorbar();
plt.scatter(X, Y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
"""
Explanation: Plot the original coarse grid of points, along with the interpolated function values on a fine grid:
End of explanation
"""
from scipy.interpolate import griddata
"""
Explanation: Notice how the interpolated values (green points) are now smooth and continuous. The amazing thing is that the interpolation algorithm doesn't know anything about the actual function. It creates this nice approximation using only the original coarse grid (blue points).
2d unstructured
It is also possible to perform interpolation when the original data is not on a regular grid. For this, we will use the griddata function:
End of explanation
"""
x = np.random.rand(100)
y = np.random.rand(100)
"""
Explanation: There is an important difference between griddata and the interp1d/interp2d:
interp1d and interp2d return callable Python objects (functions).
griddata returns the interpolated function evaluated on a finer grid.
This means that you have to pass griddata an array that has the finer grid points to be used. Here is the coarse unstructured grid we will use:
End of explanation
"""
f = wave2d(x, y)
"""
Explanation: Notice how we pass these 1d arrays to our function and don't use meshgrid:
End of explanation
"""
plt.scatter(x, y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: It is clear that our grid is very unstructured:
End of explanation
"""
xnew = np.linspace(x.min(), x.max(), 40)
ynew = np.linspace(y.min(), y.max(), 40)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Xnew.shape, Ynew.shape
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
Fnew.shape
plt.pcolor(Xnew, Ynew, Fnew, label="points")
plt.colorbar()
plt.scatter(x, y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
"""
Explanation: To use griddata we need to compute the final (structured) grid we want to compute the interpolated function on:
End of explanation
"""
|
Saxafras/Spacetime | Random Fields.ipynb | bsd-3-clause | state_overlay_diagram(field, random_states.get_causal_field(), t_max = 50, x_max = 50)
for state in random_states.causal_states():
print state.plc_configs()
for state in random_states.causal_states():
print state.morph()
t_trans = random_states.all_transitions(zipped = False)[1]
print np.unique(t_trans)
print np.log(8)/np.log(2)
print random_states.entropy_rate('forward')
print random_states.entropy_rate('right')
print random_states.entropy_rate('left')
"""
Explanation: It appears the two states are equivalent, which means this is a single state. This is the spacetime equivalent of a fair coin, so this is the desired result, which makes me feel better about the local epsilon machine constructed from the light cone equivalence relation.
*** This has been fixed by excluding the present from future light cones. This eliminates the state splitting issue in the reconstruction algorithm ***
End of explanation
"""
random_states = epsilon_field(random_field(600,600))
random_states.estimate_states(3,2,1)
random_states.filter_data()
t_trans = random_states.all_transitions(zipped = False)[1]
print np.unique(t_trans)
print np.log(32)/np.log(2)
print np.log(8)/np.log(2)
"""
Explanation: We would like the intrinsic randomness of this field to be 1 bit in both time and space. Here we have three bits for both with past depth 1. (We still need to change the code to use the correct value of the depth.)
End of explanation
"""
wildcard_field = wildcard_tiling(1000,1000)
wildcard_states = epsilon_field(wildcard_field)
wildcard_states.estimate_states(3,3,1)
wildcard_states.filter_data()
print wildcard_states.number_of_states()
"""
Explanation: It seems we can get the correct value of 1 bit of uncertainty if we treat each direction separately (not just each dimension) and divide the branching uncertainty by the size of the fringe along that direction. This procedure does make some sense, and it's good that it works out in this simple case.
End of explanation
"""
|
tgrammat/ML-Data_Challenges | Reinforcement-Learning/TD0-models/01.TaxiProblem.ipynb | apache-2.0 | import gym
import random
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from collections import defaultdict, OrderedDict
env = gym.make('Taxi-v3')
print('OpenAI Gym environments for Taxi Problem:')
[k for k in gym.envs.registry.env_specs.keys() if k.find('Taxi', 0) >=0]
"""
Explanation: Solving the Taxi Problem with TD(0) Algorithms
Problem Description (Goal):
Say our agent is driving the taxi. There are four locations in total, and the agent has to pick up a passenger at one location and drop them off at another. The agent receives +20 points as a reward for a successful drop-off and loses 1 point for every time step it takes. The agent also loses 10 points for illegal pickups and drop-offs. So the goal of our agent is to learn to pick up and drop off passengers at the correct locations in a short time, without boarding any illegal passengers.
1. Load Libraries & Define OpenAI Gym Environment
End of explanation
"""
env.render()
"""
Explanation: The environment is shown below, where the letters (R, G, Y, B) represents the different locations and a tiny yellow colored rectangle is the taxi driving by our agent.
End of explanation
"""
# env.P[state][action] returns a list of (transition probability, next state, reward, done) tuples
tmp = pd.DataFrame.from_dict(env.P[93], orient='index')
tmp = pd.DataFrame(tmp[0].tolist(), index=tmp.index, columns=['Probability', 'Next_State', 'Reward', 'Done'])
tmp
help(env)
"""
Explanation: For details on the notation followed for state-actions and rewards inside the "Taxi-v3" environment:
End of explanation
"""
%run ../PlotUtils.py
plotutls = PlotUtils()
"""
Explanation: 2. RL-Algorithms based on Temporal Difference - TD(0)
2a. Load the "PlotUtils" Python class
Load the Python class PlotUtils() which provides various plotting utilities and start a new instance.
End of explanation
"""
%run ../TD0_Utils.py
TD0 = TemporalDifferenceUtils(env)
"""
Explanation: 2b. Load the "Temporal Difference" Python class
Load the Temporal Difference Python class, TemporalDifferenceUtils() and start a new instance for the Taxi-v3 OpenAI Gym environment.
End of explanation
"""
# Define Number of Episodes
n_episodes = 3e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.arange(0.01, 0.05, 0.01)
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array(0.4)
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.017,
'step_size': 0.4, 'discount': 1}}
for n, trial in enumerate(list(zip(*epsilons, *step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply SARSA [on-policy TD(0) Control]
q_values, tot_rewards = TD0.sarsa_on_policy_control(env, n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_SARSA[trial] = tot_rewards
q_values_per_trial_SARSA[trial] = q_values
title = 'Efficiency of the RL Method\n[SARSA on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_SARSA, title=title)
"""
Explanation: 3. Model Training
3a. SARSA on-Policy TD(0) Control
Notes on the RL trials tested below:
* discount factor (or gamma): it is generally an intrinsic property of the model. Having tried gamma = 0.7 for the taxi problem discussed here, we found it rendered our SARSA on-policy solution quite unstable. For this reason we have fixed it at "1" in all the models we test below.
<img src="./sarsa_taxi_problem-low_discount.jpg">
Train some candidate RL-models of SARSA on-policy TD(0) Control:
End of explanation
"""
RL_trials['trial_1']
RL_trials
"""
Explanation: It turns out that the best one is the so-called "trial_1":
End of explanation
"""
# Define Number of Episodes
n_episodes = 3e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.arange(0.01, 0.05, 0.01)
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array(0.4)
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.017,
'step_size': 0.4, 'discount': 1}}
for n, trial in enumerate(list(zip(*epsilons, *step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_QL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_QL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
    # Apply Q-Learning [off-policy TD(0) Control]
q_values, tot_rewards = TD0.q_learning_off_policy(env, n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_QL[trial] = tot_rewards
q_values_per_trial_QL[trial] = q_values
title = 'Efficiency of the RL Method\n[Q-Learning off-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_QL, title=title)
"""
Explanation: 3b. Q-Learning Off-Policy TD(0) Control
End of explanation
"""
RL_trials['trial_1']
RL_trials
"""
Explanation: Again, the best RL-model was the so-called "trial_1":
End of explanation
"""
# Define Number of Episodes
n_episodes = 3e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.arange(0.01, 0.05, 0.01)
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array(0.4)
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.017,
'step_size': 0.4, 'discount': 1}}
for n, trial in enumerate(list(zip(*epsilons, *step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
    # Apply Expected SARSA [on-policy TD(0) Control]
q_values, tot_rewards = TD0.expected_sarsa_on_policy(env, n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_ExpSARSA[trial] = tot_rewards
q_values_per_trial_ExpSARSA[trial] = q_values
title = 'Efficiency of the RL Method\n[Expected SARSA on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_ExpSARSA, title=title)
"""
Explanation: 3c. On-Policy Expected SARSA
End of explanation
"""
RL_trials['trial_1']
RL_trials
"""
Explanation: Again, the best RL-model was the so-called "trial_1":
End of explanation
"""
rewards_per_trial_best_models = OrderedDict([('Winning_Model_SARSA', np.array([])),
('Winning_Model_QL', np.array([])),
('Winning_Model_ExpSARSA', np.array([]))])
rewards_per_trial_best_models['Winning_Model_SARSA'] = rewards_per_trial_SARSA['trial_1']
rewards_per_trial_best_models['Winning_Model_QL'] = rewards_per_trial_QL['trial_1']
rewards_per_trial_best_models['Winning_Model_ExpSARSA'] = rewards_per_trial_ExpSARSA['trial_1']
title = 'Efficiency of the RL Method\n[SARSA vs Q-Learning and Expected SARSA Winning Models]'
plotutls.plot_learning_curve(rewards_per_trial_best_models, title=title)
"""
Explanation: 4. Comparison of SARSA, Q-Learning and Expected SARSA best models
End of explanation
"""
|
mercybenzaquen/foundations-homework | databases_hw/db05/Homework_5_Graded.ipynb | mit | from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("http://static.decontextualize.com/cats.html").read()
document = BeautifulSoup(html, "html.parser")
"""
Explanation: graded = 10/10
Homework #5
This homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it.
The scenario
In the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a cat cafe tycoon. This webpage lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.
I'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired you to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)
Specifically, I want to know the answers to the following questions:
What's the name of the youngest cat at any location?
In which zip codes can I find a lilac-colored tabby?
What's the average weight of cats currently residing at any location (grouped by location)?
Which location has the most cats with tortoiseshell coats?
Because I'm not paying you very much, and because I am a merciful person who has considerable experience in these matters, I've decided to write the queries for you. (See below.) Your job is just to scrape the data from the web page, create the appropriate tables in PostgreSQL, and insert the data into those tables.
Before you continue, scroll down to "The Queries" below to examine the queries as I wrote them.
Problem set #1: Scraping the data
Your first goal is to create two data structures, both lists of dictionaries: one for the list of locations and one for the list of cats. You'll get these from scraping two <table> tags in the HTML: the first table has a class of cafe-list, the second has a class of cat-list.
Before you do anything else, though, execute the following cell to import Beautiful Soup and create a BeautifulSoup object with the content of the web page:
End of explanation
"""
cafe_list = list()
cafe_table = document.find('table', {'class': 'cafe-list'})
tbody = cafe_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
    name_zip_dic = {}
    cat_name_tag = tr_tag.find('td', {'class': 'name'})
    name_zip_dic['name'] = str(cat_name_tag.string)
    location_zipcode_tag = tr_tag.find('td', {'class': 'zip'})
    name_zip_dic['zip'] = str(location_zipcode_tag.string)
    cafe_list.append(name_zip_dic)
cafe_list
"""
Explanation: Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable cafe_list. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this:
[{'name': 'Hang In There', 'zip': '11237'},
{'name': 'Independent Claws', 'zip': '11201'},
{'name': 'Paws and Play', 'zip': '11215'},
{'name': 'Tall Tails', 'zip': '11222'},
{'name': 'Cats Meow', 'zip': '11231'}]
End of explanation
"""
cat_list = list()
cat_table = document.find('table', {'class': 'cat-list'})
tbody = cat_table.find('tbody')
for tr_tag in tbody.find_all('tr'):
    cat_dict = {}
    cat_dict['name'] = str(tr_tag.find('td', {'class': 'name'}).string)
    cat_dict['birthdate'] = str(tr_tag.find('td', {'class': 'birthdate'}).string)
    # weight is numeric in the expected output, so convert explicitly
    cat_dict['weight'] = float(tr_tag.find('td', {'class': 'weight'}).string)
    cat_dict['color'] = str(tr_tag.find('td', {'class': 'color'}).string)
    cat_dict['pattern'] = str(tr_tag.find('td', {'class': 'pattern'}).string)
    # locations is a comma-separated string in the source table; split it into a list
    cat_dict['locations'] = str(tr_tag.find('td', {'class': 'locations'}).string).split(', ')
    cat_list.append(cat_dict)
cat_list
"""
Explanation: Great! In the following cell, write some code that creates a list of cats from the <table> tag on the page, storing them as a list of dictionaries in a variable called cat_list. Again, I've written a bit of the code for you. Expected output:
[{'birthdate': '2015-05-20',
'color': 'black',
'locations': ['Paws and Play', 'Independent Claws*'],
'name': 'Sylvester',
'pattern': 'colorpoint',
'weight': 10.46},
{'birthdate': '2000-01-03',
'color': 'cinnamon',
'locations': ['Independent Claws*'],
'name': 'Jasper',
'pattern': 'solid',
'weight': 8.06},
{'birthdate': '2006-02-27',
'color': 'brown',
'locations': ['Independent Claws*'],
'name': 'Luna',
'pattern': 'tortoiseshell',
'weight': 10.88},
[...many records omitted for brevity...]
{'birthdate': '1999-01-09',
'color': 'white',
'locations': ['Cats Meow*', 'Independent Claws', 'Tall Tails'],
'name': 'Lafayette',
'pattern': 'tortoiseshell',
'weight': 9.3}]
Note: Observe the data types of the values in each dictionary! Make sure to explicitly convert values retrieved from .string attributes of Beautiful Soup tag objects to strs using the str() function.
End of explanation
"""
import pg8000
conn = pg8000.connect(database="catcafes")
"""
Explanation: Problem set #2: Designing the schema
Before you do anything else, use psql to create a new database for this homework assignment using the following command:
CREATE DATABASE catcafes;
In the following cell, connect to the database using pg8000. (You may need to provide additional arguments to the .connect() method, depending on the distribution of PostgreSQL you're using.)
End of explanation
"""
conn.rollback()
"""
Explanation: Here's a cell you can run if something goes wrong and you need to rollback the current query session:
End of explanation
"""
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE cafe (
id serial,
name varchar(40),
zip varchar(5)
)
""")
cursor.execute("""
CREATE TABLE cat (
id serial,
name varchar(60),
birthdate varchar(40),
color varchar(40),
pattern varchar(40),
weight numeric
)
""")
cursor.execute("""
CREATE TABLE cat_cafe (
cat_id integer,
cafe_id integer,
active boolean
)
""")
conn.commit()
"""
Explanation: In the cell below, you're going to create three tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire CREATE TABLE statement for the cafe table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in "The Queries" below. Hints:
Many of these fields will be varchars. Don't worry too much about how many characters you need—it's okay just to eyeball it.
Feel free to use a varchar type to store the birthdate field. No need to dig too deep into PostgreSQL's date types for this particular homework assignment.
Cats and locations are in a many-to-many relationship. You'll need to create a linking table to represent this relationship. (That's why there's space for you to create three tables.)
The linking table will need a field to keep track of whether or not a particular cafe is the "current" cafe for a given cat.
End of explanation
"""
cafe_name_id_map = {}
for item in cafe_list:
    cursor.execute("INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id",
                   [str(item['name']), str(item['zip'])])
    cafe_rowid = cursor.fetchone()[0]
    cafe_name_id_map[str(item['name'])] = cafe_rowid
conn.commit()
"""
Explanation: After executing the above cell, issuing a \d command in psql should yield something that looks like the following:
List of relations
Schema | Name | Type | Owner
--------+-------------+----------+---------
public | cafe | table | allison
public | cafe_id_seq | sequence | allison
public | cat | table | allison
public | cat_cafe | table | allison
public | cat_id_seq | sequence | allison
(5 rows)
If something doesn't look right, you can always use the DROP TABLE command to drop the tables and start again. (You can also issue a DROP DATABASE catcafes command to drop the database altogether.) Don't worry if it takes a few tries to get it right—happens to the best and most expert among us. You'll probably have to drop the database and start again from scratch several times while completing this homework.
Note: If you try to issue a DROP TABLE or DROP DATABASE command and psql seems to hang forever, it could be that PostgreSQL is waiting for current connections to close before proceeding with your command. To fix this, create a cell with the code conn.close() in your notebook and execute it. After the DROP commands have completed, make sure to run the cell containing the pg8000.connect() call again.
Problem set #3: Inserting the data
In the cell below, I've written the code to insert the cafes into the cafe table, using data from the cafe_list variable that we made earlier. If the code you wrote to create that table was correct, the following cell should execute without error or incident. Execute it before you continue.
End of explanation
"""
cafe_name_id_map
"""
Explanation: Issuing SELECT * FROM cafe in the psql client should yield something that looks like this:
id | name | zip
----+-------------------+-------
1 | Hang In There | 11237
2 | Independent Claws | 11201
3 | Paws and Play | 11215
4 | Tall Tails | 11222
5 | Cats Meow | 11231
(5 rows)
(The id values may be different depending on how many times you've cleaned the table out with DELETE.)
Note that the code in the cell above created a dictionary called cafe_name_id_map. What's in it? Let's see:
End of explanation
"""
import re

cat_name_id_map = {}
for cat in cat_list:
    cursor.execute("INSERT INTO cat (name, birthdate, weight, color, pattern) VALUES (%s, %s, %s, %s, %s) RETURNING id",
                   [str(cat['name']), str(cat['birthdate']), str(cat['weight']), str(cat['color']), str(cat['pattern'])])
    cat_id = cursor.fetchone()[0]
    cat_name_id_map[str(cat['name'])] = cat_id
    # link this cat to every cafe it has lived in;
    # an asterisk marks the cat's current (active) cafe
    for item in str(cat['locations']).split(','):
        active = bool(re.search(r"\*", item))
        cafe_id = cafe_name_id_map[item.replace("*", "").strip()]
        cursor.execute("INSERT INTO cat_cafe (cat_id, cafe_id, active) VALUES (%s, %s, %s)",
                       [cat_id, cafe_id, active])
conn.commit()
"""
Explanation: The dictionary maps the name of the cat cafe to its ID in the database. You'll need these values later when you're adding records to the linking table (cat_cafe).
Now the tricky part. (Yes, believe it or not, this is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the cat_list variable (created in Problem Set #1) into the cat table. The code should also insert the relevant data into the cat_cafe table. Hints:
You'll need to get the id of each cat record using the RETURNING clause of the INSERT statement and the .fetchone() method of the cursor object.
How do you know whether or not the current location is the "active" location for a particular cat? The page itself contains some explanatory text that might be helpful here. You might need to use some string checking and manipulation functions in order to make this determination and transform the string as needed.
The linking table stores an ID only for both the cat and the cafe. Use the cafe_name_id_map dictionary to get the id of the cafes inserted earlier.
End of explanation
"""
cursor.execute("SELECT max(birthdate) FROM cat")
birthdate = cursor.fetchone()[0]
cursor.execute("SELECT name FROM cat WHERE birthdate = %s", [birthdate])
print(cursor.fetchone()[0])
"""
Explanation: Issuing a SELECT * FROM cat LIMIT 10 in psql should yield something that looks like this:
id | name | birthdate | weight | color | pattern
----+-----------+------------+--------+----------+---------------
1 | Sylvester | 2015-05-20 | 10.46 | black | colorpoint
2 | Jasper | 2000-01-03 | 8.06 | cinnamon | solid
3 | Luna | 2006-02-27 | 10.88 | brown | tortoiseshell
4 | Georges | 2015-08-13 | 9.40 | white | tabby
5 | Millie | 2003-09-13 | 9.27 | red | bicolor
6 | Lisa | 2009-07-30 | 8.84 | cream | colorpoint
7 | Oscar | 2011-12-15 | 8.44 | cream | solid
8 | Scaredy | 2015-12-30 | 8.83 | lilac | tabby
9 | Charlotte | 2013-10-16 | 9.54 | blue | tabby
10 | Whiskers | 2011-02-07 | 9.47 | white | colorpoint
(10 rows)
And a SELECT * FROM cat_cafe LIMIT 10 in psql should look like this:
cat_id | cafe_id | active
--------+---------+--------
1 | 3 | f
1 | 2 | t
2 | 2 | t
3 | 2 | t
4 | 4 | t
4 | 1 | f
5 | 3 | t
6 | 1 | t
7 | 1 | t
7 | 5 | f
(10 rows)
Again, the exact values for the ID columns might be different, depending on how many times you've deleted and dropped the tables.
The Queries
Okay. To verify your work, run the following queries and check their output. If you've correctly scraped the data and imported it into SQL, running the cells should produce exactly the expected output, as indicated. If not, then you performed one of the steps above incorrectly; check your work and try again. (Note: Don't modify these cells, just run them! This homework was about scraping and inserting data, not querying it.)
What's the name of the youngest cat at any location?
Expected output: Scaredy
End of explanation
"""
cursor.execute("""SELECT DISTINCT(cafe.zip)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true
""")
print(', '.join([x[0] for x in cursor.fetchall()]))
"""
Explanation: In which zip codes can I find a lilac-colored tabby?
Expected output: 11237, 11215
End of explanation
"""
cursor.execute("""
SELECT cafe.name, avg(cat.weight)
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat_cafe.active = true
GROUP BY cafe.name
""")
for rec in cursor.fetchall():
    print(rec[0] + ":", "%0.2f" % rec[1])
"""
Explanation: What's the average weight of cats currently residing at all locations?
Expected output:
Independent Claws: 9.33
Paws and Play: 9.28
Tall Tails: 9.82
Hang In There: 9.25
Cats Meow: 9.76
End of explanation
"""
cursor.execute("""
SELECT cafe.name
FROM cat
JOIN cat_cafe ON cat.id = cat_cafe.cat_id
JOIN cafe ON cafe.id = cat_cafe.cafe_id
WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell'
GROUP BY cafe.name
ORDER BY count(cat.name) DESC
LIMIT 1
""")
print(cursor.fetchone()[0])
"""
Explanation: Which location has the most cats with tortoiseshell coats?
Expected output: Independent Claws
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/csir-csiro/cmip6/models/sandbox-1/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework in the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
WaltGurley/jupyter-notebooks-intro | Jupyter - coding with Python.ipynb | mit | # import necessary objects
import pandas as pd
from matplotlib import pyplot
"""
Explanation: A (very) basic introduction Python in Jupyter notebooks
The purpose of this notebook is to get you started with using Python in Jupyter notebooks. This notebook is an introduction to using Python in a notebook with pandas, a data analysis package, and matplotlib, a plotting package.
This notebook was originally created for a Digital Mixer session at the 2016 STELLA Unconference
Importing packages
To use any functions in a package you must first import the package or the parts of the package you want to employ. For example, below are two different examples of importing. Here is the breakdown of what is happening:
First, we are importing the entire pandas package using:
python
import pandas
Next, we are changing the name of the pandas package in our current program to pd:
python
import pandas as pd
Doing this saves us a little time in the future as anytime you want to use a function in the pandas package you only have to type pd instead of the long word pandas.
For the next import we only want part of the matplotlib library. Specifically, we want pyplot, a graphical plotting framework. To import only part of a package we first use the keyword from to indicate which package we want to take from and then import that subsection of code:
python
from matplotlib import pyplot
When this cell is run pandas will be imported and available as pd and pyplot from the matplotlib package will be imported.
End of explanation
"""
# test out code completion and tool-tips here
pd.DataFrame
# after testing code completion and tool-tips, see if you can create a simple data frame
"""
Explanation: Code helpers in Jupyter
Jupyter has some built-in features to help you with programming. Two helpful features are code completion and tool-tips.
Code completion
In the Python code cell below type the following and then press tab:
python
pd.Da
You will see a popup box indicating all the functions in pandas (pd) that start with the letters 'Da'. This is the code completion tool. If you ever can't quite remember the name of a function or want to quickly type a function out this can be handy. Go ahead and select DataFrame from the list of available functions.
Tool-tips
Tool-tips provide documentation on parts of our code. For example, if you want to know what DataFrame is and how to use it we can activate a tool-tip. To activate a tool-tip first click in the text of DataFrame in the code cell below then hold down shift and press tab. A popup should appear giving information about what type of data goes into the DataFrame function and a brief explanation of what this function does.
For even more information, with your cursor still in the text of DataFrame, hold down shift and press tab twice. Now you are presented with a larger, scrollable popup with more detailed documentation on DataFrame.
For an entire pop-out of this documentation, following the same pattern above, hold shift and press tab four times. This will open the documentation for DataFrame in another pane.
End of explanation
"""
# create a dictionary of fruits and their counts
fruits = {"apples": 2, "oranges": 5, "bananas": 10, "kiwi": 4, "grapes": 30}
# use the dictionary to create a pandas dataframe
fruitData = pd.DataFrame({"fruits": fruits})
# show the dataframe in the output
fruitData
# generate some summary statistics on the dataframe
fruitData.describe()
# special command to make plots appear inline
%matplotlib inline
# make plots in the design style of ggplot
pyplot.style.use('ggplot')
# create a bar graph showing the number of each fruit
fruitData.sort_values("fruits").plot.barh()
"""
Explanation: Pandas and Matplotlib
The following cells of code are simple examples of the pandas and Matplotlib packages. Try running each cell, using tool-tips, and editing the code to understand what these functions do.
This is nowhere near an intro to these packages. For an in-depth introduction to pandas try pandas-cookbook. For a brief intro to plotting in Jupyter notebooks check out Plotting with Matplotlib
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/time_series_prediction/labs/4_modeling_keras.ipynb | apache-2.0 | import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, DenseFeatures,
Conv1D, MaxPool1D,
Reshape, RNN,
LSTM, GRU, Bidirectional)
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
# To plot pretty figures
%matplotlib inline
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# For reproducible results.
from numpy.random import seed
seed(1)
tf.random.set_seed(2)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# make these settings available to shell (%%bash) cells
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
"""
Explanation: Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there is rarely a "one-size-fits-all" model, so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|:---------------:|
| Baseline | 0.295 |
| Linear | -- |
| DNN | -- |
| 1-d CNN | -- |
| simple RNN | -- |
| multi-layer RNN | -- |
| RNN using CNN features | -- |
| CNN using RNN features | -- |
Load necessary libraries and set up environment variables
End of explanation
"""
%%time
bq = bigquery.Client(project=PROJECT)
bq_query = '''
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
LIMIT
100
'''
df_stock_raw = bq.query(bq_query).to_dataframe()
df_stock_raw.head()
"""
Explanation: Explore time series data
We'll start by pulling a small sample of the time series data from BigQuery and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
End of explanation
"""
def clean_data(input_df):
"""Cleans data to prepare for training.
Args:
input_df: Pandas dataframe.
Returns:
Pandas dataframe.
"""
df = input_df.copy()
# Remove inf/na values.
real_valued_rows = ~(df == np.inf).max(axis=1)
df = df[real_valued_rows].dropna()
# TF doesn't accept datetimes in DataFrame.
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
# TF requires numeric label.
df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0,
'STAY': 1,
'UP': 2}[x])
return df
df_stock = clean_data(df_stock_raw)
df_stock.head()
"""
Explanation: The function clean_data below does three things:
1. First, we'll remove any inf or NA values
2. Next, we parse the Date field to read it as a string.
3. Lastly, we convert the label direction into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.
End of explanation
"""
STOCK_HISTORY_COLUMN = 'close_values_prior_260'
COL_NAMES = ['day_' + str(day) for day in range(0, 260)]
LABEL = 'direction_numeric'
def _scale_features(df):
"""z-scale feature columns of Pandas dataframe.
Args:
df: Pandas dataframe.
Returns:
Pandas dataframe with each column standardized according to the
values in that column.
"""
avg = df.mean()
std = df.std()
return (df - avg) / std
def create_features(df, label_name):
"""Create modeling features and label from Pandas dataframe.
Args:
df: Pandas dataframe.
label_name: str, the column name of the label.
Returns:
Pandas dataframe
"""
# Expand 1 column containing a list of close prices to 260 columns.
time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)
# Rename columns.
time_series_features.columns = COL_NAMES
time_series_features = _scale_features(time_series_features)
# Concat time series features with static features and label.
label_column = df[LABEL]
return pd.concat([time_series_features,
label_column], axis=1)
df_features = create_features(df_stock, LABEL)
df_features.head()
"""
Explanation: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
End of explanation
"""
ix_to_plot = [0, 1, 9, 5]
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
for ix in ix_to_plot:
label = df_features['direction_numeric'].iloc[ix]
example = df_features[COL_NAMES].iloc[ix]
ax = example.plot(label=label, ax=ax)
ax.set_ylabel('scaled price')
ax.set_xlabel('prior days')
ax.legend()
"""
Explanation: Let's plot a few examples and see that the preprocessing steps were implemented correctly.
End of explanation
"""
def _create_split(phase):
"""Create string to produce train/valid/test splits for a SQL query.
Args:
phase: str, either TRAIN, VALID, or TEST.
Returns:
String.
"""
floor, ceiling = '2002-11-01', '2010-07-01'
if phase == 'VALID':
floor, ceiling = '2010-07-01', '2011-09-01'
elif phase == 'TEST':
floor, ceiling = '2011-09-01', '2012-11-30'
return '''
WHERE Date >= '{0}'
AND Date < '{1}'
'''.format(floor, ceiling)
def create_query(phase):
"""Create SQL query to create train/valid/test splits on subsample.
Args:
phase: str, either TRAIN, VALID, or TEST.
Returns:
String.
"""
basequery = """
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
"""
return basequery + _create_split(phase)
bq = bigquery.Client(project=PROJECT)
for phase in ['TRAIN', 'VALID', 'TEST']:
# 1. Create query string
query_string = create_query(phase)
# 2. Load results into DataFrame
df = bq.query(query_string).to_dataframe()
# 3. Clean, preprocess dataframe
df = clean_data(df)
df = create_features(df, label_name='direction_numeric')
# 4. Write DataFrame to CSV
if not os.path.exists('../data'):
os.mkdir('../data')
df.to_csv('../data/stock-{}.csv'.format(phase.lower()),
index_label=False, index=False)
print("Wrote {} lines to {}".format(
len(df),
'../data/stock-{}.csv'.format(phase.lower())))
ls -la ../data
"""
Explanation: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
End of explanation
"""
N_TIME_STEPS = 260
N_LABELS = 3
Xtrain = pd.read_csv('../data/stock-train.csv')
Xvalid = pd.read_csv('../data/stock-valid.csv')
ytrain = Xtrain.pop(LABEL)
yvalid = Xvalid.pop(LABEL)
ytrain_categorical = to_categorical(ytrain.values)
yvalid_categorical = to_categorical(yvalid.values)
"""
Explanation: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
End of explanation
"""
def plot_curves(train_data, val_data, label='Accuracy'):
"""Plot training and validation metrics on single axis.
Args:
train_data: list, metrics obtained from training data.
val_data: list, metrics obtained from validation data.
label: str, title and label for plot.
Returns:
Matplotlib plot.
"""
plt.plot(np.arange(len(train_data)) + 0.5,
train_data,
"b.-", label="Training " + label)
plt.plot(np.arange(len(val_data)) + 1,
val_data, "r.-",
label="Validation " + label)
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel(label)
plt.grid(True)
"""
Explanation: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
End of explanation
"""
sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]
"""
Explanation: Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
End of explanation
"""
# TODO 1a
model = Sequential()
model.add( # TODO: Your code goes here.
model.compile( # TODO: Your code goes here.
history = model.fit( # TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
"""
Explanation: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
Lab Task #1a: In the cell below, create a linear model using the keras sequential API which maps the sequential input to a single dense fully connected layer.
End of explanation
"""
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
End of explanation
"""
#TODO 1b
model = Sequential()
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two-layer DNN here but feel free to try a more complex model or add any other additional techniques to try and improve your performance.
Lab Task #1b: In the cell below, create a deep neural network in keras to model direction_numeric. Experiment with different activation functions or add regularization to see how much you can improve performance.
End of explanation
"""
#TODO 1c
model = Sequential()
# Convolutional layer(s)
# TODO: Your code goes here.
# Flatten the result and pass through DNN.
# TODO: Your code goes here.
# Compile your model and train
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolution can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1D in TensorFlow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
Lab Task #1c: Create a 1D convolutional network in keras. You can experiment with different numbers of convolutional layers, filter sizes, kernel sizes, strides, pooling layers, etc. After passing through the convolutional layers, flatten the result and pass through a deep neural network to complete the model.
End of explanation
"""
#TODO 2a
model = Sequential()
# Reshape inputs to pass through RNN layer.
# TODO: Your code goes here.
# Compile your model and train
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
Lab Task #2a: Create an RNN model in keras. You can try different types of RNN cells like LSTMs, GRUs or a basic RNN cell. Experiment with different cell sizes, activation functions, regularization, etc.
End of explanation
"""
#TODO 2b
model = Sequential()
# Reshape inputs to pass through RNN layers.
# TODO: Your code goes here.
# Compile your model and train
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
Lab Task #2b: Now that you've seen how to build a single-layer RNN, create a deep, multi-layer RNN model. Look into how you should set the return_sequences variable when instantiating the layers of your RNN.
End of explanation
"""
#TODO 3a
model = Sequential()
# Reshape inputs for convolutional layer
# TODO: Your code goes here.
# Pass the convolutional features through RNN layer
# TODO: Your code goes here.
# Compile your model and train
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrent networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to an RNN model before prediction.
Lab Task #3a: Create a model that first passes through a 1D-Convolution then passes those sequential features through a sequential recurrent layer. You can experiment with different hyperparameters of the CNN and the RNN to see how much you can improve performance of your model.
End of explanation
"""
#TODO 3b
model = Sequential()
# Reshape inputs and pass through RNN layer.
# TODO: Your code goes here.
# Apply 1d convolution to RNN outputs.
# TODO: Your code goes here.
# Flatten the convolution output and pass through DNN.
# TODO: Your code goes here.
# Compile your model and train
# TODO: Your code goes here.
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
"""
Explanation: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
Lab Task #3b: Lastly, create a model that passes through the recurrent layer of an RNN first before applying a 1D-Convolution. As before, the result of the CNN is then flattened and passed through the fully connected layer(s).
End of explanation
"""
|
datascienceguide/datascienceguide.github.io | tutorials/Non-Linear-Regression-Tutorial.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from math import log
from sklearn import linear_model
#comment out the line below if not using IPython notebook
%matplotlib inline
# load data into a pandas dataframe
data = pd.read_csv('../datasets/log_regression_example.csv')
#view first five datapoints
print data[0:5]
#mistake I made yesterday
#change column labels to be more convenient (shorter)
data.columns = ['size', 'price']
#view first five datapoints
print data[0:5]
#problem is size is already a pandas method
# data.size will give the size of the data, not the column
data.size
"""
Explanation: Generalized Linear and Non-Linear Regression Tutorial
Author: Andrew Andrade (andrew@andrewandrade.ca)
First we will outline a solution to last weeks homework assignment by applying linear regression to a log transform of a dataset. We will then go into non-linear regression and linearized models for with a single explanatory variable. In the next tutorial we will learn how to apply this to multiple features (multi-regression)
Predicting House Prices by Applying Log Transform
Data inspired by http://davegiles.blogspot.ca/2011/03/curious-regressions.html
Given the task from last week of using linear regression to predict housing prices from the property size, let us first load the provided data, and peak at the first 5 data points.
End of explanation
"""
#rename columns to make indexing easier
data.columns = ['property_size', 'price']
plt.scatter(data.property_size, data.price, color='black')
plt.ylabel("Price of House ($million)")
plt.xlabel("Size of Property (m^2)")
plt.title("Price vs Size of House")
"""
Explanation: Now lets visualize the data. We are going to make the assumption that the price of the house is dependant on the size of property
End of explanation
"""
# generate pseudorandom number
# by setting a seed, the same random number is always generated
# this way by following along, you get the same plots
# meaning the results are reproducable.
# try changing the 1 to a different number
np.random.seed(3)
# shuffle data since we want to randomly split the data
shuffled_data= data.iloc[np.random.permutation(len(data))]
#notice how the x labels remain, but are now random
print(shuffled_data[0:5])
#train on the first 75% of the dataset
training_data = shuffled_data[0:len(shuffled_data)*3//4]
#test on the remaining 25% of the dataset
#note the +1 is since there is an odd number of datapoints
#the better practice is to use ShuffleSplit, which we will learn in a future tutorial
testing_data = shuffled_data[-len(shuffled_data)//4+1:-1]
#plot the training and test data on the same plot
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land")
X_train = training_data.property_size.values.reshape((len(training_data.property_size), 1))
y_train = training_data.price.values.reshape((len(training_data.price), 1))
X_test = testing_data.property_size.values.reshape((len(testing_data.property_size), 1))
y_test = testing_data.price.values.reshape((len(testing_data.price), 1))
X = np.linspace(0,800000)
X = X.reshape((len(X), 1))
# Create linear regression object
regr = linear_model.LinearRegression()
#Train the model using the training sets
regr.fit(X_train,y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
plt.plot(X, regr.predict(X), color='black',
linewidth=3)
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land")
"""
Explanation: We will learn how to implement cross-validation properly soon, but for now let us put the data in a random order (shuffle the rows) and use linear regression to fit a line on 75% of the data. We will then test the fit on the remaining 25%. Normally you would use scikit-learn's cross-validation functions, but here we implement the method ourselves so you understand what is going on.
DO NOT use this method for cross-validation in practice. You will later learn how to do k-fold cross-validation using scikit-learn's implementation. In this tutorial I implement cross-validation manually to build your intuition for what exactly holdout cross-validation is, but in the future we will learn a better way to do it.
End of explanation
"""
# map applies the log() function to every element
X_train_after_log = training_data.property_size.map(log)
#reshape back to matrix with 1 column
X_train_after_log = X_train_after_log.values.reshape((len(X_train_after_log), 1))
X_test_after_log = testing_data.property_size.map(log)
#reshape back to matrix with 1 column
X_test_after_log = X_test_after_log.values.reshape((len(X_test_after_log), 1))
X_after_log = np.linspace(min(X_train_after_log),max(X_train_after_log))
X_after_log = X_after_log.reshape((len(X_after_log), 1))
regr2 = linear_model.LinearRegression()
#fit linear regression
regr2.fit(X_train_after_log,y_train)
# The coefficients (note: regr2, the model fit on the log-transformed data)
print('Coefficients: \n', regr2.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr2.predict(X_test_after_log) - y_test) ** 2))
#np.exp takes the e^x, efficiently inversing the log transform
plt.plot(np.exp(X_after_log), regr2.predict(X_after_log), color='black',
linewidth=3)
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land")
"""
Explanation: We can see here that there is obviously a poor fit. There is going to be a very high residual sum of squares and there is no linear relationship. Since the data appears to follow $e^y = x$, we can apply a log transform to the data:
$$y = \ln(x)$$
For the purpose of this tutorial, I will apply the log transform, fit a linear model, then invert the log transform and plot the fit against the original data.
End of explanation
"""
plt.scatter(X_train_after_log, training_data.price, color='blue', label='training')
plt.scatter(X_test_after_log, testing_data.price, color='red', label='testing')
plt.plot(X_after_log, regr2.predict(X_after_log), color='black', linewidth=3)
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
"""
Explanation: The residual sum of squares on the test data after the log transform (0.07) is much lower than before, where we fit the data without the transform (0.32). The plot also looks much better: the model fits well for the smaller sizes of land and still fits the larger sizes roughly. An analyst might naively stop here and use this model after applying the log transform. As we learnt from the last tutorial, ALWAYS plot your data after you transform the features, since there might be hidden meanings in the data!
Run the code below to see hidden insight left in the data (after the log transform)
End of explanation
"""
#read csv
anscombe_ii = pd.read_csv('../datasets/anscombe_ii.csv')
plt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
"""
Explanation: The lesson learnt here is to always plot data (even after a transform) before blindly running a predictive model!
Generalized linear models
Now let's extend our knowledge to generalized linear models for the remaining three of the Anscombe quartet datasets. We will try to use our intuition to determine the best model.
End of explanation
"""
X_ii = anscombe_ii.x
y_ii = anscombe_ii.y
X_fit = np.linspace(min(X_ii),max(X_ii))
polynomial_degree = 2
p = np.polyfit(X_ii, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii, y_ii)
"""
Explanation: Instead of fitting a linear model to a transformation, we can also fit a polynomial to the data:
End of explanation
"""
np.random.seed(1)
x_noise = np.random.random(len(anscombe_ii.x))
X_ii_noisey = anscombe_ii.x + x_noise*3
X_fit = np.linspace(min(X_ii_noisey),max(X_ii_noisey))
polynomial_degree = 1
p = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p, X_ii_noisey) - y_ii)**2))
"""
Explanation: Lets add some random noise to the data, fit a polynomial and calculate the residual error.
End of explanation
"""
polynomial_degree = 5
p2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p2, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2))
"""
Explanation: Now can we fit a larger degree polynomial and reduce the error? Lets try and see:
End of explanation
"""
polynomial_degree = 10
p2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p2, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2))
"""
Explanation: What if we use a really high degree polynomial? Can we bring the error to zero? YES!
End of explanation
"""
#read csv
anscombe_iii = pd.read_csv('../datasets/anscombe_iii.csv')
plt.scatter(anscombe_iii.x, anscombe_iii.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
"""
Explanation: It is intuitive to see that we are overfitting, since the high-degree polynomial hits every single point (driving our mean squared error (MSE) to zero), but it would not generalize well. For example, if x=5, it would estimate y to be -45 when you would expect it to be above 0.
When you are dealing with more than one variable, it becomes increasingly difficult to spot overfitting, since you cannot plot past four or five dimensions (x axis, y axis, z axis, color and size). For this reason we should always use cross-validation to reduce our variance error (due to overfitting) while we are also reducing bias (due to underfitting). Throughout the course we will learn more about what this means, and learn practical tips.
The key takeaway here is that more complex models are not always better. Use visualizations and cross-validation to prevent overfitting! (We will learn more about this soon!)
Now, let us work on the third set of data from the quartet
End of explanation
"""
from sklearn import linear_model
X_iii = anscombe_iii.x.values.reshape((len(anscombe_iii), 1))
#fit basic linear model
model = linear_model.LinearRegression()
model.fit(X_iii, anscombe_iii.y)
# Robustly fit linear model with RANSAC algorithm
model_ransac = linear_model.RANSACRegressor(linear_model.LinearRegression())
model_ransac.fit(X_iii, anscombe_iii.y)
inlier_mask = model_ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
plt.plot(X_iii,model.predict(X_iii), color='blue',linewidth=3, label='Linear regressor')
plt.plot(X_iii,model_ransac.predict(X_iii), color='red', linewidth=3, label='RANSAC regressor')
plt.plot(X_iii[inlier_mask], anscombe_iii.y[inlier_mask], '.k', label='Inliers')
plt.plot(X_iii[outlier_mask], anscombe_iii.y[outlier_mask], '.g', label='Outliers')
plt.ylabel("Y")
plt.xlabel("X")
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
"""
Explanation: It is obvious that there is an outlier which is going to cause a poor fit with ordinary linear regression. One way is filtering out the outlier. One method could be to manually hardcode the removal of any value which seems to be incorrect. A better method would be to remove any point which is a given number of standard deviations away from the linear model, then fit a line to the remaining data points. Arguably, an even better method is using the RANSAC algorithm (demonstrated below), from the scikit-learn documentation on linear models, or using Theil-Sen regression
End of explanation
"""
#read csv
anscombe_iv = pd.read_csv('../datasets/anscombe_iv.csv')
plt.scatter(anscombe_iv.x, anscombe_iv.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
"""
Explanation: The takeaway here is to read the documentation and see if there is an already-implemented method of solving a problem. Chances are there are already prepackaged solutions; you just need to learn about them. Let's move on to the final quartet.
End of explanation
"""
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
plt.scatter(x, y)
from scipy import optimize
def piecewise_linear(x, x0, y0, k1, k2):
return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0])
p , e = optimize.curve_fit(piecewise_linear, x, y)
xd = np.linspace(0, 15, 100)
plt.scatter(x, y)
plt.plot(xd, piecewise_linear(xd, *p))
"""
Explanation: In this example, we can see that the X-axis value stays constant except for one measurement where x varies. Since we are trying to predict y in terms of x, as an analyst I would not use any model to describe this data, and would state that more data with different values of X is required. Additionally, depending on the problem, I could remove the outlier and treat this as univariate data.
The takeaway here is that sometimes a useful model cannot be made (garbage in, garbage out) until better data is available.
Non-linear and robust regression
Due to time restrictions, I can not present every method for regression, but depending on your specific problem and data, there are many other regression techniques which can be used:
http://scikit-learn.org/stable/auto_examples/ensemble/plot_adaboost_regression.html#example-ensemble-plot-adaboost-regression-py
http://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html#example-neighbors-plot-regression-py
http://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html#example-svm-plot-svm-regression-py
http://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/ols.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/robust_models_0.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/glm.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/gls.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/wls.html
http://cars9.uchicago.edu/software/python/lmfit/
Bonus example: Piecewise linear curve fitting
While I usually prefer to use more robustly implemented algorithms such as ridge or decision-tree-based regression (because with many features it becomes difficult to determine an adequate model for each feature), regression can also be done by fitting a piecewise function. Taken from here.
End of explanation
"""
#Piecewise function: 2nd-degree, 1st-degree and 3rd-degree polynomial segments
def piecewise_linear(x, x0, x1, y0, y1, k1, k2, k3, k4, k5, k6):
return np.piecewise(x, [x < x0, x>= x0, x> x1], [lambda x:k1*x + k2*x**2, lambda x:k3*x + y0, lambda x: k4*x + k5*x**2 + k6*x**3 + y1])
#Getting data using Pandas
df = pd.read_csv("../datasets/non-linear-piecewise.csv")
ms = df["ms"].values
degrees = df["Degrees"].values
plt.scatter(ms, degrees)
#Setting linspace and making the fit, make sure to make you data numpy arrays
x_new = np.linspace(ms[0], ms[-1], dtype=float)
m = np.array(ms, dtype=float)
deg = np.array(degrees, dtype=float)
guess = np.array( [100, 500, -30, 350, -0.1, 0.0051, 1, -0.01, -0.01, -0.01], dtype=float)
p , e = optimize.curve_fit(piecewise_linear, m, deg, p0=guess)
#Plotting data and fit
plt.plot(x_new, piecewise_linear(x_new, *p), '-', ms[::20], degrees[::20], 'o')
"""
Explanation: Bonus example 2: Piecewise Non-linear Curve Fitting
Now let us extend this to piecewise non-linear Curve Fitting. Taken from here
End of explanation
"""
SnShine/aima-python | text.ipynb | mit
from text import *
from utils import open_data
from notebook import psource
"""
Explanation: TEXT
This notebook serves as supporting material for topics covered in Chapter 22 - Natural Language Processing from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from text.py.
End of explanation
"""
psource(UnigramWordModel, NgramWordModel, UnigramCharModel, NgramCharModel)
"""
Explanation: CONTENTS
Text Models
Viterbi Text Segmentation
Information Retrieval
Information Extraction
Decoders
TEXT MODELS
Before we start analyzing text processing algorithms, we will need to build some language models. Those models serve as a look-up table for character or word probabilities (depending on the type of model). These models can give us the probabilities of words or character sequences appearing in text. Take as example "the". Text models can give us the probability of "the", P("the"), either as a word or as a sequence of characters ("t" followed by "h" followed by "e"). The first representation is called "word model" and deals with words as distinct objects, while the second is a "character model" and deals with sequences of characters as objects. Note that we can specify the number of words or the length of the char sequences to better suit our needs. So, given that number of words equals 2, we have probabilities in the form P(word1, word2). For example, P("of", "the"). For char models, we do the same but for chars.
It is also useful to store the conditional probabilities of words given preceding words. That means, given we found the words "of" and "the", what is the chance the next word will be "world"? More formally, P("world"|"of", "the"). Generalizing, P(Wi|Wi-1, Wi-2, ... , Wi-n).
We call the word model N-Gram Word Model (from the Greek "gram", the root of "write", or the word for "letter") and the char model N-Gram Character Model. In the special case where N is 1, we call the models Unigram Word Model and Unigram Character Model respectively.
In the text module we implement the two models (both their unigram and n-gram variants) by inheriting from the CountingProbDist from learning.py. Note that CountingProbDist does not return the actual probability of each object, but the number of times it appears in our test data.
For word models we have UnigramWordModel and NgramWordModel. We supply them with a text file and they show the frequency of the different words. We have UnigramCharModel and NgramCharModel for the character models.
Execute the cells below to take a look at the code.
End of explanation
"""
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['an'])
print(P2[('i', 'was')])
"""
Explanation: Next we build our models. The text file we will use to build them is Flatland, by Edwin A. Abbott. We will load it from here. In that directory you can find other text files we might get to use here.
Getting Probabilities
Here we will take a look at how to read text and find the probabilities for each model, and how to retrieve them.
First the word models:
End of explanation
"""
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P3 = NgramWordModel(3, wordseq)
print("Conditional Probabilities Table:", P3.cond_prob[('i', 'was')].dictionary, '\n')
print("Conditional Probability of 'once' give 'i was':", P3.cond_prob[('i', 'was')]['once'], '\n')
print("Next word after 'i was':", P3.cond_prob[('i', 'was')].sample())
"""
Explanation: We see that the most used word in Flatland is 'the', with 2081 occurrences, while the most used sequence is 'of the' with 368 occurrences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.
Below we take a look at how we can get information from the conditional probabilities of the model, and how we can generate the next word in a sequence.
End of explanation
"""
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramCharModel(wordseq)
P2 = NgramCharModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['z'])
print(P2[('g', 'h')])
"""
Explanation: First we print all the possible words that come after 'i was' and the times they have appeared in the model. Next we print the probability of 'once' appearing after 'i was', and finally we pick a word to proceed after 'i was'. Note that the word is picked according to its probability of appearing (high appearance count means higher chance to get picked).
Let's take a look at the two character models:
End of explanation
"""
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
P3 = NgramWordModel(3, wordseq)
print(P1.samples(10))
print(P2.samples(10))
print(P3.samples(10))
"""
Explanation: The most common letter is 'e', appearing more than 19000 times, and the most common sequence is "_t". That is, a space followed by a 't'. Note that even though we do not count spaces for word models or unigram character models, we do count them for n-gram char models.
Also, the probability of the letter 'z' appearing is close to 0.0006, while for the bigram 'gh' it is 0.003.
Generating Samples
Apart from reading the probabilities for n-grams, we can also use our model to generate word sequences, using the samples function in the word models.
End of explanation
"""
data = open_data("EN-text/flatland.txt").read()
data += open_data("EN-text/sense.txt").read()
wordseq = words(data)
P3 = NgramWordModel(3, wordseq)
P4 = NgramWordModel(4, wordseq)
P5 = NgramWordModel(5, wordseq)
P7 = NgramWordModel(7, wordseq)
print(P3.samples(15))
print(P4.samples(15))
print(P5.samples(15))
print(P7.samples(15))
"""
Explanation: For the unigram model, we mostly get gibberish, since each word is picked according to its frequency of appearance in the text, without taking preceding words into consideration. As we increase n though, we start to get samples that have some semblance of coherency and are a little reminiscent of normal English. As we add more data, these samples will get better.
Let's try it. We will add to the model more data to work with and let's see what comes out.
End of explanation
"""
psource(viterbi_segment)
"""
Explanation: Notice how the samples become more and more reasonable as we add more data and increase the n parameter. We still have a long way to go to realistic text generation, but at the same time we can see that with enough data even rudimentary algorithms can output something almost passable.
VITERBI TEXT SEGMENTATION
Overview
We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the Viterbi Segmentation algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
The algorithm operates in a dynamic programming fashion. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and recovers the complete sequence of words.
Implementation
End of explanation
"""
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramWordModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
"""
Explanation: The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
The "window" is w and it includes the characters from j to i. We use it to "build" the following sequence: from the start to j and then w. We have previously calculated the probability from the start to j, so now we multiply that probability by P[w] to get the probability of the whole sequence. If that probability is greater than the probability we have calculated so far for the sequence from the start to i (best[i]), we update it.
Example
The model the algorithm uses is the UnigramTextModel. First we will build the model using the Flatland text and then we will try and separate a space-devoid sentence.
End of explanation
"""
psource(IRSystem)
"""
Explanation: The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
INFORMATION RETRIEVAL
Overview
With Information Retrieval (IR) we find documents that are relevant to a user's needs for information. A popular example is a web search engine, which finds and presents to a user pages relevant to a query. Information retrieval is not limited only to returning documents though, but can also be used for other type of queries. For example, answering questions when the query is a question, returning information when the query is a concept, and many other applications. An IR system is comprised of the following:
A body (called corpus) of documents: A collection of documents, where the IR will work on.
A query language: A query represents what the user wants.
Results: The documents the system grades as relevant to a user's query and needs.
Presententation of the results: How the results are presented to the user.
How does an IR system determine which documents are relevant though? We can mark a document as relevant if all the words in the query appear in it, and mark it as irrelevant otherwise. We can even extend the query language to support boolean operations (for example, "paint AND brush") and then mark as relevant the outcome of the query for the document. This technique though does not give a level of relevancy. All the documents are either relevant or irrelevant, but in reality some documents are more relevant than others.
So, instead of a boolean relevancy system, we use a scoring function. There are many scoring functions around for many different situations. One of the most used takes into account the frequency of the words appearing in a document, the frequency of a word appearing across documents (for example, the word "a" appears a lot, so it is not very important) and the length of a document (since large documents will have more occurrences of the query terms, but a short document with a lot of occurrences seems very relevant). We combine these properties in a formula and we get a numeric score for each document, so we can then quantify relevancy and pick the best documents.
These scoring functions are not perfect though and there is room for improvement. For instance, the above scoring function assumes each word is independent. That is not the case though, since words can share meaning. For example, the words "painter" and "painters" are closely related. If in a query we have the word "painter" and in a document the word "painters" appears a lot, this might be an indication that the document is relevant, but we are missing out since we are only looking for "painter". There are a lot of ways to combat this. One of them is to reduce the query and document words to their stems. For example, both "painter" and "painters" have "paint" as their stem form. This can slightly improve the performance of the algorithms.
To determine how good an IR system is, we give the system a set of queries (for which we know the relevant pages beforehand) and record the results. The two measures for performance are precision and recall. Precision measures the proportion of result documents that actually are relevant. Recall measures the proportion of relevant documents (which, as mentioned before, we know in advance) appearing in the result documents.
Implementation
You can read the source code by running the command below:
End of explanation
"""
psource(UnixConsultant)
"""
Explanation: The stopwords argument signifies words in the queries that should not be accounted for in documents. Usually they are very common words that do not add any significant information for a document's relevancy.
A quick guide for the functions in the IRSystem class:
index_document: Add document to the collection of documents (named documents), which is a list of tuples. Also, count how many times each word in the query appears in each document.
index_collection: Index a collection of documents given by filenames.
query: Returns a list of n pairs of (score, docid) sorted on the score of each document. Also takes care of the special query "learn: X", where instead of the normal functionality we present the output of the terminal command "X".
score: Scores a given document for a given word using log(1+k)/log(1+n), where k is the number of times the query word appears in the document and n is the total number of words in the document. Other scoring functions can be used, and you can overwrite this function to better suit your needs.
total_score: Calculates the sum of the scores of all the query words in a given document.
present/present_results: Presents the results as a list.
We also have the class Document that holds metadata of documents, like their title, url and number of words. An additional class, UnixConsultant, can be used to initialize an IR System for Unix command manuals. This is the example we will use to showcase the implementation.
Example
First let's take a look at the source code of UnixConsultant.
End of explanation
"""
uc = UnixConsultant()
q = uc.query("how do I remove a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
"""
Explanation: The class creates an IR System with the stopwords "how do i the a of". We could add more words to exclude, but the queries we will test will generally be in that format, so it is convenient. After the initialization of the system, we get the manual files and start indexing them.
Let's build our Unix consultant and run a query:
End of explanation
"""
q = uc.query("how do I delete a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
"""
Explanation: We asked how to remove a file and the top result was the rm (the Unix command for remove) manual. This is exactly what we wanted! Let's try another query:
End of explanation
"""
plaintext = "ABCDWXYZ"
ciphertext = shift_encode(plaintext, 3)
print(ciphertext)
"""
Explanation: Even though we are basically asking for the same thing, we got a different top result. The diff command shows the differences between two files. So the system failed us and presented an irrelevant document. Why is that? Unfortunately, our IR system considers each word independently. "Remove" and "delete" have similar meanings, but since they are different words our system will not make the connection. So the diff manual, which mentions the word "delete" a lot, gets the nod ahead of other manuals, while the rm one isn't in the result set since it doesn't use the word at all.
INFORMATION EXTRACTION
Information Extraction (IE) is a method for finding occurrences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match strings in a text.
A typical example of such a model is reading prices from web pages. Prices usually appear after a dollar sign and consist of numbers, possibly followed by two decimal digits. Before the price there will usually appear a string like "price:". Let's build a sample template.
With the following regular expression (regex) we can extract prices from text:
[$][0-9]+([.][0-9][0-9])?
Where + means 1 or more occurrences and ? means at most 1 occurrence. Usually a template consists of a prefix, a target and a postfix regex. In this template, the prefix regex can be "price:", the target regex can be the regex above and the postfix regex can be empty.
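The prefix-plus-target template above can be sketched with Python's re module (the exact prefix pattern is an assumption of this sketch):

```python
import re

# Prefix ("price", optional colon, whitespace) followed by the target
# regex from the text: [$][0-9]+([.][0-9][0-9])?
PRICE = re.compile(r"[Pp]rice:?\s*(\$[0-9]+(?:\.[0-9][0-9])?)")

text = "Price: $90, special offer price $70.50, shipping cost $5"
matches = PRICE.findall(text)
```

The "$5" shipping cost is not captured because it lacks the "price" prefix, which is exactly the filtering role the prefix regex plays in the template.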
A template can match with multiple strings. If this is the case, we need a way to resolve the multiple matches. Instead of having just one template, we can use multiple templates (ordered by priority) and pick the match from the highest-priority template. We can also use other ways to pick. For the dollar example, we can pick the match closer to the numerical half of the highest match. For the text "Price $90, special offer $70, shipping $5" we would pick "$70" since it is closer to the half of the highest match ("$90").
The above is called attribute-based extraction, where we want to find attributes in the text (in the example, the price). A more sophisticated extraction system aims at dealing with multiple objects and the relations between them. When such a system reads the text "$100", it should determine not only the price but also which object has that price.
Relation extraction systems can be built as a series of finite state automata. Each automaton receives text as input, performs transformations on the text and passes it on to the next automaton as input. An automata setup can consist of the following stages:
Tokenization: Segments text into tokens (words, numbers and punctuation).
Complex-word Handling: Handles complex words such as "give up", or even names like "Smile Inc.".
Basic-group Handling: Handles noun and verb groups, segmenting the text into strings of verbs or nouns (for example, "had to give up").
Complex Phrase Handling: Handles complex phrases using finite-state grammar rules. For example, "Human+PlayedChess("with" Human+)?" can be one template/rule for capturing a relation of someone playing chess with others.
Structure Merging: Merges the structures built in the previous steps.
Finite-state, template based information extraction models work well for restricted domains, but perform poorly as the domain becomes more and more general. There are many models though to choose from, each with its own strengths and weaknesses. Some of the models are the following:
Probabilistic: Using Hidden Markov Models, we can extract information in the form of prefix, target and postfix from a given text. Two advantages of using HMMs over templates is that we can train HMMs from data and don't need to design elaborate templates, and that a probabilistic approach behaves well even with noise. In a regex, if one character is off, we do not have a match, while with a probabilistic approach we have a smoother process.
Conditional Random Fields: One problem with HMMs is the assumption of state independence. CRFs are very similar to HMMs, but they don't have the latter's constraint. In addition, CRFs make use of feature functions, which act as transition weights. For example, if for observation $e_{i}$ and state $x_{i}$ we have $e_{i}$ is "run" and $x_{i}$ is the state ATHLETE, we can have $f(x_{i}, e_{i}) = 1$ and equal to 0 otherwise. We can use multiple, overlapping features, and we can even use features for state transitions. Feature functions don't have to be binary (like the above example) but they can be real-valued as well. Also, we can use any $e$ for the function, not just the current observation. To bring it all together, we weigh a transition by the sum of features.
Ontology Extraction: This is a method for compiling information and facts in a general domain. A fact can be in the form of NP is NP, where NP denotes a noun-phrase. For example, "Rabbit is a mammal".
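Returning to the CRF model above, the "sum of weighted features" used to weigh transitions can be written out explicitly in the standard linear-chain form (textbook notation, not taken from the text):

```latex
P(x_{1:T} \mid e_{1:T}) = \frac{1}{Z(e)}
  \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(x_{t-1}, x_t, e, t) \Big)
```

where each $f_k$ is a (possibly real-valued) feature function, $\lambda_k$ is its learned weight, and $Z(e)$ is the normalizing constant obtained by summing over all state sequences.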
DECODERS
Introduction
In this section we will try to decode ciphertext using probabilistic text models. A ciphertext is obtained by performing encryption on a text message. This encryption lets us communicate safely, as anyone who has access to the ciphertext but doesn't know how to decode it cannot read the message. We will restrict our study to <b>Monoalphabetic Substitution Ciphers</b>. These are primitive forms of cipher where each letter in the message text (also known as plaintext) is replaced by another another letter of the alphabet.
Shift Decoder
The Caesar cipher
The Caesar cipher, also known as the shift cipher, is a form of monoalphabetic substitution cipher where each letter is <i>shifted</i> by a fixed value. A shift by <b>n</b> in this context means that each letter in the plaintext is replaced with the letter n places down the alphabet. For example, the plaintext "ABCDWXYZ" shifted by 3 yields "DEFGZABC". Note how X became A. This is because the alphabet is cyclic, i.e. the letter after the last letter in the alphabet, Z, is the first letter of the alphabet - A.
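A minimal shift encoder for the uppercase Latin alphabet might look like this (our own sketch; the module's actual `shift_encode` may differ in details):

```python
def shift_encode(plaintext, n):
    """Replace each letter with the letter n places down the alphabet,
    wrapping around from Z back to A; other characters are left alone."""
    result = []
    for ch in plaintext.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord('A') + n) % 26 + ord('A')))
        else:
            result.append(ch)
    return ''.join(result)

print(shift_encode("ABCDWXYZ", 3))  # → DEFGZABC
```

The modulo by 26 is what makes the alphabet cyclic; applying a shift of 13 twice (ROT13) returns the original text.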
End of explanation
"""
print(bigrams('this is a sentence'))
"""
Explanation: Decoding a Caesar cipher
To decode a Caesar cipher we exploit the fact that not all letters in the alphabet are used equally. Some letters are used more than others and some pairs of letters are more probable to occur together. We call a pair of consecutive letters a <b>bigram</b>.
End of explanation
"""
%psource ShiftDecoder
"""
Explanation: We use CountingProbDist to get the probability distribution of bigrams. The Latin alphabet consists of only 26 letters, which limits the total number of possible substitutions to 26. We reverse the shift encoding for a given n and check how probable the result is using the bigram distribution. We try all 26 values of n, i.e. from n = 0 to n = 25, and use the value of n which gives the most probable plaintext.
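The brute-force idea can be sketched without the library: undo every possible shift and keep the candidate whose bigrams best match a reference text (a simplified scorer of our own; the real `ShiftDecoder` uses `CountingProbDist`):

```python
import math
from collections import Counter

def shift(text, n):
    """Shift letters n places down the (uppercase) alphabet; keep the rest."""
    return ''.join(chr((ord(c) - 65 + n) % 26 + 65) if c.isalpha() else c
                   for c in text.upper())

def bigrams(text):
    return [text[i:i + 2] for i in range(len(text) - 1)]

def decode_shift(ciphertext, reference_text):
    """Try undoing every shift; keep the most probable candidate."""
    counts = Counter(bigrams(reference_text.upper()))
    total = sum(counts.values())

    def score(candidate):
        # add-one smoothed log-probability of the candidate's bigrams
        return sum(math.log((counts[b] + 1) / (total + 26 * 26))
                   for b in bigrams(candidate))

    return max((shift(ciphertext, -n) for n in range(26)), key=score)

secret = shift("THIS IS A SECRET MESSAGE", 13)
print(decode_shift(secret, "THE SECRET OF THIS MESSAGE IS THAT IT IS A MESSAGE"))
# → THIS IS A SECRET MESSAGE
```

A wrong shift produces bigrams that are rare or absent in the reference text, so its smoothed log-probability is far lower than that of the true plaintext.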
End of explanation
"""
plaintext = "This is a secret message"
ciphertext = shift_encode(plaintext, 13)
print('The code is', '"' + ciphertext + '"')
flatland = open_data("EN-text/flatland.txt").read()
decoder = ShiftDecoder(flatland)
decoded_message = decoder.decode(ciphertext)
print('The decoded message is', '"' + decoded_message + '"')
"""
Explanation: Example
Let us encode a secret message using the Caesar cipher and then try decoding it using ShiftDecoder. We will again use flatland.txt to build the text model.
End of explanation
"""
psource(PermutationDecoder)
"""
Explanation: Permutation Decoder
Now let us try to decode messages encrypted by a general monoalphabetic substitution cipher. The letters in the alphabet can be replaced by any permutation of letters. For example if the alphabet consisted of {A B C} then it can be replaced by {A C B}, {B A C}, {B C A}, {C A B}, {C B A} or even {A B C} itself. Suppose we choose the permutation {C B A}, then the plain text "CAB BA AAC" would become "ACB BC CCA". We can see that the Caesar cipher is also a form of permutation cipher where the permutation is a cyclic permutation. Unlike the Caesar cipher, it is infeasible to try all possible permutations. The number of possible permutations of the Latin alphabet is 26! which is of the order $10^{26}$. We use graph search algorithms to search for a 'good' permutation.
End of explanation
"""
ciphertexts = ['ahed world', 'ahed woxld']
pd = PermutationDecoder(canonicalize(flatland))
for ctext in ciphertexts:
print('"{}" decodes to "{}"'.format(ctext, pd.decode(ctext)))
"""
Explanation: Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter, it means the letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation. We use the word 'potential' because not every dictionary necessarily represents a valid permutation, since a permutation cannot have repeating elements. For example the dictionary {'A': 'B', 'C': 'X'} is invalid because 'A' is replaced by 'B', but so is 'B' itself, because the dictionary doesn't have a mapping for 'B'. Two dictionaries can also represent the same permutation, e.g. {'A': 'C', 'C': 'A'} and {'A': 'C', 'B': 'B', 'C': 'A'} represent the same permutation where 'A' and 'C' are interchanged and all other letters remain unaltered. To ensure we get a valid permutation, a goal state must map all letters in the alphabet. We also prevent repetitions in the permutation by allowing only those actions which go to a new state/node in which the newly added letter in the dictionary maps to a previously unmapped letter. These two rules together ensure that the dictionary of a goal state will represent a valid permutation.
The score of a state is determined using word scores, unigram scores, and bigram scores. Experiment with different weightings for the word, unigram and bigram scores and see how they affect the decoding.
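The validity rules described above can be expressed directly as a small check (our own sketch; following the description, letters absent from the dictionary implicitly map to themselves):

```python
def is_valid_permutation_map(mapping):
    """Can this letter-to-letter dict still represent a valid permutation?
    Letters absent from the dict implicitly map to themselves."""
    values = list(mapping.values())
    if len(values) != len(set(values)):   # two keys share a target letter
        return False
    # an explicit target must itself be remapped (or be a fixed point),
    # otherwise it collides with the implicit identity of that letter
    return all(v in mapping or v == k for k, v in mapping.items())

print(is_valid_permutation_map({'A': 'B', 'C': 'X'}))            # → False
print(is_valid_permutation_map({'A': 'C', 'B': 'B', 'C': 'A'}))  # → True
```

The first example fails for exactly the reason given in the text: 'A' is mapped to 'B', but the unmapped 'B' also maps to 'B'.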
End of explanation
"""
pysal/pysal | notebooks/viz/splot/mapping_vba.ipynb | bsd-3-clause

import pysal.lib as lp
from pysal.lib import examples
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
%matplotlib inline
"""
Explanation: Mapping with splot and PySAL
Imports
End of explanation
"""
link_to_data = examples.get_path('columbus.shp')
gdf = gpd.read_file(link_to_data)
gdf.columns
"""
Explanation: Data Preparation
Load example data into a geopandas.GeoDataFrame and inspect column names. In this example we will use the columbus.shp file containing neighborhood crime data of 1980.
End of explanation
"""
x = gdf['HOVAL'].values
y = gdf['CRIME'].values
"""
Explanation: We extract two arrays x (housing value (in $1,000)) and y (residential burglaries and vehicle thefts per 1000 households).
End of explanation
"""
from pysal.viz.splot.mapping import vba_choropleth
"""
Explanation: Create Value-by-Alpha Choropleths using the splot.mapping functionality
What is a Value by Alpha choropleth?
In a nutshell, a Value-by-Alpha Choropleth is a bivariate choropleth that uses the values of the second input variable y as a transparency mask, determining how much of the choropleth displaying the values of a first variable x is shown. In comparison to a cartogram, Value-By-Alpha choropleths will not distort shapes and sizes but modify the alpha channel (transparency) of polygons according to the second input variable y.
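Stripped of the splot machinery, the core idea is just an RGBA array whose color channels come from x and whose alpha channel comes from y. A minimal numpy sketch (our own illustration; the function name and color choices are hypothetical, not splot's API):

```python
import numpy as np

def vba_colors(x, y, low=(0.0, 0.0, 1.0), high=(1.0, 0.0, 0.0)):
    """Blend each polygon's color between `low` and `high` according to x,
    and set its opacity from the normalized y value."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xn = (x - x.min()) / (x.max() - x.min())          # normalize to [0, 1]
    alpha = (y - y.min()) / (y.max() - y.min())       # y drives transparency
    rgb = np.outer(1 - xn, low) + np.outer(xn, high)  # linear color blend
    return np.column_stack([rgb, alpha])              # one RGBA row per shape

rgba = vba_colors([10, 20, 30], [5, 1, 3])
print(rgba[1])  # middle shape: blended color, fully transparent (alpha 0)
```

Passing such an array as the facecolors of a polygon collection reproduces the basic VBA effect; splot additionally supports classifying both variables with mapclassify before the blend.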
Let's look at a couple of examples generated with splot.
End of explanation
"""
# Create new figure
fig, axs = plt.subplots(1,2, figsize=(20,10))
# use gdf.plot() to create regular choropleth
gdf.plot(column='HOVAL', scheme='quantiles', cmap='RdBu', ax=axs[0])
# use vba_choropleth to create Value-by-Alpha Choropleth
vba_choropleth(x, y, gdf, rgb_mapclassify=dict(classifier='quantiles'),
alpha_mapclassify=dict(classifier='quantiles'),
cmap='RdBu', ax=axs[1])
# set figure style
axs[0].set_title('normal Choropleth')
axs[0].set_axis_off()
axs[1].set_title('Value-by-Alpha Choropleth')
# plot
plt.show()
"""
Explanation: We can create a value by alpha map using splot's vba_choropleth functionality.
We will plot a Value-by-Alpha Choropleth with x defining the rgb values and y defining the alpha value. For comparison we plot a choropleth of x with gdf.plot():
End of explanation
"""
# Create new figure
fig, axs = plt.subplots(2,2, figsize=(20,10))
# classifier quantiles
vba_choropleth(x, y, gdf, cmap='viridis', ax = axs[0,0],
rgb_mapclassify=dict(classifier='quantiles', k=3),
alpha_mapclassify=dict(classifier='quantiles', k=3))
# classifier natural_breaks
vba_choropleth(x, y, gdf, cmap='viridis', ax = axs[0,1],
rgb_mapclassify=dict(classifier='natural_breaks'),
alpha_mapclassify=dict(classifier='natural_breaks'))
# classifier std_mean
vba_choropleth(x, y, gdf, cmap='viridis', ax = axs[1,0],
rgb_mapclassify=dict(classifier='std_mean'),
alpha_mapclassify=dict(classifier='std_mean'))
# classifier fisher_jenks
vba_choropleth(x, y, gdf, cmap='viridis', ax = axs[1,1],
rgb_mapclassify=dict(classifier='fisher_jenks', k=3),
alpha_mapclassify=dict(classifier='fisher_jenks', k=3))
plt.show()
"""
Explanation: You can see the original choropleth is fading into transparency wherever there is a high y value.
You can use the option to bin or classify your x and y values. splot uses mapclassify to bin your data and displays the new color and alpha ranges:
End of explanation
"""
color_list = ['#a1dab4','#41b6c4','#225ea8']
vba_choropleth(x, y, gdf, cmap=color_list,
rgb_mapclassify=dict(classifier='quantiles', k=3),
alpha_mapclassify=dict(classifier='quantiles'))
plt.show()
"""
Explanation: Instead of using a colormap you can also pass a list of colors:
End of explanation
"""
# Create new figure
fig, axs = plt.subplots(1,2, figsize=(20,10))
# create a vba_choropleth
vba_choropleth(x, y, gdf, rgb_mapclassify=dict(classifier='quantiles'),
alpha_mapclassify=dict(classifier='quantiles'),
cmap='RdBu', ax=axs[0],
revert_alpha=False)
# set revert_alpha argument to True
vba_choropleth(x, y, gdf, rgb_mapclassify=dict(classifier='quantiles'),
alpha_mapclassify=dict(classifier='quantiles'),
cmap='RdBu', ax=axs[1],
revert_alpha = True)
# set figure style
axs[0].set_title('revert_alpha = False')
axs[1].set_title('revert_alpha = True')
# plot
plt.show()
"""
Explanation: Sometimes it is important in geospatial analysis to actually see the high values and let the small values fade out. With the revert_alpha = True argument, you can revert the transparency of the y values.
End of explanation
"""
# create new figure
fig, axs = plt.subplots(1,2, figsize=(20,10))
# create a vba_choropleth
vba_choropleth(x, y, gdf, cmap='RdBu',
divergent=False, ax=axs[0])
# set divergent to True
vba_choropleth(x, y, gdf, cmap='RdBu',
divergent=True, ax=axs[1])
# set figure style
axs[0].set_title('divergent = False')
axs[0].set_axis_off()
axs[1].set_title('divergent = True')
# plot
plt.show()
"""
Explanation: You can use the divergent argument to display divergent alpha values. This means values at the extremes of your data range will be displayed with an alpha value of 1. Values towards the middle of your data range will be mapped more and more invisible towards an alpha value of 0.
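The divergent mapping itself boils down to measuring distance from the middle of the data range (our own sketch of the idea, not splot's actual implementation):

```python
import numpy as np

def divergent_alpha(values):
    """Alpha 1 at the extremes of the data range, fading to 0 in the middle."""
    v = np.asarray(values, dtype=float)
    normalized = (v - v.min()) / (v.max() - v.min())
    return np.abs(2.0 * normalized - 1.0)

# extremes fully opaque, the median of the range fully transparent
print(divergent_alpha([0, 25, 50, 75, 100]))
```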
End of explanation
"""
from pysal.viz.splot._viz_utils import shift_colormap
# shift the midpoint to the 80th percentile of your data range
mid08 = shift_colormap('RdBu', midpoint=0.8)
# shift the midpoint to the 20th percentile of your data range
mid02 = shift_colormap('RdBu', midpoint=0.2)
# create new figure
fig, axs = plt.subplots(1,2, figsize=(20,10))
# vba_choropleth with cmap mid08
vba_choropleth(x, y, gdf, cmap=mid08, ax=axs[0], divergent=True)
# vba_choropleth with cmap mid02
vba_choropleth(x, y, gdf, cmap=mid02, ax=axs[1], divergent=True)
# plot
plt.show()
"""
Explanation: Create your own cmap for plotting
Sometimes you need to display divergent values with a natural midpoint that does not coincide with the median of your data, for example if you measure the temperature over a country ranging from -2 to 10 degrees Celsius, or if you need to assess whether a certain threshold is reached.
For cases like this splot provides a utility function to shift your colormap.
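The core of such a utility is a piecewise-linear remapping that sends a chosen data fraction to the colormap's center. A sketch of that remapping (our own; splot's shift_colormap wraps this idea into a new matplotlib colormap object):

```python
def shift_position(u, midpoint):
    """Remap u in [0, 1] so that `midpoint` lands on the colormap's center:
    values below the midpoint are compressed into [0, 0.5], values above
    it are compressed into [0.5, 1]."""
    if u <= midpoint:
        return 0.5 * u / midpoint
    return 0.5 + 0.5 * (u - midpoint) / (1.0 - midpoint)

print(shift_position(0.8, 0.8))  # → 0.5 (the 80% point gets the middle color)
```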
End of explanation
"""
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111)
vba_choropleth(x, y, gdf,
alpha_mapclassify=dict(classifier='quantiles', k=5),
rgb_mapclassify=dict(classifier='quantiles', k=5),
legend=True, ax=ax)
plt.show()
"""
Explanation: Add a legend
If your values are classified, you have the option to add a legend to your map.
End of explanation
"""
planet-os/notebooks | api-examples/cams_covid_analysis.ipynb | mit

%matplotlib notebook
%matplotlib inline
import numpy as np
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
import ipywidgets as widgets
from mpl_toolkits.basemap import Basemap,shiftgrid
import dh_py_access.package_api as package_api
import matplotlib.colors as colors
import pandas as pd
import warnings
import shutil
import imageio
import datetime
import os
warnings.filterwarnings("ignore")
"""
Explanation: Analyzing the Air Pollution Decrease Caused by the Global COVID-19 Pandemic
Last December 2019, we heard about the first COVID-19 cases in China.
Now, three months later, the WHO has officially declared Coronavirus outbreak as a pandemic and also an emergency of international concern.
The ongoing outbreak doesn't giving signs of getting better in any way, however, there is alway something good in bad. The air pollution has decreased dramatically over past month and they are saying that it could same even more lives than COVID-19 takes. In light of that, we would like to introduce a high-quality global air pollution reanalysis and high-quality global air pollution near-realtime forecast dataset we have in the Planet OS Datahub where the first one provides air quality data from 2008-2018 and second a 5-day air quality forecast.
The Copernicus Atmosphere Monitoring Service uses a comprehensive global monitoring and forecasting system that estimates the state of the atmosphere on a daily basis, combining information from models and observations, to provide a daily 5-day global surface forecast.
The CAMS reanalysis dataset covers the period January 2003 to 2018. The CAMS reanalysis is the latest global reanalysis data set of atmospheric composition (AC) produced by the Copernicus Atmosphere Monitoring Service (CAMS), consisting of 3-dimensional time-consistent AC fields, including aerosols, chemical species and greenhouse gases (GHGs). The data set builds on the experience gained during the production of the earlier MACC reanalysis and CAMS interim reanalysis.
In this analysis we’ve used PM2.5 in the analysis as these particles, often described as the fine particles, are up to 30 times smaller than the width of a human hair. These tiny particles are small enough to be breathed deep into the lungs, making them very dangerous to people’s health.
As we would like to have data over large areas, we will download the data by using the Package API.
End of explanation
"""
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
"""
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
"""
dh = datahub.datahub(server,version,API_key)
dataset_nrt = 'cams_nrt_forecasts_global'
dataset_rean = 'ecmwf_cams_reanalysis_global_v1'
variable_name1 = 'pm2p5'
"""
Explanation: First, we need to define the dataset names and the variable we want to use.
End of explanation
"""
area_name = 'Europe'
latitude_north = 63; longitude_west = -18
latitude_south = 35; longitude_east = 30
time_start = '2008-01-01T00:00:00'
time_end = '2019-01-01T00:00:00'
"""
Explanation: Then we define the spatial range. We decided to analyze the US, where unfortunately catastrophic wildfires are taking place at the moment and influencing air quality.
End of explanation
"""
package_cams = package_api.package_api(dh,dataset_rean,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start=time_start,time_end=time_end,area_name=area_name)
package_cams.make_package()
package_cams.download_package()
package_cams_nrt = package_api.package_api(dh,dataset_nrt,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name)
package_cams_nrt.make_package()
package_cams_nrt.download_package()
"""
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
End of explanation
"""
dd1 = xr.open_dataset(package_cams.local_file_name)
dd1['lon'] = dd1['lon']
dd1['pm2p5_micro'] = dd1.pm2p5 * 1000000000.
dd1.pm2p5_micro.data[dd1.pm2p5_micro.data < 0] = np.nan
dd2 = xr.open_dataset(package_cams_nrt.local_file_name)
dd2['pm2p5_micro'] = dd2.pm2p5 * 1000000000.
dd2.pm2p5_micro.data[dd2.pm2p5_micro.data < 0] = np.nan
year_ago = (pd.to_datetime(dd2.time[0].data) - datetime.timedelta(days=365+366)).strftime('%Y-%m-%dT%H:%M:%S')
data_rean = dd1.pm2p5_micro.sel(time=str(year_ago))
data_nrt= dd2.pm2p5_micro[0]
data_rean_shifted, lon1 = shiftgrid(180,data_rean,dd1.lon.values,start=False)
data_nrt_shifted, lon2 = shiftgrid(180,data_nrt,dd2.longitude.values,start=False)
"""
Explanation: Work with the downloaded files
We start by opening the files with xarray and adding PM2.5 in micrograms per cubic meter as well, to make the values easier to understand and compare. After that, we will create a map plot with a time slider, then make a GIF using the images, and finally, we will look into a specific location.
End of explanation
"""
dd2
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(lon1,dd1.lat.data)
lonmap,latmap = m(lons,lats)
lons_n,lats_n = np.meshgrid(lon2,dd2.latitude.data)
lonmap_nrt,latmap_nrt = m(lons_n,lats_n)
"""
Explanation: Here we are making a Basemap of the US that we will use for showing the data.
End of explanation
"""
vmax = 100
vmin = 1
dd1
dd2
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(121)
pcm = m.pcolormesh(lonmap,latmap,data_rean_shifted,
vmin = vmin,vmax=vmax,cmap = 'rainbow')
plt.title(str(data_rean.time.data)[:-10])
m.drawcoastlines()
m.drawcountries()
m.drawstates()
ax2 = fig.add_subplot(122)
pcm2 = m.pcolormesh(lonmap_nrt,latmap_nrt,data_nrt_shifted,
vmin = vmin,vmax=vmax,cmap = 'rainbow')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.03, pad=0.040)
plt.title(str(data_nrt.time.data)[:-10])
cbar.set_label('micrograms m^3')
plt.savefig('201819marchvs2020.png',dpi=300)
"""
Explanation: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose a time stamp by using a slider.
As the minimum and maximum values are very different, we are using a logarithmic colorbar to visualize the data better.
On the map we can see that the very high PM2.5 values appear in different states. The maximums are, most of the time, near 1000 µg/m3, which is far above the norm (25 µg/m3). By using the slider we can see the air quality forecast, which shows how the pollution is expected to spread.
We are also adding a red dot to the map to mark the area where PM2.5 is the highest. It seems to move around a lot, with many wildfires influencing it. We can also see that most of the continental US has PM2.5 values below the standard of 25 µg/m3, but in places where wildfires are taking place, values tend to be well over 100 µg/m3.
End of explanation
"""
def make_ani():
folder = './anim/'
for k in range(len(dd1.pm2p5_micro)):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
if not os.path.exists(filename):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])
ax.set_xlim()
cbar.set_label('micrograms m^3')
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename,bbox_inches = 'tight')
plt.close()
files = sorted(os.listdir(folder))
images = []
for file in files:
if not file.startswith('.'):
filename = folder + file
images.append(imageio.imread(filename))
kargs = { 'duration': 0.1,'quantizer':2,'fps':5.0}
imageio.mimsave('cams_pm2p5.gif', images, **kargs)
print ('GIF is saved as cams_pm2p5.gif under current working directory')
shutil.rmtree(folder)
make_ani()
"""
Explanation: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
With the function below we will save the images you saw above to the local filesystem as a GIF, so it is easy to share with others.
End of explanation
"""
lon = -118; lat = 34
data_in_spec_loc = dd2.sel(longitude = lon,latitude=lat,method='nearest')
print ('Latitude ' + str(lat) + ' ; Longitude ' + str(lon))
"""
Explanation: To look at the data more specifically, we need to choose a location. This time we decided to look into Los Angeles and San Francisco, the most populated cities in California.
End of explanation
"""
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset_nrt)
plt.xlabel('Time')
plt.title('PM2.5 forecast for Los Angeles')
plt.grid()
lon = -122.4; lat = 37.7
data_in_spec_loc = dd2.sel(longitude = lon,latitude=lat,method='nearest')
print ('Latitude ' + str(lat) + ' ; Longitude ' + str(lon))
"""
Explanation: In the plot below we can see the PM2.5 forecast at the surface layer. Note that the time zone on the graph is UTC, while the time zone in San Francisco and Los Angeles is UTC-08:00. The air pollution from the wildfire has exceeded a record 100 µg/m3, while the hourly norm is 25 µg/m3. We can also see some peaks every day around 12 pm UTC (4 am PST), with the lowest values around 12 am UTC (4 pm PST).
Daily PM2.5 values are mostly within the norm, while the values will continue to be high during the night. This daily pattern, where the air quality is worst at night, is caused by temperature inversion. As the land is not heated by the sun during the night, and the winds tend to be weaker as well, the pollution gets trapped near the ground. Pollution also tends to be higher in winter, when the days are shorter. Thankfully, daytime values are much smaller.
End of explanation
"""
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset_nrt)
plt.xlabel('Time')
plt.title('PM2.5 forecast for San Francisco')
plt.grid()
"""
Explanation: Thankfully, San Francisco's air quality is within the norm even at night. However, we have to be careful, as this could easily change with the wind direction, since the fires are quite close to the city. We can also see that at the end of the forecast the values rise quite rapidly.
End of explanation
"""
os.remove(package_cams.local_file_name)
"""
Explanation: Finally, we will remove the package we downloaded.
End of explanation
"""
moonbury/pythonanywhere | github/MasteringMatplotlib/mmpl-big-data.ipynb | gpl-3.0

import matplotlib
matplotlib.use('nbagg')
%matplotlib inline
"""
Explanation: Big Data
Table of Contents
Introduction
Visualization tools for large data sets
matplotlib and large data sets
Working with large data sources
On the file system with NumPy, Pandas, PyTables, CSV and HDF5
On distributed data stores with Hadoop
Visualizing large data
Finding the limits of matplotlib
Adjusting limits with configuration
Decimation
Resampling
Before we get started, let's do our usual warm-up procedure:
End of explanation
"""
import glob, io, math, os
import psutil
import numpy as np
import pandas as pd
import tables as tb
from scipy import interpolate
from scipy.stats import burr, norm
import matplotlib as mpl
import matplotlib.pyplot as plt
from IPython.display import Image
"""
Explanation: Let's bring in some of the modules we'll be needing as well:
End of explanation
"""
plt.style.use("../styles/superheroine-2.mplstyle")
"""
Explanation: We can re-use our custom style from a couple notebook ago, too:
End of explanation
"""
(c, d) = (10.8, 4.2)
(mean, var, skew, kurt) = burr.stats(c, d, moments='mvsk')
r = burr.rvs(c, d, size=100000000)
"""
Explanation: Introduction
The term "big data" is semantically ambiguous due to the varying contexts to which it is applied and the motivations of those applying it. The first question that may have occurred to you upon seeing this chapter's title is "how is this applicable to matplotlib or even plotting in general?" Before we answer that question, though, let's establish a working definition of big data.
The Wikipedia article on big data opens with the following informal definition: "Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate." This is a great place to start: it is honest, admitting to being imprecise; it also implies that the definition may change given differing contexts. The words "large" and "complex" are relative, and "traditional data processing" is not going to mean the same thing between different industry segments. In fact, different departments in a single organization may have widely varying data processing "traditions."
The canonical example of big data relates to its origins in web search. Google is generally credited with starting the big data "movement" with the publication of the paper "MapReduce: Simplified Data Processing on Large Clusters" by Dean and Ghemawat. The paper describes the means by which Google was able to quickly search an enormous volume of textual data (crawled web pages and log files, for example) amounting, in 2004, to around 20 TB. In the intervening decade, more and more companies, institutions, and even individuals are faced with the need to quickly process data sets varying in sizes from hundreds of gigabytes to multiples of exobytes. To the small business that used to manage hundreds of megabytes and is now facing several orders of magnitude in data sources for analysis, 250 gigabytes is "big data." For intelligence agencies storing information from untold data sources, even a few terabytes is small; to them, big data is hundreds of petabytes.
To each, though, the general problem remains the same: what worked before on smaller data sets is no longer feasible. New methodologies, new approaches to the use of hardware, communication protocols, data distribution, search, analysis, and visualization -- among many others -- are required. No matter which methodologies are used to support a big data project, one of the last steps in most of them is the presentation of digested data to human eyes. This could be anything from a decision maker to an end-user, but the need is the same: a visual representation of the data collected, searched, and analyzed. This is where tools like matplotlib come into play.
Visualization tools for large data sets
As stated in previous notebooks, matplotlib was originally designed for use on workstations and desktops, not servers. Its design did not arise from use cases for high-volume or large data sets. However, there are steps you can take that allow for matplotlib to be used in such situations. First, though, here's an overview of tools that were designed with large data sets in mind:
ParaView - an open source, multi-platform data analysis and visualization application. ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of petascale size as well as on laptops for smaller data. ParaView also offers a Python scripting interface.
VisIt - an open source, interactive, scalable, visualization, animation and analysis tool. VisIt has a parallel and distributed architecture allowing users to interactively visualize and analyze data ranging in scale from small ($< 10^1$ core) desktop-sized projects to large ($> 10^5$ core) computing facility simulation campaigns. VisIt is capable of visualizing data from over 120 different scientific data formats. VisIt offers a Python interface.
Bokeh - Bokeh is a Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of D3.js, but also deliver this capability with high-performance interactivity over very large or streaming datasets.
Vispy - a new 2D and 3D high-performance visualization library which can handle very large data sets. Vispy uses the OpenGL library and GPUs for increased performance and with it, users are able to interactively explore plots having hundreds of millions of points. For now, knowledge of OpenGL is very helpful when using Vispy.
However, all this being said, matplotlib is a powerful and well-known tool in the scientific computing community. Organizations and teams have uncountable years of cumulative experience building, installing, augmenting, and using matplotlib and the libraries of related projects like NumPy and SciPy. If there is a new way to put old tools to use without having to suffer the losses in productivity and re-engineering of infrastructure associated with platform changes, it is often in everyone's best interest to do so.
matplotlib and large data sets
In this spirit of adapting established tools to new challenges, the last chapter saw us finding ways to work around matplotlib's limitations on a single workstation. In this chapter, we will explore ways around some of the other limitations matplotlib users may run up against when working on problems with very large data sets. Note that this investigation will often cause us to bump up against the topic of clustering; we will be setting those explorations aside for now, though. Lest you feel that a critical aspect of the problem domain is being ignored, take heart: these will be the topic of the next chapter.
There are two major areas of the problem domain we will cover in this chapter:
* Preparing large data for use by matplotlib, and
* Visualizing the prepared data.
These are two distinct areas, each with their own engineering problems that need to be solved and we will be taking a look at several options in each area.
Working with large data sources
Much of the data which users feed into matplotlib when generating plots is from NumPy. NumPy is one of the fastest ways of processing numerical and array-based data in Python (if not the fastest), so this makes sense. However, by default, NumPy works in-memory: if the data set you want to plot is larger than the total RAM available on your system, performance is going to plummet.
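A quick back-of-the-envelope check can tell you in advance whether an array will fit: multiply the element count by the dtype's itemsize. A small sketch (the 100-million-point figure anticipates the example below):

```python
import numpy as np

# Estimate the footprint of 100 million float64 values before allocating:
# each element takes np.dtype(np.float64).itemsize == 8 bytes.
n_points = 100000000
bytes_per_element = np.dtype(np.float64).itemsize
estimated_mb = n_points * bytes_per_element / 1024**2

# The same figure is available from .nbytes on an existing (small) array.
sample = np.zeros(1000, dtype=np.float64)
print(estimated_mb)    # ~763 MB
print(sample.nbytes)   # 8000 bytes
```

Comparing that estimate against available RAM before generating or loading the data can save a painful swap-to-disk episode.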
Let's take a look at an example which illustrates this limitation.
An example problem
Let's generate a data set with 100 million points.
End of explanation
"""
len(r)
"""
Explanation: That took about 10 seconds to generate, and RAM usage peaked at around 2.25 GB while the data was being generated.
Let's make sure we've got the expected size:
End of explanation
"""
r.tofile("../data/points.bin")
ls -alh ../data/points.bin
"""
Explanation: If we save this to a file, it weighs in at about 3/4 of a GB:
End of explanation
"""
x = np.linspace(burr.ppf(0.0001, c, d),
burr.ppf(0.9999, c, d), 100)
y = burr.pdf(x, c, d)
"""
Explanation: That actually does fit in memory, but generating much larger files tends to be problematic (on a machine with 8 GB of RAM). We can re-use it multiple times, though, to reach a size that is larger than can fit in the system RAM.
Before we go there, though, let's take a look at what we've got by generating a smooth curve for the probability distribution:
End of explanation
"""
(figure, axes) = plt.subplots(figsize=(20, 10))
axes.plot(x, y, linewidth=5, alpha=0.7)
axes.hist(r, bins=100, normed=True)
plt.show()
"""
Explanation: Let's try plotting a histogram of the 100,000,000 data points as well as the probability distribution function:
End of explanation
"""
(figure, axes) = plt.subplots(figsize=(20, 10))
axes.plot(r)
plt.show()
"""
Explanation: Even with 100 million points, that only took about 10 seconds to render. This is due to the fact that NumPy is handling most of the work and we're only displaying a limited number of visual elements. What would happen if we did try to plot all 100,000,000 points?
End of explanation
"""
Image("memory-before.png")
"""
Explanation: After about 30 seconds of crunching, the above error was thrown: the Agg backend (a shared library on Mac OS X) simply couldn't handle the number of artists required to render all those points. We'll examine this sort of situation towards the end of the chapter and discuss ways of working around it.
But this clarifies the above point for us: our first plot rendered relatively quickly because we were selective about the data we chose to present, given the large number of points we are working with.
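One simple workaround, sketched here with an arbitrary stride of 1000, is to decimate the data before handing it to the plotting call:

```python
import numpy as np

# Stand-in for the full 100-million-point sample used above.
rng = np.random.RandomState(42)
r = rng.normal(size=1000000)

stride = 1000            # keep one point in every thousand
decimated = r[::stride]  # basic slicing returns a view, not a copy
print(len(decimated))    # 1000 points instead of 1,000,000

# axes.plot(decimated) would now ask Agg to render only 1,000 points'
# worth of artists, at the cost of losing detail between kept points.
```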
So let's say we do have data from files that is too large to fit into memory? What can we do? Possible ways of addressing this include:
* Moving the data out of memory and onto the file system
* Moving the data off of the file system and into databases
We will explore examples of these below.
File system
NumPy
Let's restart the IPython kernel and re-execute the first few lines above, performing the imports and getting our stylesheet set up. To restart the kernel, in the IPython menu at the top of this page, select "Kernel" and then "Restart".
Once the kernel is restarted, take a look at the RAM utilization on your system for the fresh Python process for the notebook:
End of explanation
"""
data = np.fromfile("../data/points.bin")
data_shape = data.shape
data_len = len(data)
data_len
Image("memory-after.png")
"""
Explanation: Now let's load the array data in and then re-check the memory utilization:
End of explanation
"""
8 * 1024
filesize = 763
8192 / filesize
"""
Explanation: That took just a few seconds to load with a memory consumption equivalent to the file size of the data.
That means ...
End of explanation
"""
del data
"""
Explanation: ... we'd need 11 of those files concatenated to make a file too large to fit in the memory of this system. But that's only if all the memory were available. Let's see how much memory we have available right now, after we delete the data we just pulled in:
End of explanation
"""
psutil.virtual_memory().available / 1024**2
"""
Explanation: We'll wait for a few seconds, to let the system memory stabilize, and then check:
End of explanation
"""
2449 / filesize
"""
Explanation: About 2.5GB. So, to overrun our RAM we'd just need a fraction of the total:
End of explanation
"""
data = np.memmap("../data/points.bin", dtype="float64", mode="r", shape=data_shape)
"""
Explanation: Which means we only need 4 of our original files to make something which won't fit in memory. However, below we will still use 11 files, to ensure the resulting data is much larger than memory if loaded all at once.
So, how should we create this large file for demonstration purposes? (In a real-life situation, the data would already exist and could be quite large.)
We could try to use np.tile to create a file of the desired size (larger than memory), but that could make our system unusable for a significant period of time. Instead, let's use np.memmap which will treat a file on disk as an array, thus letting us work with data which is too large to fit into memory.
Let's load the data file again, but this time as a memory-mapped array:
End of explanation
"""
big_data_shape = (data_len * 11,)
big_data = np.memmap("../data/many-points.bin", dtype="float64", mode="w+", shape=big_data_shape)
"""
Explanation: Loading the array as a memmap object was very quick (compared to bringing the contents of the file into memory), taking less than a second to complete.
Now let's create a new file for writing data to, sized so that its contents would be larger than our total system memory if held in-memory.
End of explanation
"""
ls -alh ../data/many-points.bin
"""
Explanation: That creates a file large enough to hold 11 copies of our original data:
End of explanation
"""
big_data.shape
"""
Explanation: which is mapped to an array having the shape we requested:
End of explanation
"""
big_data
"""
Explanation: and just contains zeros:
End of explanation
"""
for x in range(11):
big_data[x * data_len:((x * data_len) + data_len)] = data
big_data
"""
Explanation: Now let's fill the empty data structure with copies of the data we saved to the 763 MB file:
End of explanation
"""
big_data_len = len(big_data)
big_data_len
data[100000000 - 1]
big_data[100000000 - 1]
"""
Explanation: If you check your system memory before and after, you will only see minimal changes, confirming that we are not creating an 8GB data structure in-memory. Furthermore, that only took a few seconds to do.
End of explanation
"""
data[100000000]
"""
Explanation: Attempting to get the next index from our original data set will throw an error, since it didn't have that index:
End of explanation
"""
big_data[100000000]
"""
Explanation: But our new data does:
End of explanation
"""
big_data[1100000000 - 1]
"""
Explanation: And then some!
End of explanation
"""
(figure, axes) = plt.subplots(figsize=(20, 10))
axes.hist(big_data, bins=100)
plt.show()
"""
Explanation: We can also plot data from a memmapped array without significant lag-times. Note, however, that below we are creating a histogram from 1.1 billion points of data, so it won't be instantaneous ...
End of explanation
"""
head = "country,town,year,month,precip,temp\n"
row = "{},{},{},{},{},{}\n"
town_count = 1000
(start_year, end_year) = (1894, 2014)
(start_month, end_month) = (1, 13)
sample_size = (1 + 2 * town_count * (end_year - start_year) * (end_month - start_month))
countries = range(200)
towns = range(town_count)
years = range(start_year, end_year)
months = range(start_month, end_month)
for country in countries:
with open("../data/{}.csv".format(country), "w") as csvfile:
csvfile.write(head)
csvdata = ""
weather_data = norm.rvs(size=sample_size)
weather_index = 0
for town in towns:
for year in years:
for month in months:
csvdata += row.format(
country, town, year, month,
weather_data[weather_index],
weather_data[weather_index + 1])
weather_index += 2
csvfile.write(csvdata)
"""
Explanation: That took about 40 seconds to generate.
Note that with our data file-hacking we have radically changed the nature of our data, since we've increased the sample size linearly without regard for the distribution. The purpose of this demonstration wasn't to preserve a sample distribution, but rather to show how one can work with large data sets.
Pandas and PyTables
A few years ago, a question was asked on StackOverflow about working with large data in Pandas. It included questions such as:
<blockquote>
What are some best-practice workflows for accomplishing the following:
<ul><li>Loading flat files into a permanent, on-disk database structure?</li>
<li>Querying that database to retrieve data to feed into a pandas data structure?</li>
<li>Updating the database after manipulating pieces in pandas?</li></ul>
</blockquote>
The answer given by Jeff Reback was exceptional and is commonly referenced as the way to work with very large data sets in Pandas. The question was framed around a desire to move away from proprietary software which handled large data sets well, and the only thing keeping this person from making the leap was not knowing how to process large data sets in Pandas and NumPy.
Using the scenario outlined, one can easily use tens of GB of file data using the high-performance HDF5 data structure. Pandas provides documentation on its use of HDF5 in the "I/O" section of its docs site. Jeff provides the following questions to consider when defining a workflow for large data files in Pandas and NumPy:
What is the size of your data? What is the number of rows and columns? What are the types of columns? Are you appending rows, or just columns?
What will typical operations look like? For example, will you be querying columns to select a bunch of rows and specific columns, then doing some in-memory operation, then maybe creating new columns, and finally saving these?
After your typical set of operations, what will you do? In other words, is step #2 ad hoc, or repeatable?
Roughly how many total GB are you expecting to process from your source data files? How are these organized? (E.g. by records?) Does each one contain different fields, or do they have some records per file with all of the fields in each file?
Do you ever select subsets of rows (records) based on specified criteria (e.g. select the rows with field A > 5)? and then do something? Or do you just select fields A, B, C with all of the records (and then do something)?
Do you need all of your columns -- as a group -- for all of your typical operations? Or is there a good proportion that you may only use for reports (e.g. you want to keep the data around, but don't need to pull in that column explicitly until final results time)?
Jeff's response boils down to a few core concepts:
* creating a store
* grouping your fields according to your needs
* reading large file data in chunks, to prevent swamping system memory
* reindexing the data (by chunks) and adding it to the store
* selecting groups from all the chunks that have been saved in the store
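The chunked-reading step at the heart of that workflow can be sketched with the chunksize parameter of pd.read_csv. Here an in-memory buffer stands in for a file too large to load at once, and a running aggregate stands in for appending each chunk to an HDFStore:

```python
import io

import pandas as pd

# A small in-memory stand-in for a multi-GB CSV file.
csv_data = "town,temp\n" + "\n".join(
    "town{},{}".format(i, i % 10) for i in range(100))

# Read 25 rows at a time; only one chunk is ever held in memory.
# In the workflow described above, each chunk would be reindexed and
# appended to an on-disk store instead of being aggregated here.
total = 0
row_count = 0
for chunk in pd.read_csv(io.StringIO(csv_data), chunksize=25):
    total += chunk["temp"].sum()
    row_count += len(chunk)

print(total / row_count)   # the mean, computed without loading everything
```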
HDF5, PyTables, and Pandas
Hierarchical Data Format is a set of file formats (namely HDF4 and HDF5) originally developed at the National Center for Supercomputing Applications to store and organize large amounts of numerical data. (Some may remember NCSA as the place where they downloaded their first graphical web browser, code which is the ancestor not only to the Netscape web browser, but also Internet Explorer. A web server was also created there, and this evolved into the Apache HTTP Server.)
HDF is supported by Python, R, Julia, Java, Octave, IDL, and MATLAB, to name a few. HDF5 offers significant improvements and useful simplifications over HDF4. It uses B-trees to index table objects and, as such, works well for write-once/read-many time series data with common use occurring across fields such as meteorological studies, biosciences, the finance industry, and aviation. HDF5 files of multi-terabyte sizes are common in these applications, typically constructed from the analyses of multiple HDF5 source files, thus providing a single (and often extensive) source of grouped data for a particular application.
The PyTables library is built on top of the Python HDF5 library and NumPy, thus not only providing access to one of the most widely-used large-data file formats in the scientific computing community, but then links data extracted from these files with the data types and objects provided by the fast Python numerical processing library.
Pandas, in turn, wraps PyTables, thus extending its convenient in-memory data structures, functions, and objects to large on-disk files. To use HDF data with Pandas, you'll want to create a pd.HDFStore, read from HDF data sources with pd.read_hdf, or write to one with pd.to_hdf.
One project to keep an eye on is Blaze. It's an open wrapper and utility framework for working with large data sets, generalizing such actions as creation, access, updates, and migration. Blaze supports not only HDF, but also SQL, CSV, and JSON. The API usage between Pandas and Blaze is very similar, even if various methods are spelled differently. This page outlines the basics and is a good review.
In the example below we will be using PyTables to create an HDF5 file too large to comfortably fit in memory. We will follow these steps (for more details and to examine steps that we have left out, be sure to see Yves Hilpisch's presentation Out-of-Memory Data Analytics with Python):
* Create a series of CSV source data files taking up ~14 GB of disk space
* Create an empty HDF5 file
* Create a table in the HDF5 file, providing schema metadata and compression options
* Load our CSV source data into the HDF5 table
* Query our new data source, once the data has been migrated
Remember the temperature precipitation data for St. Francis, KS USA from a previous notebook? Let's create fake data sets we can pretend are for temperature and precipitation data for hundreds of thousands of towns across the globe for the last century:
End of explanation
"""
ls -rtm ../data/*.csv
"""
Explanation: That took about 35 minutes on my 2009 iMac. Here are the files:
End of explanation
"""
ls -lh ../data/0.csv
"""
Explanation: Let's just take a look at one of those:
End of explanation
"""
tb_name = "../data/weather.h5t"
h5 = tb.open_file(tb_name, "w")
h5
"""
Explanation: Each file is about 72 MB; at 200 files, that makes about 14 GB -- too much for RAM!
Running queries against so much data in .csv files isn't going to be very efficient: it's going to take a long time. So what are our options? Well, for reading this data, HDF5 is a very good fit; it was designed for jobs like this, in fact. Let's use PyTables to convert our CSV files to a single HDF5 file. Note that we aren't using the Pandas HDFStore, since Pandas isn't currently designed to handle extremely large data sets out of memory. Instead we'll be using PyTables, which has been designed for such use cases. We'll start by creating an empty table file:
End of explanation
"""
data_types = np.dtype(
[("country", "<i8"),
("town", "<i8"),
("year", "<i8"),
("month", "<i8"),
("precip", "<f8"),
("temp", "<f8")])
"""
Explanation: Next we'll need to provide some assistance to PyTables by indicating the data types of each column in our table:
End of explanation
"""
filters = tb.Filters(complevel=5, complib='blosc')
"""
Explanation: Let's also define a compression filter to be used by PyTables when saving our data:
End of explanation
"""
tab = h5.create_table(
"/", "weather",
description=data_types,
filters=filters)
"""
Explanation: Now we can create the table inside of our new HDF5 file:
End of explanation
"""
for filename in glob.glob("../data/*.csv"):
it = pd.read_csv(filename, iterator=True, chunksize=10000)
for chunk in it:
tab.append(chunk.to_records(index=False))
tab.flush()
"""
Explanation: With that done, let's load each CSV file, reading it by chunks so as not to overload our memory, and then append it to our new HDF5 table:
End of explanation
"""
h5.get_filesize()
"""
Explanation: That took about 7 minutes on my machine, and what started out as ~14 GB of .csv files is now a single, compressed 4.8 GB HDF5 file:
End of explanation
"""
tab
"""
Explanation: Here's the metadata for our PyTables-wrapped HDF5 table:
End of explanation
"""
tab[100000:100010]
tab[100000:100010]["precip"]
"""
Explanation: Let's get some data:
End of explanation
"""
h5.close()
"""
Explanation: When you're done, go ahead and close the file:
End of explanation
"""
h5 = tb.open_file(tb_name, "r")
tab = h5.root.weather
tab
"""
Explanation: If you want to work with the data again, simply load it up:
End of explanation
"""
(figure, axes) = plt.subplots(figsize=(20, 10))
axes.hist(tab[:1000000]["temp"], bins=100)
plt.show()
"""
Explanation: Let's plot the first million entries:
End of explanation
"""
tab[0:281250]["temp"].mean()
"""
Explanation: As you can see from this example and from the previous ones, accessing our data via HDF5 files is extremely fast (certainly compared to attempting to use large CSV files).
What about executing calculations against this data? Unfortunately, running the following will consume an enormous amount of RAM:
```python
tab[:]["temp"].mean()
```
We've just asked for all of the data: all 288,000,000 rows of it. That's going to end up loading everything into RAM, grinding the average workstation to a halt. Ideally, though, when you iterate through the source data and to create your HDF5 file, you also crunch the numbers you will need, adding supplemental columns or groups to the HDF5 file for later use by you and your peers.
If we have data which we will mostly be selecting (extracting portions) and which has already been crunched as-needed, grouped as needed, etc., HDF5 is a very good fit. This is why one of the common use cases you see for HDF5 is that of sharing/distributing processed data.
However, if we have data which we will need to process repeatedly, then we will either need to use another method besides one that would cause all the data to be loaded into memory, or find a better match for our data-processing needs. Before we move on, let's give HDF5 another chance...
We saw above that selecting data was very fast in HDF5: what about getting the mean for a small section of data, say the first 281,250 rows? (We chose that number since it multiplies nicely to our total of 288,000,000.)
End of explanation
"""
limit = 281250
ranges = [(x * limit, x * limit + limit) for x in range(2 ** 10)]
(ranges[0], ranges[-1])
means = [tab[start:stop]["temp"].mean() for (start, stop) in ranges]
len(means)
"""
Explanation: Well, that was fast! What about iterating through all of the records in a similar fashion? Let's break up our 288,000,000 records into chunks of that size:
End of explanation
"""
sum(means) / len(means)
"""
Explanation: That took about 30 seconds to run on my machine.
Of course, once we've taken that step, it's trivial to get the mean value for all 288,000,000 points of temperature data:
End of explanation
"""
sum(means)/len(means)
"""
Explanation: Now let's look at another option for handling large data sets.
Distributed data
We've looked two ways of handling data too large for memory:
* NumPy's memmap
* and the more general HDF5 format wrapped by PyTables
But there is another situation which may come into play for projects that need to use matplotlib to visualize all or part of large data sets: data which is too large to fit on a hard drive. This could be anything from large data sets like those created by super-colliders and radio telescopes to high-volume streaming data used in systems analysis (and social media) and financial markets data. All of these are either too large to fit on a machine or too ephemeral to store, needing to be processed in real-time.
The latter of these is the realm of such projects as Spark, Storm, Kafka, and Amazon's Kinesis. We will not be discussing these in this notebook, but will instead focus on the former: processing large data sets in a distributed environment, in particular with map-reduce. Understanding how to use matplotlib and NumPy with a map-reduce framework will provide the foundation necessary for the reader to extend this to streaming-data scenarios.
Even though we have chosen MapReduce as our example, there are many other options for solving problems like these: distributed RDBMSs and NoSQL solutions like Riak, Redis, or Cassandra (to name but a few).
MapReduce
So what is “MapReduce” and why are we looking at it in the context of running computations against large sets of data? Wikipedia gives the following definition:
<blockquote>
MapReduce is a programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster. A MapReduce program is composed of a ``Map`` procedure that performs filtering and sorting, and a ``Reduce`` procedure that performs a summary operation. The "MapReduce System" orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance.
</blockquote>
A little context will make this clearer, as well as why it is potentially very useful for visualizing large data sets with matplotlib.
(Note that some of the content in the following sub-sections has been taken from the Wikipedia MapReduce article and the Google MapReduce paper.)
Origins
Between 1999 and 2004, Google had created hundreds of special-purpose computations for processing the huge amounts of data generated by web crawling, HTTP access logs, etc. The many kinds of processing developed were in large part used to create Google search's page-ranked search results -- at the time, a vast improvement over other search engines' results. Each computation required was pretty straightforward; it was the combination of these which was unique.
However, in the span of those five years, the computation tasks needed to be split across hundreds and then thousands of machines in order to finish in a timely manner. This introduced the difficulties of parallelizing code: not only the decomposition of tasks into parallelizable parts, but also the parallelization of data and the handling of failures. All of this, combined with the legacy code being maintained (and created), made for an approach that was becoming less maintainable and to which it was growing more difficult to add new features.
The inspiration for a new approach to Google's problem came from the second oldest programming language still in use: Lisp (Fortran being the oldest). The authors of the Google MapReduce paper were reminded of the fact that many of their processing jobs consisted of a simple action against a data set (using a modern Lisp as an example):
```cl
(set data "some words to examine")
"some words to examine"
(lists:map #'length/1 (string:tokens data " "))
(4 5 2 7)
```
And then the "folding" of those results into a secondary analytical result:
```cl
(lists:foldl #'+/2 0 (4 5 2 7))
18
```
The function above is called "folding" due to the fact that there is a recursive operation in place, with each item in the list being folded into an accumulator. In this case, the folding function is addition (with arity of "2", thus the +/2); 0 is provided as an initial value for the first fold. Note that if our folding function created items in a list rather than adding two integers for a sum, the initial value would have been a list (empty or otherwise).
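For instance, a fold whose accumulator is a list rather than a number might look like this in Python (Python versions of the Lisp examples appear a little further on):

```python
import functools

# Fold each doubled element into a list accumulator; since the folding
# function builds a list, the initial value is an empty list rather than 0.
doubled = functools.reduce(
    lambda acc, item: acc + [item * 2],
    [4, 5, 2, 7],
    [])
print(doubled)   # [8, 10, 4, 14]
```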
The map and fold operations can be combined in the fashion typical of higher-order functions:
```cl
(lists:foldl
#'+/2
0
(lists:map
#'length/1
data
(string:tokens data " ")))
18
```
As you might have guessed by now (or known already), there is another term by which folding is known. It is named not for the process by which it is created, but for the nature of the results it creates: reduce. In this case, a list of integers is reduced to a single value by means of the addition function we provided.
In summary: given an initial data set, we've run a length function (with an arity of one) against every element of our data which has been split on the “space” character. Our results were integers representing the length of each word in our data set. Then, we folded our list with the + function, element at a time, into an “accumulator” with the initial value of zero. The end result represented the sum of all the elements. If we wanted a running average instead of a running sum, we would have supplied a different function: it still would take two arguments and it would still sum them, it would just divide that result by two:
```cl
(defun ave (number accum)
(/ (+ number accum) 2))
ave
(lists:foldl
#'ave/2
0
(lists:map
#'length/1
(string:tokens data " ")))
4.875
```
The average word length in our data is 4.875 ASCII characters. This example makes more clear the latent power in solutions like these: for completely new results, we only needed to change one function.
Various Lisps and other functional programming languages have fold or reduce functionality, but this is not just the domain of functional programming: Python 3 has a library dedicated to functional programming idioms: functools. Here's how the above examples would be implemented in Python 3:
```python
import functools, operator
data = "some words to examine"
[x for x in map(len, data.split(" "))]
[4, 5, 2, 7]
functools.reduce(operator.add, [4, 5, 2, 7], 0)
18
```
Similarly, these may be composed in Python:
```python
functools.reduce(operator.add,
... map(len, data.split(" ")),
... 0)
18
```
And to calculate the running average:
```python
def ave(number, accum):
... return (number + accum) / 2
...
functools.reduce(ave,
... map(len, data.split(" ")),
... 0)
4.875
```
The really important part to realize here -- given the context of Google's needs in 2004 and the later fluorescence of MapReduce -- is that each map call of len is independent of all the others. These could be called on the same machine in the same process, or in different processes, on different cores, or on another machine altogether (given the appropriate framework, of course). In a similar fashion, the data provided to the reduce function could be from any number of sources, local or remote. In fact, the reduce step could be split across multiple computational resources -- it would just need a final reduce step to aggregate all the results.
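That independence is easy to demonstrate even on a single machine: Python's concurrent.futures can fan the map calls out across a pool of workers while the reduce step stays exactly the same. This is only a local sketch of the idea, not a distributed implementation:

```python
import functools
import operator
from concurrent.futures import ThreadPoolExecutor

data = "some words to examine"

# Each len() call is independent of the others, so the executor is free
# to run them on any worker in any order -- the property MapReduce exploits.
with ThreadPoolExecutor(max_workers=4) as executor:
    lengths = list(executor.map(len, data.split(" ")))

# The reduce step is unchanged from the single-process version.
total = functools.reduce(operator.add, lengths, 0)
print(lengths)   # [4, 5, 2, 7]
print(total)     # 18
```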
This insight led to an innovation in the development process at Google in support of the tasks which had steadily grown in complexity and reduced maintainability. They created infrastructure such that engineers only needed to create their own mapper and reducer functions, and then these could be run against the desired data sources on the appropriate MapReduce clusters. This automated the parallelization of tasks and the distribution of workload across any number of servers running in Google's large computation clusters in their data centers.
Optimal Problem Sets
MapReduce programs are not guaranteed to be fast. The main benefit of this programming model is to exploit the optimized algorithms which prepare and process data and results, freeing developers to focus on just the map and reduce parts of the program. In practice, however, the implementation of these can have a heavy impact on the overall performance of the task in the cluster. When designing a MapReduce algorithm, the author needs to choose a good tradeoff between the computation and the communication costs, and it is common to see communication cost dominating the computation cost.
MapReduce is useful in a wide range of applications, including:
* distributed pattern-based searching
* distributed sorting
* web link-graph reversal
* singular value decomposition
* web access log stats
* inverted index construction
* document clustering
* machine learning
* statistical machine translation
Moreover, the MapReduce model has been adapted to several computing environments:
* multi-core systems
* desktop grids
* volunteer computing environments
* dynamic cloud environments
* mobile environments
The Future for MapReduce
In 2014 Google announced that it had stopped relying upon MapReduce for its petabyte-scale operations, having since moved on to technologies such as Percolator, Flume and MillWheel that offer streaming operation and updates instead of batch processing, to allow integrating "live" search results without rebuilding the complete index. Furthermore, these technologies are not limited to the concept of map and reduce workflows, but rather the more general concept of data pipeline workflows.
MapReduce certainly isn't dead, and the frameworks that support it aren't going away. However, we've been seeing an evolution in the industry since Google popularized the concept of distributed workloads across commodity hardware with MapReduce, and both proprietary and open source solutions are offering their users the fruits of these innovations.
Open Source Options
Most readers who have even just passing familiarity with big data in general will have heard of Hadoop. A member project of the Apache Software Foundation, Hadoop is an open source distributed storage and processing framework designed to work with very large data sets on commodity hardware computing clusters. The distributed storage part of the project is called HDFS, the Hadoop Distributed File System, and the processing part is named MapReduce. When a user uploads data to the Hadoop file system, the files are split into pieces and then distributed across the cluster nodes. When a user creates code to run on Hadoop MapReduce, the custom mappers and reducers -- similar in concept to what we saw in the previous section -- are copied to MapReduce nodes in the cluster, where they are then executed against the data stored with each node.
Hadoop's predecessor was created at the Internet Archive in 2002 in an attempt to build a better web page crawler and indexer. When the papers on the Google File System and Google's MapReduce were published in 2003 and 2004, respectively, they inspired the creators of Hadoop to re-envision their project and create a framework upon which it could run more efficiently. That was the birth of Hadoop. Yahoo! invested heavily in the project a few years later and open sourced it, while at the same time providing its researchers access to a testing cluster -- that last was the seed for Hadoop's very strong role in the field of machine learning.
Though Hadoop is the primary driver for the big data market, projected to generate 23 billion USD by 2016, it is not the only big data framework available in the open source community. A notable, if quiet, contender is the Disco project.
In 2008, the Nokia Research Center needed a tool that would allow them to process enormous amounts of data in real-time. They wanted their researchers -- many of them proficient in Python -- to be able to easily and quickly create MapReduce jobs against their large data sets. They also needed their system to be fault-tolerant and scalable. As such, they built the server on top of the Erlang distributed programming language, and created a protocol and Python client which could talk to it, thus allowing their users to continue using the language they knew so well.
Since then, Disco has continued evolving and provides a generalized workflow on top of its distributed file system: Disco pipelines. The pipeline workflow enables data scientists to create distributed processing tasks which go far beyond the original vision of MapReduce.
The functionality of MapReduce is no longer available only in the domain of MapReduce frameworks: the rise of NoSQL databases which then extended their functionality to distributed data have started offering MapReduce features in their products. Redis clusters, for instance, make it trivial to implement MapReduce functionality. The Riak distributed NoSQL key-value data store, based upon the Amazon Dynamo paper (not to be confused with the DynamoDB product from Amazon), offers built-in MapReduce capabilities. Riak provides an API for executing MapReduce jobs against nodes in a cluster, and this is supported by the Python Riak client library. MongoDB is another NoSQL database which offers built-in support for MapReduce.
In our case, though, we will be focusing on the Hadoop implementation of MapReduce, utilizing its support for Python via its streaming protocol. In particular, we will take advantage of a service provider which allows us to quickly and easily set up Hadoop clusters: Amazon Elastic MapReduce, or EMR.
Amazon EMR (Elastic Map Reduce)
In this section we will be using Hadoop on Amazon Elastic MapReduce, performing the following tasks:
Creating a cluster
Pushing our data set to the cluster
Writing a mapper and reducer in Python
Testing our mapper and reducer against small, local data
Adding nodes to our EMR cluster in preparation for our job
Executing our MapReduce job against the EMR cluster we created
Examining the results
We're going to create an Amazon EMR cluster from the command line using the aws tool we've installed:
bash
$ aws emr create-cluster --name "Weather" --ami-version 3.6.0 \
--applications Name=Hue Name=Hive Name=Pig Name=HBase \
--use-default-roles --ec2-attributes KeyName=YourKeyName \
--instance-type c1.xlarge --instance-count 3
j-63JNVV2BYHC
We're going to need that cluster ID, so let's export it as a shell variable. We're also going to need to use the full path to your .pem file, so we'll set one for that too:
bash
$ export CLUSTER_ID=j-63JNVV2BYHC
$ export AWS_PEM=/path/to/YourKeyName.pem
We can check the state of the cluster with the following:
bash
$ aws emr describe-cluster --cluster-id $CLUSTER_ID |grep STATUS
STATUS RUNNING
STATUS RUNNING
STATUS WAITING
The first STATUS is the master node, and once it returns as RUNNING, we can start copying files to it:
bash
$ for FILE in data/{0,1,2}.csv
do
aws emr put \
--src $FILE \
--cluster-id $CLUSTER_ID \
--key-pair-file $AWS_PEM
done
Or we can move them all up there (changing to a volume that has more space):
bash
$ for FILE in data/*.csv
do
aws emr put \
--src $FILE \
--dest /mnt1 \
--cluster-id $CLUSTER_ID \
--key-pair-file $AWS_PEM
done
Log in to the server and copy the data to HDFS:
bash
$ aws emr ssh --cluster-id $CLUSTER_ID --key-pair-file $AWS_PEM
bash
[hadoop@ip-10-255-7-47 ~]$ hdfs dfs -mkdir /weather
[hadoop@ip-10-255-7-47 ~]$ hdfs dfs -put /mnt1/*.csv /weather
Let's make sure the files are there:
bash
[hadoop@ip-10-255-7-47 ~]$ hdfs dfs -ls /weather|head -10
Found 200 items
-rw-r--r-- 1 hadoop supergroup 75460820 2015-03-29 18:46 /weather/0.csv
-rw-r--r-- 1 hadoop supergroup 75456830 2015-03-29 18:47 /weather/1.csv
-rw-r--r-- 1 hadoop supergroup 76896036 2015-03-30 00:16 /weather/10.csv
-rw-r--r-- 1 hadoop supergroup 78337868 2015-03-30 00:16 /weather/100.csv
-rw-r--r-- 1 hadoop supergroup 78341694 2015-03-30 00:16 /weather/101.csv
-rw-r--r-- 1 hadoop supergroup 78341015 2015-03-30 00:16 /weather/102.csv
-rw-r--r-- 1 hadoop supergroup 78337662 2015-03-30 00:16 /weather/103.csv
-rw-r--r-- 1 hadoop supergroup 78336193 2015-03-30 00:16 /weather/104.csv
-rw-r--r-- 1 hadoop supergroup 78336537 2015-03-30 00:16 /weather/105.csv
Before we write our Python code to process the data now stored in HDFS, let's remind ourselves what the data looks like:
bash
[hadoop@ip-10-255-7-47 ~]$ head 0.csv
country,town,year,month,precip,temp
0,0,1894,1,0.8449506929198441,0.7897647433139449
0,0,1894,2,0.4746140099538822,0.42335801512344756
0,0,1894,3,-0.7088399152900952,0.776535509023379
0,0,1894,4,-1.1731692311337918,0.8168558530942849
0,0,1894,5,1.9332497442673315,-0.6066233105016293
0,0,1894,6,0.003582147937914687,0.2720125869889254
0,0,1894,7,-0.5612131527063922,2.9628153460517272
0,0,1894,8,0.3733525007455101,-1.3297078910961062
0,0,1894,9,1.9148724762388318,0.6364284082486487
Now let's write the mapper (saved as mapper.py). This will be used by Hadoop and expects input via STDIN:
```python
#!/usr/bin/env python
import sys
def parse_line(line):
return line.strip().split(",")
def is_header(line):
return line.startswith("country")
def main():
for line in sys.stdin:
if not is_header(line):
print(parse_line(line)[-1])
if __name__ == "__main__":
main()
```
Next we can write the reducer (saved as reducer.py):
```python
#!/usr/bin/env python
import sys
def to_float(data):
try:
return float(data.strip())
except:
return None
def main():
accum = 0
count = 0
for line in sys.stdin:
temp = to_float(line)
        if temp is None:
continue
accum += temp
count += 1
print(accum / count)
if __name__ == "__main__":
main()
```
Make them both executable:
bash
[hadoop@ip-10-255-7-47 ~]$ chmod 755 *.py
Let's test drive the mapper before using it in Hadoop:
bash
[hadoop@ip-10-255-7-47 ~]$ head 0.csv | ./mapper.py
0.7897647433139449
0.42335801512344756
0.776535509023379
0.8168558530942849
-0.6066233105016293
0.2720125869889254
2.9628153460517272
-1.3297078910961062
0.6364284082486487
Let's add the reducer to the mix:
bash
[hadoop@ip-10-255-7-47 ~]$ head 0.csv | ./mapper.py | ./reducer.py
0.526826584472
A quick manual check confirms that the generated average is correct for the values parsed by the mapper.
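That manual check can be reproduced in plain Python by averaging the nine temperature values the mapper emitted above:

```python
# Average the nine temperatures the mapper printed for `head 0.csv`;
# the result should match the reducer's output of 0.526826584472.
temps = [
    0.7897647433139449, 0.42335801512344756, 0.776535509023379,
    0.8168558530942849, -0.6066233105016293, 0.2720125869889254,
    2.9628153460517272, -1.3297078910961062, 0.6364284082486487,
]
mean = sum(temps) / len(temps)
print(round(mean, 12))  # 0.526826584472
```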
With our Python code tested and working, we're ready to run it on Hadoop... almost. Since there's a lot of data to process, let's switch to a local terminal session and create some more nodes:
bash
$ aws emr add-instance-groups \
--cluster-id $CLUSTER_ID \
--instance-groups \
InstanceCount=6,InstanceGroupType=task,InstanceType=m1.large \
InstanceCount=10,InstanceGroupType=task,InstanceType=m3.xlarge
bash
j-63JNVV2BYHC
INSTANCEGROUPIDS ig-ZCJCUQU6RU21
INSTANCEGROUPIDS ig-3RXZ98RUGS7OI
Let's check the cluster:
bash
$ aws emr describe-cluster --cluster-id $CLUSTER_ID
CLUSTER False j-63JNVV2BYHC ec2-54-70-11-85.us-west-2.compute.amazonaws.com Weather 189 3.6.0 3.6.0 EMR_DefaultRole False True
APPLICATIONS hadoop 2.4.0
APPLICATIONS Hue
BOOTSTRAPACTIONS Install Hue s3://us-west-2.elasticmapreduce/libs/hue/install-hue
BOOTSTRAPACTIONS Install HBase s3://us-west-2.elasticmapreduce/bootstrap-actions/setup-hbase
EC2INSTANCEATTRIBUTES us-west-2b OubiwannAWSKeyPair sg-fea0e9cd sg-fca0e9cf EMR_EC2_DefaultRole
INSTANCEGROUPS ig-3M0BXLF58BAO1 MASTER c1.xlarge ON_DEMAND MASTER 1 1
STATUS RUNNING
STATECHANGEREASON
TIMELINE 1427653325.578 1427653634.541
INSTANCEGROUPS ig-1YYKNHQQ27GRM CORE c1.xlarge ON_DEMAND CORE 2 2
STATUS RUNNING
STATECHANGEREASON
TIMELINE 1427653325.579 1427653692.548
INSTANCEGROUPS ig-3RXZ98RUGS7OI TASK m3.xlarge ON_DEMAND task 10 0
STATUS RESIZING
STATECHANGEREASON Expanding instance group
TIMELINE 1427676271.495
INSTANCEGROUPS ig-ZCJCUQU6RU21 TASK m1.large ON_DEMAND task 6 0
STATUS RESIZING
STATECHANGEREASON Expanding instance group
TIMELINE 1427676243.42
STATUS WAITING
STATECHANGEREASON Waiting after step completed
TIMELINE 1427653325.578 1427653692.516
We can see that the two we just added have a STATUS of RESIZING. We'll keep an eye on this until they've finished.
Back on the Hadoop cluster, let's execute our map-reduce job against the data we've uploaded to the cluster and saved to HDFS:
bash
[hadoop@ip-10-255-7-47 ~]$ hadoop \
jar contrib/streaming/hadoop-*streaming*.jar \
-D mapred.reduce.tasks=1 \
-files mapper.py,reducer.py \
-mapper mapper.py \
-reducer reducer.py \
-combiner reducer.py \
-input /weather/*.csv \
-output /weather/total-mean-temp
To see the results:
bash
[hadoop@ip-10-255-7-47 ~]$ hdfs dfs -ls /weather/total-mean-temp
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2015-03-29 20:20 /weather/total-mean-temp/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 18 2015-03-29 20:20 /weather/total-mean-temp/part-00000
[hadoop@ip-10-255-7-47 ~]$ hdfs dfs -cat /weather/total-mean-temp/part-00000
-5.30517804131e-05
This is within an order of magnitude of the result obtained by manually slicing the HDF5 file:
End of explanation
"""
data_len = len(tab)
data_len
"""
Explanation: Without an in-depth analysis, one might venture a guess that the difference between these two values could be due to floating point calculations on different platforms using different versions of Python (the Python version on the Hadoop cluster was 2.6; we're using 3.4.2). At any rate, the mean calculation meets with expectations: close to zero for a normal distribution centered around zero.
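Two small effects, sketched below, could each plausibly contribute to the discrepancy: floating-point addition is order-dependent, and reusing a mean-computing reducer as a combiner produces a mean of partition means, which differs from the overall mean whenever the partitions are unequal in size.

```python
# Floating-point sums depend on the order of addition:
print(0.1 + 0.2 + 0.3)   # 0.6000000000000001
print(0.3 + 0.2 + 0.1)   # 0.6

# A combiner that emits a mean averages partition means, not all values:
part_a, part_b = [1.0, 2.0, 3.0], [10.0]
overall = sum(part_a + part_b) / 4                        # 4.0
mean_of_means = (sum(part_a) / 3 + sum(part_b) / 1) / 2   # 6.0
print(overall, mean_of_means)
```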
Hadoop and matplotlib
The standard use case for matplotlib is on a workstation, often at an interactive Python or IPython prompt. In such scenarios, we are used to crunching our data – such as getting means, standard deviations, etc. – and then plotting them, all within a few commands (and seconds) of each other.
In the world of big data, that experience changes drastically. What was an implicit understanding that one's data is in-process, trivial to copy and perform analytical operations upon, is now an involved process of cluster deployments, configuration management, distributed data, communication latencies, and the like. The only thing remaining the same is that it's our data, and we need to plot it.
When the data was too large for memory, but still able to fit on a single hard drive, HDF5 and PyTables gave us the means by which we could use our old approaches with very little change in our analytical workflows. Once our data is too large for a hard drive or a file server, those workflows have to change: we can't even pretend it's the same data world we lived in previously. We have to think in terms of partitioned data and our jobs running against those partitions.
We still get to use NumPy, but the work is not being done on our machine in our IPython shell: it's being done remotely on a cluster of distributed nodes. Our work in interactive shells is transformed instead into a testbed where we operate on a small sample set in preparation for pushing out a full job to the cluster. Additionally, every new big data project has the potential to be legitimately different from any other. For each organization that needs to work with big data, and for each set of data, the particulars of the day-to-day analytics workflows are likely to change.
In the end, though, our jobs will run and we will have distilled from the octillions of data points, the few tens of millions needed in the final analysis, and it is this data which we provide to matplotlib for plotting. Though big data requires that the preparation of data for plotting move outside of the familiarity of an interactive Python prompt, the essence remains the same: we need to know what we have, we need to know how to distill that, and we've got to be able to visualize it.
Visualizing large data
The majority of this notebook has been dedicated to processing large data sets and plotting histograms. This was done intentionally, since using such an approach limited the number or artists on the matplotlib canvas to something on the order of 100s vs. attempting to plot millions of artists. In this section we will address the problem of displaying actual elements from large data sets. We will return to our fast HDF5 table for the remainder of the notebook.
As a refresher on the volume we're looking at, our data set has the following number of data points:
End of explanation
"""
limit = 1000
(figure, axes) = plt.subplots()
axes.plot(tab[:limit]["temp"], ".")
plt.show()
"""
Explanation: Adding commas to more easily see the value -- 288,000,000 -- we're looking at almost a third of a billion points.
Let's start with establishing a baseline: is there a practical limit we should consider when attempting to render artists? Let's use the data from our HDF5 file to explore this, starting with 1000 data points:
End of explanation
"""
limit = 10000
(figure, axes) = plt.subplots()
axes.plot(tab[:limit]["temp"], ".")
plt.show()
"""
Explanation: That was quick; maybe something under a second. Let's try 10,000 data points:
End of explanation
"""
limit = 100000
(figure, axes) = plt.subplots()
axes.plot(tab[:limit]["temp"], ".")
plt.show()
"""
Explanation: That was still fast -- again, under 1 second to render. 100,000?
End of explanation
"""
limit = 1000000
(figure, axes) = plt.subplots()
axes.plot(tab[:limit]["temp"], ".")
plt.show()
limit = 10000000
(figure, axes) = plt.subplots()
axes.plot(tab[:limit]["temp"], ".")
plt.show()
"""
Explanation: We're starting to see some more time taken; that was about 1 second. Let's keep going:
End of explanation
"""
frac = math.floor(data_len / 100000)
frac
"""
Explanation: A million data points only took about 2 or 3 seconds -- not bad for a single plot, but if we had to plot 100s of these, we'd need to start utilizing some of the techniques we discussed in the cloud-deploy notebook. 10 million points took about 15 seconds. Note that if we had used lines instead of points an exception would have been raised -- Exception in image/png formatter: Allocated too many blocks -- indicating that the shared library we're using for the Agg backend couldn't handle the number of artists required.
100,000 data points looks like a good limit for the number we'd want to plot simultaneously, with little to no appreciable delay in rendering, but that still leaves us with only a fraction of our total data set:
End of explanation
"""
(figure, axes) = plt.subplots()
axes.plot(tab[:10000000]["temp"])
plt.show()
"""
Explanation: How should we proceed if we want to plot data points representing our full range of data? We have several options:
* adjusting matplotlib configuration to use plot chunking
* decimation
* (re)sampling
* data smoothing
matplotlibrc Agg Rendering
If you are using the Agg backend and are having issues rendering large data sets, one option is to experiment with the agg.path.chunksize configuration variable in your matplotlibrc file or from a Python interactive prompt using rcParams. Let's try to graph the 10,000,000 data points as lines:
End of explanation
"""
mpl.rcParams["agg.path.chunksize"] = 20000
"""
Explanation: That generated the error in the backend we talked about above. Let's tweak the chunksize from 0 to 20,000 (a recommended starting point offered in the comments of the matplotlibrc configuration file):
End of explanation
"""
(figure, axes) = plt.subplots()
axes.plot(tab[:10000000]["temp"])
plt.show()
"""
Explanation: And now re-render:
End of explanation
"""
decimated = tab[::frac]["temp"]
"""
Explanation: Note that this feature was marked as experimental in 2008 and it remains so. Using it may introduce minor visual artifacts in rendered plots.
More importantly, though, this is a workaround that will allow you to squeeze a little more range out of matplotlib's plotting limits. As your data sets grow in size, you will eventually get to the point where even this configuration change can no longer help.
Decimation
Another approach carries the unfortunate name of the brutal practice employed by the Roman army against large groups guilty of capital offences: removal of a tenth. We use the term here more generally, indicating "removal of a fraction sufficient to give us our desired performance" -- which, as it turns out, will be much more than a tenth. From our quick calculation above, we determined that we would need no more than every 2880th data point. The following approach brings this into memory:
End of explanation
"""
len(decimated)
"""
Explanation: If you know this is something you will be doing, it would be better to generate this data at the time you created your HDF5 table.
Let's sanity check our length:
End of explanation
"""
xvals = range(0, data_len, frac)
(figure, axes) = plt.subplots()
axes.plot(xvals, decimated, ".", alpha=0.2)
plt.show()
"""
Explanation: Now let's plot the 100,000 data points which represent the full spectrum of our population:
End of explanation
"""
[size] = tab[0:limit]["temp"].shape
size
(figure, axes) = plt.subplots()
axes.plot(range(size), tab[0:limit]["temp"], ".", alpha=0.2)
plt.show()
"""
Explanation: Things to keep in mind:
* potentially important data points can be eliminated in this method
* depending on the type of distribution, standard deviation, variance, and other statistical values could very well change in the decimated data set
If these are important to you, you still have some other options available.
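One quick way to gauge how much decimation perturbs these statistics is to compare them on the full and decimated arrays; a sketch using a small random stand-in for the temperature column:

```python
import numpy as np

# Stand-in for tab[:]["temp"]: i.i.d. normal data, so decimation should
# only perturb the statistics slightly; skewed or ordered data may fare worse.
rng = np.random.default_rng(42)
full = rng.normal(size=288_000)
decimated = full[::2880]          # same stride used above -> 100 points

print(full.mean(), full.std())
print(decimated.mean(), decimated.std())
```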
Resampling
If your data is random, then you can simply take a slice of the desired number of points:
End of explanation
"""
|
ozorich/phys202-2015-work | assignments/midterm/InteractEx06.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    return 1 / (np.exp((energy - mu) / kT) + 1)
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) =\frac{1}{e^{(\epsilon-\mu)/kT}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
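Before writing the graded version, two limiting cases are worth keeping in mind: at $\epsilon = \mu$ the distribution is exactly $1/2$, and as $kT \to 0$ it approaches a step function. A quick sketch:

```python
import numpy as np

def fermidist(energy, mu, kT):
    """Vectorized Fermi-Dirac distribution (no explicit loops needed)."""
    return 1.0 / (np.exp((energy - mu) / kT) + 1.0)

print(fermidist(1.0, 1.0, 0.5))                      # 0.5 exactly at energy == mu
print(fermidist(np.array([0.0, 2.0]), 1.0, 0.01))    # ~[1, 0]: near-step for small kT
```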
End of explanation
"""
def plot_fermidist(mu, kT):
    energy = np.linspace(0.0, 10.0, 50)
    distr = fermidist(energy, mu, kT)
    plt.figure(figsize=(8, 6))
    plt.plot(energy, distr)
    ax = plt.gca()
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.get_xaxis().tick_bottom()
    ax.get_yaxis().tick_left()
    plt.xlim(0.0, 10.0)
    plt.ylim(0.0, 1.05)
    plt.title('Fermi Distribution')
    plt.xlabel('Energy')
    plt.ylabel('Distribution')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
interact(plot_fermidist, mu=(0.0,5.0,0.1),kT=(0.1,10,0.1))
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
|
mholtrop/Phys605 | Python/Plotting/Plot_from_CSV_data.ipynb | gpl-3.0 | import pandas
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Plotting data from Excel or CSV files
The plotting capabilities of the Excel spreadsheet program are intended for business charts, and so leave a lot to be desired for plotting scientific data. Fortunately, producing scientific plots is relatively easy in Python with Matplotlib.
First we need to import the modules that we require. The Pandas module is intended for reading, manipulating, and writing data, and has a component to read and write Excel files. It is not the only module that does this; there is also a built-in module called csv that would work just as well, though differently. Since Pandas is widely used, I will demonstrate its use here.
We also import matplotlib.pyplot as plt, which allows us to do plotting. It is very comprehensive and well documented. Alternatives are Bokeh, which makes JavaScript-enabled interactive plots, and Plotly, which is a commercial plotting service.
End of explanation
"""
import os
os.listdir() # Show me what is in the directory
data_blue = pandas.read_excel("IV_Curve_Blue_LED.xlsx")
"""
Explanation: Once Pandas is imported, we can use it to read the data files. Here we read an Excel xlsx file with pandas.read_excel; to read the csv files that the AD produced, we will use pandas.read_csv below.
End of explanation
"""
data_blue.head()
"""
Explanation: The data from the file is now read into the data_blue variable. We can inspect what was in the file with the "head()" function, which will print the column titles and the first few rows.
End of explanation
"""
print(data_blue['Channel 1 (V)'][3])
"""
Explanation: To access individual values, we would specify which column, e.g. "Channel 1 (V)", and which row. We see that the stored values carry more precision than the head() function displayed.
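The same access can also be written with pandas' explicit indexers, which generalize to label- and position-based lookups. A small sketch with a stand-in DataFrame (the column names are assumed to match the file above):

```python
import pandas as pd

# Stand-in for the spreadsheet data:
df = pd.DataFrame({"Channel 1 (V)": [0.1, 0.2, 0.3, 0.4],
                   "Math 1 (mA)":   [0.0, 0.1, 0.5, 2.0]})

print(df.loc[2, "Channel 1 (V)"])    # label-based lookup -> 0.3
print(df["Math 1 (mA)"].iloc[-1])    # position-based lookup -> 2.0
```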
End of explanation
"""
plt.figure(figsize=(10,7)) # Set the size of your plot. It will determine the relative size of all the labels.
plt.plot(data_blue['Channel 1 (V)'],data_blue['Math 1 (mA)'],label="Blue Led") # Plot a curve.
plt.xlabel("V [V]")
plt.ylabel("I [mA]")
plt.title("I-V curves for LEDs")
plt.show()
"""
Explanation: We now want to plot two of the columns against each other. We want "Math 1 (mA)" on the y axis and "Channel 1 (V)" on the x-axis. We tell plt (matplotlib.pyplot) that we want a figure, we plot the data, we label the axes and give the plot a title, then show the result.
End of explanation
"""
data_green = pandas.read_csv("IV_Curve_Green_LED.csv")
data_red = pandas.read_csv("IV_Curve_Red_LED.csv")
data_orange = pandas.read_csv("IV_Curve_Orange_LED.csv")
data_aqua= pandas.read_csv("IV_Curve_Aqua_LED.csv")
data_violet= pandas.read_csv("IV_Curve_Violet_LED.csv")
data_yellow= pandas.read_csv("IV_Curve_Yellow_LED.csv")
data_green.head()
plt.figure(figsize=(10,7)) # Set the size of your plot. It will determine the relative size of all the labels.
plt.plot(data_violet['Channel 1 (V)'],data_violet['Math 1 (mA)'],color="violet",label="Violet Led")
plt.plot(data_blue['Channel 1 (V)'],data_blue['Math 1 (mA)'],color="blue",label="Blue Led") # Plot a curve.
plt.plot(data_aqua['Channel 1 (V)'],data_aqua['Math 1 (mA)'],color="aqua",label="Aqua Led")
plt.plot(data_green['Channel 1 (V)'],data_green['Math 1 (mA)'],color="green",label="Green Led") # Plot a curve.
plt.plot(data_yellow['Channel 1 (V)'],data_yellow['Math 1 (mA)'],color="yellow",label="Yellow Led")
plt.plot(data_orange['Channel 1 (V)'],data_orange['Math 1 (mA)'],color="orange",label="Orange Led")
plt.plot(data_red['Channel 1 (V)'],data_red['Math 1 (mA)'],color="red",label="Red Led")
plt.xlabel("V [V]")
plt.xlim((-0.5,3.5))
plt.ylabel("I [mA]")
plt.legend(loc="upper left")
plt.title("I-V curves for LEDs")
plt.savefig("LED_curves.pdf")
plt.show()
"""
Explanation: Depending on the oscilloscope settings, you can get the occasional artifact in the graph. Note that the wave generator swept back and forth between -5V and 5V, and where it started and stopped depended on details like the time scale and the trigger settings. If your curve does not look like this, you may need to limit the data set to a range where the voltage sweep went from a negative to a positive value, and nothing else.
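One way to restrict the data to a single negative-to-positive sweep is to slice between the indices of the minimum and maximum voltage. This is only a sketch; it assumes one clean sweep per file and the column name used above:

```python
import pandas as pd

def rising_sweep(df, vcol="Channel 1 (V)"):
    """Keep only the rows between the lowest and highest voltage sample."""
    lo, hi = sorted((df[vcol].idxmin(), df[vcol].idxmax()))
    return df.loc[lo:hi]

# Example with a synthetic triangle sweep:
sweep = pd.DataFrame({"Channel 1 (V)": [0, -2, -5, -1, 3, 5, 2, 0]})
print(rising_sweep(sweep)["Channel 1 (V)"].tolist())  # [-5, -1, 3, 5]
```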
To overlay multiple curves, we repeat the same statements for each data file, reading each one and then plotting all of the results in the same figure.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cas/cmip6/models/fgoals-f3-l/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-l', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-L
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are handled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
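For a property with cardinality 0.N such as this one, the template's "PROPERTY VALUE(S)" comment suggests (though the source does not state it explicitly) that each value is recorded with its own `DOC.set_value` call. A hedged, self-contained sketch follows; `_Doc` is a hypothetical stand-in for the notebook's real `DOC` helper, and the language entries are purely illustrative, not real model metadata.

```python
# Hypothetical sketch: a 0.N property can carry several values,
# presumably one set_value call per entry. "_Doc" is a stand-in
# for the real DOC helper, included only to keep this runnable.
class _Doc:
    def __init__(self):
        self.values = {}      # property id -> list of recorded values
        self._current = None
    def set_id(self, prop_id):
        self._current = prop_id
        self.values.setdefault(prop_id, [])
    def set_value(self, value):
        self.values[self._current].append(value)

DOC = _Doc()
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
DOC.set_value("Fortran 90")  # illustrative entries only
DOC.set_value("C")
```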
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean-colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
sbu-python-summer/python-tutorial | day-2/python-classes.ipynb | bsd-3-clause
class Container(object):
pass
a = Container()
a.x = 1
a.y = 2
a.z = 3
b = Container()
b.xyz = 1
b.uvw = 2
print(a.x, a.y, a.z)
print(b.xyz, b.uvw)
"""
Explanation: Classes
Classes are the fundamental concept of object-oriented programming. A class defines a data type with both data and functions that can operate on that data. An object is an instance of a class. Each object has its own namespace (separate from other instances of the class and from other functions, etc. in your program).
We use the dot operator, ., to access members of the class (data or functions). We've already been doing this a lot: strings, ints, lists, ... are all objects in python.
simplest example: just a container (like a struct in C)
End of explanation
"""
class Student(object):
def __init__(self, name, grade=None):
self.name = name
self.grade = grade
"""
Explanation: notice that you don't have to declare what variables are members of the class ahead of time (although, usually that's good practice).
Here, we give the class name an argument, object. This is an example of inheritance. For a general class, we inherit from the base python object class.
More useful class
Here's a class that holds some student info
End of explanation
"""
students = []
students.append(Student("fry", "F-"))
students.append(Student("leela", "A"))
students.append(Student("zoidberg", "F"))
students.append(Student("hubert", "C+"))
students.append(Student("bender", "B"))
students.append(Student("calculon", "C"))
students.append(Student("amy", "A"))
students.append(Student("hermes", "A"))
students.append(Student("scruffy", "D"))
students.append(Student("flexo", "F"))
students.append(Student("morbo", "D"))
students.append(Student("hypnotoad", "A+"))
students.append(Student("zapp", "Q"))
"""
Explanation: Let's create a bunch of them, stored in a list
End of explanation
"""
As = [q.name for q in students if q.grade.startswith("A")]
As
"""
Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>
Loop over the students in the students list and print out the name and grade of each student, one per line.
<hr>
We can use list comprehensions with our list of objects. For example, let's find all the students who have A's
End of explanation
"""
class Card(object):
def __init__(self, suit=1, rank=2):
if suit < 1 or suit > 4:
print("invalid suit, setting to 1")
suit = 1
self.suit = suit
self.rank = rank
def value(self):
""" we want things order primarily by rank then suit """
return self.suit + (self.rank-1)*14
# we include this to allow for comparisons with < and > between cards
def __lt__(self, other):
return self.value() < other.value()
def __unicode__(self):
suits = [u"\u2660", # spade
u"\u2665", # heart
u"\u2666", # diamond
u"\u2663"] # club
r = str(self.rank)
if self.rank == 11:
r = "J"
elif self.rank == 12:
r = "Q"
elif self.rank == 13:
r = "K"
elif self.rank == 14:
r = "A"
return r +':'+suits[self.suit-1]
def __str__(self):
return self.__unicode__() #.encode('utf-8')
"""
Explanation: Playing Cards
here's a more complicated class that represents a playing card. Notice that we are using unicode to represent the suits.
unicode support in python is also one of the major differences between python 2 and 3. In python 3, every string is unicode.
End of explanation
"""
c1 = Card()
"""
Explanation: When you instantiate a class, the __init__ method is called. Note that all methods in a class always have "self" as the first argument -- this refers to the object that is invoking the method.
we can create a card easily.
End of explanation
"""
c2 = Card(suit=1, rank=13)
"""
Explanation: We can pass arguments to __init__ when we set up the class:
End of explanation
"""
c1.value()
c3 = Card(suit=0, rank=4)
"""
Explanation: Once we have our object, we can access any of the functions in the class using the dot operator
End of explanation
"""
print(c1)
print(c2)
"""
Explanation: The __str__ method converts the object into a string that can be printed. The __unicode__ method is actually for python 2.
End of explanation
"""
print(c1 > c2)
print(c1 < c2)
"""
Explanation: the value method assigns a value to the object that can be used in comparisons, and the __lt__ method is what does the actual comparing
End of explanation
"""
c1 + c2
"""
Explanation: Note that not every operator is defined for our class, so, for instance, we cannot add two cards together:
End of explanation
"""
import random
class Deck(object):
""" the deck is a collection of cards """
def __init__(self):
self.nsuits = 4
self.nranks = 13
self.minrank = 2
self.maxrank = self.minrank + self.nranks - 1
self.cards = []
for rank in range(self.minrank,self.maxrank+1):
for suit in range(1, self.nsuits+1):
self.cards.append(Card(rank=rank, suit=suit))
def shuffle(self):
random.shuffle(self.cards)
def get_cards(self, num=1):
hand = []
for n in range(num):
hand.append(self.cards.pop())
return hand
def __str__(self):
string = ""
for c in self.cards:
string += str(c) + " "
return string
"""
Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>
Create a "hand" corresponding to a straight (5 cards of any suite, but in sequence of rank)
Create another hand corresponding to a flush (5 cards all of the same suit, of any rank)
Finally create a hand with one of the cards duplicated—this should not be allowed in a standard deck of cards. How would you check for this?
<hr>
Deck of Cards
classes can include other classes as data objects—here's a deck of cards. Note that we are using the python random module here.
End of explanation
"""
mydeck = Deck()
print(mydeck)
print(len(mydeck.cards))
"""
Explanation: let's create a deck, shuffle, and deal a hand (for a poker game)
End of explanation
"""
mydeck.shuffle()
hand = mydeck.get_cards(5)
for c in sorted(hand): print(c)
"""
Explanation: notice that there is no error handling in this class. The get_cards() method deals cards from the deck, removing them in the process. Eventually we'll run out of cards.
End of explanation
"""
class Currency(object):
""" a simple class to hold foreign currency """
def __init__(self, amount, country="US"):
self.amount = amount
self.country = country
def __add__(self, other):
return Currency(self.amount + other.amount, country=self.country)
def __str__(self):
return "{} {}".format(self.amount, self.country)
"""
Explanation: Operators
We can define operations like + and - that work on our objects. Here's a simple example of currency—we keep track of the country and the amount
End of explanation
"""
d1 = Currency(10, "US")
d2 = Currency(15, "US")
print(d1 + d2)
"""
Explanation: We can now create some monetary amounts for different countries
End of explanation
"""
import math
class Vector(object):
""" a general two-dimensional vector """
def __init__(self, x, y):
print("in __init__")
self.x = x
self.y = y
def __str__(self):
print("in __str__")
return "({} î + {} ĵ)".format(self.x, self.y)
def __repr__(self):
print("in __repr__")
return "Vector({}, {})".format(self.x, self.y)
def __add__(self, other):
print("in __add__")
if isinstance(other, Vector):
return Vector(self.x + other.x, self.y + other.y)
else:
# it doesn't make sense to add anything but two vectors
print("we don't know how to add a {} to a Vector".format(type(other)))
raise NotImplementedError
def __sub__(self, other):
print("in __sub__")
if isinstance(other, Vector):
return Vector(self.x - other.x, self.y - other.y)
else:
            # it doesn't make sense to subtract anything but two vectors
            print("we don't know how to subtract a {} from a Vector".format(type(other)))
raise NotImplementedError
def __mul__(self, other):
print("in __mul__")
if isinstance(other, int) or isinstance(other, float):
# scalar multiplication changes the magnitude
return Vector(other*self.x, other*self.y)
else:
print("we don't know how to multiply two Vectors")
raise NotImplementedError
def __matmul__(self, other):
print("in __matmul__")
# a dot product
if isinstance(other, Vector):
return self.x*other.x + self.y*other.y
else:
print("matrix multiplication not defined")
raise NotImplementedError
def __rmul__(self, other):
print("in __rmul__")
return self.__mul__(other)
def __truediv__(self, other):
print("in __truediv__")
        # we only know how to divide by a scalar
        if isinstance(other, int) or isinstance(other, float):
            return Vector(self.x/other, self.y/other)
        else:
            print("we don't know how to divide a Vector by a {}".format(type(other)))
            raise NotImplementedError
def __abs__(self):
print("in __abs__")
return math.sqrt(self.x**2 + self.y**2)
def __neg__(self):
print("in __neg__")
return Vector(-self.x, -self.y)
def cross(self, other):
# a vector cross product -- we return the magnitude, since it will
# be in the z-direction, but we are only 2-d
return abs(self.x*other.y - self.y*other.x)
"""
Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>
As written, our Currency class has a bug—it does not check whether the amounts are in the same country before adding. Modify the __add__ method to first check if the countries are the same. If they are, return the new Currency object with the sum, otherwise, return None.
<hr>
<span class="fa fa-star"></span> Vectors Example
Here we write a class to represent 2-d vectors. Vectors have a direction and a magnitude. We can represent them as a pair of numbers, the x and y components, which we'll store as attributes on the class.
We want our class to do all the basic operations we do with vectors: add them, multiply by a scalar, cross product, dot product, return the magnitude, etc.
We'll use the math module to provide some basic functions we might need (like sqrt)
This example will show us how to overload the standard operations in python. Here's a list of the builtin methods:
https://docs.python.org/3/reference/datamodel.html
To make it really clear what's being called when, I've added prints in each of the functions
End of explanation
"""
v = Vector(1,2)
v
print(v)
"""
Explanation: This is a basic class that provides two methods __str__ and __repr__ to show a representation of it. There was some discussion of this on slack. These two functions provide a readable version of our object.
The convention is that __str__ is human readable while __repr__ should be a form that can be used to recreate the object (e.g., via eval()). See:
http://stackoverflow.com/questions/1436703/difference-between-str-and-repr-in-python
End of explanation
"""
abs(v)
"""
Explanation: Vectors have a length, and we'll use the abs() builtin to provide the magnitude. For a vector:
$$
\vec{v} = \alpha \hat{i} + \beta \hat{j}
$$
we have
$$
|\vec{v}| = \sqrt{\alpha^2 + \beta^2}
$$
End of explanation
"""
u = Vector(3,5)
w = u + v
print(w)
u - v
"""
Explanation: Let's look at mathematical operations on vectors now. We want to be able to add and subtract two vectors as well as multiply and divide by a scalar.
End of explanation
"""
u + 2.0
"""
Explanation: It doesn't make sense to add a scalar to a vector, so we didn't implement this -- what happens?
End of explanation
"""
u*2.0
2.0*u
"""
Explanation: Now multiplication. It makes sense to multiply by a scalar, but there are multiple ways to define multiplication of two vectors.
Note that python provides both a __mul__ and a __rmul__ function to define what happens when we multiply a vector by a quantity and what happens when we multiply something else by a vector.
End of explanation
"""
u/5.0
5.0/u
"""
Explanation: and division: __truediv__ implements true division (/), while __floordiv__ implements floor division (//), which is what plain / did in python 2.
Dividing a scalar by a vector doesn't make sense:
End of explanation
"""
u @ v
"""
Explanation: Python 3.5 introduced a new matrix multiplication operator, @ -- we'll use this to implement a dot product between two vectors:
End of explanation
"""
u.cross(v)
"""
Explanation: For a cross product, we don't have an obvious operator, so we'll use a function. For 2-d vectors, this will result in a scalar
End of explanation
"""
-u
"""
Explanation: Finally, negation is a separate operation:
End of explanation
"""
|
yandexdataschool/gumbel_lstm | normal_lstm.ipynb | mit | %env THEANO_FLAGS="device=gpu3"
import numpy as np
import theano
import theano.tensor as T
import lasagne
import os
"""
Explanation: Contents
We train an LSTM with gumbel-sigmoid gates on a toy language modelling problem.
Such an LSTM can then be binarized to reach significantly greater speed.
End of explanation
"""
start_token = " "
with open("mtg_card_names.txt") as f:
names = f.read()[:-1].split('\n')
names = [start_token+name for name in names]
print 'n samples = ',len(names)
for x in names[::1000]:
print x
"""
Explanation: Generate mtg cards
Regular RNN language modelling done by LSTM with "binary" gates
End of explanation
"""
#all unique characters go here
token_set = set()
for name in names:
for letter in name:
token_set.add(letter)
tokens = list(token_set)
print 'n_tokens = ',len(tokens)
#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
token_to_id = {t:i for i,t in enumerate(tokens) }
#!id_to_token = < dictionary of symbol identifier -> symbol itself>
id_to_token = {i:t for i,t in enumerate(tokens)}
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(map(len,names),bins=25);
# truncate names longer than MAX_LEN characters.
MAX_LEN = min([60,max(list(map(len,names)))])
#ADJUST IF YOU ARE UP TO SOMETHING SERIOUS
"""
Explanation: Text processing
End of explanation
"""
names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names))
#crop long names and pad short ones
for i in range(len(names_ix)):
names_ix[i] = names_ix[i][:MAX_LEN] #crop too long
if len(names_ix[i]) < MAX_LEN:
names_ix[i] += [token_to_id[" "]]*(MAX_LEN - len(names_ix[i])) #pad too short
assert len(set(map(len,names_ix)))==1
names_ix = np.array(names_ix)
"""
Explanation: Cast everything from symbols into identifiers
End of explanation
"""
from agentnet import Recurrence
from lasagne.layers import *
from agentnet.memory import *
from agentnet.resolver import ProbabilisticResolver
from gumbel_sigmoid import GumbelSigmoid
sequence = T.matrix('token sequence','int64')
inputs = sequence[:,:-1]
targets = sequence[:,1:]
l_input_sequence = InputLayer(shape=(None, None),input_var=inputs)
"""
Explanation: Input variables
End of explanation
"""
###One step of rnn
class rnn:
n_hid = 100
#inputs
inp = InputLayer((None,),name='current character')
prev_cell = InputLayer((None,n_hid),name='previous lstm cell')
    prev_hid = InputLayer((None,n_hid),name='previous lstm output')
#recurrent part
emb = EmbeddingLayer(inp, len(tokens), 30,name='emb')
new_cell,new_hid = LSTMCell(prev_cell,prev_hid,emb,
name="rnn")
next_token_probas = DenseLayer(new_hid,len(tokens),nonlinearity=T.nnet.softmax)
#pick next token from predicted probas
next_token = ProbabilisticResolver(next_token_probas)
"""
Explanation: Build NN
You'll be building a model that takes token sequence and predicts next tokens at each tick
This is basically equivalent to how rnn step was described in the lecture
End of explanation
"""
training_loop = Recurrence(
state_variables={rnn.new_hid:rnn.prev_hid,
rnn.new_cell:rnn.prev_cell},
input_sequences={rnn.inp:l_input_sequence},
tracked_outputs=[rnn.next_token_probas,],
unroll_scan=False,
)
# Model weights
weights = lasagne.layers.get_all_params(training_loop,trainable=True)
print weights
predicted_probabilities = lasagne.layers.get_output(training_loop[rnn.next_token_probas])
#If you use dropout do not forget to create deterministic version for evaluation
loss = lasagne.objectives.categorical_crossentropy(predicted_probabilities.reshape((-1,len(tokens))),
targets.reshape((-1,))).mean()
#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>
updates = lasagne.updates.adam(loss,weights)
#training
train_step = theano.function([sequence], loss,
updates=training_loop.get_automatic_updates()+updates)
"""
Explanation: Loss && Training
End of explanation
"""
n_steps = T.scalar(dtype='int32')
feedback_loop = Recurrence(
state_variables={rnn.new_cell:rnn.prev_cell,
rnn.new_hid:rnn.prev_hid,
rnn.next_token:rnn.inp},
tracked_outputs=[rnn.next_token_probas,],
batch_size=1,
n_steps=n_steps,
unroll_scan=False,
)
generated_tokens = get_output(feedback_loop[rnn.next_token])
generate_sample = theano.function([n_steps],generated_tokens,updates=feedback_loop.get_automatic_updates())
def generate_string(length=MAX_LEN):
output_indices = generate_sample(length)[0]
return ''.join(tokens[i] for i in output_indices)
generate_string()
"""
Explanation: generation
here we re-wire the recurrent network so that its output is fed back to its input
End of explanation
"""
def sample_batch(data, batch_size):
rows = data[np.random.randint(0,len(data),size=batch_size)]
return rows
print("Training ...")
#total N iterations
n_epochs=100
# how many minibatches are there in the epoch
batches_per_epoch = 500
#how many training sequences are processed in a single function call
batch_size=32
loss_history = []
for epoch in xrange(n_epochs):
avg_cost = 0;
for _ in range(batches_per_epoch):
avg_cost += train_step(sample_batch(names_ix,batch_size))
loss_history.append(avg_cost)
print("\n\nEpoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
print "Generated names"
for i in range(10):
print generate_string(),
plt.plot(loss_history)
"""
Explanation: Model training
Here you can tweak parameters or insert your generation function
Once something word-like starts generating, try increasing seq_length
End of explanation
"""
|
cathalmccabe/PYNQ | docs/source/getting_started/jupyter_notebooks_advanced_features.ipynb | bsd-3-clause | import random
the_number = random.randint(0, 10)
guess = -1
name = input('Player what is your name? ')
while guess != the_number:
guess_text = input('Guess a number between 0 and 10: ')
guess = int(guess_text)
if guess < the_number:
print(f'Sorry {name}, your guess of {guess} was too LOW.\n')
elif guess > the_number:
print(f'Sorry {name}, your guess of {guess} was too HIGH.\n')
else:
print(f'Excellent work {name}, you won, it was {guess}!\n')
print('Done')
"""
Explanation: Jupyter Notebooks Advanced Features
<div class="alert bg-primary">PYNQ notebook front end allows interactive coding, output visualizations and documentation using text, equations, images, video and other rich media.</div>
<div class="alert bg-primary">Code, analysis, debug, documentation and demos are all alive, editable and connected in the Notebooks.</div>
## Contents
Live, Interactive Cell for Python Coding
Guess that Number
Generate Fibonacci numbers
Plotting Output
Interactive input and output analysis
Interactive debug
Rich Output Media
Display Images
Render SVG images
Audio Playback
Add Video
Add webpages as Interactive Frames
Render Latex
Interactive Plots and Visualization
Matplotlib
Notebooks are not just for Python
Access to linux shell commands
Shell commands in python code
Python variables in shell commands
Magics
Timing code using magics
Coding other languages
Contents
Live, Interactive Python Coding
Guess that number game
Run the cell to play
Cell can be run by selecting the cell and pressing Shift+Enter
End of explanation
"""
def generate_fibonacci_list(limit, output=False):
nums = []
current, ne_xt = 0, 1
while current < limit:
current, ne_xt = ne_xt, ne_xt + current
nums.append(current)
if output == True:
print(f'{len(nums[:-1])} Fibonacci numbers below the number '
f'{limit} are:\n{nums[:-1]}')
return nums[:-1]
limit = 1000
fib = generate_fibonacci_list(limit, True)
"""
Explanation: Contents
Generate Fibonacci numbers
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import *
limit = 1000000
fib = generate_fibonacci_list(limit)
plt.plot(fib)
plt.plot(range(len(fib)), fib, 'ro')
plt.show()
"""
Explanation: Contents
Plotting Fibonacci numbers
Plotting is done using the matplotlib library
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import *
def update(limit, print_output):
i = generate_fibonacci_list(limit, print_output)
plt.plot(range(len(i)), i)
plt.plot(range(len(i)), i, 'ro')
plt.show()
limit=widgets.IntSlider(min=10,max=1000000,step=1,value=10)
interact(update, limit=limit, print_output=False);
"""
Explanation: Contents
Interactive input and output analysis
Input and output interaction can be achieved using Ipython widgets
End of explanation
"""
from IPython.core.debugger import set_trace
def debug_fibonacci_list(limit):
nums = []
current, ne_xt = 0, 1
while current < limit:
if current > 1000:
set_trace()
current, ne_xt = ne_xt, ne_xt + current
nums.append(current)
print(f'The fibonacci numbers below the number {limit} are:\n{nums[:-1]}')
debug_fibonacci_list(10000)
"""
Explanation: Contents
Interactive debug
Uses set_trace from the Ipython debugger library
Type 'h' in debug prompt for the debug commands list and 'q' to exit
End of explanation
"""
from IPython.display import SVG
SVG(filename='images/python.svg')
"""
Explanation: Contents
Rich Output Media
Display images
Images can be displayed using combination of HTML, Markdown, PNG, JPG, etc.
Image below is displayed in a markdown cell which is rendered at startup.
Contents
Render SVG images
SVG image is rendered in a code cell using Ipython display library.
End of explanation
"""
import numpy as np
from IPython.display import Audio
framerate = 44100
t = np.linspace(0,5,framerate*5)
data = np.sin(2*np.pi*220*t**2)
Audio(data,rate=framerate)
"""
Explanation: Contents
Audio Playback
IPython.display.Audio lets you play audio directly in the notebook
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('ooOLl4_H-IE')
"""
Explanation: Contents
Add Video
IPython.display.YouTubeVideo lets you play Youtube video directly in the notebook. Library support is available to play Vimeo and local videos as well
End of explanation
"""
from IPython.display import IFrame
IFrame('https://pynq.readthedocs.io/en/latest/getting_started.html',
width='100%', height=500)
"""
Explanation: Video Link with image display
<a href="https://www.youtube.com/watch?v=ooOLl4_H-IE">
<img src="http://img.youtube.com/vi/ooOLl4_H-IE/0.jpg" width="400" height="400" align="left"></a>
Contents
Add webpages as Interactive Frames
Embed an entire page from another site in an iframe; for example this is the PYNQ documentation page on readthedocs
End of explanation
"""
%%latex
\begin{align} P(Y=i|x, W,b) = softmax_i(W x + b)= \frac {e^{W_i x + b_i}}
{\sum_j e^{W_j x + b_j}}\end{align}
"""
Explanation: Contents
Render Latex
Display of mathematical expressions typeset in LaTeX for documentation.
End of explanation
"""
from IPython.display import IFrame
IFrame('https://matplotlib.org/gallery/index.html', width='100%', height=500)
"""
Explanation: Contents
Interactive Plots and Visualization
Plotting and Visualization can be achieved using various available python libraries such as Matplotlib, Bokeh, Seaborn, etc.
Below is shown a Iframe of the Matplotlib website. Navigate to 'gallery' and choose a plot to run in the notebook
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.spines import Spine
from matplotlib.projections.polar import PolarAxes
from matplotlib.projections import register_projection
def radar_factory(num_vars, frame='circle'):
"""Create a radar chart with `num_vars` axes.
This function creates a RadarAxes projection and registers it.
Parameters
----------
num_vars : int
Number of variables for radar chart.
frame : {'circle' | 'polygon'}
Shape of frame surrounding axes.
"""
# calculate evenly-spaced axis angles
theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False)
def draw_poly_patch(self):
# rotate theta such that the first axis is at the top
verts = unit_poly_verts(theta + np.pi / 2)
return plt.Polygon(verts, closed=True, edgecolor='k')
def draw_circle_patch(self):
# unit circle centered on (0.5, 0.5)
return plt.Circle((0.5, 0.5), 0.5)
patch_dict = {'polygon': draw_poly_patch, 'circle': draw_circle_patch}
if frame not in patch_dict:
raise ValueError('unknown value for `frame`: %s' % frame)
class RadarAxes(PolarAxes):
name = 'radar'
# use 1 line segment to connect specified points
RESOLUTION = 1
# define draw_frame method
draw_patch = patch_dict[frame]
def __init__(self, *args, **kwargs):
super(RadarAxes, self).__init__(*args, **kwargs)
# rotate plot such that the first axis is at the top
self.set_theta_zero_location('N')
def fill(self, *args, **kwargs):
"""Override fill so that line is closed by default"""
closed = kwargs.pop('closed', True)
return super(RadarAxes, self).fill(closed=closed, *args, **kwargs)
def plot(self, *args, **kwargs):
"""Override plot so that line is closed by default"""
lines = super(RadarAxes, self).plot(*args, **kwargs)
for line in lines:
self._close_line(line)
def _close_line(self, line):
x, y = line.get_data()
# FIXME: markers at x[0], y[0] get doubled-up
if x[0] != x[-1]:
x = np.concatenate((x, [x[0]]))
y = np.concatenate((y, [y[0]]))
line.set_data(x, y)
def set_varlabels(self, labels):
self.set_thetagrids(np.degrees(theta), labels)
def _gen_axes_patch(self):
return self.draw_patch()
def _gen_axes_spines(self):
if frame == 'circle':
return PolarAxes._gen_axes_spines(self)
# The following is a hack to get the spines (i.e. the axes frame)
# to draw correctly for a polygon frame.
# spine_type must be 'left', 'right', 'top', 'bottom', or `circle`.
spine_type = 'circle'
verts = unit_poly_verts(theta + np.pi / 2)
# close off polygon by repeating first vertex
verts.append(verts[0])
path = Path(verts)
spine = Spine(self, spine_type, path)
spine.set_transform(self.transAxes)
return {'polar': spine}
register_projection(RadarAxes)
return theta
def unit_poly_verts(theta):
"""Return vertices of polygon for subplot axes.
This polygon is circumscribed by a unit circle centered at (0.5, 0.5)
"""
x0, y0, r = [0.5] * 3
verts = [(r*np.cos(t) + x0, r*np.sin(t) + y0) for t in theta]
return verts
def example_data():
# The following data is from the Denver Aerosol Sources and Health study.
# See doi:10.1016/j.atmosenv.2008.12.017
#
# The data are pollution source profile estimates for five modeled
# pollution sources (e.g., cars, wood-burning, etc) that emit 7-9 chemical
# species. The radar charts are experimented with here to see if we can
# nicely visualize how the modeled source profiles change across four
# scenarios:
# 1) No gas-phase species present, just seven particulate counts on
# Sulfate
# Nitrate
# Elemental Carbon (EC)
# Organic Carbon fraction 1 (OC)
# Organic Carbon fraction 2 (OC2)
# Organic Carbon fraction 3 (OC3)
# Pyrolized Organic Carbon (OP)
# 2)Inclusion of gas-phase specie carbon monoxide (CO)
# 3)Inclusion of gas-phase specie ozone (O3).
# 4)Inclusion of both gas-phase species is present...
data = [
['Sulfate', 'Nitrate', 'EC', 'OC1', 'OC2', 'OC3', 'OP', 'CO', 'O3'],
('Basecase', [
[0.88, 0.01, 0.03, 0.03, 0.00, 0.06, 0.01, 0.00, 0.00],
[0.07, 0.95, 0.04, 0.05, 0.00, 0.02, 0.01, 0.00, 0.00],
[0.01, 0.02, 0.85, 0.19, 0.05, 0.10, 0.00, 0.00, 0.00],
[0.02, 0.01, 0.07, 0.01, 0.21, 0.12, 0.98, 0.00, 0.00],
[0.01, 0.01, 0.02, 0.71, 0.74, 0.70, 0.00, 0.00, 0.00]]),
('With CO', [
[0.88, 0.02, 0.02, 0.02, 0.00, 0.05, 0.00, 0.05, 0.00],
[0.08, 0.94, 0.04, 0.02, 0.00, 0.01, 0.12, 0.04, 0.00],
[0.01, 0.01, 0.79, 0.10, 0.00, 0.05, 0.00, 0.31, 0.00],
[0.00, 0.02, 0.03, 0.38, 0.31, 0.31, 0.00, 0.59, 0.00],
[0.02, 0.02, 0.11, 0.47, 0.69, 0.58, 0.88, 0.00, 0.00]]),
('With O3', [
[0.89, 0.01, 0.07, 0.00, 0.00, 0.05, 0.00, 0.00, 0.03],
[0.07, 0.95, 0.05, 0.04, 0.00, 0.02, 0.12, 0.00, 0.00],
[0.01, 0.02, 0.86, 0.27, 0.16, 0.19, 0.00, 0.00, 0.00],
[0.01, 0.03, 0.00, 0.32, 0.29, 0.27, 0.00, 0.00, 0.95],
[0.02, 0.00, 0.03, 0.37, 0.56, 0.47, 0.87, 0.00, 0.00]]),
('CO & O3', [
[0.87, 0.01, 0.08, 0.00, 0.00, 0.04, 0.00, 0.00, 0.01],
[0.09, 0.95, 0.02, 0.03, 0.00, 0.01, 0.13, 0.06, 0.00],
[0.01, 0.02, 0.71, 0.24, 0.13, 0.16, 0.00, 0.50, 0.00],
[0.01, 0.03, 0.00, 0.28, 0.24, 0.23, 0.00, 0.44, 0.88],
[0.02, 0.00, 0.18, 0.45, 0.64, 0.55, 0.86, 0.00, 0.16]])
]
return data
if __name__ == '__main__':
N = 9
theta = radar_factory(N, frame='polygon')
data = example_data()
spoke_labels = data.pop(0)
fig, axes = plt.subplots(figsize=(9, 9), nrows=2, ncols=2,
subplot_kw=dict(projection='radar'))
fig.subplots_adjust(wspace=0.25, hspace=0.20, top=0.85, bottom=0.05)
colors = ['b', 'r', 'g', 'm', 'y']
# Plot the four cases from the example data on separate axes
for ax, (title, case_data) in zip(axes.flatten(), data):
ax.set_rgrids([0.2, 0.4, 0.6, 0.8])
ax.set_title(title, weight='bold', size='medium', position=(0.5, 1.1),
horizontalalignment='center', verticalalignment='center')
for d, color in zip(case_data, colors):
ax.plot(theta, d, color=color)
ax.fill(theta, d, facecolor=color, alpha=0.25)
ax.set_varlabels(spoke_labels)
# add legend relative to top-left plot
ax = axes[0, 0]
labels = ('Factor 1', 'Factor 2', 'Factor 3', 'Factor 4', 'Factor 5')
legend = ax.legend(labels, loc=(0.9, .95),
labelspacing=0.1, fontsize='small')
fig.text(0.5, 0.965, '5-Factor Solution Profiles Across Four Scenarios',
horizontalalignment='center', color='black', weight='bold',
size='large')
plt.show()
"""
Explanation: Contents
Matplotlib
Below we run the code available under examples --> Matplotlib API --> Radar_chart in the above webpage
Link to Radar chart
End of explanation
"""
!cat /proc/cpuinfo
"""
Explanation: Contents
Notebooks are not just for Python
Access to linux shell commands
<div class="alert alert-info">Starting a code cell with a bang character, e.g. `!`, instructs jupyter to treat the code on that line as an OS shell command</div>
System Information
End of explanation
"""
!cat /etc/os-release | grep VERSION
"""
Explanation: Verify Linux Version
End of explanation
"""
!head -5 /proc/cpuinfo | grep "BogoMIPS"
"""
Explanation: CPU speed calculation made by the Linux kernel
End of explanation
"""
!cat /proc/meminfo | grep 'Mem*'
"""
Explanation: Available DRAM
End of explanation
"""
!ifconfig
"""
Explanation: Network connection
End of explanation
"""
!pwd
!echo --------------------------------------------
!ls -C --color
"""
Explanation: Directory Information
End of explanation
"""
files = !ls | head -3
print(files)
"""
Explanation: Contents
Shell commands in python code
End of explanation
"""
shell_nbs = '*.ipynb | grep "ipynb"'
!ls {shell_nbs}
"""
Explanation: Python variables in shell commands
By enclosing a Python expression within {}, i.e. curly braces, we can substitute it into shell commands
End of explanation
"""
%lsmagic
"""
Explanation: Contents
Magics
IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.
To learn more about the IPython magics, simple type %magic in a separate cell
Below is a list of available magics
End of explanation
"""
import random
L = [random.random() for _ in range(100000)]
%time L.sort()
"""
Explanation: Contents
Timing code using magics
The following examples show how to call the built-in %time magic
%time times the execution of a single statement
Reference: The next two code cells are excerpted from the Python Data Science Handbook by Jake VanderPlas
Link to full handbook
Time the sorting on an unsorted list
A list of 100000 random numbers is sorted and stored in a variable 'L'
End of explanation
"""
%time L.sort()
"""
Explanation: Time the sorting of a pre-sorted list
The list 'L' which was sorted in previous cell is re-sorted to observe execution time, it is much less as expected
End of explanation
"""
%%bash
factorial()
{
if [ "$1" -gt "1" ]
then
i=`expr $1 - 1`
j=`factorial $i`
k=`expr $1 \* $j`
echo $k
else
echo 1
fi
}
input=5
val=$(factorial $input)
echo "Factorial of $input is : "$val
"""
Explanation: Contents
Coding other languages
If you want to, you can combine code from multiple kernels into one notebook.
Just use IPython Magics with the name of your kernel at the start of each cell that you want to use that Kernel for:
%%bash
%%HTML
%%python2
%%python3
%%ruby
%%perl
End of explanation
"""
|
drvinceknight/cfm | assets/assessment/mock/solution.ipynb | mit | ### BEGIN SOLUTION
import sympy as sym
a, b, c = sym.Symbol("a"), sym.Symbol("b"), sym.Symbol("c")
sym.expand((9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c))
### END SOLUTION
"""
Explanation: Computing for Mathematics - Mock individual coursework
This jupyter notebook contains questions that will resemble the questions in your individual coursework.
Important Do not delete the cells containing:
```
BEGIN SOLUTION
END SOLUTION
```
write your solution attempts in those cells.
If you would like to submit this notebook:
Change the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.
Write all your solution attempts in the correct locations;
Save the notebook (File>Save As);
Follow the instructions given in class/email to submit.
Question 1
Output the evaluation of the following expressions exactly.
a. \(\frac{(9a^2bc^4) ^ {\frac{1}{2}}}{6ab^{\frac{3}{2}}c}\)
End of explanation
"""
### BEGIN SOLUTION
sym.expand((sym.S(2) ** (sym.S(1) / 2) + 2) ** 2 - 2 ** (sym.S(5) / 2))
### END SOLUTION
"""
Explanation: b. \((2 ^ {\frac{1}{2}} + 2) ^ 2 - 2 ^ {\frac{5}{2}}\)
End of explanation
"""
### BEGIN SOLUTION
(sym.S(1) / 8) ** (sym.S(4) / 3)
### END SOLUTION
"""
Explanation: c. \((\frac{1}{8}) ^ {\frac{4}{3}}\)
End of explanation
"""
def expand(expression):
### BEGIN SOLUTION
"""
Take a symbolic expression and expands it.
"""
return sym.expand(expression)
### END SOLUTION
"""
Explanation: Question 2
Write a function expand that takes a given mathematical expression and returns the expanded expression.
End of explanation
"""
### BEGIN SOLUTION
a = sym.Symbol("a")
D = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])
### END SOLUTION
"""
Explanation: Question 3
The matrix \(D\) is given by \(D = \begin{pmatrix} 1 & 2 & a \\ 3 & 1 & 0 \\ 1 & 1 & 1\end{pmatrix}\) where \(a\ne 2\).
a. Create a variable D which has value the matrix \(D\).
End of explanation
"""
### BEGIN SOLUTION
D_inv = D.inv()
### END SOLUTION
"""
Explanation: b. Create a variable D_inv with value the inverse of \(D\).
End of explanation
"""
### BEGIN SOLUTION
b = sym.Matrix([[3], [4], [1]])
sym.simplify(D.inv() @ b).subs({a: 4})
### END SOLUTION
"""
Explanation: c. Using D_inv output the solution of the following system of equations:
\[
\begin{array}{r}
x + 2y + 4z = 3\\
3x + y = 4\\
x + y + z = 1\\
\end{array}
\]
End of explanation
"""
import random
def sample_experiment():
"""
Returns the throw type and whether it was caught
"""
### BEGIN SOLUTION
if random.random() < .25:
throw = "backhand"
probability_of_catch = .8
else:
throw = "forehand"
probability_of_catch = .9
caught = random.random() < probability_of_catch
### END SOLUTION
return throw, caught
"""
Explanation: Question 4
During a game of frisbee between a handler and their dog the handler chooses to randomly select if they throw using a backhand or a forehand: 25% of the time they will throw a backhand.
Because of the way their dog chooses to approach a flying frisbee they catch it with the following probabilities:
80% of the time when it is thrown using a backhand
90% of the time when it is thrown using a forehand
a. Write a function sample_experiment() that simulates a given throw and returns the throw type (as a string with value "backhand" or "forehand") and whether it was caught (as a boolean: either True or False).
End of explanation
"""
### BEGIN SOLUTION
number_of_repetitions = 1_000_000
random.seed(0)
samples = [sample_experiment() for repetition in range(number_of_repetitions)]
probability_of_catch = sum(catch is True for throw, catch in samples) / number_of_repetitions
### END SOLUTION
"""
Explanation: b. Using 1,000,000 samples create a variable probability_of_catch which has value an estimate for the probability of the frisbee being caught.
End of explanation
"""
### BEGIN SOLUTION
samples_with_drop = [(throw, catch) for throw, catch in samples if catch is False]
number_of_drops = len(samples_with_drop)
probability_of_forehand_given_drop = sum(throw == "forehand" for throw, catch in samples_with_drop) / number_of_drops
### END SOLUTION
"""
Explanation: c. Using the above, create a variable probability_of_forehand_given_drop which has value an estimate for the probability of the frisbee being thrown with a forehand given that it was not caught.
End of explanation
"""
|
mmaelicke/felis_python1 | felis_python1/lectures/05_Functions.ipynb | mit | print('Hello, Wolrd!')
print('This is Python.')
"""
Explanation: Functions
In Python it's really easy to define your own functions. Once defined, you can use them just like any standard Python function. By condensing functionality into a function, your code gets structured and is way more readable. Besides that, this specific chunk of code can be used over and over again with much less effort. A good function is so general and abstract that it can not only be used in the project it was written for, but in any Python project.
Every Python function is composed of some main parts: the function name, signature and the body. Optionally, a return value and arguments can be defined. The arguments are also referred to as attributes.
Syntax
The function syntax is as follows:
<code>
<span style="color: purple">def</span> name (attributes):
body
return value
</code>
End of explanation
"""
def greet():
return 'Hello, World!'
message = greet()
print(message)
"""
Explanation: group the two print statements into a print_all function
Extend print_all. It should accept a user name as an attribute and should substitute the World in the first print call with that name. Name the new function welcome.
Using the input function, we could first prompt for a name and then use it in welcome.
What makes more sense: extending welcome or defining a second function that prompts for a name and then passes it to welcome?
Execute this function in an endless loop until the phrase <span style='color:red'>'exit'</span> or <span style='color:red'>'Exit'</span> is passed by the user.
Return Values
Up to now, none of the defined functions returned a value. In Python, there is no need to declare a return value in the function signature (different from C++ or Java). It's also possible to return values only conditionally or to return a varying number of values.
Different to welcome, the function greet will return the 'Hello, World' phrase instead of printing it.
End of explanation
"""
def sum_a_and_b(a, b):
return a + b
print(sum_a_and_b(5, 3))
"""
Explanation: Rewrite welcome in the style of greet. It should substitute the user name and return the result.
Arguments
Using input in productive Python code is very uncommon. It makes the code complicated to reuse, as it will not always be executed in a Python console. Beyond that, it's more readable to define possible inputs as function arguments, as it makes debugging way easier. Then, it is up to the developer how the input value shall be obtained: the input function, a configuration file, a web form, a text file or a global variable could be used to get the value for the argument.<br>
The example below will take two numbers and return their sum. This is very clear and reusable.
End of explanation
"""
def print_arguments(arg1, arg2, arg3='Foo', arg4='Bar', g=9.81):
print('arg1:\t', arg1)
print('arg2:\t', arg2)
print('arg3:\t', arg3)
print('arg4:\t', arg4)
print('g:\t', g)
print('-' * 14)
"""
Explanation: A function argument can also be optional. Therefore a default value has to be defined in the signature. You would call this attribute an optional attribute or, more pythonic, a keyword argument. Without this default value, an argument is also called a positional argument, as Python can only reference it by the passing order. Thus, positional arguments cannot appear after keyword arguments. Nevertheless, the keyword arguments can be mixed in their order, as they can be identified by their keyword. As a last option, you can set any keyword argument in a positional manner, but then the order matters again.
End of explanation
"""
x = 5
print('x = 5\t\tmemory address:', hex(id(x)))
x = 6
print('x = 6\t\tmemory address:', hex(id(x)))
"""
Explanation: locals / globals
Another important topic for functions is variable validity and life span. A variable is alive until you overwrite or explicitly delete it. In case you re-assign a variable, the old life span ends, even if the variable type stays the same. That means the new value will allocate another position in memory.
End of explanation
"""
a = 5
def f():
b = 3
print('a: ', a, 'b: ', b)
# call
f()
b
"""
Explanation: Additionally, one can declare and assign a variable that is only valid in specific sections of your code. These sections of validity are called namespaces. Any function defines its own namespace (the function body) and any variable declared within that scope is not valid outside this namespace. Nevertheless, namespaces can be nested: any outer namespace is also valid inside the inner namespaces.
End of explanation
"""
a = 5
print('Global a: ', globals()['a'], '\t\tmemory address: ', hex(id(a)))
def f(b):
print('Local b:', locals()['b'], '\t\tmemory address: ', hex(id(b)))
f(a)
"""
Explanation: The running Python session also defines a namespace. This is called the global namespace. The built-in function <span style='color: blue'>globals</span> can be used to return the global namespace content as a <span style='color: green'>dict</span>. The <span style='color: blue'>locals</span> function does the same for the local (inner) namespaces.
End of explanation
"""
def get_attributes(*args, **kwargs):
return args, kwargs
a, b = get_attributes('foo', 'bar', g=9.81, version=2.7, idiot_president='Donald Trump')
print('args:', a)
print('kwargs:', b)
"""
Explanation: *args, **kwargs
Two very specific and important attributes are *args and **kwargs. You can use any name for these two variables, but it is highly recommended to use the default names to prevent confusion! The star is an operator that is also called the asterisk operator. The single operator packs or unpacks a varying number of positional arguments into a tuple, while the double operator does the same thing to keyword arguments and dictionaries. For a better understanding use the function below, which just returns the content of args and kwargs.
End of explanation
"""
list(map(lambda x:x**2, [1,2,3,4,5,6,7,8,9]))
list(map(lambda x:len(x), 'This is a sentence with rather with rather short and extraordinary long words.'.split()))
list(map(lambda x:(x,len(x)), 'This is a sentence with rather with rather short and extraordinary long words.'.split()))
"""
Explanation: lambda
The <span style='color: blue'>lambda</span> function is a special case, as it is an anonymous function. This is the only function without a function name. On the other side, a <span style='color: blue'>lambda</span> is restricted to a single expression and cannot contain statements. <span style='color: blue'>lambda</span>s are helpful to write short auxiliary functions, or inline functions that are used as arguments themselves. A second field of application are list comprehensions.
End of explanation
"""
def mean(data):
return sum(data) / len(data)
def get_aggregator(variable_name):
if variable_name.lower() == 'temperature':
return mean
elif variable_name.lower() == 'rainfall':
return sum
data = [2,4,7,9,2,3,5]
# Temperature data
agg = get_aggregator('Temperature')
print('Temperature: ', agg(data), '°C')
agg = get_aggregator('Rainfall')
print('Rainfall: ', agg(data), 'mm')
"""
Explanation: Returning functions
In Python it is also possible to define a function that will return another function. There is even a built-in function called <span style="color:purple">callable</span> that tests Python objects for being callable. Caution: this still does not guarantee that the object is a function, because Python also allows callable class instances.
The example below could be used in a data management environment.
End of explanation
"""
def factorial(n):
return n*factorial(n-1) if n > 1 else 1
"""
Explanation: Recursive functions
A recursive function calls itself in its body. This technique can dramatically decrease the code size and often brings the code very close to the mathematical algorithm, but there are also some downsides. If a 'break' condition is never met or implemented, the recursive calls will never stop. (In fact they will stop, because Python raises an error once the recursion limit is reached, but that also stops your code.) The other downside is the dramatic loss of performance for deeply nested calls. As an example, the factorial function is implemented below:
$$
n! = n \cdot (n-1)!~~~~~~~~~ n > 1,~~ 1! = 1
$$
End of explanation
"""
def fibonacci_iterative(n):
a,b = 0, 1
for i in range(n):
a,b = b, a + b
return a
def fibonacci_recursive(n):
pass
print('iterative:', fibonacci_iterative(10))
print('recursive:', fibonacci_recursive(10))
%time fibonacci_iterative(40)
%time fibonacci_recursive(40)
"""
Explanation: Now define the Fibonacci series into <span style="color:blue">fibonacci_recursive</span> below.
$$F_n = F_{n - 1} + F_{n - 2} $$ using the start values $F_0 = 0$ and $F_1 = 1$.
End of explanation
"""
def get_model(arid=True):
if arid:
a = 1.2
else:
a = 1.4
def _model(rain, evap, a=a):
return (rain - evap)**a if rain > evap else 0
return _model
rain_values = [3, 0, 0, 16, 4]
evap_values = [1, 1, 5, 3, 2]
print(list(map(get_model(arid=True), rain_values, evap_values)))
print(list(map(get_model(arid=False), rain_values, evap_values)))
"""
Explanation: Nested functions
In Python it is also possible to define a function within the namespace of another function. Just like variables, such a function also has a limited scope of validity. You can use this concept to define a very specific auxiliary function that cannot be used in any other scope, only in the namespace where it is valid. Then other developers cannot misuse your function.
The second use case is that an outer function wants to influence the definition of the inner function. The outer function is defined when your code is imported; the inner one only each time the outer is run. Therefore anything happening at runtime can still influence the definition of the inner function. Here, the default values of a model depend on the boundary conditions.
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | exercises_notebooks/TransientFlowToAWell.ipynb | gpl-3.0 | import numpy as np
from scipy.special import expi
#help(expi) # remove the first # to show the help for the function expi
"""
Explanation: Transient flow to a well
The Theis' well function (a well in a confined aquifer)
The Theis well function is perhaps the most famous and most often used practical analytical solution in groundwater science. It describes the transient flow to a fully penetrating well in a confined aquifer after the well starts pumping at time zero. The solution is also used for unconfined flow, but then it is an approximation that is good as long as the thickness of the aquifer does not change substantially, not more than 20%, say, from its initial value.
Figure: The situation considered by Theis (confined aquifer)
Figure: The situation considered by Theis (unconfined aquifer, s<<h)
In cases with wells that are only partially penetrating the aquifer, we can add the influence of that separately as we will see.
Although the solution was derived for a uniform and unchanging ambient groundwater head, it can still be applied in much more general situations, because we can use superposition; that is, we can add the influence of different and independent actors that change the groundwater level in space and/or in time separately. Therefore, if we can compute, with a solution like that of Theis, the effect of a single well anywhere in the aquifer at any time, we can do the same for an arbitrary number of wells, simply by adding their individual effects. Not only this, we can also superimpose other effects that are not due to wells, if we have their analytical solution available.
Governing partial differential equation solved by Theis
Theis solved the following partial differential equation
Figure: Situation to derive the partial differential equation
Continuity for a ring of width $dr$ at radius $r$, see figure, yields:
$$ \frac {\partial Q} {\partial r} = \frac \partial {\partial r} \left(-2 \pi r kD \frac {\partial \phi} {\partial r} \right)= - 2 \pi r S \frac {\partial \phi} {\partial t} $$
For convenience, use drawdown $s$ instead of head $\phi$
$$ s = \phi_0 - \phi $$
$$ \frac {\partial} {\partial r} \left( 2 \pi r kD \frac {\partial s} {\partial r} \right) = 2\pi r S \frac {\partial s} {\partial t} $$
$$ kD \frac {\partial} {\partial r} \left( r \frac {\partial s} {\partial r} \right) = r S \frac {\partial s} {\partial t} $$
Which yields the governing partial differential equation for transient horizontal flow to a well that starts pumping at a fixed flow $Q_0$ at $t=0$:
$$\frac 1 r \frac {\partial s} {\partial r} + \frac {\partial^2 s} {\partial r^2} = \frac S {kD} \frac {\partial s} {\partial t}$$
Which was solved by Theis (1935) subject to the initial condition $s(x,0) = 0$ and boundary conditions $s(\infty, t)=0$ and $2\pi r kD \frac{\partial s}{\partial r} = Q_0$ for $r \rightarrow 0$. (This solution can be readily obtained by means of the Laplace transform).
The drawdown according to Theis is mathematically described, by hydrologists, as
$$ s = \frac Q {4 \pi kD} W \left( \frac {r^2 S} {4 kD t} \right) $$
Where lowercase $s$ [L] is the transient drawdown of the groundwater head due to the well, $Q$ [L3/T] is the well extraction, $kD$ [L2/T] the transmissivity of the aquifer, $S$ [-] the storage coefficient of the aquifer, $r$ [L] the distance to the well center and $t$ [T] the time since the well was switched on.
$W(u)$ is the so-called Theis well function, which is a function of only one dimensionless parameter, $u$ that is a combination of $r$, $t$, $S$ and $kD$ as shown.
The name Well Function was given by C.V. Theis (1935). The well function turned out to be a regular mathematical function that was already available under the name exponential integral at the time Theis developed his formula. Its form is:
$$ W \left( u \right) = E_1 \left( u \right) = \intop_u^\infty \frac {e^{-y}} y dy $$
The function has been tabled in many books on groundwater hydrology and pumping test analysis, among which the book
Kruseman, G.P. and N.A. de Ridder (1994) Analysis and Evaluation of Pumping Test Data. ILRI publication 47, Wageningen, The Netherlands, 1970 to 1994. ISBN 90 70754 207.
The 2000 printing of the book is available on the internet: KrdR 2000
For verification of self-implemented well functions here is the table of its values form page 294 of the mentioned book:
How to get the well function?
In the past we used to look up the well function in a table like the one given. Nowadays, with computing power everywhere, we only use such tables to verify our version of the function when we have programmed it ourselves or use one from a scientific library. This is what we'll do here as well.
One way is to see if the function is already available on our computer. If you have Maple, Matlab or Python's scipy, it is, in one form or another. If you don't know where to look, searching the internet is always a good start.
This shows that we have to look for the function expi in the module scipy.special
End of explanation
"""
u = 4 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10
print("{:>10s} {:>10s}".format('u ', 'wu '))
for u, wu in zip(u, -expi(-u)): # makes a list of value pairs [u, W(u)]
print("{0:10.1e} {1:10.4e}".format(u, wu))
"""
Explanation: This reveals that we have the function
$$ expi(u) = \intop_{-\infty}^u \frac {e^y} y dy $$
Substituting $y = -\xi$ in this definition we obtain
$$ expi(-u) = \intop_{-\infty}^{-u} \frac {e^{y}} y dy = \intop_{\xi=\infty}^{\xi=u} \frac {e^{-\xi}} {-\xi} \left( -d\xi \right) = - \intop_u^\infty \frac {e^{-\xi}} \xi d\xi = -W(u) $$
So that
$$ W(u) = -expi(-u) $$
according to the definition used in scipy.special.expi.
Notice that different libraries and books may define the exponential integral differently. The famous `Abramowitz M & Stegun, I (1964) Handbook of Mathematical Functions. Dover`, for example, defines the exponential integral $E_1$ exactly as the Theis well function.
We can readily check the expi function using the table from Kruseman and De Ridder (2000) p294 referenced above. Verifying, for example, the values for u = 4, 0.4, 0.04, 0.004 etc. down to $4 \cdot 10^{-10}$ can be done as follows:
End of explanation
"""
from scipy.special import expi
W = lambda u : -expi(-u)
"""
Explanation: which is equal to the values in the table.
It's now convenient to use the familiar form W(u) instead of -expi(-u).
We can define a function for W either as an anonymous function or as a regular function. Anonymous functions are called lambda functions or lambda expressions in Python. In this case:
End of explanation
"""
def W(u): return -expi(-u)
"""
Explanation: Or, alternatively as a regular one-line function:
End of explanation
"""
import scipy
W = lambda u: -scipy.special.expi( -u ) # Theis well function
"""
Explanation: or in full, so that we don't need the import above and we directly see where the function comes from:
End of explanation
"""
r = 350; t = 1.; kD=2400; S=0.001; Q=2400
u = r**2 * S / (4 * kD * t)
s = Q/(4 * np.pi * kD) * W(u) # applying the theis well function according to the book
print(" r = {} m\n\
t = {} d\n\
kD = {} m2/d\n\
S = {} [-]\n\
Q = {} m3/d\n\
u = {:.5g} [-]\n\
W(u) = {:.5g} [-]\n\
s(r, t) = {:.5g} m".
format(r, t, kD, S, Q, u, W(u), s))
"""
Explanation: Now we can put this well function immediately to use for answering practical questions. For example: what is the drawdown after $t=1\,d$ at distance $r=350 \, m$ by a well extracting $Q = 2400\, m^3/d$ in a confined aquifer with transmissivity $kD = 2400\, m^2/d$ and storage coefficient $S=0.001$ [-] ?
End of explanation
"""
u = lambda r, t: r**2 * S / (4 * kD * t)
"""
Explanation: Above we computed $u$ separately to prevent cluttering the expression. Of course, you can define a lambda or regular function to compute it, like so
End of explanation
"""
u(r,t) # yields u as a function of r and t
W(u(r,t)) # given W(u) as a function of r and t
Q/(4 * np.pi * kD) * W(u(r,t)) # gives the drawdown that we had before
"""
Explanation: The lambda function $u$ now takes two parameters, as $u(r,t)$, and looks up the other parameters $S$ and $kD$ in the workspace each time it is called. So don't change $S$ and $kD$ afterwards, unless you really mean $u(r,t)$ to use the new values.
Try this out:
End of explanation
"""
t = np.logspace(-3, 2, 51) # gives 51 times on log scale between 10^(-3) = 0.001 and 10^(2) = 100
"""
Explanation: It's now straightforward to compute the drawdown for many times, like so:
End of explanation
"""
for it, tt in enumerate(t):
if it % 10 == 0: print()
print("%8.3g" % tt, end=" ")
"""
Explanation: This gives the following times:
End of explanation
"""
s = Q / (4 * np.pi * kD) * W(u(r,t)) # computes s(r,t)
s # shows s(r,t)
"""
Explanation: With these times we can compute the drawdown for all of them in one go, without changing anything in our formula:
End of explanation
"""
print("{:>10s} {:>10s}".format('time', 'drawdown'))
for tt, ss in zip(t, s):
print("{0:10.3g} {1:10.3g}".format(tt,ss))
"""
Explanation: For a nicer output, print t and s next to each other
End of explanation
"""
import matplotlib.pyplot as plt # imports plot functions (matlab style)
fig = plt.figure()
# Drawdown versus log(t)
ax1 = fig.add_subplot(121)
ax1.set(xlabel='time [d]', ylabel='drawdown [m]', xscale='log', title='Drawdown versus log(t)')
ax1.invert_yaxis()
ax1.grid(True)
plt.plot(t, s)
# Drawdown versus t
ax2 = fig.add_subplot(122)
ax2.set(xlabel='time [d]', ylabel='', xscale='linear', title='Drawdown versus t')
ax2.invert_yaxis()
ax2.grid(True)
plt.plot(t, s)
plt.show()
"""
Explanation: And of course we can make a plot of these results:
End of explanation
"""
well_names = ['School', 'Lazaret', 'Square', 'Mosque', 'Water_company']
Q = [400., 1200., 1150., 600., 1900]
x = [-300., -250., 100., 55., 125.]
y =[-450., +230., 50., -300., 250.]
Nwells = len(well_names)
x0 = 0.
y0 = 0.
t = np.logspace(-2, 2, 41)
s = np.zeros((Nwells, len(t)))
for iw, Q0, xw, yw in zip(range(Nwells), Q, x, y):
r = np.sqrt((xw-x0) ** 2 + (yw - y0) **2)
s[iw,:] = Q0 / (4 * np.pi * kD) * W(u(r,t))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set(xlabel='time [d]', ylabel='drawdown[m]', title='Drawdown due to multiple wells')
ax.invert_yaxis()
ax.grid(True)
for iw, name in zip(range(Nwells), well_names):
ax.plot(t, s[iw,:], label=name)
ax.plot(t, np.sum(s, axis=0), label='total_drawdown')
ax.legend()
plt.show()
"""
Explanation: Exercises
Show the drawdown as a function of r instead of t, for t = 2 d and r between 0.1 and 1000 m
For the 5 wells whose locations and extractions are given below, show the combined drawdown for times between 0.01 and 10 days at x = 0 and y = 0.
End of explanation
"""
|
CSB-book/CSB | good_code/solutions/Lahti2014_solution_detailed.ipynb | gpl-3.0 | import csv # Import csv modulce for reading the file
"""
Explanation: Solution of Lahti et al. 2014
Write a function that takes as input a dictionary of constraints (i.e., selecting a specific group of records) and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling:
get_BMI_count({'Age': '28', 'Sex': 'female'})
should return:
{'NA': 3, 'lean': 8, 'overweight': 2, 'underweight': 1}
End of explanation
"""
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
header = csvr.fieldnames
print(header)
# For each row
for i, row in enumerate(csvr):
print(row)
if i > 2:
break
"""
Explanation: We start by reading the Metadata file, set up a csv reader, and print the header and the first few lines to get a feel for the data.
End of explanation
"""
# Initiate an empty dictionary to keep track of counts per BMI_group
BMI_count = {}
# set up our dictionary of constraints for testing purposes
dict_constraints = {'Age': '28', 'Sex': 'female'}
"""
Explanation: It's time to decide on a data structure to record our result: for each row in the file, we want to make sure all the constraints match the desired values. If so, we keep count of the BMI group. A dictionary with the BMI groups as keys and counts as values will work well:
End of explanation
"""
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for i, row in enumerate(csvr):
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
print("in row", i, "the key", e,"in data does not match", e, "in constraints")
if i > 5:
break
"""
Explanation: OK, now the tricky part: for each row, we want to test whether the constraints (a dictionary) match the data (itself a dictionary). We do this element-wise: for each key in the constraints dictionary we test whether the row's value is NOT identical to the corresponding constraint value. We start out by setting the variable matching to True and set it to False as soon as we encounter a discrepancy. This way we stop immediately if one of the elements does not match and move on to the next row of data.
End of explanation
"""
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_count.keys():
# If we've seen it before, add one record to the count
BMI_count[my_BMI] = BMI_count[my_BMI] + 1
else:
# If not, initialize at 1
BMI_count[my_BMI] = 1
BMI_count
"""
Explanation: In some rows, all constraints will be fulfilled (i.e., our matching variable will still be True after checking all elements). In this case, we want to increase the count of that particular BMI group in our result dictionary BMI_count. We can directly add one to the appropriate BMI group if we have seen it before; otherwise we initialize that key with a value of one:
End of explanation
"""
def get_BMI_count(dict_constraints):
""" Take as input a dictionary of constraints
for example, {'Age': '28', 'Sex': 'female'}
And return the count of the various groups of BMI
"""
# We use a dictionary to store the results
BMI_count = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_count.keys():
# If we've seen it before, add one record to the count
BMI_count[my_BMI] = BMI_count[my_BMI] + 1
else:
# If not, initialize at 1
BMI_count[my_BMI] = 1
return BMI_count
get_BMI_count({'Nationality': 'US', 'Sex': 'female'})
"""
Explanation: Excellent! Now, we can put everything together and create a function that accepts our constraint dictionary. Remember to document everything nicely:
End of explanation
"""
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
BMI_IDs
"""
Explanation: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling:
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'}, 'Clostridium difficile et rel.')
should return:
```
Abundance of Clostridium difficile et rel. In sub-population:
Nationality -> US
Time -> 0
3.08 NA
3.31 underweight
3.84 lean
2.89 overweight
3.31 obese
3.45 severeobese
```
To solve this task, we can recycle quite a bit of the code we just developed. However, instead of just counting occurrences of BMI groups, we want to keep track of the records (i.e., SampleIDs) that match our constraints and look up the abundance of a specific bacterium in the file HITChip.tab.
First, we create a dictionary with all records that match our constraints:
End of explanation
"""
with open("../data/Lahti2014/HITChip.tab", "r") as HIT:
csvr = csv.DictReader(HIT, delimiter = "\t")
header = csvr.fieldnames
print(header)
for i, row in enumerate(csvr):
print(row)
if i > 2:
break
"""
Explanation: Before moving on, let's have a look at the HITChip file:
End of explanation
"""
# set up dictionary to track abundances by BMI_group and number of identified records
abundance = {}
# choose a bacteria genus for testing
genus = "Clostridium difficile et rel."
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
abundance
"""
Explanation: We see that each row contains the SampleID and abundance data for various phylogenetically clustered bacteria. For each row in the file, we can now check whether we are interested in that particular SampleID (i.e., whether it matched our constraints and is in our BMI_IDs dictionary). If so, we retrieve the abundance of the bacterium of interest and add it to the previously identified abundances within that particular BMI group. If we had not encountered this BMI group before, we initialize the key with the abundance as value. As we want to calculate the mean of these abundances later, we also keep track of the number of occurrences:
End of explanation
"""
import scipy
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
"""
Explanation: Now we take care of calculating the mean and printing the results. We need to load the scipy (or numpy) module in order to calculate log10:
End of explanation
"""
import scipy # For log10
def get_abundance_by_BMI(dict_constraints, genus = 'Aerococcus'):
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
# Now let's open the other file, and keep track of the abundance of the genus for each
# BMI group
abundance = {}
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
# Finally, calculate means, and print results
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'},
'Clostridium difficile et rel.')
"""
Explanation: Last but not least, we put it all together in a function:
End of explanation
"""
def get_all_genera():
with open('../data/Lahti2014/HITChip.tab') as f:
header = f.readline().strip()
genera = header.split('\t')[1:]
return genera
"""
Explanation: Repeat this analysis for all genera, and for the records having Time = 0.
A function to extract all the genera in the database:
End of explanation
"""
get_all_genera()[:6]
"""
Explanation: Testing:
End of explanation
"""
for g in get_all_genera()[:5]:
get_abundance_by_BMI({'Time': '0'}, g)
"""
Explanation: Now use this function to print the results at Time = 0; for brevity, only the first five genera are shown here (remove the [:5] slice to run all genera):
End of explanation
"""
|
kubeflow/kfserving-lts | docs/samples/v1alpha2/transformer/image_transformer/kfserving_sdk_transformer.ipynb | apache-2.0 | from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import V1alpha2EndpointSpec
from kfserving import V1alpha2PredictorSpec
from kfserving import V1alpha2TransformerSpec
from kfserving import V1alpha2PyTorchSpec
from kfserving import V1alpha2CustomSpec
from kfserving import V1alpha2InferenceServiceSpec
from kfserving import V1alpha2InferenceService
from kubernetes.client import V1Container
from kubernetes.client import V1ResourceRequirements
import kubernetes.client
import os
import requests
import json
import numpy as np
"""
Explanation: Sample for using transformer with KFServing SDK
This notebook shows how to use the KFServing SDK to create an InferenceService with a transformer and a predictor.
End of explanation
"""
api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION
default_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
min_replicas=1,
pytorch=V1alpha2PyTorchSpec(
storage_uri='gs://kfserving-samples/models/pytorch/cifar10',
model_class_name= "Net",
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}))),
transformer=V1alpha2TransformerSpec(
min_replicas=1,
custom=V1alpha2CustomSpec(
container=V1Container(
image='gcr.io/kubeflow-ci/kfserving/image-transformer:latest',
name='user-container',
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'})))))
isvc = V1alpha2InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='cifar10', namespace='default'),
spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec))
"""
Explanation: Define InferenceService with Transformer
Add the predictor and the transformer to the endpoint spec
End of explanation
"""
KFServing = KFServingClient()
KFServing.create(isvc)
"""
Explanation: Create InferenceService with Transformer
Call KFServingClient to create InferenceService.
End of explanation
"""
KFServing.get('cifar10', namespace='default', watch=True, timeout_seconds=120)
"""
Explanation: Check the InferenceService
End of explanation
"""
api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient())
service = api_instance.read_namespaced_service("istio-ingressgateway", "istio-system", exact='true')
cluster_ip = service.status.load_balancer.ingress[0].ip
url = "http://" + cluster_ip + "/v1/models/cifar10:predict"
headers = { 'Host': 'cifar10.default.example.com' }
with open('./input.json') as json_file:
data = json.load(json_file)
print(url, headers)
response = requests.post(url, json.dumps(data), headers=headers)
probs = json.loads(response.content.decode('utf-8'))["predictions"]
print(probs)
print(np.argmax(probs))
"""
Explanation: Predict the image
End of explanation
"""
KFServing.delete('cifar10', namespace='default')
"""
Explanation: Delete the InferenceService
End of explanation
"""
|
MTG/essentia | src/examples/python/musicbricks-tutorials/5-melody_analysis.ipynb | agpl-3.0 | # import essentia in standard mode
import essentia
import essentia.standard
from essentia.standard import *
"""
Explanation: Melody analysis - MusicBricks Tutorial
Introduction
This tutorial will guide you through some tools for Melody Analysis using the Essentia library (http://www.essentia.upf.edu). Melody analysis tools will extract a pitch curve from a monophonic or polyphonic audio recording [1]. It outputs a time series (sequence of values) with the instantaneous pitch value (in Hertz) of the perceived melody.
We provide two different operation modes:
1) using executable binaries;
2) using Python wrappers.
References:
[1] J. Salamon and E. Gómez, "Melody extraction from polyphonic music signals using pitch contour characteristics," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 6, pp. 1759–1770, 2012.
1-Using executable binaries
You can download the executable binaries for Linux (Ubuntu 14) and OSX in this link: http://tinyurl.com/melody-mbricks
To execute the binaries you need to specify the input audio file and an output YAML file, where the melody values will be stored.
Extracting melody from monophonic audio
Locate an audio file to be processed in WAV format (input_audiofile).
Usage: ./streaming_pitchyinfft input_audiofile output_yamlfile
Extracting melody from polyphonic audio
Usage: ./streaming_predominantmelody input_audiofile output_yamlfile
2-Using Python wrappers
You should first install the Essentia library with Python bindings. Installation instructions are detailed here: http://essentia.upf.edu/documentation/installing.html .
End of explanation
"""
# import matplotlib for plotting
import matplotlib.pyplot as plt
import numpy
"""
Explanation: After importing the Essentia library, let's import other numerical and plotting tools
End of explanation
"""
# create an audio loader and import audio file
loader = essentia.standard.MonoLoader(filename = 'flamenco.wav', sampleRate = 44100)
audio = loader()
print("Duration of the audio sample [sec]:")
print(len(audio)/44100.0)
"""
Explanation: Load an audio file
End of explanation
"""
# PitchMelodia takes the entire audio signal as input - no frame-wise processing is required here...
pExt = PredominantPitchMelodia(frameSize = 2048, hopSize = 128)
pitch, pitchConf = pExt(audio)
time=numpy.linspace(0.0,len(audio)/44100.0,len(pitch) )
"""
Explanation: Extract the pitch curve from the audio example
End of explanation
"""
# plot the pitch contour and confidence over time
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(time,pitch)
axarr[0].set_title('estimated pitch[Hz]')
axarr[1].plot(time,pitchConf)
axarr[1].set_title('pitch confidence')
plt.show()
"""
Explanation: Plot extracted pitch contour
End of explanation
"""
|
cuttlefishh/emp | methods/figure-data/fig-1/Fig1_data_files.ipynb | bsd-3-clause | # Load up metadata map
metadata_fp = '../../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv'
metadata = pd.read_csv(metadata_fp, header=0, sep='\t')
metadata.head()
metadata.columns
# take just the columns we need for this figure panel
fig1ab = metadata.loc[:,['#SampleID','empo_0','empo_1','empo_2','empo_3','latitude_deg','longitude_deg']]
fig1ab.head()
"""
Explanation: Figure 1 csv data generation
Figure data consolidation for Figure 1, which maps samples and shows distribution across EMPO categories
Figure 1a and 1b
for these figure, we just need the samples, EMPO level categories, and lat/lon coordinates
End of explanation
"""
fig1 = pd.ExcelWriter('Figure1_data.xlsx')
fig1ab.to_excel(fig1,'Fig-1ab')
fig1.save()
"""
Explanation: Write to Excel notebook
End of explanation
"""
|
NuGrid/NuPyCEE | regression_tests/SYGMA_SSP_h_yield_input.ipynb | bsd-3-clause | #from imp import *
#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')
#%pylab nbagg
import sys
import sygma as s
print (s.__file__)
s.__file__
#import matplotlib
#matplotlib.use('nbagg')
import matplotlib.pyplot as plt
#matplotlib.use('nbagg')
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import os
# Trigger interactive or non-interactive depending on command line argument
__RUNIPY__ = sys.argv[0]
if __RUNIPY__:
%matplotlib inline
else:
%pylab nbagg
"""
Explanation: Regression test suite: Test of basic SSP GCE features
Prepared by Christian Ritter
Test of a SSP with artificial, pure-H1 yields provided in NuGrid tables (no Pop III tests here). The focus is on basic GCE features.
You can find the documentation <a href="doc/sygma.html">here</a>.
Before starting the test, make sure that you use the standard yield input files.
Outline:
$\odot$ Evolution of ISM fine
$\odot$ Sources of massive and AGB stars distinguished
$\odot$ Test of final mass of ISM for different IMF boundaries
$\odot$ Test of Salpeter, Chabrier, Kroupa IMF by checking the evolution of ISM mass (incl. alphaimf)
$\odot$ Test if SNIa on/off works
$\odot$ Test of the three SNIa implementations, the evolution of SN1a contributions
$\odot$ Test of parameter tend, dt and special_timesteps
$\odot$ Test of parmeter mgal
$\odot$ Test of parameter transitionmass
TODO: test non-linear yield fitting (hard set in code right now, no input parameter provided)
End of explanation
"""
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(II)
"""
Explanation: IMF notes:
The IMF allows one to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
(I) $N_{12}$ = k_N $\int _{m1}^{m2} m^{-2.35} dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12}$ = k_N $\int _{m1}^{m2} m^{-1.35} dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:
$1e11 = k_N/0.35 * (1^{-0.35} - 30^{-0.35})$
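As a quick numerical sanity check (a standalone sketch using scipy only, independent of SYGMA), integrating $k_N m^{-1.35}$ over [1,30] should recover $M_{tot}$:

```python
from scipy.integrate import quad

M_tot = 1e11
k_N = M_tot * 0.35 / (1**-0.35 - 30**-0.35)  # closed form from (II)

# integrating m*IMF(m) = k_N * m**-1.35 over [1,30] recovers M_tot
M_check, _ = quad(lambda m: k_N * m**-1.35, 1, 30)
print(M_check / M_tot)  # ratio of recovered to input total mass, ~1
```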
End of explanation
"""
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(I)
print (N_tot)
"""
Explanation: The total number of stars $N_{tot}$ is then:
End of explanation
"""
Yield_tot=0.1*N_tot
print (Yield_tot/1e11)
"""
Explanation: With a yield ejected of $0.1 Msun$, the total amount ejected is:
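This expectation can be reproduced by hand (a standalone sketch of the semi-analytical numbers, independent of the simulation):

```python
# expected total H ejecta for 0.1 Msun per star, Salpeter IMF on [1,30]
M_tot, m1, m2, y = 1e11, 1.0, 30.0, 0.1
k_N = M_tot * 0.35 / (m1**-0.35 - m2**-0.35)
N_tot = k_N / 1.35 * (m1**-1.35 - m2**-1.35)
Yield_tot = y * N_tot
print(Yield_tot / M_tot)  # ejected mass fraction, roughly 0.037
```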
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
#% matplotlib inline
import read_yields as ry
path = os.environ['SYGMADIR']+'/yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
#path='/home/christian/NuGrid/SYGMA_PROJECT/NUPYCEE/new/nupycee.bitbucket.org/yield_tables/isotope_yield_table.txt'
ytables = ry.read_nugrid_yields(path,excludemass=[32,60])
zm_lifetime_grid=s1.zm_lifetime_grid_current #__interpolate_lifetimes_grid()
#return [[metallicities Z1,Z2,...], [masses], [[log10(lifetimesofZ1)],
# [log10(lifetimesofZ2)],..] ]
#s1.__find_lifetimes()
#minm1 = self.__find_lifetimes(round(self.zmetal,6),mass=[minm,maxm], lifetime=lifetimemax1)
"""
Explanation: compared to the simulation:
End of explanation
"""
print (Yield_tot_sim)
print (Yield_tot)
print ('ratio should be 1 : ',Yield_tot_sim/Yield_tot)
"""
Explanation: Compare both results:
End of explanation
"""
Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1
Yield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1
print ('Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0])
print ('Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0])
print ('Test total number of SNII agree with massive star yields: ',sum(s1.history.sn2_numbers)*0.1/Yield_massive)
print ( sum(s1.history.sn2_numbers))
s1.plot_totmasses(source='agb')
s1.plot_totmasses(source='massive')
s1.plot_totmasses(source='all')
s1.plot_totmasses(source='sn1a')
"""
Explanation: Test of distinguishing between massive and AGB sources:
The boundary between AGB and massive stars for Z=1e-4 lies at 8 Msun (the transitionmass parameter)
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
s1.plot_mass(specie='H',label='H, sim',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.35 * (m**-1.35 - 30.**-1.35) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=15,label='H, semi')
plt.legend(loc=4)
"""
Explanation: Calculating yield ejection over time
For plotting, take the lifetimes/masses from the yield grid:
$
Ini Mass & Age [yrs]
1Msun = 5.67e9
1.65 = 1.211e9
2 = 6.972e8
3 = 2.471e8
4 = 1.347e8
5 = 8.123e7
6 = 5.642e7
7 = 4.217e7
12 = 1.892e7
15 = 1.381e7
20 = 9.895e6
25 = 7.902e6
$
End of explanation
"""
k_N=1e11*0.35/ (5**-0.35 - 20**-0.35)
N_tot=k_N/1.35 * (5**-1.35 - 20**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\
imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1:' ,Yield_tot_sim/Yield_tot)
"""
Explanation: Simulation results in the plot above should agree with semi-analytical calculations.
Test of parameter imf_bdys: Selection of different initial mass intervals
Select imf_bdys=[5,20]
End of explanation
"""
k_N=1e11*0.35/ (1**-0.35 - 5**-0.35)
N_tot=k_N/1.35 * (1**-1.35 - 5**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
"""
Explanation: Select imf_bdys=[1,5]
End of explanation
"""
print ('Should be 1: ',Yield_tot_sim/Yield_tot)
"""
Explanation: Results:
End of explanation
"""
alphaimf = 1.5 #Set test alphaimf
k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1))
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1 :',Yield_tot/Yield_tot_sim)
"""
Explanation: Test of parameter imf_type: Selection of different IMF types
power-law exponent : alpha_imf
The IMF allows one to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
$N_{12}$ = k_N $\int _{m1}^{m2} m^{-alphaimf} dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
$M_{12}$ = k_N $\int _{m1}^{m2} m^{-(alphaimf-1)} dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:
$1e11 = k_N/(alphaimf-2) * (1^{-(alphaimf-2)} - 30^{-(alphaimf-2)})$
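The same bookkeeping works for any single power law (a sketch; `total_yield_fraction` is a helper for this check, not a SYGMA function, and it assumes alphaimf differs from 1 and 2):

```python
def total_yield_fraction(alpha, m1, m2, y=0.1):
    # normalize the IMF per unit total stellar mass:
    # integral of k_N * m**(1 - alpha) over [m1, m2] equals 1
    k_N = (2.0 - alpha) / (m2**(2.0 - alpha) - m1**(2.0 - alpha))
    # number of stars per unit total mass, times the yield per star
    N_tot = k_N * (m1**(1.0 - alpha) - m2**(1.0 - alpha)) / (alpha - 1.0)
    return y * N_tot

print(total_yield_fraction(2.35, 1, 30))  # Salpeter, roughly 0.037
print(total_yield_fraction(1.5, 1, 30))   # the alphaimf=1.5 case tested here
```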
End of explanation
"""
def imf_times_m(mass):
if mass<=1:
return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2))
else:
return mass*0.0443*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='chabrier',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print (Yield_tot)
print (Yield_tot_sim)
print ('Should be 1 :',Yield_tot/Yield_tot_sim)
plt.figure(11)
s1.plot_mass(fig=11,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 * 0.0443*(m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
"""
Explanation: Chabrier:
Change the interval now to [0.01,30]
M<1: $IMF(m) = \frac{0.158}{m} * \exp{ \frac{-(log(m) - log(0.079))^2}{2*0.69^2}}$
else: $IMF(m) = m^{-2.3}$
End of explanation
"""
def imf_times_m(mass):
p0=1.
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
p3= 1**(-2.3+2.3)
if mass<0.08:
return mass*p0*mass**(-0.3)
elif mass < 0.5:
return mass*p1*mass**(-1.3)
else: #mass>=0.5:
return mass*p1*p2*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
N_tot=k_N/1.3 * p1*p2*(1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='kroupa',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print ('Should be 1: ',Yield_tot/Yield_tot_sim)
plt.figure(111)
s1.plot_mass(fig=111,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 *p1*p2* (m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
"""
Explanation: Simulation should agree with semi-analytical calculations for Chabrier IMF.
Kroupa:
M<0.08: $IMF(m) = m^{-0.3}$
M<0.5 : $IMF(m) = m^{-1.3}$
else : $IMF(m) = m^{-2.3}$
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ((s1.history.ism_elem_yield_1a[0]),(s1.history.ism_elem_yield_1a[-1]))
print ((s1.history.ism_elem_yield[0]),(s1.history.ism_elem_yield[-1]))
print ((s2.history.ism_elem_yield_1a[0]),(s2.history.ism_elem_yield_1a[-1]))
print ((s2.history.ism_elem_yield[0]),(s2.history.ism_elem_yield[-1]))
print ((s1.history.ism_elem_yield[-1][0] + s2.history.ism_elem_yield_1a[-1][0])/s2.history.ism_elem_yield[-1][0])
s2.plot_mass(fig=33,specie='H-1',source='sn1a') #plot s1 data (without sn) cannot be plotted -> error, maybe change plot function?
"""
Explanation: Simulation results compared with semi-analytical calculations for Kroupa IMF.
Test of parameter sn1a_on: on/off mechanism
End of explanation
"""
plt.figure(99)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
"""
Explanation: Test of parameter sn1a_rate (DTD): Different SN1a rate implementations
Calculate with SNIa and look at the SNIa contribution only. Calculated for each implementation from $4\times10^7$ until $1.5\times10^{10}$ yrs
DTD taken from Vogelsberger 2013 (sn1a_rate='vogelsberger')
$\frac{N_{1a}}{Msun} = \int _t^{t+\Delta t} 1.3\times10^{-3} \, \left(\frac{t}{4\times10^7}\right)^{-1.12} \, \frac{1.12 -1}{4\times10^7} \, dt$ for $t>4\times10^7$ yrs
def dtd(t):
    return 1.3e-3*(t/4e7)**-1.12 * ((1.12-1)/4e7)
n1a_msun = quad(dtd,4e7,1.5e10)[0]
Yield_tot = n1a_msun*1e11*0.1 * 7 #special factor
print (Yield_tot)
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='vogelsberger',imf_type='salpeter',imf_bdys=[1,30],iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
print ('Should be 1: ',Yield_tot/Yield_tot_sim)
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(t):
    def dtd(t):
        return 1.3e-3*(t/4e7)**-1.12 * ((1.12-1)/4e7)
    return quad(dtd,4e7,t)[0]*1e11*0.1 * 7 #special factor
yields1=[]
ages1=[]
for m1 in m:
    t=ages[m.index(m1)]
    if t>4e7:
        yields1.append(yields(t))
        ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Simulation results should agree with semi-analytical calculations for the SN1 yields.
Exponential DTD taken from Wiersma09 (sn1a_rate='wiersmaexp') (maybe transitionmass should replace 8Msun?)
$\frac{N_{1a}}{Msun} = \int _t^{t+\Delta t} f_{wd}(t)\, \exp(-t/\tau)/\tau \, dt$ with
if $M_z(t) >3$ :
$f_{wd}(t) = (\int _{M(t)}^8 IMF(m) dm)$
else:
$f_{wd}(t) = 0$
with $M(t) = max(3, M_z(t))$ and $M_z(t)$ being the mass-lifetime function.
NOTE: This mass-lifetime function needs to be extracted from the simulation (calculated in SYGMA, see below)
The following performs the simulation but also takes the mass-metallicity-lifetime grid from this simulation.
With the mass-lifetime spline function calculated the integration can be done further down. See also the fit for this function below.
End of explanation
"""
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#following is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print ('IMF test',norm*m**-2.35)
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
"""
Explanation: Small test: Initial mass vs. lifetime from the input yield grid compared to the fit in the Mass-Metallicity-lifetime plane (done by SYGMA) for Z=1e-4.
A double integration has to be performed in order to solve the complex integral from Wiersma:
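The `dblquad` call pattern with a time-dependent lower mass bound can be isolated like this (a minimal sketch with a placeholder integrand, not the real DTD):

```python
from scipy.integrate import dblquad

f = lambda m, t: m * 1e-10      # placeholder for IMF(m) * DTD(t)
lower = lambda t: 3.0           # stand-in for max(3, M(t)) from the spline
upper = lambda t: 8.0
val, err = dblquad(f, 0, 1.3e10, lower, upper)  # inner variable is m
print(val)  # analytic: 1e-10 * (8**2 - 3**2)/2 * 1.3e10 = 35.75
```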
End of explanation
"""
print (sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range))
s1.plot_sn_distr(xaxis='time',fraction=False)
"""
Explanation: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (exp) implementation.
Compare the number of WDs in range
End of explanation
"""
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline(np.log10(t))
#print ('mlim',mlim)
if mlim>8.:
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= 1e9 #3.3e9 #characteristic delay time
sigma=0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
s2.plot_mass(fig=988,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
"""
Explanation: Wiersma (Gauss)
End of explanation
"""
print (sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range))
"""
Explanation: Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (Gauss) implementation.
Compare the number of WDs in range
End of explanation
"""
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
from scipy.interpolate import UnivariateSpline
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import quad
def spline1(t):
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
return max(minm_prog1a,10**spline_lifetime(np.log10(t)))
#function giving the total (cumulative) number of WDs at each timestep
def wd_number(m,t):
#print ('time ',t)
#print ('mass ',m)
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
if mlim>maxm_prog1a:
return 0
else:
mmin=0
mmax=0
inte=0
#normalized to 1msun!
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)
def maoz_sn_rate(m,t):
return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1
def maoz_sn_rate_int(t):
return quad( maoz_sn_rate,spline1(t),8,args=t)[0]
#in this formula, (paper) sum_sn1a_progenitors number of
maxm_prog1a=8
longtimefornormalization=1.3e10 #yrs
fIa=0.00147
fIa=1e-3
#A = (fIa*s2.number_stars_born[1]) / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
A = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
print ('Norm. constant A:',A)
n1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print (Yield_tot_sim)
print (Yield_tot)
print ('Should be 1: ', Yield_tot_sim/Yield_tot)
"""
Explanation: SNIa implementation: Maoz12 $t^{-1}$
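For a pure $t^{-1}$ DTD the normalization step can be shown in isolation, since the time integral is just a logarithm (a sketch; the onset and cutoff values mirror the ones used below):

```python
import numpy as np
from scipy.integrate import quad

t_min, t_max = 4e7, 1.3e10       # assumed SN Ia onset and integration cutoff
raw = quad(lambda t: 1.0 / t, t_min, t_max)[0]   # equals log(t_max/t_min)
A = 1e-3 / raw                    # scale to 1e-3 SNe Ia per Msun formed
print(A * np.log(t_max / t_min))  # recovers 1e-3 by construction
```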
End of explanation
"""
s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
#yields= a* dblquad(wdfrac,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1
yields= A*quad(maoz_sn_rate_int,0,t)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.legend(loc=3)
"""
Explanation: Check trend:
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
stellar_param_on=False)
print ('Should be 0: ',s1.history.age[0])
print ('Should be 1: ',s1.history.age[-1]/1.3e10)
print ('Should be 1: ',s1.history.timesteps[0]/1e7)
print ('Should be 1: ',s1.history.timesteps[-1]/1e7)
print ('Should be 1: ',sum(s1.history.timesteps)/1.3e10)
"""
Explanation: Test of parameter tend, dt and special_timesteps
First constant timestep size of 1e7
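Both stepping modes can be sketched with plain numpy (an illustration of the bookkeeping only; SYGMA's internal step construction may differ in detail):

```python
import numpy as np

dt, tend, n_special = 1e7, 1.3e10, 200

# constant mode: tend/dt equal steps of size dt
n_const = int(tend / dt)

# log mode: ages log-spaced from dt to tend; the steps are the differences
ages = np.logspace(np.log10(dt), np.log10(tend), n_special)
steps = np.diff(np.insert(ages, 0, 0.0))
print(n_const, len(steps))             # -> 1300 200
print(abs(steps.sum() - tend) < 1.0)   # steps sum back to tend -> True
```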
End of explanation
"""
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ('Should be 0: ',s2.history.age[0])
print ('Should be 1: ',s2.history.age[-1]/1.5e9)
print ('Should be 201: ',len(s2.history.age))
print ('Should be 1: ',s2.history.timesteps[0]/1e7)
#print ('in dt steps: ',s2.history.timesteps[1]/1e7,s1.history.timesteps[2]/1e7,'..; larger than 1e7 at step 91!')
print ('Should be 200: ',len(s2.history.timesteps))
print ('Should be 1: ',sum(s2.history.timesteps)/1.5e9)
plt.figure(55)
plt.plot(s1.history.age[1:],s1.history.timesteps,label='linear (constant) scaled',marker='+')
plt.plot(s2.history.age[1:],s2.history.timesteps,label='log scaled',marker='+')
plt.yscale('log');plt.xscale('log')
plt.xlabel('age/years');plt.ylabel('timesteps/years');plt.legend(loc=4)
"""
Explanation: First a timestep of size 1e7, then log-spaced steps up to tend with a total number of 200 steps; note: tend was changed to 1.5e9
End of explanation
"""
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s5=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s6=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
#print (s3.history.ism_iso_yield[-1][0] == s4.history.ism_iso_yield[-1][0] why false?)
print ('should be 1 ',s3.history.ism_iso_yield[-1][0]/s4.history.ism_iso_yield[-1][0])
#print (s3.history.ism_iso_yield[-1][0],s4.history.ism_iso_yield[-1][0])
print ('should be 1',s5.history.ism_iso_yield[-1][0]/s6.history.ism_iso_yield[-1][0])
#print (s5.history.ism_iso_yield[-1][0],s6.history.ism_iso_yield[-1][0])
"""
Explanation: The choice of dt should not change the final composition, for both constant timesteps and special_timesteps:
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s3=s.sygma(iolevel=0,mgal=1e9,dt=1e9,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print ('At timestep 0: ',sum(s1.history.ism_elem_yield[0])/1e7,sum(s2.history.ism_elem_yield[0])/1e8,sum(s3.history.ism_elem_yield[0])/1e9)
print ('At timestep 0: ',sum(s1.history.ism_iso_yield[0])/1e7,sum(s2.history.ism_iso_yield[0])/1e8,sum(s3.history.ism_iso_yield[0])/1e9)
print ('At last timestep, should be the same fraction: ',sum(s1.history.ism_elem_yield[-1])/1e7,sum(s2.history.ism_elem_yield[-1])/1e8,sum(s3.history.ism_elem_yield[-1])/1e9)
print ('At last timestep, should be the same fraction: ',sum(s1.history.ism_iso_yield[-1])/1e7,sum(s2.history.ism_iso_yield[-1])/1e8,sum(s3.history.ism_iso_yield[-1])/1e9)
"""
Explanation: Test of parameter mgal - the total mass of the SSP
Test the total isotopic and elemental ISM matter at first and last timestep.
End of explanation
"""
s1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s2=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt')
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s4=s.sygma(iolevel=0,mgal=1e11,dt=3e7,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s1.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 1',label2='SNII, rate 1',marker1='o',marker2='s',shape2='-',markevery=1)
s2.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='p',markevery=1,shape2='-.')
s4.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='+',markevery=1,shape2=':',color2='y')
s3.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='x',markevery=1,shape2='--')
plt.xlim(6e6,7e7)
plt.vlines(7e6,1e2,1e9)
plt.ylim(1e2,1e4)
print (s1.history.sn2_numbers[1]/s1.history.timesteps[0])
print (s2.history.sn2_numbers[1]/s2.history.timesteps[0])
#print (s1.history.timesteps[:5])
#print (s2.history.timesteps[:5])
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt',
stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt',stellar_param_on=False)
"""
Explanation: Test of the SN rate: it depends on the timestep size, since the plot always shows the mean rate over each timestep; a larger timestep therefore gives a different mean
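A toy example of this averaging effect (illustrative numbers only):

```python
# events per step on a fine 1e7-yr grid vs one coarse 4e7-yr step
numbers = [100.0, 400.0, 250.0, 250.0]
dt_fine = 1e7
rate_fine = [n / dt_fine for n in numbers]             # per-step mean rates
rate_coarse = sum(numbers) / (len(numbers) * dt_fine)  # single averaged rate
print(rate_fine[1], rate_coarse)  # -> 4e-05 2.5e-05
```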
End of explanation
"""
s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1)
s4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
plt.xlim(3e7,1e10)
s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2='s',markevery=5)
s2.plot_sn_distr(fig=77,rate=True,marker1='x',marker2='^',markevery=1)
#s1.plot_sn_distr(rate=False)
#s2.plot_sn_distr(rate=True)
#s2.plot_sn_distr(rate=False)
plt.xlim(1e6,1.5e10)
#plt.ylim(1e2,1e4)
"""
Explanation: Rate does not depend on timestep type:
End of explanation
"""
s1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=10,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim_8=s1.history.ism_iso_yield_agb[-1][0]
Yield_tot_sim_10=s2.history.ism_iso_yield_agb[-1][0]
alphaimf=2.35
k_N=1e11*(alphaimf-2)/ (-1.65**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 8**-(alphaimf-1))
Yield_tot_8=0.1*N_tot
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 10**-(alphaimf-1))
Yield_tot_10=0.1*N_tot
#N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 5**-(alphaimf-1))
#Yield_tot_5=0.1*N_tot
print ('1:',Yield_tot_sim_8/Yield_tot_8)
print ('1:',Yield_tot_sim_10/Yield_tot_10)
#print ('1:',Yield_tot_sim_5/Yield_tot_5)
"""
Explanation: Test of the transitionmass parameter: the transition from AGB to massive stars.
Check that transitionmass is properly set.
End of explanation
"""
s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
"""
Explanation: imf_yields_range - include yields only in this mass range
End of explanation
"""
|
natronics/JSBSim-Manager | rocket.ipynb | gpl-3.0 | import locale
from openrocketdoc import document
from openrocketdoc import writers
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
###############################################################
# CHANGE THESE NUMBERS!! IT'S FUN.
thrust = 1555.0 # N
burn_time = 10.0 # s
isp = 214.0 # s
################################################################
# Create an engine document
engine = document.Engine('Python Motor')
# Set our design
engine.Isp = isp
engine.thrust_avg = thrust
engine.t_burn = burn_time
# Print setup
print("Engine Design parameters:\n")
print(" Input | Number | Units ")
print(" -------------- | --------: | :---- ")
print(" %14s | %8.1f | %s" % ("Isp", engine.Isp, "s"))
print(" %14s | %s | %s" % ("Thrust", locale.format("%8.1f", engine.thrust_avg, grouping=True), "N"))
print(" %14s | %8.1f | %s" % ("Burn Time", engine.t_burn, "s"))
"""
Explanation: Build A Rocket And Launch It
Procedurally build and simulate a flight. This is my attempt to use the open aerospace rocket documentation tool to describe a rocket and generate JSBSim configuration to simulate its flight.
View the raw jupyter notebook: rocket.ipynb
You can run it yourself by cloning this repo and install requirements:
$ pip install -r requirements.txt
Then run jupyter to edit/run the document in your browser:
$ jupyter notebook
The idea is that you can make up some numbers ("what if I built a rocket with this much thrust?") and this script will parametrically design an entire rocket. Then using openrocketdoc, generate a valid JSBSim case and run JSBSim for you, generating flight simulation output.
Just put in numbers for the engine design and then run the notebook!
Step 1. Design The Engine
Pick an engine design. We'll define it based on a desired Isp, thrust, and burn time.
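Under the hood, the propellant mass follows from the total impulse and the Isp. A quick back-of-the-envelope check of the design numbers above (assuming the standard relation m_prop = I_total / (Isp * g0), not necessarily openrocketdoc's exact code):

```python
# Back-of-the-envelope propellant mass from the design numbers above
g0 = 9.80665          # standard gravity, m/s^2
thrust = 1555.0       # N
burn_time = 10.0      # s
isp = 214.0           # s

I_total = thrust * burn_time       # total impulse, N*s
m_prop = I_total / (isp * g0)      # propellant mass, kg
print(round(m_prop, 2))            # ~7.41 kg
```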
End of explanation
"""
# The Open Rocket Document can compute useful values based on what we defined above.
print("\nOur computed engine will need %0.1f kg of propellent." % engine.m_prop, )
print("It has a total impulse of %s Ns. That would make it a '%s'(%0.0f%%) class motor." % (
locale.format("%d", engine.I_total, grouping=True),
engine.nar_code,
engine.nar_percent
))
jsbsim_engine_file = writers.JSBSimEngine.dump(engine)
print("\nGenerated JSBSim engine document:\n\n```xml")
print(jsbsim_engine_file)
print("```")
"""
Explanation: All we need to do is create an openrocketdoc Engine with those basic numbers:
```python
from openrocketdoc import document
engine = document.Engine('My Rocket Motor')
engine.Isp = 214.0
engine.thrust_avg = 1555.0
engine.t_burn = 10.0
```
Everything else can be computed from that engine class:
End of explanation
"""
prop_density = 1750  # kg/m3 Roughly HTPB composite solid density[1]
LD = 10 # Length to width ratio
Nose_LD = 5
# [1] http://www.lr.tudelft.nl/en/organisation/departments/space-engineering/space-systems-engineering/expertise-areas/space-propulsion/design-of-elements/rocket-propellants/solids/
print("Rocket Design parameters:\n")
print(" Input | Number | Units ")
print(" ---------------------- | --------: | :---- ")
print(" %22s | %s | %s" % ("Propellent Density", locale.format("%8.1f", engine.thrust_avg, grouping=True), "kg/m3"))
print(" %22s | %8.1f | " % ("Motor L/D ratio", 10))
print(" %22s | %8.1f | " % ("Nosecone L/D ratio", 5))
from math import pi
# volume of propellent needed
prop_volume = engine.m_prop/prop_density
# Solve for the radius/length of the fuel grain (assume solid, end burning)
engine.diameter = 2*(prop_volume/ (2*LD*pi))**(1/3.0)
engine.length = engine.diameter * LD
# Add a nose
nose = document.Nosecone(
document.Noseshape.TANGENT_OGIVE, # Shape
1.0, # shape_parameter
1.5, # mass
engine.diameter * Nose_LD,
diameter=engine.diameter,
material_name="Aluminium"
)
# Payload section
payload = document.Bodytube(
"Payload", # Name
2.5, # mass
0.33, # length
diameter=engine.diameter,
material_name="Aluminium"
)
# Body section the size of the engine
body = document.Bodytube(
"Body", # Name
1.5, # mass
engine.length,
diameter=engine.diameter,
material_name="Aluminium"
)
body.components = [engine]
# Rocket:
rocket = document.Rocket("Rocket")
rocket.aero_properties['CD'] = [0.6]
stage0 = document.Stage("Sustainer")
stage0.components = [nose, payload, body]
rocket.stages = [stage0]
# Print:
print("Computed rocket length: %0.1f meters, diameter: %0.2f mm\n" % ((nose.length + payload.length + body.length), (engine.diameter*1000.0)))
print("Generated diagram of the rocket, with a nosecone, fixed length dummy payload section, and motor:")
from IPython.display import SVG, display
display(SVG(writers.SVG.dump(rocket)))
jsbsim_aircraft_file = writers.JSBSimAircraft.dump(rocket)
print("Generated JSBSim 'Aircraft' document:\n\n```xml")
print(jsbsim_aircraft_file)
print("```")
"""
Explanation: Step 2. Build The Rocket
Now that we know how much propellent we need, we guess its density and come up with a parametric rocket design. Computing a few numbers from that density guess, we can build up a full rocket design from our engine. The only hardcoded magic is a preferred length-to-diameter ratio.
End of explanation
"""
import os
aircraft_path = os.path.join("aircraft", rocket.name_slug)
engine_path = "engine"
if not os.path.exists(aircraft_path):
os.makedirs(aircraft_path)
if not os.path.exists(engine_path):
os.makedirs(engine_path)
aircraft_filename = rocket.name_slug + '.xml'
with open(os.path.join(aircraft_path, aircraft_filename), 'w') as outfile:
outfile.write(jsbsim_aircraft_file)
engine_filename = engine.name_slug + '.xml'
with open(os.path.join(engine_path, engine_filename), 'w') as outfile:
outfile.write(jsbsim_engine_file)
nozzle_filename = engine.name_slug + '_nozzle.xml'
with open(os.path.join(engine_path, nozzle_filename), 'w') as outfile:
outfile.write("""<?xml version="1.0"?>
<nozzle name="Nozzle">
<area unit="M2"> 0.001 </area>
</nozzle>
""")
"""
Explanation: Build JSBSim Case
JSBSim needs several files in directories with a particular structure. We simply write the files above to the appropriate places on the filesystem. Generic run.xml and init.xml files are already here. They are almost completely independent of the rocket definitions; the only thing "hard coded" is the name of the rocket (which has to match the filename).
End of explanation
"""
import subprocess
import time
# Run JSBSim using Popen
p = subprocess.Popen(["JSBSim", "--logdirectivefile=output_file.xml", "--script=run.xml"])
time.sleep(10) # let it run
"""
Explanation: Run JSBSim
Now we can simulate the flight by invoking JSBSim (assuming you have it installed and in your path). It's as easy as this:
```python
import subprocess
Run JSBSim using Popen
p = subprocess.Popen(["JSBSim", "--logdirectivefile=output_file.xml", "--script=run.xml"])
```
End of explanation
"""
import csv
# Read data from JSBSim
FPS2M = 0.3048
LBF2N = 4.44822
LBS2KG = 0.453592
max_alt = 0
max_alt_time = 0
sim_time = []
measured_accel_x = []
sim_vel_up = []
sim_alt = []
with open('data.csv') as datafile:
reader = csv.reader(datafile, delimiter=',')
for row in reader:
# ignore first line
if row[0][0] == 'T':
continue
time = float(row[0]) # s
alt = float(row[1]) # m
thrust = float(row[2]) * LBF2N # N
weight = float(row[3]) * LBS2KG # kg
vel = float(row[4]) * FPS2M # m/s
vel_down = float(row[5]) * FPS2M # m/s
downrange = float(row[6]) * FPS2M # m
aoa = float(row[7]) # deg
force_x = float(row[8]) * LBF2N # N
sim_time.append(time)
# compute measured accel (IMU)
measured_accel_x.append(force_x/weight)
sim_vel_up.append(-vel_down)
sim_alt.append(alt)
# max alt
if alt > max_alt:
max_alt = alt
max_alt_time = time
print("The apogee (maximum altitude) of this flight was %0.1f km above sea level" % (max_alt/1000.0))
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax1 = plt.subplots(figsize=(18,7))
plt.title(r"Simulated Rocket Altitude")
plt.ylabel(r"Altitude MSL [meters]")
plt.xlabel(r"Time [s]")
plt.plot(sim_time, sim_alt, lw=1.8, alpha=0.6)
plt.ylim([sim_alt[0]-500,max_alt + 1000])
plt.xlim([0, max_alt_time])
plt.show()
fig, ax1 = plt.subplots(figsize=(18,7))
plt.title(r"Simulated Rocket Velocity")
plt.ylabel(r"Velocity [m/s]")
plt.xlabel(r"Time [s]")
plt.plot(sim_time, sim_vel_up, lw=1.8, alpha=0.6)
#plt.ylim([sim_alt[0]-500,max_alt + 1000])
plt.xlim([0, max_alt_time])
plt.show()
"""
Explanation: Analyze The Simulation Results
Now we should have a datafile from the simulation!
End of explanation
"""
|
YihaoLu/pyfolio | pyfolio/examples/portfolio_volatility_weighted_example.ipynb | apache-2.0 | # USAGE: Equal-Weight Portfolio.
# 1) if 'exclude_non_overlapping=True' below, the portfolio will only contains
# days which are available across all of the algo return timeseries.
#
# if 'exclude_non_overlapping=False' then the portfolio returned will span from the
# earliest startdate of any algo, thru the latest enddate of any algo.
#
# 2) Weight of each algo will always be 1/N where N is the total number of algos passed to the function
portfolio_rets_ts, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
exclude_non_overlapping=True
)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
"""
Explanation: Equal-weight Portfolio
End of explanation
"""
# USAGE: Portfolio based on volatility weighting.
# The higher the volatility the _less_ weight the algo gets in the portfolio
# The portfolio is rebalanced monthly. For quarterly rebalancing, set portfolio_rebalance_rule='Q'
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_function_window=126,
inverse_weight=True
)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
"""
Explanation: Volatility-weighted Portfolio (using just np.std as weighting metric)
End of explanation
"""
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_func_transform=pf.timeseries.min_max_vol_bounds,
weight_function_window=126,
inverse_weight=True)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
"""
Explanation: Volatility-weighted Portfolio (with the constraint that no asset weight may be greater than 2x any other asset weight; the function min_max_vol_bounds defines the constraint)
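pyfolio's actual min_max_vol_bounds is not reproduced here; a minimal sketch of such a bounds transform (hypothetical helper, assuming raw inverse-volatility weights as input) might look like:

```python
import numpy as np

def min_max_vol_bounds_sketch(weights):
    """Clip raw weights so no weight exceeds 2x the smallest one, then renormalize.
    Hypothetical stand-in for pyfolio's min_max_vol_bounds -- not its actual code."""
    w = np.asarray(weights, dtype=float)
    lo = w.min()
    w = np.clip(w, lo, 2 * lo)   # enforce max weight <= 2x min weight
    return w / w.sum()           # renormalize so the weights sum to 1

print(min_max_vol_bounds_sketch([0.1, 0.3, 0.6]))  # [0.2 0.4 0.4]
```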
End of explanation
"""
stocks_port, data_df = pf.timeseries.portfolio_returns_metric_weighted([SPY, FXE, GLD],
weight_function=np.std,
weight_func_transform=pf.timeseries.bucket_std,
weight_function_window=126,
inverse_weight=True)
to_plot = ['SPY', 'GLD', 'FXE'] + ["port_ret"]
data_df[to_plot].apply(pf.timeseries.cum_returns).plot()
pf.timeseries.perf_stats(data_df['port_ret'])
"""
Explanation: Quantized-bucket Volatility-weighted Portfolio (using the custom function bucket_std() as the weighting metric)
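The bucket_std function itself is not shown here; a minimal sketch of what such a quantizer might do (hypothetical helper with made-up bucket edges, not pyfolio's actual code):

```python
import numpy as np

def bucket_std_sketch(returns, edges=(0.005, 0.01, 0.02)):
    """Quantize the std of a return series into a discrete bucket value.
    Hypothetical stand-in for pyfolio's bucket_std -- the bucket edges are made up."""
    vol = float(np.std(returns))
    for edge in edges:           # return the first bucket edge the volatility fits under
        if vol <= edge:
            return edge
    return 2 * edges[-1]         # anything above the last edge gets a top bucket

print(bucket_std_sketch([0.1, -0.1, 0.1, -0.1]))  # std = 0.1 -> top bucket 0.04
```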
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a/td2a_correction_session_1.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.data - DataFrames and Graphs - solutions
Standard operations on dataframes (pandas) and matrices (numpy). Plots with matplotlib.
End of explanation
"""
import pandas
from ensae_teaching_cs.data import donnees_enquete_2003_television
df = pandas.read_csv(donnees_enquete_2003_television(), sep="\t", engine="python")
df.head()
"""
Explanation: <h3 id="exo1">Exercise 1: create an Excel file</h3>
We want to retrieve the data donnees_enquete_2003_television.txt (source: INSEE).
POIDSLOG: relative individual weight
POIDSF: individual weighting variable
cLT1FREQ: average number of hours spent watching television
cLT2FREQ: time unit used to count the number of hours spent watching television; this unit takes the following four values
0: not applicable
1: day
2: week
3: month
Then, we want to:
Remove the empty columns
Get the distinct values of the cLT2FREQ column
Modify the matrix to drop the rows for which the time unit (cLT2FREQ) is missing or equal to zero.
Save the result in Excel format.
You may need the following functions:
numpy.isnan
DataFrame.apply
DataFrame.fillna or
DataFrame.isnull
DataFrame.copy
End of explanation
"""
df = df [[ c for c in df.columns if "Unnamed" not in c]]
df.head()
notnull = df [ ~df.cLT2FREQ.isnull() ] # équivalent ) df [ df.cLT2FREQ.notnull() ]
print(len(df),len(notnull))
notnull.tail()
notnull.to_excel("data.xlsx") # question 4
"""
Explanation: We remove the empty columns:
End of explanation
"""
%system "data.xlsx"
"""
Explanation: To launch Excel, you can just write this:
End of explanation
"""
from IPython.display import Image
Image("td10exc.png")
"""
Explanation: You should see something like this:
End of explanation
"""
def delta(x,y):
return max(x,y)- min(x,y)
delta = lambda x,y : max(x,y)- min(x,y)
delta(4,5)
import random
df["select"]= df.apply( lambda row : random.randint(1,10), axis=1)
echantillon = df [ df["select"] ==1 ]
echantillon.shape, df.shape
"""
Explanation: <h3 id="qu">Questions</h3>
What would adding the parameter how='outer' change in this case?
We want to join two tables A, B which each have three distinct keys: $c_1, c_2, c_3$. Each table has respectively $A_i$ and $B_i$ rows for key $c_i$. How many rows will the final table resulting from the merge of the two tables contain?
Adding the parameter how='outer' would change nothing in this case, because the two merged tables contain exactly the same keys.
The number of rows obtained is $\sum_{i=1}^{3} A_i B_i$. There are three keys; each row of table A must be matched with every row of table B sharing the same key.
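A quick pure-Python check of this row-count formula (illustrative key counts made up for the example):

```python
from collections import Counter

# A_i rows in table A and B_i rows in table B for each key c_i
A = ['c1'] * 2 + ['c2'] * 3 + ['c3'] * 1   # A_1=2, A_2=3, A_3=1
B = ['c1'] * 4 + ['c2'] * 1 + ['c3'] * 5   # B_1=4, B_2=1, B_3=5

ca, cb = Counter(A), Counter(B)
rows = sum(ca[k] * cb[k] for k in ca)      # sum of A_i * B_i over the keys
print(rows)  # 2*4 + 3*1 + 1*5 = 16
```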
<h3 id="exo3">Exercise 2: lambda function</h3>
Write a lambda function that takes two parameters and is equivalent to the following function:
End of explanation
"""
from ensae_teaching_cs.data import marathon
import pandas
df = pandas.read_csv(marathon(), sep="\t", names=["ville", "annee", "temps","secondes"])
df.head()
"""
Explanation: <h3 id="exo2">Exercise 3: group means</h3>
Still with the same dataset (marathon.txt), we want to add, at the end of the pivot table, a row containing the mean time in seconds of the marathons for each city.
End of explanation
"""
# étape 1
# par défaut, la méthode groupby utilise la clé de group comme index
# pour ne pas le faire, il faut préciser as_index = False
gr = df[["ville","secondes"]].groupby("ville", as_index=False).mean()
gr.head()
# step 2 - add a column
tout = df.merge( gr, on="ville")
tout.head()
# step 3
piv = tout.pivot("annee","ville","secondes_x")
piv.tail()
"""
Explanation: The solution requires three steps.
To get the mean per city, we group the rows associated with the same city.
Then we insert these means into the initial table: we merge.
We perform the same pivot as in the exercise statement.
"""
gr["annee"] = "moyenne"
pivmean = gr.pivot("annee","ville","secondes")
pivmean
piv = df.pivot("annee","ville","secondes")
pandas.concat( [ piv, pivmean ]).tail()
"""
Explanation: From there, it is not obvious how to proceed. Here is what I suggest:
We pivot the small matrix of means.
We append this second pivot to the first one (the one from the exercise statement).
End of explanation
"""
import pandas, urllib.request
from ensae_teaching_cs.data import marathon
df = pandas.read_csv(marathon(filename=True),
sep="\t", names=["ville", "annee", "temps","secondes"])
piv = df.pivot("annee","ville","secondes")
gr = df[["ville","secondes"]].groupby("ville", as_index=False).mean()
gr["annee"] = "moyenne"
pivmean = gr.pivot("annee","ville","secondes")
pandas.concat([piv, pivmean]).tail()
"""
Explanation: In summary, this gives (I also add the number of marathons run):
End of explanation
"""
import urllib.request
import zipfile
import http.client
def download_and_save(name, root_url):
try:
response = urllib.request.urlopen(root_url+name)
except (TimeoutError, urllib.request.URLError, http.client.BadStatusLine):
# back up plan
root_url = "http://www.xavierdupre.fr/enseignement/complements/"
response = urllib.request.urlopen(root_url+name)
with open(name, "wb") as outfile:
outfile.write(response.read())
def unzip(name):
with zipfile.ZipFile(name, "r") as z:
z.extractall(".")
filenames = ["etatcivil2012_mar2012_dbase.zip",
"etatcivil2012_nais2012_dbase.zip",
"etatcivil2012_dec2012_dbase.zip", ]
root_url = 'http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/'
for filename in filenames:
download_and_save(filename, root_url)
unzip(filename)
print("Download of {}: DONE!".format(filename))
import pandas
try:
from dbfread_ import DBF
use_dbfread = True
except ImportError as e :
use_dbfread = False
if use_dbfread:
print("use of dbfread")
def dBase2df(dbase_filename):
table = DBF(dbase_filename, load=True, encoding="cp437")
return pandas.DataFrame(table.records)
df = dBase2df('mar2012.dbf')
else :
print("use of zipped version")
import pyensae.datasource
data = pyensae.datasource.download_data("mar2012.zip")
df = pandas.read_csv(data[0], sep="\t", encoding="utf8", low_memory=False)
print(df.shape, df.columns)
df.head()
df["ageH"] = df.apply (lambda r: 2014 - int(r["ANAISH"]), axis=1)
df["ageF"] = df.apply (lambda r: 2014 - int(r["ANAISF"]), axis=1)
df.head()
df.plot(x="ageH",y="ageF", kind="scatter")
df.plot(x="ageH",y="ageF", kind="hexbin")
"""
Explanation: <h3 id="exo4">Exercise 4: age gap between spouses</h3>
By adding a column and using the group by operation, we want the distribution of the number of marriages as a function of the age gap between the spouses. If needed, we will change the type of a column or two.
We want to draw a scatter plot with the husband's age on the x-axis and the wife's age on the y-axis. You may want to take a look at the documentation of the plot method.
End of explanation
"""
df["ANAISH"] = df.apply (lambda r: int(r["ANAISH"]), axis=1)
df["ANAISF"] = df.apply (lambda r: int(r["ANAISF"]), axis=1)
df["differenceHF"] = df.ANAISH - df.ANAISF
df["nb"] = 1
dist = df[["nb","differenceHF"]].groupby("differenceHF", as_index=False).count()
df["differenceHF"].hist(figsize=(16,6), bins=50)
"""
Explanation: <h3 id="exo5">Exercise 5: plotting the distribution with pandas</h3>
The pandas module offers a set of standard plots that are easy to obtain. We want to display the distribution as a histogram. It is up to you to choose the best chart from the Visualization page.
End of explanation
"""
df["nb"] = 1
dissem = df[["JSEMAINE","nb"]].groupby("JSEMAINE",as_index=False).sum()
total = dissem["nb"].sum()
repsem = dissem.cumsum()
repsem["nb"] /= total
ax = dissem["nb"].plot(kind="bar")
repsem["nb"].plot(ax=ax, secondary_y=True)
ax.set_title("distribution of marriages by day of the week")
"""
Explanation: <h3 id="exo6">Exercise 6: distribution of marriages by day</h3>
We want a chart containing the histogram of the distribution of the number of marriages per day of the week, plus a second curve, on a secondary axis, showing the cumulative distribution.
End of explanation
"""
|
lithiumdenis/MLSchool | 2. Бостон.ipynb | mit | from sklearn.datasets import load_boston
bunch = load_boston()
print(bunch.DESCR)
X, y = pd.DataFrame(data=bunch.data, columns=bunch.feature_names.astype(str)), bunch.target
X.head()
"""
Explanation: Load the data
End of explanation
"""
SEED = 22
np.random.seed(SEED)
"""
Explanation: Fix the random number generator for reproducibility:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
"""
Explanation: Homework!
Split the data into a training set and a hold-out set:
End of explanation
"""
from sklearn.metrics import mean_squared_error
"""
Explanation: We will measure quality with the mean squared error metric:
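As a quick reminder of what the metric does, MSE computed by hand on a tiny illustrative example (not part of the original exercise):

```python
import numpy as np

# Mean squared error computed by hand on a tiny example
y_true = np.array([3.0, 5.0])
y_pred = np.array([2.0, 7.0])
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (1 + 4) / 2 = 2.5
```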
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
clf = LinearRegression()
clf.fit(X_train, y_train);
print('Mean error: %5.4f' % \
(-np.mean(cross_val_score(clf, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
"""
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 1.</h3>
</div>
<div class="panel">
Train <b>LinearRegression</b> from the <b>sklearn.linear_model</b> package on the training set (<i>X_train, y_train</i>) and measure the quality on <i>X_test</i>.
<br>
<br>
<i>P.s. The error should be around 20. </i>
</div>
</div>
End of explanation
"""
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
ss_X = StandardScaler()
ss_y = StandardScaler()
X_scaled = ss_X.fit_transform(X_train)
y_scaled = ss_y.fit_transform(y_train.reshape(-1, 1)).ravel()  # StandardScaler expects a 2D array
sgd = SGDRegressor()
sgd.fit(X_scaled, y_scaled);
print('Mean error: %5.4f' % \
(-np.mean(cross_val_score(sgd, X_scaled, y_scaled, cv=5, scoring='neg_mean_squared_error'))))
"""
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 2 (with a catch).</h3>
</div>
<div class="panel">
Train <b>SGDRegressor</b> from the <b>sklearn.linear_model</b> package on the training set (<i>X_train, y_train</i>) and measure the quality on <i>X_test</i>. (The catch: SGD is sensitive to feature scale, so the data should be standardized first.)
</div>
</div>
End of explanation
"""
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import RidgeCV
############Ridge
params = {
'alpha': [10**x for x in range(-2,3)]
}
from sklearn.linear_model import Ridge
gsR = RidgeCV() #GridSearchCV(Ridge(), param_grid=params)
gsR.fit(X_train, y_train);
print('Mean error: %5.4f' % \
(-np.mean(cross_val_score(gsR, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
############Lasso
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoCV
gsL = GridSearchCV(Lasso(), param_grid=params) #LassoCV() - медленнее
gsL.fit(X_train, y_train);
print('Mean error: %5.4f' % \
(-np.mean(cross_val_score(gsL, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import ElasticNetCV
gsE = GridSearchCV(ElasticNet(), param_grid=params) #ElasticNetCV() - просто заменить, не слишком точен
gsE.fit(X_train, y_train);
print('Mean error: %5.4f' % \
(-np.mean(cross_val_score(gsE, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
"""
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 3.</h3>
</div>
<div class="panel">
Try all the remaining classes:
<ul>
<li>Ridge
<li>Lasso
<li>ElasticNet
</ul>
<br>
As you already know, they use the regularization parameter <b>alpha</b>. Tune it both with <b>GridSearchCV</b> and with the ready-made <b>-CV</b> classes (<b>RidgeCV</b>, <b>LassoCV</b>, etc.).
<br><br>
Finally, find the most accurate linear model!
</div>
</div>
End of explanation
"""
|
santipuch590/deeplearning-tf | dl_tf_BDU/1.Intro_TF/ML0120EN-1.1-Exercise-TensorFlowHelloWorld.ipynb | mit | %matplotlib inline
import tensorflow as tf
import matplotlib.pyplot as plt
"""
Explanation: <center> "Hello World" in TensorFlow - Exercise Notebook</center>
Before everything, let's import the TensorFlow library
End of explanation
"""
a = tf.constant([5])
b = tf.constant([2])
"""
Explanation: First, try to add the two constants and print the result.
End of explanation
"""
#Your code goes here
c = a + b
"""
Explanation: Create another TensorFlow object applying the sum (+) operation:
End of explanation
"""
with tf.Session() as session:
result = session.run(c)
print("The addition of this two constants is: {0}".format(result))
"""
Explanation: <div align="right">
<a href="#sum1" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#sum2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="sum1" class="collapse">
```
c=a+b
```
</div>
<div id="sum2" class="collapse">
```
c=tf.add(a,b)
```
</div>
End of explanation
"""
# Your code goes here. Use the multiplication operator.
c = a * b
"""
Explanation: Now let's try to multiply them.
End of explanation
"""
with tf.Session() as session:
result = session.run(c)
print("The Multiplication of this two constants is: {0}".format(result))
"""
Explanation: <div align="right">
<a href="#mult1" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#mult2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="mult1" class="collapse">
```
c=a*b
```
</div>
<div id="mult2" class="collapse">
```
c=tf.multiply(a,b)
```
</div>
End of explanation
"""
matrixA = tf.constant([[2,3],[3,4]])
matrixB = tf.constant([[2,3],[3,4]])
# Your code goes here
first_operation = matrixA * matrixB
second_operation = tf.matmul(matrixA, matrixB)
"""
Explanation: Multiplication: element-wise or matrix multiplication
Let's practice the different ways to multiply matrices:
- Element-wise multiplication in the first operation;
- Matrix multiplication in the second operation.
End of explanation
"""
with tf.Session() as session:
result = session.run(first_operation)
print("Element-wise multiplication: \n", result)
result = session.run(second_operation)
print("Matrix Multiplication: \n", result)
"""
Explanation: <div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
first_operation=tf.multiply(matrixA, matrixB)
second_operation=tf.matmul(matrixA,matrixB)
```
</div>
End of explanation
"""
a=tf.constant(1000)
b=tf.Variable(0)
init_op = tf.global_variables_initializer()
# Your code goes here
update = tf.assign(b, a)
with tf.Session() as session:
session.run(init_op)
session.run(update)
print(b.eval())
"""
Explanation: Modify the value of variable b to the value in constant a:
End of explanation
"""
# Variables
val_prev = tf.Variable(0)
val_curr = tf.Variable(1)
val_new = tf.Variable(0)
fib_op = val_curr + val_prev
# Update operations
update_new_op = tf.assign(val_new, fib_op)
update_prev_op = tf.assign(val_prev, val_curr)
update_curr_op = tf.assign(val_curr, val_new)
init_op = tf.global_variables_initializer()
num_iterations = 40
fibonacci = []
with tf.Session() as session:
# Init
session.run(init_op)
print(val_curr.eval())
# Iterate
for it in range(num_iterations):
# Update current and previous values
session.run(update_new_op)
session.run(update_prev_op)
session.run(update_curr_op)
fib_val = val_curr.eval()
print(fib_val)
fibonacci.append(fib_val)
plt.plot(fibonacci)
"""
Explanation: <div align="right">
<a href="#assign" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="assign" class="collapse">
```
a=tf.constant(1000)
b=tf.Variable(0)
init_op = tf.global_variables_initializer()
update = tf.assign(b,a)
with tf.Session() as session:
session.run(init_op)
session.run(update)
print(session.run(b))
```
</div>
Fibonacci sequence
Now try to do something more advanced. Try to create a Fibonacci sequence and print the first few values using TensorFlow:
If you don't know, the Fibonacci sequence is defined by the equation: <br><br>
$$F_{n} = F_{n-1} + F_{n-2}$$<br>
Resulting in a sequence like: 1,1,2,3,5,8,13,21...
End of explanation
"""
# Your code goes here
a = tf.placeholder(name='a', dtype=tf.float32)
b = tf.placeholder(name='b', dtype=tf.float32)
my_op = tf.maximum(a, b)
with tf.Session() as session:
val = session.run(my_op, feed_dict={a: [125], b: [2.]})
print(val)
"""
Explanation: <div align="right">
<a href="#fibonacci-solution" class="btn btn-default" data-toggle="collapse">Click here for the solution #1</a>
<a href="#fibonacci-solution2" class="btn btn-default" data-toggle="collapse">Click here for the solution #2</a>
</div>
<div id="fibonacci-solution" class="collapse">
```
a=tf.Variable(0)
b=tf.Variable(1)
temp=tf.Variable(0)
c=a+b
update1=tf.assign(temp,c)
update2=tf.assign(a,b)
update3=tf.assign(b,temp)
init_op = tf.initialize_all_variables()
with tf.Session() as s:
s.run(init_op)
for _ in range(15):
print(s.run(a))
s.run(update1)
s.run(update2)
s.run(update3)
```
</div>
<div id="fibonacci-solution2" class="collapse">
```
f = [tf.constant(1),tf.constant(1)]
for i in range(2,10):
temp = f[i-1] + f[i-2]
f.append(temp)
with tf.Session() as sess:
result = sess.run(f)
print result
```
</div>
Now try to create your own placeholders and define any kind of operation between them:
End of explanation
"""
a = tf.constant(5.)
b = tf.constant(2.)
"""
Explanation: <div align="right">
<a href="#placeholder" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="placeholder" class="collapse">
```
a=tf.placeholder(tf.float32)
b=tf.placeholder(tf.float32)
c=2*a -b
dictionary = {a:[2,2],b:[3,4]}
with tf.Session() as session:
print session.run(c,feed_dict=dictionary)
```
</div>
Try changing our example with some other operations and see the result.
<div class="alert alert-info alertinfo">
<font size = 3><strong>Some examples of functions:</strong></font>
<br>
tf.multiply(x, y)<br />
tf.div(x, y)<br />
tf.square(x)<br />
tf.sqrt(x)<br />
tf.pow(x, y)<br />
tf.exp(x)<br />
tf.log(x)<br />
tf.cos(x)<br />
tf.sin(x)<br /> <br>
You can also take a look at [more operations](https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html)
</div>
End of explanation
"""
#your code goes here
c = tf.sin(a) * tf.exp(b)
"""
Explanation: create a variable named c to receive the result an operation (at your choice):
End of explanation
"""
with tf.Session() as session:
result = session.run(c)
print("c =: {}".format(result))
"""
Explanation: <div align="right">
<a href="#operations" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="operations" class="collapse">
```
c=tf.sin(a)
```
</div>
End of explanation
"""
|
ghvn7777/ghvn7777.github.io | content/fluent_python/7_decorate.ipynb | apache-2.0 | def deco(func):
def inner():
print('running inner()')
    return inner
@deco
def target():
print('running target()')
target
"""
Explanation: Decorators are used to "mark" functions in source code and enhance their behavior in some way. This is a powerful feature, but to master it you must understand closures
nonlocal is a keyword introduced in Python 3.0. As a Python programmer, you can get by without it if you stick strictly to class-based object-oriented programming, but if you want to implement function decorators yourself, you must understand closures inside and out, and therefore you also need to know nonlocal
The main topics covered in this chapter are:
How Python evaluates decorator syntax
How Python decides whether a variable is local
Why closures exist and how they work
What problem nonlocal solves
With this knowledge, we can go further into decorators:
Implementing a well-behaved decorator
Useful decorators in the standard library
Implementing a parameterized decorator
Let's start with the basics:
The basics
Suppose we have a decorator called decorate
@decorate
def target():
    print('running target()')
The code above has the same effect as writing the following:
```
def target():
print('running target()')
target = decorate(target)
```
End of explanation
"""
#!/usr/bin/env python
# encoding: utf-8
registry = []
def register(func):
print('running register(%s)' % func)
registry.append(func)
return func
@register
def f1():
print('running f1()')
@register
def f2():
print('running f2()')
def f3():
print('running f3()')
def main():
print('running main()')
print('registry ->', registry)
f1()
f2()
f3()
if __name__ == '__main__':
main()
# Running this produces the following output:
# running register(<function f1 at 0x7fbac67ca6a8>)
# running register(<function f2 at 0x7fbac67ca730>)
# running main()
# registry -> [<function f1 at 0x7fbac67ca6a8>, <function f2 at 0x7fbac67ca730>]
# running f1()
# running f2()
# running f3()
"""
Explanation: When Python executes decorators
A key feature of decorators is that they run right after the decorated function is defined. This usually happens at import time (when Python loads the module), as in the register.py module below
End of explanation
"""
import register
"""
Explanation: Note that register runs (twice) before any other function in the module. When register is called, it receives the decorated function as an argument, for example <function f1 at 0x7fbac67ca6a8>
After the module is loaded, registry holds references to the two decorated functions: f1 and f2. These functions, as well as f3, execute only when main explicitly calls them
If we simply import register.py
End of explanation
"""
register.registry
"""
Explanation: and then inspect the value of registry:
End of explanation
"""
promos = []

def promotion(promo_func):
    promos.append(promo_func)
    return promo_func

@promotion
def fidelity_promo(order):
    '''5% discount for customers with 1000 or more fidelity points'''
    return order.total() * .05 if order.customer.fidelity >= 1000 else 0

@promotion
def bulk_item_promo(order):
    '''10% discount for each LineItem with 20 or more units'''
    discount = 0
    for item in order.cart:
        if item.quantity >= 20:
            discount += item.total() * .1
    return discount

@promotion
def large_order_promo(order):
    '''7% discount for orders with 10 or more distinct items'''
    distinct_items = {item.product for item in order.cart}
    if len(distinct_items) >= 10:
        return order.total() * .07
    return 0

def best_promo(order):
    return max(promo(order) for promo in promos)
"""
Explanation: The main point here is that function decorators execute as soon as the module is imported, while the decorated functions run only when they are explicitly invoked. This highlights the difference between what Pythonistas call import time and runtime
Considering how decorators are commonly used in real code, the example above is unusual in two ways:
The decorator function is defined in the same module as the decorated functions; in practice, a decorator is usually defined in one module and applied to functions in other modules
The register decorator returns the same function passed in as an argument; in practice, most decorators define an inner function and return it
Using decorators to improve the "Strategy" pattern
The product-discount example in the previous chapter had a problem: best_promo needs a list of discount functions to find the biggest discount, and forgetting to add a new strategy to that list causes subtle bugs. The registration decorator below solves the problem:
End of explanation
"""
def f1(a):
print(a)
print(b)
f1(3)
"""
Explanation: This solution has several advantages
The promotion strategy functions don't need special names (i.e. they don't need to end with _promo)
The @promotion decorator highlights the purpose of the decorated function, and makes it easy to temporarily disable a promotion: just comment out the decorator
Promotional discount strategies may be defined in other modules, anywhere in the system, as long as the @promotion decorator is applied to them
However, most decorators do modify the decorated function. They usually do it by defining an inner function and returning it to replace the decorated function. Code that uses inner functions almost always depends on closures to operate correctly. To understand closures, we'll take a step back and review how variable scopes work in Python
Variable scope rules
End of explanation
"""
b = 6
f1(3)
"""
Explanation: The error here is no surprise; if we assign a value to the global variable b first and then call f1, there is no error
End of explanation
"""
b = 6
def f2(a):
print(a)
print(b)
b = 9
f2(3)
"""
Explanation: That is also perfectly normal. Now here is an example that may surprise you :) — the code starts out the same as above, yet the second print fails before the assignment is even reached.
End of explanation
"""
b = 6
def f3(a):
global b
print(a)
print(b)
b = 9
f3(3)
f3(3)
b = 30
b
"""
Explanation: First, 3 is printed, which shows that print(a) executed. But print(b) fails because, when Python compiles the body of the function, it decides that b is a local variable, since it is assigned within the function. The generated bytecode confirms this judgment: Python will try to fetch b from the local environment. Later, when f2(3) is called, the body of f2 fetches and prints the value of the local variable a, but when it tries to fetch the value of the local variable b, it discovers that b is unbound.
This is not a bug, but a design choice: Python does not require you to declare variables, but assumes that a variable assigned in the body of a function is local.
If we want the interpreter to treat b as a global variable in spite of the assignment within the function, we must use the global declaration:
End of explanation
"""
class Averager():
def __init__(self):
self.series = []
def __call__(self, new_value):
self.series.append(new_value)
total = sum(self.series)
return total / len(self.series)
avg = Averager()
avg(10)
avg(11)
avg(12)
"""
Explanation: Now that we understand Python's variable scoping rules, we can discuss closures
Closures
Closures are sometimes confused with anonymous functions, for historical reasons: defining functions inside functions was not common until anonymous functions made it popular, and closures only matter when you have nested functions. That's why many people learn both concepts at the same time
In fact, a closure is a function with an extended scope that encompasses non-global variables referenced in the body of the function but not defined there. It does not matter whether the function is anonymous or not; what matters is that it can access non-global variables defined outside of its body.
Let's look at an example: suppose avg computes the mean of an ever-growing series of values, for example the average closing price of a commodity over its entire history; new prices are added every day, so the average must take into account all prices so far.
In the beginning, avg looked like this:
End of explanation
"""
def make_averager():
series = []
def averager(new_value):
series.append(new_value)
total = sum(series)
return total / len(series)
return averager
avg = make_averager()
avg(10)
avg(11)
avg(12)
"""
Explanation: Below is a functional implementation using the higher-order function make_averager. Calling make_averager returns an averager function object. Each time averager is called, it appends the argument to the series and computes the current average
End of explanation
"""
avg.__code__.co_varnames
avg.__code__.co_freevars
"""
Explanation: Note what the two examples have in common: you call Averager() or make_averager() to get a callable object avg that updates the historical series and computes the current mean.
It's obvious where Averager keeps the history: in the self.series attribute. But where does the avg function keep series?
Note that series is a local variable of make_averager, because its initialization series = [] happens in the body of that function. Yet when avg(10) is called, make_averager has already returned, and its local scope is gone
Within averager, series is a free variable. This is a technical term meaning a variable that is not bound in the local scope; the closure for averager extends the scope of that function to include the binding for the free variable series
Inspecting the returned averager object, we find the names of the local and free variables in the __code__ attribute
End of explanation
"""
avg.__code__.co_freevars
avg.__closure__
avg.__closure__[0].cell_contents
"""
Explanation: The binding for series is kept in the __closure__ attribute of the returned function avg. Each item in avg.__closure__ corresponds to a name in avg.__code__.co_freevars. These items are cell objects, each with a cell_contents attribute holding the actual value, as shown below:
End of explanation
"""
def make_averager():
count = 0
total = 0
def averager(new_value):
count += 1
total += new_value
return total / count
return averager
avg = make_averager()
avg(10)
"""
Explanation: To summarize: a closure is a function that retains the bindings of the free variables that existed when the function was defined, so that they can be used later, when the function is invoked and the defining scope is no longer available
Note that only functions nested inside other functions may need to deal with external variables that are not in the global scope
The nonlocal declaration
Our earlier implementation of make_averager was inefficient, because it stored every value in the history series and computed their sum each time. A better implementation would store only the running total and the count, and compute the mean from these two numbers
The program below is broken on purpose, to make a point; let's take a look:
End of explanation
"""
def make_averager():
count = 0
total = 0
def averager(new_value):
nonlocal count, total
count += 1
total += new_value
return total / count
return averager
avg = make_averager()
avg(10)
avg(11)
"""
Explanation: The problem is that, when count is a number or any immutable type, count += 1 means the same as count = count + 1. Because we are assigning to count in the body of averager, it becomes a local variable. The total variable is affected by the same problem.
We did not have this problem in the previous example because we never assigned to series; we only called series.append() and passed series to sum and len. In other words, we took advantage of the fact that lists are mutable
With immutable types such as numbers, strings and tuples, you can only read, never update. If you try to rebind them as above, you implicitly create a local variable count. It is then no longer a free variable, and therefore it is not saved in the closure
To work around this, Python 3 introduced the nonlocal declaration. It lets you flag a variable as a free variable even when it is assigned a new value within the function. When a new value is assigned to a nonlocal variable, the binding stored in the closure is updated. The correct implementation of the latest make_averager looks like this:
End of explanation
"""
import time
def clock(func):
def clocked(*args):
        # time.perf_counter() returns a high-resolution clock reading (it includes time spent sleeping);
        # its reference point is undefined, so only the difference between two readings is meaningful
t0 = time.perf_counter()
result = func(*args)
elapsed = time.perf_counter() - t0
name = func.__name__
        # use repr() to get the standard string representation of each argument (works for any object, e.g. lists)
arg_str = ', '.join(repr(arg) for arg in args)
        # %r shows the standard representation of the result, since the return type is unknown
print('[%0.8fs]%s(%s) -> %r' % (elapsed, name, arg_str, result))
return result
return clocked
import time
@clock
def snooze(seconds):
time.sleep(seconds)
@clock
def factorial(n):
return 1 if n < 2 else n * factorial(n - 1)
print('*' * 40, 'Calling snooze(.123)')
snooze(.123)
print('*' * 40, 'Calling factorial(6)')
factorial(6)
"""
Explanation: In Python 2, the workaround is to store count and total as items of a mutable object, such as a list or a dict.
Implementing a simple decorator
Below we define a decorator that clocks every invocation of the decorated function and prints the elapsed time, the arguments passed, and the result of the call
End of explanation
"""
factorial.__name__
"""
Explanation: clock defines the inner clocked function because the body of clock runs as soon as the module is imported; the extra level of nesting guarantees that the decorator's timing work happens inside clocked, when the original function is called, not at import time
How it works
@clock
def factorial(n):
    return 1 if n < 2 else n * factorial(n - 1)
is equivalent to
def factorial(n):
    return 1 if n < 2 else n * factorial(n - 1)
factorial = clock(factorial)
So, in both examples, factorial is passed to clock as the func argument, clock then returns the clocked function, and behind the scenes the Python interpreter binds clocked to the name factorial. Inspecting the __name__ attribute of factorial gives the following result:
End of explanation
"""
import time
import functools
def clock(func):
@functools.wraps(func)
def clocked(*args, **kwargs):
t0 = time.time()
result = func(*args, **kwargs)
elapsed = time.time() - t0
name = func.__name__
arg_lst = []
if args:
arg_lst.append(', '.join(repr(arg) for arg in args))
if kwargs:
pairs = ['%s=%r' % (k, w) for k, w in sorted(kwargs.items())]
arg_lst.append(', '.join(pairs))
arg_str = ', '.join(arg_lst)
print('[%0.8fs]%s(%s) -> %r' % (elapsed, name, arg_str, result))
return result
return clocked
import time
@clock
def snooze(seconds):
time.sleep(seconds)
@clock
def factorial(n):
return 1 if n < 2 else n * factorial(n - 1)
print('*' * 40, 'Calling snooze(.123)')
snooze(.123)
print('*' * 40, 'Calling factorial(6)')
factorial(6)
factorial.__name__
"""
Explanation: So factorial now actually holds a reference to the clocked function. From then on, each call factorial(n) executes clocked(n). clocked roughly does the following:
Records the initial time t0
Calls the original factorial function, saving the result
Computes the elapsed time
Formats and prints the collected data
Returns the result saved in the second step
The clock decorator above has a few shortcomings: it does not support keyword arguments, and it masks the __name__ and __doc__ attributes of the decorated function. The version below uses the functools.wraps decorator to copy the relevant attributes from func to clocked; it also handles keyword arguments correctly
End of explanation
"""
@clock
def fibonacci(n):
if n < 2:
return n
return fibonacci(n - 2) + fibonacci(n - 1)
print(fibonacci(6))
"""
Explanation: Note that the attributes of factorial have been copied to clocked. functools.wraps is just one of the ready-to-use decorators in the standard library. Next we cover two of the most impressive decorators in the functools module: lru_cache and singledispatch
Decorators in the standard library
Python has three built-in functions designed to decorate methods: property, classmethod and staticmethod. property is discussed in Chapter 19, the other two in Chapter 9
Another frequently seen decorator is functools.wraps, a helper for building well-behaved decorators. We used it above; now let's look at the two most interesting decorators the standard library offers: lru_cache and the brand-new singledispatch (added in Python 3.4). Both are defined in the functools module. We discuss them in turn
Memoization with functools.lru_cache
functools.lru_cache is a very practical decorator that implements memoization: an optimization technique that saves the results of costly function calls, avoiding repeat computations on previously seen arguments. The letters LRU stand for "Least Recently Used", meaning the cache does not grow without bound: entries that have not been used for a while are discarded
Here is an example that generates the nth Fibonacci number:
End of explanation
"""
@functools.lru_cache() # note
@clock
def fibonacci(n):
if n < 2:
return n
return fibonacci(n - 2) + fibonacci(n - 1)
print(fibonacci(6))
"""
Explanation: This is clearly wasteful: fibonacci(1) is called 8 times, fibonacci(2) 5 times. But adding just two lines of code improves performance significantly, as follows:
End of explanation
"""
import html
def htmlize(obj):
    content = html.escape(repr(obj))  # escape special characters for HTML; see the html.escape docs below
return '<pre>{}</pre>'.format(content)
'''
html.escape(s, quote=True)
Convert the characters &, < and > in string s to HTML-safe sequences.
Use this if you need to display text that might contain such characters in HTML.
If the optional flag quote is true, the characters (") and (') are also translated;
this helps for inclusion in an HTML attribute value delimited by quotes, as in <a href="...">.
'''
"""
Explanation: Note that lru_cache must be invoked as a regular function: there is a pair of parentheses in the line @functools.lru_cache(). The reason is that lru_cache() accepts configuration parameters, as we'll see shortly
Decorators are stacked here: lru_cache() is applied on the function returned by @clock
With this change, execution time is halved and the function is called only once for each value of n.
Besides optimizing recursive algorithms, lru_cache also shines in applications that fetch information from the Web. Note in particular that lru_cache can be tuned by passing two optional parameters. Its signature is: functools.lru_cache(maxsize=128, typed=False)
maxsize determines how many call results are stored; after the cache is full, older results are discarded to make room. For optimal performance, maxsize should be a power of 2. If typed is set to True, results of different argument types are stored separately, i.e. float and integer arguments that are normally considered equal (such as 1 and 1.0) are distinguished. By the way, because lru_cache uses a dict to store the results, and the keys are made from the positional and keyword arguments used in the calls, all the arguments taken by a function decorated with lru_cache must be hashable.
Single-dispatch generic functions
Imagine we are debugging a web application and want to generate HTML displays for different types of Python objects.
We might start with a function like this:
End of explanation
"""
htmlize({1, 2, 3})  # by default, shows the HTML-escaped repr inside a <pre> tag
htmlize(abs)
htmlize('Heimlich & Co.\n- a game')  # str objects are also HTML-escaped; \n becomes <br>\n and the text goes inside <p> tags
htmlize(42)  # numbers are shown in decimal and hexadecimal
print(htmlize(['alpha', 66, {3, 2, 1}]))  # each list item is formatted according to its own type
"""
Explanation: This function works for any Python type, but now we want to extend it to display some types in special ways:
str: replace embedded newline characters with <br>\n; use <p> tags instead of <pre>
int: show the number in decimal and hexadecimal
list: output an HTML list, formatting each item according to its type
The behavior we want is shown below:
End of explanation
"""
from functools import singledispatch
from collections import abc
import numbers
import html
@singledispatch
def htmlize(obj):
content = html.escape(repr(obj))
return '<pre>{}</pre>'.format(content)
@htmlize.register(str)  # each specialized function is decorated with @base_function.register(type)
def _(text):  # the name of the specialized function is irrelevant; _ makes this clear
    content = html.escape(text).replace('\n', '<br>\n')
    return '<p>{0}</p>'.format(content)

@htmlize.register(numbers.Integral)  # numbers.Integral is a virtual superclass of int
def _(n):
    return '<pre>{0} (0x{0:x})</pre>'.format(n)

@htmlize.register(tuple)  # you can stack several register decorators to support different types with the same function
@htmlize.register(abc.MutableSequence)
def _(seq):
inner = '</li>\n<li>'.join(htmlize(item) for item in seq)
return '<ul>\n<li>' + inner + '</li>\n</ul>'
"""
Explanation: Because Python does not support overloading methods or functions, we can't create variations of htmlize with different signatures for each data type we want to handle differently. A common solution in Python would be to turn htmlize into a dispatch function, with a chain of if/else statements calling specialized functions, but that is unwieldy and hard to maintain
The new functools.singledispatch decorator in Python 3.4 allows the overall solution to be split across multiple modules, and even lets you provide specialized functions for classes that you can't modify. A plain function decorated with @singledispatch becomes a generic function: a group of functions to perform the same operation in different ways, depending on the type of the first argument
End of explanation
"""
registry = []
def register(func):
    print('running register(%s)' % func)
registry.append(func)
return func
@register
def f1():
print('running f1()')
print('running main()')
print('registry ->', registry)
f1()
"""
Explanation: When possible, register the specialized functions to handle abstract base classes (such as numbers.Integral and abc.MutableSequence) instead of concrete implementations (like int and list). That way, your code supports a greater variety of compatible types
Stacked decorators
We have already stacked decorators above: @lru_cache was applied on the result of @clock decorating fibonacci, and the last example applied two @htmlize.register decorators
Applying the two decorators @d1 and @d2 to a function f, in that order, is the same as writing f = d1(d2(f)). In other words
@d1
@d2
def f():
    print('f')
is the same as
```
def f():
    print('f')
f = d1(d2(f))
```
Besides stacked decorators, we have also used decorators that take arguments, for example htmlize.register(type) above
Parameterized decorators
Python takes the decorated function and passes it as the first argument to the decorator function. So how do you make a decorator accept other arguments? The answer is: create a decorator factory that takes those arguments and returns a decorator, which is then applied to the function to be decorated. We'll illustrate with the simplest decorator we have seen:
End of explanation
"""
registry = set()  # adding and removing elements is faster than with a list

def register(active=True):
    def decorate(func):  # this inner function is the actual decorator; its argument is a function
        print('running register(active=%s)->decorate(%s)' % (active, func))
        if active:
            registry.add(func)
        else:
            registry.discard(func)
        return func  # decorate is a decorator, so it must return a function
    return decorate  # register is a decorator factory, so it returns decorate

@register(active=False)  # the @register factory must be invoked as a function, with the desired parameters
def f1():
    print('running f1()')

@register()  # even without parameters, register must still be called as a function
def f2():
    print("running f2()")
def f3():
print('running f3()')
registry
"""
Explanation: To make it easy to enable or disable the function registration performed by register, we'll give it an optional active parameter which, when False, skips registering the decorated function. The implementation is below. Conceptually, the new register function is not a decorator but a decorator factory: calling it returns the actual decorator that is applied to the target function
To accept parameters, the new register decorator must be invoked as a function
End of explanation
"""
register()(f3)
registry
register(active=False)(f2)
registry
"""
Explanation: The main point is that register() returns decorate, which is then applied to the decorated function. Note that only f2 is in registry, because the decorator factory received active=False for f1. If, instead of using the @ syntax, we used register as a regular function, the syntax needed to decorate a function f, adding it to registry, would be register()(f). The following demonstrates how to add functions to registry and how to remove them from it
End of explanation
"""
import time
DEFAULT_FMT = '[{elapsed:0.8f}s] {name}({args}) -> {result}'
def clock(fmt=DEFAULT_FMT):  # parameterized decorator factory
    def decorate(func):  # the actual decorator
        def clocked(*_args):  # wraps the decorated function
            t0 = time.time()
            _result = func(*_args)
            elapsed = time.time() - t0
            name = func.__name__
            args = ', '.join(repr(arg) for arg in _args)
            result = repr(_result)
            print(fmt.format(**locals()))  # **locals() lets fmt reference any local variable of clocked
            return _result  # return the actual result, not its repr, so callers are unaffected
        return clocked
    return decorate
@clock()  # clock() invoked without arguments applies the decorator with the default format str
def snooze(seconds):
time.sleep(seconds)
for i in range(3):
snooze(.123)
"""
Explanation: Parameterized decorators are fairly intricate, and we have only discussed simple cases. A parameterized decorator usually replaces the decorated function, and its construction requires yet another level of nesting. Next we'll explore this kind of function pyramid
The parameterized clock decorator
This time we add a feature to the clock decorator: users may pass a format string to control the output of the decorated function. See the example below; for simplicity it is based on the initial clock implementation, not the improved one using @functools.wraps, which adds yet another function layer
End of explanation
"""
@clock('{name}: {elapsed}s')
def snooze(seconds):
time.sleep(seconds)
for i in range(3):
snooze(.123)
@clock('{name}({args}) dt={elapsed:0.3f}s')
def snooze(seconds):
time.sleep(seconds)
for i in range(3):
snooze(.123)
"""
Explanation: Below are calls passing user-defined format strings:
End of explanation
"""
|
ryan-leung/PHYS4650_Python_Tutorial | notebooks/04-Introduction-to-Pandas.ipynb | bsd-3-clause | import pandas
pandas.__version__
import pandas as pd
import numpy as np
"""
Explanation: Python Data Analytics
<img src="images/pandas_logo.png" alt="pandas" style="width: 400px;"/>
Pandas is a numerical package used extensively in data science. You can call the install the pandas package by
pip install pandas
Like numpy, the underlying routines are written in C with improved performance
<a href="https://colab.research.google.com/github/ryan-leung/PHYS4650_Python_Tutorial/blob/master/notebooks/04-Introduction-to-Pandas.ipynb"><img align="right" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory">
</a>
End of explanation
"""
data = pd.Series([1., 2., 3., 4.])
data
data = pd.Series([1, 2, 3, 4])
data
"""
Explanation: Built-In Documentation in jupyter
For example, to display all the contents of the pandas namespace, you can type
ipython
In [3]: pd.<TAB>
And to display Pandas's built-in documentation, you can use this:
ipython
In [4]: pd?
The Pandas Series Object
A Pandas Series is a one-dimensional array of indexed data.
End of explanation
"""
data.values
"""
Explanation: To retrieve back the underlying numpy array, we have the values attribute
End of explanation
"""
data.index
"""
Explanation: The index is an array-like object of type pd.Index.
End of explanation
"""
data[1]
data[1:3]
"""
Explanation: Slicing and indexing just like Python standard list
End of explanation
"""
data = pd.Series([1, 2, 3, 4],
index=['a', 'b', 'c', 'd'])
data
"""
Explanation: The Pandas Index
The index labels each record, and its datatype can vary. You can think of it as another numpy array bound to the data array.
End of explanation
"""
location = {
'Berlin': (52.5170365, 13.3888599),
'London': (51.5073219, -0.1276474),
'Sydney': (-33.8548157, 151.2164539),
'Tokyo': (34.2255804, 139.294774527387),
'Paris': (48.8566101, 2.3514992),
'Moscow': (46.7323875, -117.0001651)
}
location = pd.Series(location)
location
location['Berlin']
"""
Explanation: If we supply a dictionary to the series, it will be constructed with an index.
By default, a Series will be created where the index is drawn from the sorted keys.
End of explanation
"""
location['London':'Paris']
"""
Explanation: Unlike a dictionary, though, the Series also supports array-style operations such as slicing
End of explanation
"""
location = {
'Berlin': (52.5170365, 13.3888599),
'London': (51.5073219, -0.1276474),
'Sydney': (-33.8548157, 151.2164539),
'Tokyo': (34.2255804, 139.294774527387),
'Paris': (48.8566101, 2.3514992),
'Moscow': (46.7323875, -117.0001651)
}
location = pd.DataFrame(location)
location
# Switching rows to columns is as easy as a transpose
location.T
# Change the columns by .columns attribute
location = location.T
location.columns = ['lat', 'lon']
location
location.index
location.columns
"""
Explanation: The Pandas DataFrame Object
The pandas DataFrame is a very powerful table-like object.
End of explanation
"""
import urllib.request
urllib.request.urlretrieve(
'http://data.insideairbnb.com/taiwan/northern-taiwan/taipei/2018-11-27/visualisations/listings.csv',
'airbnb_taiwan_listing.csv'
)
urllib.request.urlretrieve(
'http://data.insideairbnb.com/china/hk/hong-kong/2018-11-12/visualisations/listings.csv',
'airbnb_hongkong_listing.csv'
)
"""
Explanation: Read Data
pandas has built-in data readers, you can type pd.read<TAB> to see what data format does it support:
we will focus in csv file which is widely used
We have some data downloaded from airbnb, you can find it in the folder, you may also download the file by executing the following code:
End of explanation
"""
airbnb_taiwan = pd.read_csv('airbnb_taiwan_listing.csv')
airbnb_taiwan
airbnb_hongkong = pd.read_csv('airbnb_hongkong_listing.csv')
airbnb_hongkong
"""
Explanation: Read CSV files
End of explanation
"""
mask = airbnb_hongkong['price'] > 1000
airbnb_hongkong[mask]
# In one line :
airbnb_taiwan[airbnb_taiwan['price'] > 4000]
"""
Explanation: Filter data
End of explanation
"""
A = pd.Series([2, 4, 6], index=[0, 1, 2])
B = pd.Series([1, 3, 5], index=[1, 2, 3])
A + B
A.add(B, fill_value=0)
"""
Explanation: Missing Data in Pandas
Missing data is very important in pandas dataframe/series operations. Pandas do element-to-element operations based on index. If the index does not match, it will produce a not-a-number (NaN) results.
End of explanation
"""
# Fill Zero
(A + B).fillna(0)
# forward-fill
(A + B).fillna(method='ffill')
# back-fill
(A + B).fillna(method='bfill')
"""
Explanation: The following table lists the upcasting conventions in Pandas when NA values are introduced:
|Typeclass | Conversion When Storing NAs | NA Sentinel Value |
|--------------|-----------------------------|------------------------|
| floating | No change | np.nan |
| object | No change | None or np.nan |
| integer | Cast to float64 | np.nan |
| boolean | Cast to object | None or np.nan |
Pandas treats None and NaN as essentially interchangeable for indicating missing or null values. They are convention functions to replace and find these values:
isnull(): Generate a boolean mask indicating missing values
notnull(): Opposite of isnull()
dropna(): Return a filtered version of the data
fillna(): Return a copy of the data with missing values filled or imputed
End of explanation
"""
airbnb_hongkong['price'].describe()
"""
Explanation: Data Aggregations
we will reuse the airbnb data to demonstrate data aggregations
End of explanation
"""
data_grouped = airbnb_hongkong.groupby(['neighbourhood'])
data_mean = data_grouped['price'].mean()
data_mean
data_mean = airbnb_taiwan.groupby(['neighbourhood'])['price'].mean()
data_mean
airbnb_taiwan.groupby(['room_type']).id.count()
airbnb_hongkong.groupby(['room_type']).id.count()
airbnb_taiwan.groupby(['room_type'])['price'].describe()
airbnb_hongkong.groupby(['room_type'])['price'].describe()
"""
Explanation: The following table summarizes some other built-in Pandas aggregations:
| Aggregation | Description |
|--------------------------|---------------------------------|
| count() | Total number of items |
| first(), last() | First and last item |
| mean(), median() | Mean and median |
| min(), max() | Minimum and maximum |
| std(), var() | Standard deviation and variance |
| mad() | Mean absolute deviation |
| prod() | Product of all items |
| sum() | Sum of all items |
End of explanation
"""
airbnb = pd.concat([airbnb_taiwan, airbnb_hongkong], keys=['taiwan', 'hongkong'])
airbnb
airbnb.index
airbnb.index = airbnb.index.droplevel(level=1)
airbnb.index
airbnb.groupby(['room_type', airbnb.index])['price'].describe()
"""
Explanation: Combining Two or more dataframe
End of explanation
"""
airbnb_taiwan.groupby(['room_type']).id.count()
%matplotlib inline
c = airbnb_taiwan.groupby(['room_type']).id.count()
c.plot.bar()
c = airbnb_taiwan.groupby(['room_type']).id.count().rename("count")
d = airbnb_taiwan.id.count()
(c / d * 100).plot.bar()
"""
Explanation: Easy Plotting in pandas
End of explanation
"""
import numpy as np
ts = pd.Series(np.random.randn(1000), index=pd.date_range('2016-01-01', periods=1000))
ts.plot()
ts = ts.cumsum()
ts.plot()
"""
Explanation: Time series data
Time series data refers to metrics that have a time dimension, such as stock prices and weather. In this example, we will look at some random time-series data:
End of explanation
"""
ts.index
ts['2016-02-01':'2016-05-01'].plot()
"""
Explanation: Datetime index filtering
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/linear_regression_scikitlearn.ipynb | mit | import pandas as pd
from sklearn import linear_model
import random
import numpy as np
%matplotlib inline
"""
Explanation: Title: Linear Regression
Slug: linear_regression
Summary: A simple example of linear regression in scikit-learn
Date: 2016-08-19 12:00
Category: Machine Learning
Tags: Linear Regression
Authors: Chris Albon
Sources: scikit-learn, DrawMyData.
The purpose of this tutorial is to give a brief introduction into the logic of statistical model building used in machine learning. If you want to read more about the theory behind this tutorial, check out An Introduction To Statistical Learning.
Let us get started.
Preliminary
End of explanation
"""
# Load the data
df = pd.read_csv('../data/simulated_data/battledeaths_n300_cor99.csv')
# Shuffle the data's rows (This is only necessary because of the way I created
# the data using DrawMyData. This would not normally be necessary with a real analysis).
df = df.sample(frac=1)
"""
Explanation: Load Data
With those libraries added, let us load the dataset (the dataset is avaliable in his site's GitHub repo).
End of explanation
"""
# View the first few rows
df.head()
"""
Explanation: Explore Data
Let us take a look at the first few rows of the data just to get an idea about it.
End of explanation
"""
# Plot the two variables against eachother
df.plot(x='friendly_battledeaths', y='enemy_battledeaths', kind='scatter')
"""
Explanation: Now let us plot the data so we can see it's structure.
End of explanation
"""
# Create our predictor/independent variable
# and our response/dependent variable
X = df['friendly_battledeaths']
y = df['enemy_battledeaths']
# Create our test data from the first 30 observations
X_test = X[0:30].values.reshape(-1, 1)
y_test = y[0:30]
# Create our training data from the remaining observations
X_train = X[30:].values.reshape(-1, 1)
y_train = y[30:]
"""
Explanation: Break Data Up Into Training And Test Datasets
Explanation: Break Data Up Into Training And Test Datasets
Now for the real work. To judge how good our model is, we need something to test it against. We can accomplish this using a technique called cross-validation. Cross-validation can get much more complicated and powerful, but in this example we are going to do the simplest version of this technique.
Divide the dataset into two datasets: A 'training' dataset that we will use to train our model and a 'test' dataset that we will use to judge the accuracy of that model.
Train the model on the 'training' data.
Apply that model to the test data's X variable, creating the model's guesses for the test data's Ys.
Compare how close the model's predictions for the test data's Ys were to the actual test data Ys.
End of explanation
"""
# Create an object that is an ols regression
ols = linear_model.LinearRegression()
# Train the model using our training data
model = ols.fit(X_train, y_train)
"""
Explanation: Train The Linear Model
Let us train the model using our training data.
End of explanation
"""
# View the training model's coefficient
model.coef_
# View the R-Squared score
model.score(X_test, y_test)
"""
Explanation: View The Results
Here are some basic outputs of the model, notably the coefficient and the R-squared score.
End of explanation
"""
# Run the model on X_test and show the first five results
list(model.predict(X_test)[0:5])
"""
Explanation: Now that we have used the training data to train a model, called model, we can apply it to the test data's Xs to make predictions of the test data's Ys.
Previously we used X_train and y_train to train a linear regression model, which we stored as a variable called model. The code model.predict(X_test) applies the trained model to the X_test data (data the model has never seen before) to make predicted values of Y.
This can easily be seen by simply running the code:
End of explanation
"""
# View the first five test Y values
list(y_test)[0:5]
"""
Explanation: This array of values is the model's best guesses for the values of the test data's Ys. Compare them to the actual test data Y values:
End of explanation
"""
# Apply the model we created using the training data
# to the test data, and calculate the RSS.
((y_test - model.predict(X_test)) **2).sum()
"""
Explanation: The difference between the model's predicted values and the actual values is how we judge a model's accuracy, because a perfectly accurate model would have residuals of zero.
However, to judge a model, we want a single statistic (number) that we can use as a measure. We want this measure to capture the difference between the predicted values and the actual values across all observations in the data.
The most common statistic used for quantitative Ys is the residual sum of squares:
$$ RSS = \sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2} $$
Don't let the mathematical notation throw you off:
$f(x_{i})$ is the model we trained: model.predict(X_test)
$y_{i}$ is the test data's y: y_test
$^{2}$ is the exponent: **2
$\sum_{i=1}^{n}$ is the summation: .sum()
In the residual sum of squares, for each observation we find the difference between the model's predicted Y and the actual Y, then square that difference to make all the values positive. Then we add all those squared differences together to get a single number. The final result is a statistic representing how far the model's predictions were from the real values.
End of explanation
"""
# Calculate the MSE
np.mean((model.predict(X_test) - y_test) **2)
"""
Explanation: Note: You can also use Mean Squared Error, which is RSS divided by the degrees of freedom. But I find it helpful to think in terms of RSS.
End of explanation
"""
|
FowlerLab/Enrich2 | docs/notebooks/unique_barcodes.ipynb | bsd-3-clause | % matplotlib inline
from __future__ import print_function
import os.path
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from enrich2.variant import WILD_TYPE_VARIANT
import enrich2.plots as enrich_plot
pd.set_option("display.max_rows", 10) # rows shown when pretty-printing
"""
Explanation: Selecting variants by number of unique barcodes
This notebook gets scores for the variants in an Experiment that are linked to multiple barcodes, and plots the relationship between each variant's score and number of unique barcodes.
End of explanation
"""
results_path = "/path/to/Enrich2-Example/Results/"
"""
Explanation: Modify the results_path variable in the next cell to match the output directory of your Enrich2-Example dataset.
End of explanation
"""
my_store = pd.HDFStore(os.path.join(results_path, "BRCA1_Example_exp.h5"))
"""
Explanation: Open the Experiment HDF5 file.
End of explanation
"""
my_store.keys()
"""
Explanation: The pd.HDFStore.keys() method returns a list of all the tables in this HDF5 file.
End of explanation
"""
bcm = my_store['/main/barcodemap']
bcm
"""
Explanation: First we will work with the barcode-variant map for this analysis, stored in the "/main/barcodemap" table. The index is the barcode and it has a single column for the variant HGVS string.
End of explanation
"""
variant_bcs = Counter(bcm['value'])
variant_bcs.most_common(10)
"""
Explanation: To find out how many unique barcodes are linked to each variant, we'll count the number of times each variant appears in the barcode-variant map using a Counter data structure. We'll then output the top ten variants by number of unique barcodes.
End of explanation
"""
bc_counts = pd.DataFrame(variant_bcs.most_common(), columns=['variant', 'barcodes'])
bc_counts
"""
Explanation: Next we'll turn the Counter into a data frame.
End of explanation
"""
bc_counts.index = bc_counts['variant']
bc_counts.index.name = None
del bc_counts['variant']
bc_counts
"""
Explanation: The data frame has the information we want, but it will be easier to use later if it's indexed by variant rather than row number.
End of explanation
"""
bc_cutoff = 10
multi_bc_variants = bc_counts.loc[bc_counts['barcodes'] >= bc_cutoff].index[1:]
multi_bc_variants
"""
Explanation: We'll use a cutoff to choose variants with a minimum number of unique barcodes, and store this subset in a new index. We'll also exclude the wild type by dropping the first entry of the index.
End of explanation
"""
multi_bc_scores = my_store.select('/main/variants/scores', where='index in multi_bc_variants')
multi_bc_scores
"""
Explanation: We can use this index to get condition-level scores for these variants by querying the "/main/variants/scores" table. Since we are working with an Experiment HDF5 file, the data frame column names are a MultiIndex with two levels, one for experimental conditions and one for data values (see the pandas documentation for more information).
End of explanation
"""
my_store.close()
"""
Explanation: There are fewer rows in multi_bc_scores than in multi_bc_variants because some of the variants were not scored in all replicate selections, and therefore do not have a condition-level score.
Now that we're finished getting data out of the HDF5 file, we'll close it.
End of explanation
"""
bc_counts['score'] = multi_bc_scores['E3', 'score']
bc_counts
"""
Explanation: We'll add a column to the bc_counts data frame that contains scores from the multi_bc_scores data frame. To reference a column in a data frame with a MultiIndex, we need to specify all column levels.
End of explanation
"""
bc_counts.dropna(inplace=True)
bc_counts
"""
Explanation: Many rows in bc_counts are missing scores (displayed as NaN) because those variants were not in multi_bc_scores. We'll drop them before continuing.
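The drop behaves like this on a toy frame with one missing score:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"barcodes": [12, 15], "score": [0.4, np.nan]})
df.dropna(inplace=True)  # removes any row containing a NaN

print(len(df))  # 1
```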
End of explanation
"""
fig, ax = plt.subplots()
enrich_plot.configure_axes(ax, xgrid=True)
ax.plot(bc_counts['barcodes'],
bc_counts['score'],
linestyle='none', marker='.', alpha=0.6,
color=enrich_plot.plot_colors['bright5'])
ax.set_xlabel("Unique Barcodes")
ax.set_ylabel("Variant Score")
"""
Explanation: Now that we have a data frame containing the subset of variants we're interested in, we can make a plot of score vs. number of unique barcodes. This example uses functions and colors from the Enrich2 plotting library.
End of explanation
"""
atulsingh0/MachineLearning | scikit-learn/01_Scikit.ipynb | gpl-3.0
from sklearn.neighbors import KNeighborsClassifier
# instantiate the KNN classifier
knn = KNeighborsClassifier(n_neighbors=2)
# training the model
knn.fit(X, y)
# predict the label for the sample [5,4,3,2] (predict expects a 2D array)
knn.predict([[5,4,3,2]])
knn.predict([[5,4,3,2], [1,2,3,5]])
"""
Explanation: Choosing the KNN classifier algorithm to predict the iris data
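The X and y used here come from an earlier cell; a minimal sketch of how they could be loaded, assuming the standard scikit-learn iris dataset:

```python
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target

print(X.shape, y.shape)  # (150, 4) (150,)
```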
End of explanation
"""
from sklearn.linear_model import LogisticRegression
lrm = LogisticRegression()
lrm.fit(X, y)
lrm.predict([[5,4,3,2], [1,2,3,5]])
"""
Explanation: Using another model - Logistic Regression
End of explanation
"""
from sklearn import metrics
# test LogisticRegression
# training my model
lrm = LogisticRegression()
lrm.fit(X, y)
y_pred = lrm.predict(X)
# testing accuracy
metrics.accuracy_score(y, y_pred)
# test KNN when K=1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
# testing accuracy
metrics.accuracy_score(y, y_pred)
# test KNN when K = 5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
y_pred = knn.predict(X)
# testing accuracy
metrics.accuracy_score(y, y_pred)
"""
Explanation: Testing the model accuracy when the model is trained and tested on the same data
End of explanation
"""
# splitting the data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
# test LogisticRegression
# training my model
lrm = LogisticRegression()
lrm.fit(X_train, y_train)
y_pred = lrm.predict(X_test)
# testing accuracy
metrics.accuracy_score(y_test, y_pred)
# test KNN when K=1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
# testing accuracy
metrics.accuracy_score(y_test, y_pred)
# test KNN when K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
# testing accuracy
metrics.accuracy_score(y_test, y_pred)
"""
Explanation: Testing the model accuracy when the model is trained and evaluated with a train/test split
End of explanation
"""
# Let's have a loop which will check for all possible value of K
accuracy = []
K = range(1,26)
for k in K:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    # testing accuracy
    ac = metrics.accuracy_score(y_test, y_pred)
    accuracy.append(ac)
# now plotting it
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(K, accuracy)
# we can see the model performs better when K is between 6 and 16
# let's train our model on KNN when K = 6
# test KNN when K=6
knn = KNeighborsClassifier(n_neighbors=6)
knn.fit(X, y)
y_pred = knn.predict(X)
# testing accuracy
metrics.accuracy_score(y, y_pred)
"""
Explanation: By training and testing on split data, we can say KNN with K = 5 performs best among the models tried
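A single train/test split is sensitive to how the data happen to be divided; as a sketch of one possible refinement (not part of the cells above), cross-validation averages the accuracy over several splits:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5)

# 5-fold cross-validated accuracy, averaged over the folds
scores = cross_val_score(knn, X, y, cv=5, scoring="accuracy")
print(round(scores.mean(), 3))
```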
End of explanation
"""
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-1/aerosol.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/csir-csiro/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
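Operator splitting, one of the choices above, advances each process with its own sub-timestep within a single model step. The toy sketch below illustrates the idea only; the rates, timesteps, and decay model are invented for illustration and are not part of any real aerosol scheme.

```python
# Toy operator-splitting sketch: within one model step, advection and
# physics are advanced separately, each with its own sub-timestep.
# All rates and timesteps here are invented for illustration only.
def split_step(mass, dt, adv_dt, phys_dt, adv_rate=-1e-5, phys_rate=-1e-6):
    for _ in range(int(dt / adv_dt)):       # advection sub-steps
        mass += adv_rate * mass * adv_dt
    for _ in range(int(dt / phys_dt)):      # physics sub-steps
        mass += phys_rate * mass * phys_dt
    return mass

m = split_step(1.0, dt=1800, adv_dt=900, phys_dt=600)
```

In an integrated scheme, by contrast, all tendencies would be summed and advanced together with one timestep.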
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
Kaggle/learntools | notebooks/computer_vision/raw/tut5.ipynb | apache-2.0 | #$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
"""
Explanation: <!--TITLE:Custom Convnets-->
Introduction
Now that you've seen the layers a convnet uses to extract features, it's time to put them together and build a network of your own!
Simple to Refined
In the last three lessons, we saw how convolutional networks perform feature extraction through three operations: filter, detect, and condense. A single round of feature extraction can only extract relatively simple features from an image, things like simple lines or contrasts. These are too simple to solve most classification problems. Instead, convnets will repeat this extraction over and over, so that the features become more complex and refined as they travel deeper into the network.
<figure>
<img src="https://i.imgur.com/VqmC1rm.png" alt="Features extracted from an image of a car, from simple to refined." width=800>
</figure>
Convolutional Blocks
A convnet does this by passing images through long chains of convolutional blocks, which perform this extraction.
<figure>
<img src="https://i.imgur.com/pr8VwCZ.png" width="400" alt="Extraction as a sequence of blocks.">
</figure>
These convolutional blocks are stacks of Conv2D and MaxPool2D layers, whose role in feature extraction we learned about in the last few lessons.
<figure>
<!-- <img src="./images/2-block-crp.png" width="400" alt="A kind of extraction block: convolution, ReLU, pooling."> -->
<img src="https://i.imgur.com/8D6IhEw.png" width="400" alt="A kind of extraction block: convolution, ReLU, pooling.">
</figure>
Each block represents a round of extraction, and by composing these blocks the convnet can combine and recombine the features produced, growing them and shaping them to better fit the problem at hand. The deep structure of modern convnets is what allows this sophisticated feature engineering and has been largely responsible for their superior performance.
Example - Design a Convnet
Let's see how to define a deep convolutional network capable of engineering complex features. In this example, we'll create a Keras Sequential model and then train it on our Cars dataset.
Step 1 - Load Data
This hidden cell loads the data.
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
# First Convolutional Block
layers.Conv2D(filters=32, kernel_size=5, activation="relu", padding='same',
# give the input dimensions in the first layer
# [height, width, color channels(RGB)]
input_shape=[128, 128, 3]),
layers.MaxPool2D(),
# Second Convolutional Block
layers.Conv2D(filters=64, kernel_size=3, activation="relu", padding='same'),
layers.MaxPool2D(),
# Third Convolutional Block
layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding='same'),
layers.MaxPool2D(),
# Classifier Head
layers.Flatten(),
layers.Dense(units=6, activation="relu"),
layers.Dense(units=1, activation="sigmoid"),
])
model.summary()
"""
Explanation: Step 2 - Define Model
Here is a diagram of the model we'll use:
<figure>
<!-- <img src="./images/2-convmodel-1.png" width="200" alt="Diagram of a convolutional model."> -->
<img src="https://i.imgur.com/U1VdoDJ.png" width="250" alt="Diagram of a convolutional model.">
</figure>
Now we'll define the model. See how our model consists of three blocks of Conv2D and MaxPool2D layers (the base) followed by a head of Dense layers. We can translate this diagram more or less directly into a Keras Sequential model just by filling in the appropriate parameters.
End of explanation
"""
model.compile(
optimizer=tf.keras.optimizers.Adam(epsilon=0.01),
loss='binary_crossentropy',
metrics=['binary_accuracy']
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=40,
verbose=0,
)
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
"""
Explanation: Notice in this definition how the number of filters doubles block-by-block: 32, 64, 128. This is a common pattern: since the MaxPool2D layers reduce the size of the feature maps, we can afford to increase the quantity we create.
Step 3 - Train
We can train this model just like the model from Lesson 1: compile it with an optimizer along with a loss and metric appropriate for binary classification.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/tsa_filters.ipynb | bsd-3-clause | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range("1959Q1", "2009Q3"))
print(index)
dta.index = index
del dta["year"]
del dta["quarter"]
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax)
legend = ax.legend(loc="upper left")
legend.prop.set_size(20)
"""
Explanation: Time Series Filters
End of explanation
"""
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[["realgdp"]].copy()
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16)
legend = ax.get_legend()
legend.prop.set_size(20)
"""
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
End of explanation
"""
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl", "unemp"]])
"""
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at frequencies higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-k}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}b_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
End of explanation
"""
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=["r--", "b-"])
"""
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
"""
print(sm.tsa.stattools.adfuller(dta["unemp"])[:3])
print(sm.tsa.stattools.adfuller(dta["infl"])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl", "unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14, 10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=["r--", "b-"])
"""
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the
calculations of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{u}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
End of explanation
"""
|
rvperry/phys202-2015-work | assignments/assignment06/DisplayEx01.ipynb | mit | from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
assert True # leave this to grade the import statements
"""
Explanation: Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell:
End of explanation
"""
Image(url='http://www.mohamedmalik.com/wp-content/uploads/2014/11/Physics.jpg',embed=True,width=600,height=600)
assert True # leave this to grade the image display
"""
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
"""
%%HTML
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge(e)</th>
<th>Mass(MeV/$c^2$)</th>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>$+\frac{2}{3}$</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>$-\frac{1}{3}$</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>$+\frac{2}{3}$</td>
<td>1160-1340</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>$-\frac{1}{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>$+\frac{2}{3}$</td>
<td>169,100-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>$-\frac{1}{3}$</td>
<td>4130-4370</td>
</tr>
</table>
assert True # leave this here to grade the quark table
"""
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/006560919734f06efa76c80dc321a748/plot_object_source_estimate.ipynb | bsd-3-clause | import os
from mne import read_source_estimate
from mne.datasets import sample
print(__doc__)
# Paths to example data
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
"""
Explanation: The :class:SourceEstimate <mne.SourceEstimate> data structure
Source estimates, commonly referred to as STC (Source Time Courses),
are obtained from source localization methods.
Source localization methods solve the so-called 'inverse problem'.
MNE provides different methods for solving it:
dSPM, sLORETA, LCMV, MxNE etc.
Source localization consists of projecting the EEG/MEG sensor data into
a 3-dimensional 'source space' positioned in the individual subject's brain
anatomy. The data is transformed such that the recorded time series at
each sensor location maps to a time series at each spatial location of the
'source space', where our source estimates are defined.
An STC object contains the amplitudes of the sources over time.
It only stores the amplitudes of activations but
not the locations of the sources. To get access to the locations
you need to have the :class:source space <mne.SourceSpaces>
(often abbreviated src) used to compute the
:class:forward operator <mne.Forward> (often abbreviated fwd).
See tut-forward for more details on forward modeling, and
tut-inverse-methods
for an example of source localization with dSPM, sLORETA or eLORETA.
Source estimates come in different forms:
- :class:`mne.SourceEstimate`: For cortically constrained source spaces.
- :class:`mne.VolSourceEstimate`: For volumetric source spaces
- :class:`mne.VectorSourceEstimate`: For cortically constrained source
spaces with vector-valued source activations (strength and orientation)
- :class:`mne.MixedSourceEstimate`: For source spaces formed of a
combination of cortically constrained and volumetric sources.
<div class="alert alert-info"><h4>Note</h4><p>:class:`(Vector) <mne.VectorSourceEstimate>`
:class:`SourceEstimate <mne.SourceEstimate>` are surface representations
mostly used together with `FreeSurfer <tut-freesurfer>`
surface representations.</p></div>
Let's get ourselves an idea of what a :class:mne.SourceEstimate really
is. We first set up the environment and load some data:
End of explanation
"""
stc = read_source_estimate(fname_stc, subject='sample')
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# Plot surface
brain = stc.plot(**surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)
"""
Explanation: Load and inspect example data
This data set contains source estimation data from an audio visual task. It
has been mapped onto the inflated cortical surface representation obtained
from FreeSurfer <tut-freesurfer>
using the dSPM method. It highlights a noticeable peak in the auditory
cortices.
Let's see how it looks like.
End of explanation
"""
shape = stc.data.shape
print('The data has %s vertex locations with %s sample points each.' % shape)
"""
Explanation: SourceEstimate (stc)
A source estimate contains the time series of activations
at spatial locations defined by the source space.
In the context of FreeSurfer surfaces, which consist of 3D triangulations,
we can call each data point on the inflated brain
representation a vertex. If every vertex represents the spatial location
of a time series, the time series and spatial locations can be written into a
matrix, where each vertex (row) is assigned a value at each of several time
points (columns). This value is the strength of our signal at a given point in
space and time. Exactly this matrix is stored in stc.data.
Let's have a look at the shape
End of explanation
"""
shape_lh = stc.lh_data.shape
print('The left hemisphere has %s vertex locations with %s sample points each.'
% shape_lh)
"""
Explanation: We see that stc carries 7498 time series, each 25 samples long. Those time
series belong to 7498 vertices, which in turn represent locations
on the cortical surface. So where do those vertex values come from?
FreeSurfer separates both hemispheres and creates surfaces
representation for left and right hemisphere. Indices to surface locations
are stored in stc.vertices. This is a list with two arrays of integers,
that index a particular vertex of the FreeSurfer mesh. A value of 42 would
hence map to the x,y,z coordinates of the mesh with index 42.
See next section on how to get access to the positions in a
:class:mne.SourceSpaces object.
Since both hemispheres are always represented separately, both attributes
introduced above, can also be obtained by selecting the respective
hemisphere. This is done by adding the correct prefix (lh or rh).
End of explanation
"""
is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]
print('The number of vertices in stc.lh_data and stc.rh_data do ' +
('not ' if not is_equal else '') +
'sum up to the number of rows in stc.data')
"""
Explanation: Since we did not change the time representation, only the selected subset of
vertices and hence only the row size of the matrix changed. We can check if
the rows of stc.lh_data and stc.rh_data sum up to the value we had
before.
End of explanation
"""
peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,
time_as_index=True)
"""
Explanation: Indeed and as the mindful reader already suspected, the same can be said
about vertices. stc.lh_vertno thereby maps to the left and
stc.rh_vertno to the right inflated surface representation of
FreeSurfer.
Relationship to SourceSpaces (src)
As mentioned above, :class:src <mne.SourceSpaces> carries the mapping from
stc to the surface. The surface is built up from a
triangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>_
for each hemisphere. Each triangle building up a face consists of 3 vertices.
Since src is a list of two source spaces (left and right hemisphere), we can
access the respective data by selecting the source space first. Faces
building up the left hemisphere can be accessed via src[0]['tris'], where
the index $0$ stands for the left and $1$ for the right
hemisphere.
The values in src[0]['tris'] refer to row indices in src[0]['rr'].
Here we find the actual coordinates of the surface mesh. Hence every index
value for vertices will select a coordinate from here. Furthermore
src[0]['vertno'] stores the same data as stc.lh_vertno,
except when working with sparse solvers such as
:func:mne.inverse_sparse.mixed_norm, as then only a fraction of
vertices actually have non-zero activations.
In other words stc.lh_vertno equals src[0]['vertno'], whereas
stc.rh_vertno equals src[1]['vertno']. Thus the Nth time series in
stc.lh_data corresponds to the Nth value in stc.lh_vertno and
src[0]['vertno'] respectively, which in turn map the time series to a
specific location on the surface, represented as the set of cartesian
coordinates stc.lh_vertno[N] in src[0]['rr'].
Let's obtain the peak amplitude of the data as vertex and time point index
End of explanation
"""
peak_vertex_surf = stc.lh_vertno[peak_vertex]
peak_value = stc.lh_data[peak_vertex, peak_time]
"""
Explanation: The first value thereby indicates which vertex and the second which time
point index from within stc.lh_vertno or stc.lh_data is used. We can
use the respective information to get the index of the surface vertex
resembling the peak and its value.
End of explanation
"""
brain = stc.plot(**surfer_kwargs)
# We add the new peak coordinate (as vertex index) as an annotation dot
brain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')
# We add a title as well, stating the amplitude at this time and location
brain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)
"""
Explanation: Let's visualize this as well, using the same surfer_kwargs as in the
beginning.
End of explanation
"""
|
mayankjohri/LetsExplorePython | Section 1 - Core Python/Chapter 02 - Data Types Part - 1/2.3. Operators.ipynb | gpl-3.0 | a = 10
b = 22
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
print("a + b:\t", a + b)
print("a - b:\t", a - b)
print("a * b:\t", a * b)
print("a / b:\t", a / b)
print("a//b:\t", a//b)
print("a % b:\t", a % b)
print("-a:\t", -a)
print("a < b:\t", a < b)
print("a > b:\t", a > b)
print("a <= b:\t", a <= b)
print("a >= b:\t", a >= b)
print("abs(a):\t", abs(a))
import math
print("sqrt(a):", math.sqrt(a))
"""
Explanation: Operators
We are now going to explore operators. They are one- or two-character symbols that represent a specific type of operation, such as addition, multiplication, or comparison.
Python provides many types of operators. We will explore them grouped by functionality, namely:
Assignment Operators
Arithmetic Operators
Relational Operators
Logical/Boolean Operators
Bitwise Operators
Arithmetic Operators
Python supports most common maths operations, as shown in the table below
| Syntax | Math | Operation Name |
|-------------- |------------------------------------------- |------------------------------------------------------------------ |
| a + b | a + b | addition |
| a - b | a - b | subtraction |
| a * b | a * b | multiplication |
| a / b | a \div b | division (see note below) |
| a // b | a//b | floor division (e.g. 5//2=2) |
| a % b | a % b | modulo |
| -a | -a | negation |
| abs(a)| <code>| a |</code> | absolute value |
| a ** b | a^b | exponent |
| math.sqrt(a) | sqrt a | square root |
<center>Note</center>
In order to use the math.sqrt() function, you must explicitly load the math module by adding import math at the top of your file, where all the other module imports are defined.
We will explore the below operators with various primary data types
Numeric Vs Numeric
The following mathematical operations are supported in Python between numeric values
End of explanation
"""
a = "10"
b = 22
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
# print("a + b:\t", a + b)
# print("a - b:\t", a - b)
print("a * b:\t", a * b)
# print("a / b:\t", a / b)
# print("a//b:\t", a//b)
# print("a % b:\t", a % b)
# print("-a:\t", -a)
# print("a < b:\t", a < b)
# print("a > b:\t", a > b)
# print("a <= b:\t", a <= b)
# print("a >= b:\t", a >= b)
# print("abs(a):\t", abs(a))
# import math
# print("sqrt(a):", math.sqrt(a))
"""
Explanation: Numeric Vs String
End of explanation
"""
a = "10"
b = 22 + 4j
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
try:
print("a * b:\t", a * b)
except Exception as e:
print(e)
"""
Explanation: As shown above only "*" multiplication is possible between string & real numeric value
Complex Vs String
End of explanation
"""
a = 10 + 4j
b = 10
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
print("a + b:\t", a + b)
print("a - b:\t", a - b)
print("a * b:\t", a * b)
print("a / b:\t", a / b)
# print("a//b:\t", a//b)
# print("a % b:\t", a % b)
print("-a:\t", -a)
# print("a < b:\t", a < b)
# print("a > b:\t", a > b)
# print("a <= b:\t", a <= b)
# print("a >= b:\t", a >= b)
print("abs(a):\t", abs(a))
import math
# print("sqrt(a):", math.sqrt(a))
"""
Explanation: Complex Vs Numeric
End of explanation
"""
a = 10
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
"""
Explanation: Thus we can see that the following operations are not possible with the complex data type: floor division (//), modulo (%), and comparison (<, >, >=, <=).
Assignment Operators
| Syntax | Math | Operation Name |
|-------------- |-------------------------------------------|------------------------------------------------------------------ |
| += | a = a + b | addition |
| -= | a = a - b | subtraction |
| *= | a = a * b | multiplication |
| /= | a = a / b | division (see note below) |
| //= | a = a // b | floor division (e.g. 5//2=2) |
| %= | a = a % b | modulo |
Numeric
Int
End of explanation
"""
a = 10.20
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
"""
Explanation: Real Number
End of explanation
"""
a = 10 + 20j
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
# c //= a
# print("c =", c)
# c %= a
# print("c =", c)
"""
Explanation: Complex Number
End of explanation
"""
a = True
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
a = False
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
# c /= a
# print("c =", c)
c **= a
print("c =", c)
# c //= a
# print("c =", c)
# c %= a
# print("c =", c)
"""
Explanation: Boolean Number
End of explanation
"""
There are following relational operators supported by Python language
"""
Explanation: Relational Operators
End of explanation
"""
a = 10
b = 21.22
print(a < b)
print(a > b)
print(a <= b)
print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: | Syntax | Math | Operation Name |
|---------- |--------|----------------------|
| < | a < b | Less than |
| > | a > b | Greater than |
| <= | a <= b | Less than or equal |
| >= | a >= b | Greater than or equal |
| == | a == b | Equal to |
| != | a != b | Not equal to |
Numeric Vs Numeric
End of explanation
"""
a = 10
b = "Mayank Shrivastava"
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: Numeric Vs String
End of explanation
"""
a = 10
b = 10 + 21j
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: Numeric Vs Complex
End of explanation
"""
a = 10 + 20j
b = 10 + 21j
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: Complex Vs Complex
End of explanation
"""
a = 10 + 20j
b = "Rishi Rai"
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: Complex Vs String
End of explanation
"""
a = "Manish Nandle"
b = "Saurabh Dubey"
print(a < b)
print(a > b)
print(a <= b)
print(a >= b)
print(a == b)
print(a != b)
"""
Explanation: String Vs String
End of explanation
"""
print (0 and 3) # Shows 0
print (2 and 3 )# Shows 3
print (0 or 3) # Shows 3
print (2 or 3) # Shows 2
print (not 0) # Shows True
print (not 2) # Shows False
print (2 in (2, 3)) # Shows True
print (2 is 3) # Shows False
"""
Explanation: Logical/Boolean Operators
| Syntax | example | Operation Name | Meaning |
|---------- |----------------|---------------- |-----------------------------------------------------------|
| and | a and b | Logical AND | returns true if and only if both expressions are true |
| or | a or b | Logical OR | returns true if at least one expression is true |
| not | not(a and b) | Logical NOT | returns the reverse of the expression |
| is | a is b | Logical IS | returns true if both references point to the same object, else false |
| in | a in b | Logical IN | returns true if the first operand is found in the second, else false |
Example:
End of explanation
"""
a = "Sunil Kumar Bhele"
x = 10 #-> 1010
y = 11 #-> 1011
"""
Explanation: Besides boolean operators, there are the functions all(), which returns true when all of the items in the sequence passed as parameters are true, and any(), which returns true if any item is true.
Bitwise Operators
Left Shift (<<)
Right Shift (>>)
And (&)
Or (|)
Exclusive Or (^)
Inversion (~)
End of explanation
"""
x = 10 #-> 1010
y = 11 #-> 1011
print("x << 2 = ", x<<2)
print("x =", x)
print("x >> 2 = ", x>>2)
print("x &y = ", x&y)
print("x | y = ", x|y)
print("x^y = ", x^y)
print("x =", x)
print("~x = ", ~x)
print("~y = ", ~y)
"""
Explanation: 1011
"""
OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
AND
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
"""
End of explanation
"""
print (round(3.14159265, 2))
"""
Explanation: Order of Operations
Python uses the standard order of operations as taught in Algebra and Geometry classes. That is, mathematical expressions are evaluated in the following order (memorized by many as PEMDAS or BODMAS {Brackets, Orders or pOwers, Division, Multiplication, Addition, Subtraction}).
(Note that operations which share a table row are performed from left to right. That is, a division to the left of a multiplication, with no parentheses between them, is performed before the multiplication simply because it is to the left.)
| Name | Syntax | Description | PEMDAS Mnemonic |
|---------------------------- |---------- |---------------------------------------------------------------------------------------------------------------------------------------- |----------------- |
| Parentheses | ( ... ) | Before operating on anything else, Python must evaluate all parentheticals starting at the innermost level. (This includes functions.) | Please |
| Exponents | ** | As an exponent is simply short multiplication or division, it should be evaluated before them. | Excuse |
| Multiplication and Division | * / // % | Multiplication and division (including floor division and modulo) are evaluated next, from left to right. | My Dear |
| Addition and Subtraction | + - | Addition and subtraction are evaluated last, from left to right. | Aunt Sally |
| operators | descriptions |
|--------------------------|----------------------------------------|
| (), [], {}, ‘’ | tuple, list, dictionary, string |
| x.attr, x[], x[i:j], f() | attribute, index, slice, function call |
| +x, -x, ~x | unary negation, bitwise invert |
| ** | exponent |
| *, /, % | multiplication, division, modulo |
| +, - | addition, substraction |
| <<, >> | bitwise shifts |
| & | bitwise and |
| ^ | bitwise xor |
| \| | bitwise or |
| <, <=, >=, > | comparison operators |
| ==, !=, is, is not, in, not in | comparison operators (continued) |
| not | boolean NOT |
| and | boolean AND |
| or | boolean OR |
| lambda | lamnda expression |
Formatting output
round()
End of explanation
"""
|
awhite40/pymks | notebooks/elasticity_2D_Multiphase.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
n = 21
n_phases = 3
from pymks.tools import draw_microstructures
from pymks.datasets import make_delta_microstructures
X_delta = make_delta_microstructures(n_phases=n_phases, size=(n, n))
"""
Explanation: Linear Elasticity in 2D for 3 Phases
Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a three-phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a large problem.
PyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
Elastostatics Equations and Boundary Conditions
The governing equations for elastostatics and the boundary conditions used in this example are the same as those provided in the Linear Elastic in 2D example.
Note that an inappropriate boundary condition is used in this example because the current version of SfePy is unable to implement a periodic plus displacement boundary condition. This leads to some issues near the edges of the domain and introduces errors into the resizing of the coefficients. We are working to fix this issue, but note that the problem is not with the MKS regression itself, but with the calibration data used. The finite element package ABAQUS includes the displaced periodic boundary condition and can be used to calibrate the MKS regression correctly.
Modeling with MKS
Calibration Data and Delta Microstructures
The first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1].
Because we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [2].
Here we use the make_delta_microstructure function from pymks.datasets to create the delta microstructures needed to calibrate the first-order influence coefficients for a two-phase microstructure. The make_delta_microstructure function uses SfePy to generate the data.
End of explanation
"""
draw_microstructures(X_delta[::2])
"""
Explanation: Let's take a look at a few of the delta microstructures by importing draw_microstructures from pymks.tools.
End of explanation
"""
from pymks.datasets import make_elastic_FE_strain_delta
from pymks.tools import draw_microstructure_strain
elastic_modulus = (80, 100, 120)
poissons_ratio = (0.3, 0.3, 0.3)
macro_strain = 0.02
size = (n, n)
X_delta, strains_delta = make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio,
size=size, macro_strain=macro_strain)
"""
Explanation: Using delta microstructures for the calibration of the first-order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases, with the center cell having a different phase from the remainder of the domain. The number of delta microstructures needed to calibrate the first-order coefficients is $N(N-1)$ where $N$ is the number of phases; therefore, in this example we need 6 delta microstructures.
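The $N(N-1)$ count follows because each delta microstructure corresponds to an ordered pair (center phase, matrix phase) of distinct phases; a quick sanity check of the arithmetic:

```python
from itertools import permutations

n_phases = 3
# One delta microstructure per ordered pair of distinct phases.
delta_pairs = list(permutations(range(n_phases), 2))
n_deltas = len(delta_pairs)
assert n_deltas == n_phases * (n_phases - 1)  # 6 for three phases
```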
Generating Calibration Data
The make_elastic_FE_strain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields.
In this example, let's look at a three-phase microstructure with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elastic_FE_strain_delta function. The number of Poisson's ratio values and elastic moduli values indicates the number of phases. Note that make_elastic_FE_strain_delta does not take a number of samples argument, as the number of samples needed to calibrate the MKS is fixed by the number of phases.
End of explanation
"""
draw_microstructure_strain(X_delta[0], strains_delta[0])
"""
Explanation: Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
End of explanation
"""
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
prim_basis =PrimitiveBasis(n_states=3, domain=[0, 2])
model = MKSLocalizationModel(basis=prim_basis)
"""
Explanation: Because slice(None) (the default slice operator in Python, equivalent to array[:]) was passed to the make_elastic_FE_strain_delta function as the argument for strain_index, the function returns all the strain fields. Let's also take a look at the $\varepsilon_{yy}$ and $\varepsilon_{xy}$ strain fields.
Calibrating First-Order Influence Coefficients
Now that we have the delta microstructures and their strain fields, we will calibrate the influence coefficients by creating an instance of the MKSLocalizationModel class. Because we are going to calibrate the influence coefficients with delta microstructures, we can create an instance of PrimitiveBasis with n_states equal to 3, and use it to create an instance of MKSLocalizationModel. The delta microstructures and their strain fields will then be passed to the fit method.
End of explanation
"""
model.fit(X_delta, strains_delta)
"""
Explanation: Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.
End of explanation
"""
from pymks.tools import draw_coeff
draw_coeff(model.coeff)
"""
Explanation: That's it, the influence coefficients have been calibrated. Let's take a look at them.
End of explanation
"""
from pymks.datasets import make_elastic_FE_strain_random
np.random.seed(101)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0] , strain[0])
"""
Explanation: The influence coefficients for $l=0$ and $l = 1$ have a Gaussian-like shape, while the influence coefficients for $l=2$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally important. They are equivalent to the constant term in multiple linear regression with categorical variables.
Prediction of the Strain Field for a Random Microstructure
Let's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random three-phase microstructure and compare it with the results from a finite element simulation.
The make_elastic_FE_strain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.
End of explanation
"""
strain_pred = model.predict(X)
"""
Explanation: Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.
Now, to get the strain field from the MKSLocalizationModel, just pass the same microstructure to the predict method.
End of explanation
"""
from pymks.tools import draw_strains_compare
draw_strains_compare(strain[0], strain_pred[0])
"""
Explanation: Finally let's compare the results from finite element simulation and the MKS model.
End of explanation
"""
from pymks.tools import draw_differences
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
"""
Explanation: Let's plot the difference between the two strain fields.
End of explanation
"""
m = 3 * n
size = (m, m)
print(size)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0] , strain[0])
"""
Explanation: The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
Resizing the Coefficients to Use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], though the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger random microstructure and its strain field.
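Spectral interpolation amounts to zero-padding the Fourier representation of the influence coefficients. A minimal NumPy sketch of the idea for a periodic 2D kernel — this only illustrates the principle and is not the actual pymks resize_coeff implementation:

```python
import numpy as np

def resize_spectral(coeff, new_shape):
    # Zero-pad the (shifted) Fourier transform of a periodic kernel.
    F = np.fft.fftshift(np.fft.fftn(coeff))
    pad = [((n_new - n_old) // 2, n_new - n_old - (n_new - n_old) // 2)
           for n_old, n_new in zip(coeff.shape, new_shape)]
    F_big = np.pad(F, pad, mode='constant')
    # Rescale so the kernel's mean value is preserved on the new grid.
    scale = np.prod(new_shape) / np.prod(coeff.shape)
    return np.real(np.fft.ifftn(np.fft.ifftshift(F_big)) * scale)

# Sanity check: a constant kernel stays constant after resizing.
big = resize_spectral(np.ones((7, 7)), (21, 21))
```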
End of explanation
"""
model.resize_coeff(X[0].shape)
"""
Explanation: The influence coefficients that have already been calibrated on the $n$ by $n$ delta microstructures need to be resized to match the shape of the new larger $m$ by $m$ microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the resize_coeff method.
End of explanation
"""
draw_coeff(model.coeff)
"""
Explanation: Let's now take a look at the resized influence coefficients.
End of explanation
"""
strain_pred = model.predict(X)
draw_strains_compare(strain[0], strain_pred[0])
"""
Explanation: Because the coefficients have been resized, they will no longer work for the original $n$ by $n$ microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. As before, simply pass the microstructure as the argument of the predict method to get the strain field.
End of explanation
"""
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
"""
Explanation: Again, let's plot the difference between the two strain fields.
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/1b3716673f2aeae3f2b0c6c336812aba/80_fix_bem_in_blender.ipynb | bsd-3-clause | # Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Ezequiel Mikulan <e.mikulan@gmail.com>
# Manorama Kadwani <manorama.kadwani@gmail.com>
#
# License: BSD-3-Clause
import os
import shutil
import mne
data_path = mne.datasets.sample.data_path()
subjects_dir = data_path / 'subjects'
bem_dir = subjects_dir / 'sample' / 'bem' / 'flash'
surf_dir = subjects_dir / 'sample' / 'surf'
"""
Explanation: Fixing BEM and head surfaces
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender, editing them, and
re-importing them. We will also give a simple example of how to use
pymeshfix <tut-fix-meshes-pymeshfix> to fix topological problems.
Much of this tutorial is based on
https://github.com/ezemikulan/blender_freesurfer by Ezequiel Mikulan.
End of explanation
"""
# Put the converted surfaces in a separate 'conv' folder
conv_dir = subjects_dir / 'sample' / 'conv'
os.makedirs(conv_dir, exist_ok=True)
# Load the inner skull surface and create a problem
# The metadata is empty in this example. In real study, we want to write the
# original metadata to the fixed surface file. Set read_metadata=True to do so.
coords, faces = mne.read_surface(bem_dir / 'inner_skull.surf')
coords[0] *= 1.1 # Move the first vertex outside the skull
# Write the inner skull surface as an .obj file that can be imported by
# Blender.
mne.write_surface(conv_dir / 'inner_skull.obj', coords, faces, overwrite=True)
# Also convert the outer skull surface.
coords, faces = mne.read_surface(bem_dir / 'outer_skull.surf')
mne.write_surface(conv_dir / 'outer_skull.obj', coords, faces, overwrite=True)
"""
Explanation: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj files and create a new
folder called conv inside the FreeSurfer subject folder to keep them in.
End of explanation
"""
coords, faces = mne.read_surface(conv_dir / 'inner_skull.obj')
coords[0] /= 1.1 # Move the first vertex back inside the skull
mne.write_surface(conv_dir / 'inner_skull_fixed.obj', coords, faces,
overwrite=True)
"""
Explanation: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS):
<img src="file://../../_static/blender_import_obj/blender_import_obj1.jpg" width="800" alt="Importing .obj files in Blender">
For convenience, you can save these settings by pressing the + button
next to Operator Presets.
Repeat the procedure for all surfaces you want to import (e.g. inner_skull
and outer_skull).
You can now edit the surfaces any way you like. See the
Beginner Blender Tutorial Series
to learn how to use Blender. Specifically, part 2 will teach you how to
use the basic editing tools you need to fix the surface.
<img src="file://../../_static/blender_import_obj/blender_import_obj2.jpg" width="800" alt="Editing surfaces in Blender">
Using the fixed surfaces in MNE-Python
In Blender, you can export a surface as an .obj file by selecting it and going
to File > Export > Wavefront (.obj). You need to again select the Y
Forward option and check the Keep Vertex Order box.
<img src="file://../../_static/blender_import_obj/blender_import_obj3.jpg" width="200" alt="Exporting .obj files in Blender">
Each surface needs to be exported as a separate file. We recommend saving
them in the conv folder and ending the file name with _fixed.obj,
although this is not strictly necessary.
In order to be able to run this tutorial script top to bottom, we here
simulate the edits you did manually in Blender using Python code:
End of explanation
"""
# Read the fixed surface
coords, faces = mne.read_surface(conv_dir / 'inner_skull_fixed.obj')
# Backup the original surface
shutil.copy(bem_dir / 'inner_skull.surf', bem_dir / 'inner_skull_orig.surf')
# Overwrite the original surface with the fixed version
# In real study you should provide the correct metadata using ``volume_info=``
# This could be accomplished for example with:
#
# _, _, vol_info = mne.read_surface(bem_dir / 'inner_skull.surf',
# read_metadata=True)
# mne.write_surface(bem_dir / 'inner_skull.surf', coords, faces,
# volume_info=vol_info, overwrite=True)
"""
Explanation: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the :func:mne.make_bem_model function to find
them, they need to be saved using their original names in the surf
folder, e.g. bem/inner_skull.surf. Be sure to first backup the original
surfaces in case you make a mistake!
End of explanation
"""
# Load the fixed surface
coords, faces = mne.read_surface(bem_dir / 'outer_skin.surf')
# Make sure we are in the correct directory
head_dir = bem_dir.parent
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces,
# overwrite=True)
"""
Explanation: Editing the head surfaces
Sometimes the head surfaces are faulty and require manual editing. We use
:func:mne.write_head_bem to convert the fixed surfaces to .fif files.
Low-resolution head
For EEG forward modeling, it is possible that outer_skin.surf would be
manually edited. In that case, remember to save the fixed version of
-head.fif from the edited surface file for coregistration.
End of explanation
"""
# If ``-head-dense.fif`` does not exist, you need to run
# ``mne make_scalp_surfaces`` first.
# [0] because a list of surfaces is returned
surf = mne.read_bem_surfaces(head_dir / 'sample-head.fif')[0]
# For consistency only
coords = surf['rr']
faces = surf['tris']
# Write the head as an .obj file for editing
mne.write_surface(conv_dir / 'sample-head.obj',
coords, faces, overwrite=True)
# Usually here you would go and edit your meshes.
#
# Here we just use the same surface as if it were fixed
# Read in the .obj file
coords, faces = mne.read_surface(conv_dir / 'sample-head.obj')
# Remember to backup the original head file in advance!
# Overwrite the original head file
#
# mne.write_head_bem(head_dir / 'sample-head.fif', coords, faces,
# overwrite=True)
"""
Explanation: High-resolution head
We use :func:mne.read_bem_surfaces to read the head surface files. After
editing, we again output the head file with :func:mne.write_head_bem.
Here we use -head.fif for speed.
End of explanation
"""
|
dolittle007/dolittle007.github.io | notebooks/GLM-linear.ipynb | gpl-3.0 | %matplotlib inline
from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: GLM: Linear regression
Author: Thomas Wiecki
This tutorial is adapted from a blog post by Thomas Wiecki called "The Inference Button: Bayesian GLMs made easy with PyMC3".
This tutorial appeared as a post in a small series on Bayesian GLMs on my blog:
The Inference Button: Bayesian GLMs made easy with PyMC3
This world is far from Normal(ly distributed): Robust Regression in PyMC3
The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3
In this blog post I will talk about:
How the Bayesian Revolution in many scientific disciplines is hindered by poor usability of current Probabilistic Programming languages.
A gentle introduction to Bayesian linear regression and how it differs from the frequentist approach.
A preview of PyMC3 (currently in alpha) and its new GLM submodule I wrote to allow creation and estimation of Bayesian GLMs as easy as frequentist GLMs in R.
Ready? Let's get started!
There is a huge paradigm shift underway in many scientific disciplines: The Bayesian Revolution.
While the theoretical benefits of Bayesian over Frequentist stats have been discussed at length elsewhere (see Further Reading below), there is a major obstacle that hinders wider adoption -- usability (this is one of the reasons DARPA wrote out a huge grant to improve Probabilistic Programming).
This is mildly ironic because the beauty of Bayesian statistics is their generality. Frequentist stats have a bazillion different tests for every different scenario. In Bayesian land you define your model exactly as you think is appropriate and hit the Inference Button(TM) (i.e. running the magical MCMC sampling algorithm).
Yet when I ask my colleagues why they use frequentist stats (even though they would like to use Bayesian stats) the answer is that software packages like SPSS or R make it very easy to run all those individual tests with a single command (and more often than not, they don't know the exact model and inference method being used).
While there are great Bayesian software packages like JAGS, BUGS, Stan and PyMC, they are written for Bayesian statisticians who know very well what model they want to build.
Unfortunately, "the vast majority of statistical analysis is not performed by statisticians" -- so what we really need are tools for scientists and not for statisticians.
In the interest of putting my code where my mouth is I wrote a submodule for the upcoming PyMC3 that makes construction of Bayesian Generalized Linear Models (GLMs) as easy as Frequentist ones in R.
Linear Regression
While future blog posts will explore more complex models, I will start here with the simplest GLM -- linear regression.
In general, frequentists think about Linear Regression as follows:
$$ Y = X\beta + \epsilon $$
where $Y$ is the output we want to predict (or dependent variable), $X$ is our predictor (or independent variable), and $\beta$ are the coefficients (or parameters) of the model we want to estimate. $\epsilon$ is an error term which is assumed to be normally distributed.
We can then use Ordinary Least Squares or Maximum Likelihood to find the best fitting $\beta$.
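For comparison, the frequentist fit is a single least-squares solve; a quick NumPy sketch on toy data (the variable names here are illustrative, not from the tutorial):

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.linspace(0, 1, 100)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=100)

# Design matrix [1, x]; OLS gives a single point estimate of beta.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta
```

Note this yields only point estimates; the Bayesian treatment below returns a full posterior over the same parameters.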
Probabilistic Reformulation
Bayesians take a probabilistic view of the world and express this model in terms of probability distributions. Our above linear regression can be rewritten to yield:
$$ Y \sim \mathcal{N}(X \beta, \sigma^2) $$
In words, we view $Y$ as a random variable (or random vector) of which each element (data point) is distributed according to a Normal distribution. The mean of this normal distribution is provided by our linear predictor with variance $\sigma^2$.
While this is essentially the same model, there are two critical advantages of Bayesian estimation:
Priors: We can quantify any prior knowledge we might have by placing priors on the parameters. For example, if we think that $\sigma$ is likely to be small we would choose a prior with more probability mass on low values.
Quantifying uncertainty: We do not get a single estimate of $\beta$ as above but instead a complete posterior distribution about how likely different values of $\beta$ are. For example, with few data points our uncertainty in $\beta$ will be very high and we'd be getting very wide posteriors.
Bayesian GLMs in PyMC3
With the new GLM module in PyMC3 it is very easy to build this model and much more complex ones.
First, let's import the required modules.
End of explanation
"""
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
data = dict(x=x, y=y)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
ax.plot(x, y, 'x', label='sampled data')
ax.plot(x, true_regression_line, label='true regression line', lw=2.)
plt.legend(loc=0);
"""
Explanation: Generating data
Create some toy data to play around with and scatter-plot it.
Essentially we are creating a regression line defined by intercept and slope, and adding data points by sampling from a Normal with the mean set to the regression line.
End of explanation
"""
with Model() as model: # model specifications in PyMC3 are wrapped in a with-statement
# Define priors
sigma = HalfCauchy('sigma', beta=10, testval=1.)
intercept = Normal('Intercept', 0, sd=20)
x_coeff = Normal('x', 0, sd=20)
# Define likelihood
likelihood = Normal('y', mu=intercept + x_coeff * x,
sd=sigma, observed=y)
# Inference!
trace = sample(3000, njobs=2) # draw 3000 posterior samples using NUTS sampling
"""
Explanation: Estimating the model
Let's fit a Bayesian linear regression model to this data. As you can see, model specifications in PyMC3 are wrapped in a with statement.
Here we use the awesome new NUTS sampler (our Inference Button) to draw 3000 posterior samples.
End of explanation
"""
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# and all its parameters are automatically added to our model.
glm.GLM.from_formula('y ~ x', data)
trace = sample(3000, njobs=2) # draw 3000 posterior samples using NUTS sampling
"""
Explanation: This should be fairly readable for people who know probabilistic programming. However, would my non-statistician friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link functions would also make this much more complex and error-prone.
The new glm() function instead takes a Patsy linear model specifier from which it creates a design matrix. glm() then adds random variables for each of the coefficients and an appropriate likelihood to the model.
End of explanation
"""
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
"""
Explanation: Much shorter, but this code does the exact same thing as the above model specification (you can change priors and everything else too if you wanted). glm() parses the Patsy model string, adds random variables for each regressor (Intercept and slope x in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (sigma). Finally, glm() initializes the parameters to a good starting point by estimating a frequentist linear model using statsmodels.
If you are not familiar with R's syntax, 'y ~ x' specifies that we have an output variable y that we want to estimate as a linear function of x.
Analyzing the model
Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Lets plot the posterior distribution of our parameters and the individual samples we drew.
End of explanation
"""
plt.figure(figsize=(7, 7))
plt.plot(x, y, 'x', label='data')
plot_posterior_predictive_glm(trace, samples=100,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y')
plt.title('Posterior predictive regression lines')
plt.legend(loc=0)
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is.
There are a couple of things to see here. The first is that our sampling chains for the individual parameters (right side) seem well converged and stationary (there are no large drifts or other odd patterns).
Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (x is the regression coefficient and sigma is the standard deviation of our normal).
In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the plot_posterior_predictive_glm() convenience function for this.
End of explanation
"""
|
bayesimpact/bob-emploi | data_analysis/notebooks/research/job_similarity/rome_mobility_similarity.ipynb | gpl-3.0 | from os import path
import pandas
import seaborn as _
rome_version = 'v330'
data_folder = '../../../data'
rome_folder = path.join(data_folder, 'rome/csv')
mobility_csv = path.join(rome_folder, 'unix_rubrique_mobilite_{}_utf8.csv'.format(rome_version))
rome_csv = path.join(rome_folder, 'unix_referentiel_code_rome_{}_utf8.csv'.format(rome_version))
appellation_csv = path.join(rome_folder, 'unix_referentiel_appellation_{}_utf8.csv'.format(rome_version))
mobility = pandas.read_csv(mobility_csv)
rome = pandas.read_csv(rome_csv)[['code_rome', 'libelle_rome']]
rome_names = rome.groupby('code_rome').first()['libelle_rome']
jobs = pandas.read_csv(appellation_csv)[['code_ogr', 'code_rome', 'libelle_appellation_court']]
jobs_names = jobs.groupby('code_ogr').first()['libelle_appellation_court']
"""
Explanation: Author: Pascal, pascal@bayesimpact.org
Date: 2016-04-26
Skip the run test because the ROME version has to be updated to make it work in the exported repository. TODO: Update ROME and remove the skiptest flag.
ROME mobility
The ROME dataset contains links between ROME job groups, it is called "mobilité" (mobility in French) as this is used to tell job seekers to which other jobs they could move.
This notebook does a sanity check on this table, before we use it in our product.
End of explanation
"""
mobility.head(2).transpose()
mobility.count()
mobility[mobility.code_appellation_source.notnull()].head(2).transpose()
"""
Explanation: First Look
Let's first check what it looks like:
End of explanation
"""
# Rename columns.
mobility.rename(columns={
'code_rome': 'group_source',
'code_appellation_source': 'job_source',
'code_rome_cible': 'group_target',
'code_appellation_cible': 'job_target',
}, inplace=True)
# Add names.
mobility['group_source_name'] = mobility['group_source'].map(rome_names)
mobility['group_target_name'] = mobility['group_target'].map(rome_names)
mobility['job_source_name'] = mobility['job_source'].map(jobs_names)
mobility['job_target_name'] = mobility['job_target'].map(jobs_names)
# Sort columns.
mobility = mobility[[
'group_source', 'group_source_name', 'job_source', 'job_source_name',
'group_target', 'group_target_name', 'job_target', 'job_target_name',
'code_type_mobilite', 'libelle_type_mobilite'
]]
mobility.head(2).transpose()
"""
Explanation: It seems pretty straightforward: it's a list of links from a job group (or a specific job) to another (group or specific job). So let's clean up a bit and add names.
End of explanation
"""
# Links from one job group to the same one.
len(mobility[mobility.group_source == mobility.group_target].index)
# Number of duplicate links.
len(mobility.index) - len(mobility.drop_duplicates())
# Number of duplicate links when we ignore the link types.
len(mobility.index) - len(mobility.drop_duplicates([
'group_source', 'job_source', 'group_target', 'job_target']))
# Reverse links.
two_links = pandas.merge(
mobility.fillna(''), mobility.fillna(''),
left_on=['group_target', 'job_target'],
right_on=['group_source', 'job_source'])
str(len(two_links[
(two_links.group_source_x == two_links.group_target_y) &
(two_links.job_source_x == two_links.job_target_y)].index) / len(mobility.index) * 100) + '%'
rome_froms = pandas.merge(
mobility[mobility.job_source.notnull()].drop_duplicates(['group_source', 'group_source_name']),
mobility[mobility.job_source.isnull()].drop_duplicates(['group_source', 'group_source_name']),
on=['group_source', 'group_source_name'], how='outer', suffixes=['_specific', '_group'])
# Number of ROME job groups that have links both for the group and for at least one specific job.
len(rome_froms[rome_froms.group_target_specific.notnull() & rome_froms.group_target_group.notnull()])
# ROME job groups that have only links for specific jobs and not for the group.
rome_froms[rome_froms.group_target_group.isnull()]['group_source_name'].tolist()
rome_froms = pandas.merge(
mobility[mobility.job_source.notnull()].drop_duplicates(['group_target', 'group_target_name']),
mobility[mobility.job_source.isnull()].drop_duplicates(['group_target', 'group_target_name']),
on=['group_target', 'group_target_name'], how='outer', suffixes=['_specific', '_group'])
# Number of ROME job groups that have links both to the group and to at least one specific job.
len(rome_froms[rome_froms.group_source_specific.notnull() & rome_froms.group_source_group.notnull()])
# ROME job groups that have only links to specific jobs and not to the whole group.
rome_froms[rome_froms.group_source_group.isnull()]['group_target_name'].tolist()
# Number of links specific to jobs (as opposed to groups) that are already specified by group links.
mobility['has_job_source'] = ~mobility.job_source.isnull()
mobility['has_job_target'] = ~mobility.job_target.isnull()
any_job_mobility = mobility.drop_duplicates(['group_source', 'has_job_source', 'group_target', 'has_job_target'])
len(any_job_mobility) - len(any_job_mobility.drop_duplicates(['group_source', 'group_target']))
"""
Explanation: Sanity
Let's do some sanity checks:
- Are there links from a job group to itself?
- Are there duplicate links?
- If there is a link from A to B, is there one from B to A?
- When using specific jobs, are there also links from or to the job group?
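Each of these questions maps to a short pandas idiom; on a toy link table (hypothetical column names, not the real ROME schema) the patterns look like:

```python
import pandas as pd

links = pd.DataFrame({
    'source': ['A', 'A', 'B', 'C', 'C'],
    'target': ['B', 'C', 'A', 'A', 'A'],
})

# Self links: source equals target.
n_self = len(links[links.source == links.target])

# Exact duplicate rows.
n_duplicates = len(links) - len(links.drop_duplicates())

# Reverse links: join the table with itself with the key roles swapped.
reverse = links.merge(links, left_on=['source', 'target'],
                      right_on=['target', 'source'])
n_reverse = len(reverse)
```

The cells below apply the same three patterns to the actual mobility table.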
End of explanation
"""
# In this snippet, we count # of links to groups & to specific jobs for each job.
mobility_from_group = mobility[mobility.job_source.isnull()][['group_source', 'group_target', 'job_target']]
# Count # of groups that are linked from each group.
mobility_from_group['target_groups'] = (
mobility_from_group[mobility_from_group.job_target.isnull()]
.groupby('group_source')['group_source'].transform('count'))
mobility_from_group['target_groups'].fillna(0, inplace=True)
# Count # of specific jobs that are linked from each group.
mobility_from_group['target_jobs'] = (
mobility_from_group[mobility_from_group.job_target.notnull()]
.groupby('group_source')['group_source'].transform('count'))
mobility_from_group['target_jobs'].fillna(0, inplace=True)
mobility_from_group = mobility_from_group.groupby('group_source', as_index=False).max()[
['group_source', 'target_groups', 'target_jobs']]
mobility_from_job = mobility[mobility.job_source.notnull()][['job_source', 'group_target', 'job_target']]
# Count # of groups that are linked from each job.
mobility_from_job['target_groups'] = (
mobility_from_job[mobility_from_job.job_target.isnull()]
.groupby('job_source')['job_source'].transform('count'))
mobility_from_job['target_groups'].fillna(0, inplace=True)
# Count # of jobs that are linked from each job.
mobility_from_job['target_jobs'] = (
mobility_from_job[mobility_from_job.job_target.notnull()]
.groupby('job_source')['job_source'].transform('count'))
mobility_from_job['target_jobs'].fillna(0, inplace=True)
mobility_from_job = mobility_from_job.groupby('job_source', as_index=False).max()[
['job_source', 'target_groups', 'target_jobs']]
jobs_with_counts = pandas.merge(
jobs, mobility_from_group, left_on='code_rome', right_on='group_source', how='left')
jobs_with_counts = pandas.merge(
jobs_with_counts, mobility_from_job, left_on='code_ogr', right_on='job_source', how='left')
jobs_with_counts.fillna(0, inplace=True)
jobs_with_counts['target_groups'] = jobs_with_counts.target_groups_x + jobs_with_counts.target_groups_y
jobs_with_counts['target_jobs'] = jobs_with_counts.target_jobs_x + jobs_with_counts.target_jobs_y
jobs_with_counts['total'] = jobs_with_counts['target_groups'] + jobs_with_counts['target_jobs']
jobs_with_counts = jobs_with_counts[['code_ogr', 'libelle_appellation_court', 'target_groups', 'target_jobs', 'total']]
# Jobs that don't have any links from them or from their group.
jobs_with_counts[jobs_with_counts.total == 0]['libelle_appellation_court'].tolist()
jobs_with_counts.total.hist()
str(len(jobs_with_counts.total[jobs_with_counts.total >= 5].index) / len(jobs_with_counts.index)*100) + '%'
"""
Explanation: So to summarize:
* There are no self links from a job group to itself, or within a group.
* There are no duplicate links (even with a different type).
* 33% of links go both ways, which means that the direction of a link is meaningful.
* When using specific jobs, in most cases there are also links concerning the whole job group; but in some rare cases there is nothing for the group.
* However when there's a link to or from a specific job, there's never an equivalent group link that would encompass it.
Coverage
Let's check how much of the ROME code is covered by links, more specifically: for a given job, how many mobility jobs and job groups are linked from it (either directly or because of its group)?
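The groupby-plus-transform('count') counting pattern used for this can be illustrated on a tiny frame (hypothetical data, just to show the mechanics):

```python
import pandas as pd

links = pd.DataFrame({
    'group_source': ['A', 'A', 'A', 'B'],
    'job_target':   [None, 'j1', 'j2', None],   # None = link targets the whole group
})

# Count, for each source group, how many of its links point to a whole group...
links['target_groups'] = (links[links.job_target.isnull()]
                          .groupby('group_source')['group_source']
                          .transform('count'))
# ...and how many point to a specific job. Index alignment leaves NaN elsewhere.
links['target_jobs'] = (links[links.job_target.notnull()]
                        .groupby('group_source')['group_source']
                        .transform('count'))
links['target_groups'] = links['target_groups'].fillna(0)
links['target_jobs'] = links['target_jobs'].fillna(0)
per_group = links.groupby('group_source', as_index=False)[['target_groups', 'target_jobs']].max()
```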
End of explanation
"""
|
brian-rose/ClimateModeling_courseware | Lectures/Lecture15 -- Insolation.ipynb | mit | # Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
"""
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 15: Insolation
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from climlab import constants as const
from climlab.solar.insolation import daily_insolation
"""
Explanation: Contents
Distribution of insolation
Computing daily insolation with climlab
Global, seasonal distribution of insolation (present-day orbital parameters)
<a id='section1'></a>
1. Distribution of insolation
These notes closely follow section 2.7 of Dennis L. Hartmann, "Global Physical Climatology", Academic Press 1994.
The amount of solar radiation incident on the top of the atmosphere (what we call the "insolation") depends on
latitude
season
time of day
This insolation is the primary driver of the climate system. Here we will examine the geometric factors that determine insolation, focussing primarily on the daily average values.
Solar zenith angle
We define the solar zenith angle $\theta_s$ as the angle between the local normal to Earth's surface and a line between a point on Earth's surface and the sun.
<img src='../images/Hartmann_Fig2.5.png'>
From the above figure (reproduced from Hartmann's book), the ratio of the shadow area to the surface area is equal to the cosine of the solar zenith angle.
Instantaneous solar flux
We can write the solar flux per unit surface area as
$$ Q = S_0 \left( \frac{\overline{d}}{d} \right)^2 \cos \theta_s $$
where $\overline{d}$ is the mean distance for which the flux density $S_0$ (i.e. the solar constant) is measured, and $d$ is the actual distance from the sun.
Question:
what factors determine $\left( \frac{\overline{d}}{d} \right)^2$ ?
under what circumstances would this ratio always equal 1?
Calculating the zenith angle
Just like the flux itself, the solar zenith angle depends on latitude, season, and time of day.
Declination angle
The seasonal dependence can be expressed in terms of the declination angle of the sun: the latitude of the point on the surface of Earth directly under the sun at noon (denoted by $\delta$).
$\delta$ currently varies between +23.45º at northern summer solstice (June 21) to -23.45º at northern winter solstice (Dec. 21).
Hour angle
The hour angle $h$ is defined as the longitude of the subsolar point relative to its position at noon.
Formula for zenith angle
With these definitions and some spherical geometry (see Appendix A of Hartmann's book), we can express the solar zenith angle for any latitude $\phi$, season, and time of day as
$$ \cos \theta_s = \sin \phi \sin \delta + \cos\phi \cos\delta \cos h $$
Sunrise and sunset
If $\cos\theta_s < 0$ then the sun is below the horizon and the insolation is zero (i.e. it's night time!)
Sunrise and sunset occur when the solar zenith angle is 90º and thus $\cos\theta_s=0$. The above formula then gives
$$ \cos h_0 = - \tan\phi \tan\delta $$
where $h_0$ is the hour angle at sunrise and sunset.
Polar night
Near the poles special conditions prevail. Latitudes poleward of 90º-$\delta$ are constantly illuminated in summer, when $\phi$ and $\delta$ are of the same sign. Right at the pole there is 6 months of perpetual daylight in which the sun moves around the compass at a constant angle $\delta$ above the horizon.
In the winter, $\phi$ and $\delta$ are of opposite sign, and latitudes poleward of 90º-$|\delta|$ are in perpetual darkness. At the poles, six months of daylight alternate with six months of darkness.
At the equator day and night are both 12 hours long throughout the year.
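A quick numerical check of the sunrise formula: day length in hours follows directly from $h_0$ (a standalone sketch, not part of climlab, with np.clip guarding the polar day/night cases):

```python
import numpy as np

def day_length_hours(lat_deg, delta_deg):
    """Day length from cos(h0) = -tan(lat) * tan(delta)."""
    phi = np.deg2rad(lat_deg)
    delta = np.deg2rad(delta_deg)
    # clip handles polar day (cos h0 < -1) and polar night (cos h0 > 1)
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)        # hour angle at sunset, in radians
    return 24.0 * h0 / np.pi      # fraction of the day with the sun up

day_length_hours(0, 23.45)    # equator: 12 hours year-round
day_length_hours(80, 23.45)   # 80ºN at northern summer solstice: 24 hours
day_length_hours(80, -23.45)  # 80ºN at northern winter solstice: 0 hours
```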
Daily average insolation
Substituting the expression for solar zenith angle into the insolation formula gives the instantaneous insolation as a function of latitude, season, and time of day:
$$ Q = S_0 \left( \frac{\overline{d}}{d} \right)^2 \Big( \sin \phi \sin \delta + \cos\phi \cos\delta \cos h \Big) $$
which is valid only during daylight hours, $|h| < h_0$, and $Q=0$ otherwise (night).
To get the daily average insolation, we integrate this expression between sunrise and sunset and divide by 24 hours (or $2\pi$ radians since we express the time of day in terms of hour angle):
$$ \overline{Q}^{day} = \frac{1}{2\pi} \int_{-h_0}^{h_0} Q ~dh$$
$$ = \frac{S_0}{2\pi} \left( \frac{\overline{d}}{d} \right)^2 \int_{-h_0}^{h_0} \Big( \sin \phi \sin \delta + \cos\phi \cos\delta \cos h \Big) ~ dh $$
which is easily integrated to get our formula for daily average insolation:
$$ \overline{Q}^{day} = \frac{S_0}{\pi} \left( \frac{\overline{d}}{d} \right)^2 \Big( h_0 \sin\phi \sin\delta + \cos\phi \cos\delta \sin h_0 \Big)$$
where the hour angle at sunrise/sunset $h_0$ must be in radians.
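The boxed formula is easy to code directly as a cross-check (a hand-rolled sketch, separate from climlab, assuming a circular orbit so $(\overline{d}/d)^2 = 1$ and an assumed solar constant of $S_0 = 1365$ W m$^{-2}$):

```python
import numpy as np

S0 = 1365.0  # assumed value of the solar constant, W/m2

def daily_average_insolation(lat_deg, delta_deg, dist_factor=1.0):
    """Daily mean insolation from the formula above; dist_factor = (dbar/d)**2."""
    phi = np.deg2rad(lat_deg)
    delta = np.deg2rad(delta_deg)
    # hour angle at sunrise/sunset, clipped for polar day/night
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * dist_factor * (
        h0 * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(h0))

daily_average_insolation(0, 0)   # equator at equinox: S0/pi, about 434 W/m2
```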
The daily average zenith angle
It turns out that, due to optical properties of the Earth's surface (particularly bodies of water), the surface albedo depends on the solar zenith angle. It is therefore useful to consider the average solar zenith angle during daylight hours as a function of latitude and season.
The appropriate daily average here is weighted with respect to the insolation, rather than weighted by time. The formula is
$$ \overline{\cos\theta_s}^{day} = \frac{\int_{-h_0}^{h_0} Q \cos\theta_s~dh}{\int_{-h_0}^{h_0} Q ~dh} $$
<img src='../images/Hartmann_Fig2.8.png'>
The average zenith angle is much higher at the poles than in the tropics. This contributes to the very high surface albedos observed at high latitudes.
<a id='section2'></a>
2. Computing daily insolation with climlab
Here are some examples calculating daily average insolation at different locations and times.
These all use a function called
daily_insolation
in the package
climlab.solar.insolation
to do the calculation. The code implements the above formulas to calculate daily average insolation anywhere on Earth at any time of year.
The code takes account of orbital parameters to calculate current Sun-Earth distance.
We can look up past orbital variations to compute their effects on insolation using the package
climlab.solar.orbital
See the next lecture!
Using the daily_insolation function
End of explanation
"""
help(daily_insolation)
"""
Explanation: First, get a little help on using the daily_insolation function:
End of explanation
"""
daily_insolation(45,1)
"""
Explanation: Here are a few simple examples.
First, compute the daily average insolation at 45ºN on January 1:
End of explanation
"""
daily_insolation(45,181)
"""
Explanation: Same location, July 1:
End of explanation
"""
lat = np.linspace(-90., 90., 30)
Q = daily_insolation(lat, 80)
fig, ax = plt.subplots()
ax.plot(lat,Q)
ax.set_xlim(-90,90); ax.set_xticks([-90,-60,-30,-0,30,60,90])
ax.set_xlabel('Latitude')
ax.set_ylabel('W/m2')
ax.grid()
ax.set_title('Daily average insolation on March 21')
"""
Explanation: We could give an array of values. Let's calculate and plot insolation at all latitudes on the spring equinox = March 21 = Day 80
End of explanation
"""
lat = np.linspace( -90., 90., 500)
days = np.linspace(0, const.days_per_year, 365 )
Q = daily_insolation( lat, days )
"""
Explanation: In-class exercises
Try to answer the following questions before reading the rest of these notes.
What is the daily insolation today here at Albany (latitude 42.65ºN)?
What is the annual mean insolation at the latitude of Albany?
At what latitude and at what time of year does the maximum daily insolation occur?
What latitude is experiencing either polar sunrise or polar sunset today?
<a id='section3'></a>
3. Global, seasonal distribution of insolation (present-day orbital parameters)
Calculate an array of insolation over the year and all latitudes (for present-day orbital parameters). We'll use a dense grid in order to make a nice contour plot
End of explanation
"""
fig, ax = plt.subplots(figsize=(10,8))
CS = ax.contour( days, lat, Q , levels = np.arange(0., 600., 50.) )
ax.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax.set_xlabel('Days since January 1', fontsize=16 )
ax.set_ylabel('Latitude', fontsize=16 )
ax.set_title('Daily average insolation', fontsize=24 )
ax.contourf ( days, lat, Q, levels=[-1000., 0.], colors='k' )
"""
Explanation: And make a contour plot of Q as function of latitude and time of year.
End of explanation
"""
Qaverage = np.average(np.mean(Q, axis=1), weights=np.cos(np.deg2rad(lat)))
print( 'The annual, global average insolation is %.2f W/m2.' %Qaverage)
"""
Explanation: Time and space averages
Take the area-weighted global, annual average of Q...
End of explanation
"""
summer_solstice = 170
winter_solstice = 353
fig, ax = plt.subplots(figsize=(10,8))
ax.plot( lat, Q[:,(summer_solstice, winter_solstice)] );
ax.plot( lat, np.mean(Q, axis=1), linewidth=2 )
ax.set_xbound(-90, 90)
ax.set_xticks( range(-90,100,30) )
ax.set_xlabel('Latitude', fontsize=16 );
ax.set_ylabel('Insolation (W m$^{-2}$)', fontsize=16 );
ax.grid()
"""
Explanation: Also plot the zonally averaged insolation at a few different times of the year:
End of explanation
"""
%load_ext version_information
%version_information numpy, matplotlib, climlab
"""
Explanation: <div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation
"""
|
kylepjohnson/notebooks | hands_on_machine_learning/chapter_2_end_to_end_machine_learning_project.ipynb | mit | import os
import pandas as pd
housing_file = os.path.expanduser('~/handson-ml/datasets/housing/housing.csv')
def load_housing_data(housing_path):
return pd.read_csv(housing_path)
housing = load_housing_data(housing_file)
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
housing.describe()
"""
Explanation: Frame the problem
signal: a piece of information fed into an ML system (see Shannon's information theory)
pipeline: a series of processes to process data
component: name of a processing unit in a pipeline
Select a performance measure
standard deviation (σ): square root of the variance, being the average of the squared deviation from the mean
root mean square error (RMSE): measures the standard deviation of errors made by a system's predictions
RMSE(X, h) = sqrt(1/m Σ(h(x) - y)²)
(See book for actual formula.)
m is the number of instances in dataset
x is the feature vector (all features) of the nth instance
y is the label for the nth instance
X is matrix of all feature values (except target)
h is the hypothesis: the system's prediction function
y^ is y-hat, the prediction value of h(x)
RMSE(X, h) is the cost function
generally preferred
MAE: Mean absolute error: also called average absolute deviation (see formula in book)
MAE(X, h) = 1/m Σ|h(x) - y|
RMSE corresponds to the Euclidean norm, or l2 norm, which measures distances with straight line, is usual geometry
MAE corresponds to Manhattan norm, or l1 norm, which measures distance only with orthogonal lines (as going through city blocks)
lk norm: see formula; the general form is ||v||k = (|v1|^k + ... + |vn|^k)^(1/k)
l0 norm (of vector v): gives the number of non-zero elements of the vector
l infinity norm: gives the maximum absolute value among the elements of the vector
The higher the norm index, the more it focuses on large values and not on smaller ones; RMSE is more sensitive to outliers than MAE. When outliers are exceptionally rare (as in a bell curve) the RMSE performs very well
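Both measures can be sketched in a few lines of NumPy (illustrative only — these names are not from the book's code):

```python
import numpy as np

def rmse(predictions, labels):
    # root of the mean squared error: the l2-norm-style measure
    return np.sqrt(np.mean((predictions - labels) ** 2))

def mae(predictions, labels):
    # mean absolute error: the l1-norm-style measure
    return np.mean(np.abs(predictions - labels))

h_x = np.array([3.0, 5.0, 2.5, 7.0])   # predictions h(x)
y   = np.array([3.0, 4.0, 2.0, 8.0])   # true labels
# RMSE is never smaller than MAE, and it grows faster in the presence of outliers
rmse(h_x, y), mae(h_x, y)
```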
Explore housing data
End of explanation
"""
%matplotlib inline
import matplotlib.pylab as plt
housing.hist(bins=50, figsize=(20, 15))
plt.show()
"""
Explanation: 25%, 50%, 75% are all percentiles: the value below which a given percentage of observations fall; the 25th, 50th, and 75th percentiles are also called quartiles (25% being the first quartile)
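For instance (a quick illustration with NumPy, not from the book):

```python
import numpy as np

values = np.array([1, 2, 3, 4, 5, 6, 7, 8])
# 25th, 50th, and 75th percentiles: the quartiles
q1, median, q3 = np.percentile(values, [25, 50, 75])
```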
End of explanation
"""
# simple set aside 20%
import numpy as np
import numpy.random as rnd
def split_train_test_data(data, test_ratio):
shuffled_indices = rnd.permutation(len(data)) # seed np.random for a reproducible split
set_test_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:set_test_size]
train_indices = shuffled_indices[set_test_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test_data(housing, 0.2)
print('len train set:', len(train_set))
print('len test set:', len(test_set))
# set aside 20% but ensure the test set always remains the same across runs
import hashlib
def test_set_check(identifier, test_ratio, hash):
return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio # np.int64 gives md5 the bytes-like input it needs
def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
return data.loc[~in_test_set], data.loc[in_test_set]
# use row index as id
housing_with_id = housing.reset_index() # adds an 'index' column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
# or just use scikit-learn's train_test_split (in modern versions it lives in sklearn.model_selection)
from sklearn.cross_validation import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
"""
Explanation: These distributions are tail heavy: they extend farther to the right of the median than to the left
Create a test set
data snooping: looking at the test set before committing to a model, which can bias your model selection
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/0c01a2fff1983eb8b64e3b93aea3242d/plot_topo_compare_conditions.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.viz import plot_evoked_topo
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Compare evoked responses for different conditions
In this example, an Epochs object for visual and auditory responses is created.
Both conditions are then accessed by their respective names to create a sensor
layout plot of the related evoked responses.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
# bad channels in raw.info['bads'] will be automatically excluded
# Set up amplitude-peak rejection values for MEG channels
reject = dict(grad=4000e-13, mag=4e-12)
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Create epochs including different events
event_id = {'audio/left': 1, 'audio/right': 2,
'visual/left': 3, 'visual/right': 4}
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), reject=reject)
# Generate list of evoked objects from conditions names
evokeds = [epochs[name].average() for name in ('left', 'right')]
"""
Explanation: Set parameters
End of explanation
"""
colors = 'blue', 'red'
title = 'MNE sample data\nleft vs right (A/V combined)'
plot_evoked_topo(evokeds, color=colors, title=title, background_color='w')
plt.show()
"""
Explanation: Show topography for two different conditions
End of explanation
"""
|
numerical-mooc/assignment-bank-2015 | croberts94/Final Project.ipynb | mit | #Import necessary libraries and functions
import numpy as np
from scipy.stats import norm #Phi() is the normal CDF
#Allow plots in notebook and format plots
%matplotlib inline
import matplotlib.pyplot as pyplot
from matplotlib import rcParams
rcParams['figure.dpi'] = 100
rcParams['font.size'] = 16
rcParams['font.family'] = 'StixGeneral'
def bs_formula(type, S, K, T, r, sigma):
"""Computes price of European call or put using the Black-Scholes formula
Parameters:
----------
type: string
Type of option; "C" for a call or "P" for a put
S: array of float
Initial asset price or an array of initial asset prices
K: float
Strike price
T: float
Expiration time
r: float
risk-free interest rate, expressed between 0 and 1
sigma: float
market volatility, expressed between 0 and 1
Returns:
-------
V: array of float
Initial option value or an array of initial option values
"""
if type == "C":
eps = 1
elif type == "P":
eps = -1
d1 = (np.log(S/K) + T*(r + 0.5*sigma**2))/(sigma*np.sqrt(T))
d2 = (np.log(S/K) + T*(r - 0.5*sigma**2))/(sigma*np.sqrt(T))
V = eps*S*norm.cdf(eps*d1) - eps*K*np.exp(-r*T)*norm.cdf(eps*d2)
V = np.clip(V, 0, np.inf)
return V
#Parameters
K = 40 #strike price
T = 0.5 #expiration time
r = 0.1 #interest rate
sigma = 0.25 #volatility
S = np.linspace(1, 100,100) #array of possible current asset prices
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2015 C.M. Roberts.
Option Valuation using Numerical Methods:
A Python Programming Approach
There are many different kinds of assets traded in modern financial markets, nearly all falling within one of the five main categories of stock, bond, commodity, currency, or derivative. Most folks have a basic understanding of stocks (equity in a business) and bonds (financial contracts issued by the government), and those who are more economically savvy may also be familiar with the trade of commodities (goods such as gold, oil, or grain) and currencies (investments in money, both foreign and domestic). However, few individuals outside of the financial and academic worlds know much about derivatives. A derivative is a financial instrument whose value is derived from some other asset such as a stock or commodity. In his excellent book, <em>In Pursuit of the Unknown: 17 Equations That Changed the World</em>, the English mathematician, Ian Stewart, states,
<br><br>
<em style="text-align: center;">“Since the turn of the century the greatest source of growth in the financial sector has been in financial instruments known as derivatives. Derivatives are not money, nor are they investments in stocks or shares. They are investments in investments, promises about promises… This is finance in cloud cuckoo land, yet it has become the standard practice of the world’s banking system.”</em>
<br><br>
Mr. Stewart certainly has a rather sour view on derivatives, but his words also help describe their importance in today’s financial landscape. One simply can not make it in the financial world without a firm understanding of derivatives and their qualities.
In this module, we will learn about some basic derivatives, how they can be characterized mathematically, and how their value can be estimated using different numerical schemes.
Keeping Our Options Open
Perhaps the most common derivative is the option, in which the owner of the option has the right to <em>buy </em>the underlying asset at a specific price by some specified date (this is called a <strong>call</strong>) or else the owner has the right to <em>sell</em> the underlying asset at a specific price and date (this is called a <strong>put</strong>). The price specified in the option contract is called the strike price and the date is simply referred to as the expiration date. For the time being, we will consider only European options, a style of option whereby the owner may only exercise the option (that is, buy or sell the underlying asset) at the expiration date and no sooner. Letting $K$ be the strike price and $S$ be the value of the underlying asset, the payoff $V$ of an option at expiration time can be characterized as
$$V_{call} = \textrm{max}(S - K, 0)$$
$$V_{put} = \textrm{max}(0, K - S)$$
The payoffs are described this way because if the owner does not stand to make money by exercising the option, they will opt to simply let it expire and may choose to buy or sell the asset at the market price, $S$, thereby having a payoff of $0. <br><br>
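These payoffs are easy to sketch numerically (a toy example with made-up prices, separate from the pricing function defined in this notebook):

```python
import numpy as np

K = 40.0                              # strike price
S = np.array([30.0, 40.0, 55.0])      # possible asset prices at expiration
call_payoff = np.maximum(S - K, 0.0)  # owner exercises only if S > K
put_payoff = np.maximum(K - S, 0.0)   # owner exercises only if S < K
```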
Now let us put ourselves in the shoes of a trader who is considering whether or not to buy (and thus become the owner of) a certain option. We know the terms of the contract, that is the strike price and time of expiration. We also know some facts about the current state of the market including the present value of the asset, the risk-free interest rate (i.e. how much interest money would accrue sitting in a bank), and the level of volatility in the market. Knowing all of this, what can we calculate to be the fair price of the option? <br>
As it turns out, this is no simple task. Luckily for us, in 1973 two economists named Fischer Black and Myron Scholes (with the help of a third economist, Robert Merton) derived an equation describing the price of an option over time. The equation is
$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2S^2\frac{\partial^2V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$$
where $t$ is time, $\sigma$ is volatility, and $r$ is the risk-free interest rate. This is pretty exciting stuff and the group was awarded the Nobel Prize in Economics in 1997 for their work. For our purposes, we must note that the Black-Scholes equation has an analytic solution for European puts and calls, called the Black-Scholes formula and it is as follows:
$$V(S,t) = \epsilon S\Phi(\epsilon d1) - \epsilon Ke^{-r(T-t)}\Phi(\epsilon d2)$$
where $$ d1 = \frac{\ln(S/K)+(T-t)(r+\sigma^2/2)}{\sigma\sqrt{T-t}}$$<br>
$$ d2 = \frac{\ln(S/K)+(T-t)(r-\sigma^2/2)}{\sigma\sqrt{T-t}}$$<br>
$$\Phi(\zeta) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\zeta e^{-\eta^2/2}d\eta $$<br>
$$\epsilon = \begin{cases} 1 & \textrm{for a call} \\ -1 & \textrm{for a put} \end{cases} $$
Here, $T$ is the time of expiration and $V(S,t)$ is the value of the option at any time $t$. Armed with this formula, let us return to the issue at hand: valuing an option. Let us suppose that we know the option has a strike price $K = \$40$, expiration $T = 0.5 \textrm{ years}$, and we know the market has a risk-free interest rate $r = 0.1$ and a volatility $\sigma = 0.25$. Using Python and the Black-Scholes formula, the fair price for the option can be calculated for a range of possible current asset prices.
End of explanation
"""
V_call = bs_formula("C", S, K, T, r, sigma)
print("Exact value of European call given initial asset price of $45 is $%.3f" %V_call[44])
V_put = bs_formula("P", S, K, T, r, sigma)
print("Exact value of European put given initial asset price of $45 is $%.3f" %V_put[44])
"""
Explanation: Since we have defined a function that can value a European option, let's go ahead and apply it. We will assume an initial asset price of \$45.
End of explanation
"""
pyplot.plot(S,V_call,color='blue', lw=2, label="European Call")
pyplot.plot(S,V_put,color='red', lw=2, label="European Put")
pyplot.xlabel('Initial Asset Value (S)')
pyplot.ylabel('Value of Option (V)')
pyplot.grid()
pyplot.legend(loc='upper left',prop={'size':15});
"""
Explanation: Great! We have our result. In fact, we calculated a whole array of results, each one based upon a different initial asset price. If we graph all of these results, we may gain a better understanding of how European options function and how calls and puts differ in their payoffs.
End of explanation
"""
#import function to solve matrices
from scipy.linalg import solve
def cn_call(V, N, r, dt, sigma, S_max, K):
"""Solves for value of European call using Crank-Nicolson scheme
Parameters:
----------
V: array of float
option values if call expired immediately
N: integer
number of time steps
r: float
risk-free interest rate
dt: float
time step length
sigma: array of floats
volatility over asset lifetime
S_max: float
maximum asset value
K: float
strike price
Returns:
-------
Vn: array of float
option values given parameters
"""
M = np.shape(V)[0] - 1 #number of initial values
i = np.arange(1,M) #array of indexes
Vn = np.copy(V)
for t in range(N):
a = dt/4 * (r*i - sigma[t]**2*i**2)
b = dt/2 * (r + sigma[t]**2*i**2)
c = -dt/4 * (r*i + sigma[t]**2*i**2)
#create LHS of Ax = b
A = np.diag(1+b) + np.diag(c[:-1], 1) + np.diag(a[1:],-1)
#create RHS of Ax = b
B = np.diag(1-b) + np.diag(-c[:-1], 1) + np.diag(-a[1:],-1) #create matrix of RHS coefficients
B = np.dot(B,Vn[1:-1]) #multiply coeff's by current option values
B[-1] += -2*c[-1] * (S_max - K) #apply boundary condition
#solve Ax = b
Vn[1:-1] = solve(A,B)
return Vn
#Parameters
N = 100 #number of time steps
T = 0.5 #expiration time
dt = T/N #timestep size
K = 40 #strike price
r = 0.1 #interest rate
S_max = 4*K #arbitrary maximum asset value of four times strike price
S = np.linspace(0, S_max, 161) #array of some possible current asset prices
V0 = np.clip(S - K, 0, S_max-K) #initial payoff value of option
#constant volatility of 0.25
sigma_const = np.zeros(N)[:] + 0.25
"""
Explanation: No solution? There's a solution for that!
The Black-Scholes formula is a godsend, but sometimes it doesn't work. One such case is when volatility is not constant over the lifetime of an option. In such an instance, the Black-Scholes equation (recall the difference between the <em>equation</em> and the <em>formula</em>) still applies, but a neat, analytic solution just doesn't exist. To value an option under these circumstances, we have to use a numerical scheme which will provide an estimate of the option's value. Several numerical schemes exist that are capable of doing this, but here we choose to focus on the Crank-Nicolson method due to its accuracy and stability.
To implement the Crank-Nicolson scheme, we first construct a two-dimensional grid of asset price versus time and we then discretize the Black-Scholes equation using a forward difference in time and central difference in asset price. A key feature of the Crank-Nicolson method is that for asset price, we actually average the central difference of the current time step with the central difference of the next time step. This approach yields the following terms:
$$\frac{\partial V}{\partial t} \approx \frac{V^{n+1}_m - V^{n}_m }{\Delta t}$$
$$\frac{\partial V}{\partial S} \approx \frac{V^{n}_{m+1} - V^{n}_{m-1} + V^{n+1}_{m+1} - V^{n+1}_{m-1}}{4 \Delta S}$$
$$ \frac{\partial^2 V}{\partial S^2} \approx \frac{V^{n}_{m+1} - 2 V^{n}_{m} + V^{n}_{m-1} + V^{n+1}_{m+1} - 2 V^{n+1}_{m} + V^{n+1}_{m-1}}{2 \Delta S^2}$$
where $n$ is the index in time and $m$ is the index in asset price. By taking into account that $S = m\Delta S$, substituting the above terms into the Black-Scholes equation, and then separating those terms which are known (with time index $n$) from those that are unknown (with time index $n+1$), we get
$$\frac{\Delta t}{4}(rm - \sigma^2m^2)V^{n+1}_{m-1} + \left(1 + \frac{\Delta t}{2}(r + \sigma^2m^2)\right)V^{n+1}_{m} - \frac{\Delta t}{4}(rm + \sigma^2m^2)V^{n+1}_{m+1} = \\ \frac{\Delta t}{4}(-rm + \sigma^2m^2)V^{n}_{m-1} + \left(1 - \frac{\Delta t}{2}(r + \sigma^2m^2)\right)V^{n}_{m} + \frac{\Delta t}{4}(rm + \sigma^2m^2)V^{n}_{m+1}$$
or, if we define $a = \frac{\Delta t}{4}(rm - \sigma^2m^2)$, $b = \frac{\Delta t}{2}(r + \sigma^2m^2)$, and $c = -\frac{\Delta t}{4}(rm + \sigma^2m^2)$, we get
$$ aV^{n+1}_{m-1} + (1+b)V^{n+1}_{m} + cV^{n+1}_{m+1} = -aV^{n}_{m-1} + (1-b)V^{n}_{m} - cV^{n}_{m+1} $$
which is a bit easier to handle. This equation only takes into account one time step into the future and a total of three asset prices. To solve for a number of asset prices at once, we can create a system of linear equations where each equation applies to a different subset of the set of asset prices (for example, if the first equation deals with $m-1$, $m$, and $m+1$, the second will deal with $m$, $m+1$, and $m+2$). Such a system will be in the form
$$[A_1][V^{n+1}_{int}] = [A_2][V^{n}_{int}] + [B.C.] $$
where $[B.C.]$ is a column vector containing appropriate boundary conditions. To determine these boundary conditions, we first have to determine if we are valuing a call or put. If we are concerned with a call, we know the payoff is $V(S,t) = \max(S - K, 0)$. Given a set of asset prices ranging from $0$ to some $S_{max}$, we know that $V(0,t) = 0$. This is our first of two boundary conditions. Our second boundary condition is derived from our knowledge that $V(S_{max},t) = S_{max} - K$. Letting the largest possible asset price $S_{max}$ have the index $M$, we can arrive at the equation:
$$ aV^{n+1}_{M-2} + (1+b)V^{n+1}_{M-1} + cV^{n+1}_{M} = -aV^{n}_{M-2} + (1-b)V^{n}_{M-1} - cV^{n}_{M} $$
Substituting those terms having index $M$ with $S_{max} - K$ and once again moving all known values to the right side of the equation, we get
$$ aV^{n+1}_{M-2} + (1+b)V^{n+1}_{M-1} = -aV^{n}_{M-2} + (1-b)V^{n}_{M-1} - 2c(S_{max} - K)$$
Thus, $$ [B.C.] = \left[ \begin{array}{c} 0 \\ \vdots \\ 0 \\ - 2c(S_{max} - K) \end{array} \right]$$
With $[B.C.]$ now determined and $[A_1]$ and $[A_2]$ easily determined from our discretization of the Black-Scholes equation, we can now construct a linear system of equations for a European call. Given a set of asset prices of size $M$, ranging from 0 to $S_{max}$, such a system can be characterized by
$$ \left[ \begin{array}{cccccc} (1+b) & c & 0 & \cdots & & 0 \\ a & (1+b) & c & 0 & \cdots & 0 \\ 0 & & \ddots & & & \vdots \\ \vdots & & & a & (1+b) & c \\ 0 & \cdots & & 0 & a & (1+b) \end{array} \right] \left[ \begin{array}{c}V^{n+1}_{1}\\ V^{n+1}_{2}\\ \vdots \\ V^{n+1}_{M-1}\\ V^{n+1}_{M} \end{array} \right] = \\ \left[ \begin{array}{cccccc} (1-b) & -c & 0 & \cdots & & 0 \\ -a & (1-b) & -c & 0 & \cdots & 0 \\ 0 & & \ddots & & & \vdots \\ \vdots & & & -a & (1-b) & -c \\ 0 & \cdots & & 0 & -a & (1-b) \end{array} \right] \left[ \begin{array}{c}V^{n}_{1}\\ V^{n}_{2}\\ \vdots \\ V^{n}_{M-1}\\ V^{n}_{M} \end{array} \right] + \left[ \begin{array}{c} 0 \\ \vdots \\ 0 \\ - 2c(S_{max} - K) \end{array} \right]$$
This system applies to only one time step, so in order to successfully value an option, we must solve it repeatedly for each time step from the initial time to the time the option expires.
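The repeated solve described above can be sketched in a few lines. This is only an illustration of a single time step with the coefficients $a$, $b$, $c$ taken constant for simplicity (in the real scheme they vary with the asset index), not the full pricing routine:

```python
import numpy as np

# One Crank-Nicolson time step: [A1] V_new = [A2] V_old + [B.C.]
# a, b, c are taken as scalars here purely for illustration
def cn_step(V_old, a, b, c, bc):
    M = len(V_old)
    # tridiagonal matrices from the discretization above
    A1 = (1 + b) * np.eye(M) + a * np.eye(M, k=-1) + c * np.eye(M, k=1)
    A2 = (1 - b) * np.eye(M) - a * np.eye(M, k=-1) - c * np.eye(M, k=1)
    return np.linalg.solve(A1, A2 @ V_old + bc)
```

Stepping from expiration back to the initial time then amounts to calling `cn_step` once per time step.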
Now that we have derived the Crank-Nicolson scheme for valuing European calls, let's define a Python function to implement it.
End of explanation
"""
#apply CN for constant volatility
V_cn = cn_call(V0, N, r, dt, sigma_const, S_max, K)
print("CN estimated value of European call given initial asset price of $45 is $%.3f" %V_cn[45])
#recalculate analytic solution with new S array
V_call = bs_formula("C", S, K, T, r, sigma)
pyplot.plot(S,V_cn,color='red', lw = 2,label='CN')
pyplot.plot(S,V_call,color='green', ls='--', lw = 3, label='Analytic Solution')
pyplot.xlabel('Initial Asset Value (S)')
pyplot.ylabel('Value of Option (V)')
pyplot.legend(loc='upper left',prop={'size':15});
"""
Explanation: Let us proceed by computing option values for the same initial asset price as before using the Crank-Nicolson function we have just defined. Then, we can graphically compare the Crank-Nicolson results to the analytic results.
End of explanation
"""
#volatility stepping from 0.0 to 0.8
sigma_step = np.zeros(N)
sigma_step[int(N/2):]+= 0.8
#apply CN for non-constant volatility
V_cn_step = cn_call(V0, N, r, dt, sigma_step, S_max, K)
print("CN estimated value of European call given initial asset price of $45 is $%.3f" %V_cn_step[45])
pyplot.plot(S,V_cn_step,color='blue', lw=2, label='CN, step-sigma')
pyplot.plot(S,V_cn,color='red', lw = 2,label='CN, constant-sigma')
pyplot.plot(S,V_call,color='green', ls='--', lw = 3, label='Analytic Solution')
pyplot.xlabel('Initial Asset Value (S)')
pyplot.ylabel('Value of Option (V)')
pyplot.legend(loc='upper left',prop={'size':15});
pyplot.xlim(20,70)
pyplot.ylim(0,35)
"""
Explanation: That looks pretty great! Clearly, some error exists, but we can get pretty near to the exact, analytic result using the Crank-Nicolson scheme. We will now move on to pricing an option under a non-constant volatility.
End of explanation
"""
def binomial(type, S0, k, r, sigma, T, N, american="false"):
    """ Computes option value for European or American options using the binomial method
    Parameters:
    ----------
    type: string
        type of option; "C" for call, "P" for put
    S0: float
        initial asset price
    k: float
        strike price
    r: float
        risk-free interest rate
    sigma: float
        volatility
    T: float
        expiration time
    N: integer
        number of time steps
    american: string (Boolean input)
        american="true" for American option, american="false" for European option
    Returns:
    -------
    V[0]: float
        option value given parameters
    """
    dt = T/N #time step
    u = np.exp(sigma * np.sqrt(dt))
    d = 1/u
    K = np.ones(N+1)*k #strike price array
    p = (np.exp(r * dt) - d) / (u - d)
    V = np.zeros(N+1) #initialize option value array
    #expiration asset prices (S)
    S = np.asarray([(S0 * u**j * d**(N - j)) for j in range(N + 1)])
    #expiration option values (V)
    if type == "C":
        V = np.clip(S - K, 0, np.inf)
    elif type == "P":
        V = np.clip(K - S, 0, np.inf)
    #calculate the option prices backwards through the tree
    for i in range(N-1, -1, -1):
        #Current option value: V = e^(-r*dt)(pVu + (1-p)Vd)
        V[:-1] = np.exp(-r * dt) * (p * V[1:] + (1-p) * V[:-1])
        #Current asset values
        S[:-1] = S[:-1]*u
        if american == 'true':
            #Exercise early if immediate exercise is worth more than holding
            if type == "C":
                V = np.maximum(V, S - K)
            elif type == "P":
                V = np.maximum(V, K - S)
    #Return value of option at t=0
    return V[0]
"""
Explanation: Well would you look at that. Having a non-constant volatility can completely shift our valuation for an option! Perhaps it's unrealistic to expect anyone to know precisely how market volatility will change over a given period of time (an old joke comes to mind about how weathermen and economists are the only people who can consistently be wrong and still keep their jobs), but the point is that as factors in the market change, the analytic solution starts to become irrelevant. A strong numerical scheme such as the Crank-Nicolson method is an indispensable tool for traders in an ever-shifting financial landscape.
Life, Liberty, and the Freedom to Exercise Early
So far, we have focused only on European options where the owner may exercise the option only at the time of expiration. We now move on to American options, a style in which the option can be exercised at any time during its lifetime. It should be noted that these names merely denote the option style and have nothing to do with where these options are actually traded.<br>
Due to the nature of American options, it is necessary to check at every time step for the possibility of early exercise, making a Black-Scholes approach insufficient. Instead, a popular method for tackling the valuation of American options is the binomial model, proposed by Cox, Ross, and Rubinstein in 1979. In the binomial model, we start with the knowledge that over the course of one time step, the stock price $S$ can move up to some value $Su$ with probability $p$ or down to some value $Sd$ with probability $1-p$. For a call option, then, we can define the value of the option after one up-tick to be <br>
$$V_u = \max(Su - K,0)$$ <br>and the value after a down-tick to be <br>
$$V_d = \max(Sd - K,0)$$.
Building from this, the current value of the option can be taken to be the expected value of its possible future values, discounted by the interest that would be accrued between now and said future values. This can be expressed as
$$ V = e^{-rdt}[pV_u + (1-p)V_d] $$
and we shall note here that
$$ u = e^{\sigma\sqrt{dt}} $$ <br>
$$ d = 1/u = e^{-\sigma\sqrt{dt}} $$ <br>
$$p = \frac{e^{rdt}-d}{u - d}$$
We won't be troubled over the derivation of $u$, $d$, and $p$ for the purposes of this lesson, but the <a href="https://www.researchgate.net/profile/Stephen_Ross3/publication/4978679_Option_pricing_A_simplified_approach/links/09e4151363b7910ad9000000.pdf">Cox, Ross, Rubinstein paper</a> is actually quite interesting and is worth the read.
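As a quick numerical sanity check on these formulas (the parameter values below are arbitrary illustrations, not taken from the text), note that $u$ and $d$ are reciprocal and $p$ must be a valid risk-neutral probability:

```python
import numpy as np

# illustrative values of sigma, r, and dt
sigma, r, dt = 0.25, 0.1, 0.005
u = np.exp(sigma * np.sqrt(dt))
d = 1 / u
p = (np.exp(r * dt) - d) / (u - d)
# u*d == 1 by construction, and 0 < p < 1 whenever d < e^(r*dt) < u
```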
So at every time step, the value of the asset (and, correspondingly, the option) has the possibility of moving up or down. Over the course of many time steps, the possibilities spread out, forming what is known as a binomial tree (pictured below).
<img src="./figures/bintree.PNG">
<em style = "text-align: left; font-size: 0.8em">Image source: https://upload.wikimedia.org/wikipedia/commons/2/2e/Arbre_Binomial_Options_Reelles.png</em>
Each box in the tree is referred to as a leaf. The easiest and most common way of finding an option's value using the binomial method is to use given information to find the asset values at all of the final leaves (that is, the leaves existing at the time of expiration), and then working backwards towards a fair value for the option at the beginning of its lifetime. The first step is to use the time of expiration $T$, the number of time steps $N$, the risk-free interest rate $r$, and the market volatility $\sigma$ (we once again assume this to be constant over the lifetime of the option) to find $u$,$d$, and $p$. Next, we can express the leaves at the expiration time as a list of the form
$$ S_0d^Nu^0,\ S_0d^{N-1}u^1,\ S_0d^{N-2}u^2,...,S_0d^2u^{N-2},\ S_0d^1u^{N-1},\ S_0d^0u^{N} $$
where $S_0$ is the initial asset value. Using the formulae mentioned earlier in this section, we can then use these final asset values to make a list of final option values. These final option values can then be used to determine the option values at the preceding time step, and then these option values can be used to solve for the previous option values, and so on and so forth until we have arrived at the initial value of the option. If the option is American, at each iteration we must also compare the value of holding the option longer versus the value of exercising it early. If the option has a higher value if exercised early, then we assume that the owner of the option would do so and we replace the recursively calculated value at that leaf with the early exercise value. To perform this scheme using Python, we can write a function such as the one below:
End of explanation
"""
#Parameters
N = 100 #number of time steps
T = 0.5 #expiration time
K = 40 #strike price
r = 0.1 #interest rate
sigma = 0.25 #volatility
S0 = 45 #initial asset price
print("Given an initial asset price of $45:")
V_bin_EC = binomial("C", S0, K, r, sigma, T, N ,american="false")
print("The value of a European Call is $%.3f" %V_bin_EC)
V_bin_EC = binomial("P", S0, K, r, sigma, T, N ,american="false")
print("The value of a European Put is $%.3f" %V_bin_EC)
V_bin_EC = binomial("C", S0, K, r, sigma, T, N ,american="true")
print("The value of an American Call is $%.3f" %V_bin_EC)
V_bin_EC = binomial("P", S0, K, r, sigma, T, N ,american="true")
print("The value of an American Put is $%.3f" %V_bin_EC)
"""
Explanation: <em style="font-size: 0.8em">Please note that while the above code is the original work of the author, it owes much of its overall structure to a code found <a href = "http://gosmej1977.blogspot.be/2013/02/american-options.html">here</a>. I would be remiss not to say thank you to one Julien Gosme for providing the framework for this code on his/her blog.</em>
Let's now define our parameters once again and use the binomial function to estimate the value of different options.
End of explanation
"""
from random import gauss
def asset_path(St, sigma, r, dt):
    """Simulates next step in potential path an asset price might take
    Parameters:
    ----------
    St: float
        current asset price
    sigma: float
        volatility
    r: float
        risk-free interest rate
    dt: float
        length of time step
    Returns:
    -------
    St: float
        next time step asset price
    """
    St = St * np.exp((r - 0.5 * sigma**2)*dt + sigma * gauss(0, 1.0) * np.sqrt(dt))
    return St
"""
Explanation: If we compare our analytic values for a European call/put to those estimated above, we see that the binomial model does a pretty good job of estimating an option's value. Also, notice how the values for the European and American calls are identical, while the value of the American put is greater than its European counterpart. This is because under the assumptions of our model (i.e. no <a href="http://www.investopedia.com/terms/d/dividend.asp">dividends</a> and no <a href="http://www.investopedia.com/terms/a/arbitrage.asp">arbitrage</a>), it is never optimal for the owner of an American call to exercise early. However, there do exist some circumstances where the owner of an American put would exercise early, thus raising its value compared to a plain old European put. For a mathematical proof of why this is the case, check out this <a href="http://www.math.nyu.edu/~cai/Courses/Derivatives/lecture8.pdf">lecture outline</a> from NYU.
Also, it may seem like we've wandered off pretty far from the realm of partial differential equations, but in fact we never left. If we were to shorten the length of the time step used in the binomial model to an infinitesimally tiny size, effectively migrating from discrete to continuous time, we would observe that the binomial model <a href = "http://www.bus.lsu.edu/academics/finance/faculty/dchance/Instructional/TN00-08.pdf">converges to the Black-Scholes model</a> (for European options, at least). We are still looking at the very same problem governed by the same PDE, but whereas the analytic and finite-difference (e.g. Crank-Nicolson) methods take a careful, highbrow approach, the binomial method trades elegance for elbow grease to get the job done. It's the quintessential American way!
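That convergence claim is easy to test numerically. The sketch below re-derives a compact CRR binomial price and the Black-Scholes closed form so that it is self-contained (it mirrors, but is not identical to, the `binomial` function above); with many time steps the two prices should nearly agree.

```python
import math
import numpy as np

def crr_call(S0, K, r, sigma, T, N):
    """Compact CRR binomial price for a European call."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)
    j = np.arange(N + 1)
    V = np.maximum(S0 * u**j * d**(N - j) - K, 0.0)  # payoffs at expiration
    disc = math.exp(-r * dt)
    for _ in range(N):  # roll back through the tree
        V = disc * (p * V[1:] + (1 - p) * V[:-1])
    return float(V[0])

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form, using erf for the normal CDF."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N_cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S0 * N_cdf(d1) - K * math.exp(-r * T) * N_cdf(d2)
```

For example, with the parameters used throughout this module (S0=45, K=40, r=0.1, sigma=0.25, T=0.5) and a couple of thousand steps, the binomial price lands within a few cents of the analytic value.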
Tokyo Royale
Okay, so that title is a pretty lame joke, but it fits because what we are going to be looking at in this section is valuing an Asian option using the Monte Carlo method. Again, the name of the option has nothing to do with where it is traded; rather, a couple of English financial analysts happened to be in Tokyo when they devised it. The Asian option is different from other options because its payoff is derived from the average asset price over the option's lifetime, making it path-dependent. These options have the advantage of being less susceptible to volatility than European or American options, but they also pose a challenge for valuation, as there are a huge number of possible paths an asset's price can take over even a relatively small period of time.
This challenge can be met using the Monte Carlo method, which owes its name to the fact that its underlying principle is akin to rolling a dice over and over, as in a casino. To use this method, we start by simulating a single path that the price of the asset may take between the time the option is created to the time of expiration. The asset price is assumed to follow
$$ dS = \mu Sdt + \sigma SdW(t) $$
where $dW(t)$ is a Wiener (i.e. Brownian) process and $\mu$ is the expected return on the asset in a risk-neutral world. The assumption that an asset price follows a random walk underpins both the Black-Scholes and binomial models and by invoking it here, we are maintaining consistency with the work we have done so far in this module. If we let $dS$ be the change in asset price over some very small time step $dt$ and substitute $r$ for $\mu$ (because they are synonymous in this context), we can rearrange this equation to be
$$ S(t + dt) - S(t) = rS(t)dt + \sigma S(t)Z\sqrt{dt} $$
where $Z\sim N(0,1)$. It is more accurate to simulate $\ln S(t)$ than $S(t)$, so we use <a href="https://en.wikipedia.org/wiki/It%C3%B4%27s_lemma">Ito's lemma</a> to transform our equation, yielding
$$\ln S(t + dt) - \ln S(t) = (r - \frac{\sigma^2}{2})dt + \sigma Z\sqrt{dt}$$
which is equivalent to
$$S(t + dt) = S(t)e^{(r - \frac{\sigma^2}{2})dt + \sigma Z\sqrt{dt}}$$
A Python function has been defined below that simulates the path of an asset based on this equation.
<br><em style="font-size: 0.8em">Please note that the author first encountered this derivation in <a href="http://www.scienpress.com/Upload/CMF/Vol%201_1_3.pdf">this paper</a> and most of the steps presented in this section of the module follow those presented in it. If any concepts used in this section are unclear, you may consider going to this paper and reading the Monte Carlo section. However, it would probably be even better to check out <a href="http://www.math.umn.edu/~adams005/Financial/Materials/bemis5.pdf"> this presentation</a> on the derivation of the Black-Scholes equation in order to understand why Brownian motion factors into our analysis at all and gain a better understanding of how we have handled the stochastic elements of our equations and why. These topics are too involved to be covered in this module but are certainly worth appreciating.</em>
End of explanation
"""
#parameters
S0 = 45 #initial asset price
K = 40 #strike price
sigma = 0.25 #volatility
r = 0.1 #risk-free interest rate
T = 0.5 #time of expiration
N = 100 #number of time steps
def monte_carlo(sims, N, T, S0, sigma, r):
    """Performs a number of Monte Carlo simulations of asset price
    Parameters:
    ----------
    sims: integer
        number of simulations to be performed
    N: integer
        number of time steps in each simulation
    T: float
        expiration time of option
    S0: float
        initial asset price
    sigma: float
        volatility
    r: float
        risk-free interest rate
    Returns:
    -------
    all_paths: 2D array of float
        simulated asset price paths, with each row being a separate simulation
    Also, the function outputs a plot of its simulations
    """
    dt = T/N
    all_paths = np.zeros(N)
    for trial in range(0, sims):
        prices = [S0]
        St = S0
        for t in range(1, N):
            St = asset_path(St, sigma, r, dt)
            prices.append(St)
        if trial < 1:
            #first simulation fills the initialized row of zeros
            all_paths += prices
        else:
            all_paths = np.vstack((all_paths, prices))
        t = range(0, N)
        pyplot.plot(t, prices)
    pyplot.xlabel('Time Step (N)')
    pyplot.ylabel('Asset Price ( S(t) )')
    pyplot.show()
    return all_paths
"""
Explanation: The next step of the Monte Carlo method is to simulate many of these paths. The law of large numbers tells us that the more paths we simulate, the closer the average of these paths will be to the true mean path. Let us try this for a European call using the same parameters as before.
End of explanation
"""
sims = 10
test = monte_carlo(sims, N, T, S0, sigma, r)
"""
Explanation: Time to test our simulation function! We'll stick to 10 simulations just to make sure it works.
End of explanation
"""
sims = 1000
MC_sim = monte_carlo(sims, N, T, S0, sigma, r)
"""
Explanation: Hey, not too shabby! This looks pretty believable, so let's move on to something more rigorous. How about 1,000 simulations?
End of explanation
"""
print("Monte-Carlo estimated value of European call is $%.3f" % max(np.average(MC_sim[:,-1]) - K, 0))
"""
Explanation: Wow, look at all those lines and colors! Sometimes math really can be art. For our final step, we estimate the value of a European call by taking the average of the final asset prices for each simulated path and subtracting the strike price.
End of explanation
"""
mean_path = np.zeros(N)
for i in range(N):
mean_path[i] = np.average(MC_sim[:,i])
print("Monte-Carlo estimated value of Asian call is $%.3f" % max(np.average(mean_path) - K, 0))
"""
Explanation: That result is not quite perfect, but we're certainly in the ballpark. Perhaps with more simulations and a more powerful computer, the answer would be even closer to the analytic result. Let's move on to valuing an Asian option. Since we already performed the Monte-Carlo simulations, the only thing we need to change is how we process the results. The first step will be to iteratively go through the matrix of resulting asset prices, averaging each column, which will yield an array characterizing the expected - or mean - path. We will then apply the payoff equation for an Asian call which is
$$V_{call} = \textrm{max}(\ \textrm{avg}(\ S(t)\ )-K,0) $$
End of explanation
"""
#Add custom CSS
from IPython.core.display import HTML
css_file = './styles/connor_style.css'
HTML(open(css_file, "r").read())
#Enable spellcheck
%%javascript
require(['base/js/utils'], function(utils) {
    utils.load_extensions('calico-spell-check', 'calico-document-tools', 'calico-cell-tools');
});
"""
Explanation: There you have it! We have successfully estimated the value of an Asian call, something that could not have been achieved analytically, nor with the Crank-Nicolson or binomial methods. There isn't any great way to check the accuracy of this estimate, besides maybe adding more and more simulations, but we do expect an Asian call to be valued below a European call due to the averaged nature of its payoff. Our result here at least meets that rather basic criterion.
Conclusion
In this module, we have explored three different styles of options and four different methods for valuing them. The most basic style, the European option, can be valued analytically using the Black-Scholes formula under known, constant market conditions. If we have reason to believe that those conditions are non-constant, we can use the Crank-Nicolson method to estimate the option's value. In the case of an American option, which is similar to the European style but allows for early exercise, we can employ the binomial model and work our way backwards from the set of all possible option payoffs to accurately value the option. For a path dependent option such as that described by the Asian style, the Monte-Carlo method gives us the ability to extract an option's value estimate by analyzing a large number of simulated paths. In conclusion, a number of financial derivative styles exist, each with unique mathematical properties. It is crucial that traders and academics alike keep an equally diverse set of numerical schemes in their tool sets and apply them appropriately in order to determine an option's value.
<strong> Special thanks to:</strong>
<ul>
<li> Dr. Lorena Barba and her TA's, Naty Clementi and Gil Forsyth, for their patience and assistance and for putting on an <a href="http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about"> excellent course</a>.</li>
<li> Dr. Hugo Junghenn for his course "Mathematics of Finance" where I first came into contact with many of the concepts presented in this module. His book on option valuation can be found <a href="http://www.amazon.com/Option-Valuation-Financial-Mathematics-Chapman/dp/1439889112">here</a>.</li>
<li>Tingyu Wang for their <a href="http://nbviewer.ipython.org/github/numerical-mooc/assignment-bank/blob/705c3e47e5fd441c30a38c1ab17a80a75441e7d5/Black-Scholes-Equation/Black-Scholes-Equation.ipynb">MAE 6286 project</a> completed in 2014 that helped provide a jumping-off point for this module.</li>
<li>C.R. Nwozo and S.E. Fadugba whose <a href="http://www.scienpress.com/Upload/CMF/Vol%201_1_3.pdf">paper</a> was a source of inspiration and guidance for the creation of this module.</li>
</ul>
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/pca_fertility_factors.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.multivariate.pca import PCA
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
"""
Explanation: statsmodels Principal Component Analysis
Key ideas: Principal component analysis, world bank data, fertility
In this notebook, we use principal components analysis (PCA) to analyze the time series of fertility rates in 192 countries, using data obtained from the World Bank. The main goal is to understand how the trends in fertility over time differ from country to country. This is a slightly atypical illustration of PCA because the data are time series. Methods such as functional PCA have been developed for this setting, but since the fertility data are very smooth, there is no real disadvantage to using standard PCA in this case.
End of explanation
"""
data = sm.datasets.fertility.load_pandas().data
data.head()
"""
Explanation: The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:
End of explanation
"""
columns = list(map(str, range(1960, 2012)))
data.set_index('Country Name', inplace=True)
dta = data[columns]
dta = dta.dropna()
dta.head()
"""
Explanation: Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.
End of explanation
"""
ax = dta.mean().plot(grid=False)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility rate", size=17)
ax.set_xlim(0, 51);
"""
Explanation: There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the "objects" and the columns as the "variables", or vice-versa. Here we will treat the fertility measures as "variables" used to measure the countries as "objects". Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate "profiles" or "basis functions" that capture most of the variation over time in the different countries.
The mean trend is removed in PCA, but it's worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980.
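The demean-then-decompose idea behind PCA can be seen on a tiny synthetic matrix using plain NumPy (an illustration only; the numbers are made up and unrelated to the fertility data, and `statsmodels`' `PCA` handles all of this internally):

```python
import numpy as np

# 4 "objects" (rows) measured on 3 "variables" (columns)
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.1],
              [0.0, 1.0, 1.9],
              [3.0, 5.0, 9.0]])
Xc = X - X.mean(axis=0)            # demean, as PCA(demean=True) does
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                     # object scores on each component
loadings = Vt.T                    # variable loadings on each component
var_explained = s**2 / (s**2).sum()
```

The `var_explained` array is what a scree plot visualizes, and `scores @ loadings.T` reconstructs the demeaned data exactly.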
End of explanation
"""
pca_model = PCA(dta.T, standardize=False, demean=True)
"""
Explanation: Next we perform the PCA:
End of explanation
"""
fig = pca_model.plot_scree(log_scale=False)
"""
Explanation: Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 4))
lines = ax.plot(pca_model.factors.iloc[:,:3], lw=4, alpha=.6)
ax.set_xticklabels(dta.columns.values[::10])
ax.set_xlim(0, 51)
ax.set_xlabel("Year", size=17)
fig.subplots_adjust(.1, .1, .85, .9)
legend = fig.legend(lines, ['PC 1', 'PC 2', 'PC 3'], loc='center right')
legend.draw_frame(False)
"""
Explanation: Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.
End of explanation
"""
idx = pca_model.loadings.iloc[:,0].argsort()
"""
Explanation: To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.
End of explanation
"""
def make_plot(labels):
    fig, ax = plt.subplots(figsize=(9,5))
    ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax)
    dta.mean().plot(ax=ax, grid=False, label='Mean')
    ax.set_xlim(0, 51)
    fig.subplots_adjust(.1, .1, .75, .9)
    ax.set_xlabel("Year", size=17)
    ax.set_ylabel("Fertility", size=17)
    legend = ax.legend(*ax.get_legend_handles_labels(), loc='center left', bbox_to_anchor=(1, .5))
    legend.draw_frame(False)
labels = dta.index[idx[-5:]]
make_plot(labels)
"""
Explanation: First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).
End of explanation
"""
idx = pca_model.loadings.iloc[:,1].argsort()
make_plot(dta.index[idx[-5:]])
"""
Explanation: Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.
End of explanation
"""
make_plot(dta.index[idx[:5]])
"""
Explanation: Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.
End of explanation
"""
fig, ax = plt.subplots()
pca_model.loadings.plot.scatter(x='comp_00',y='comp_01', ax=ax)
ax.set_xlabel("PC 1", size=17)
ax.set_ylabel("PC 2", size=17)
dta.index[pca_model.loadings.iloc[:, 1] > .2].values
"""
Explanation: We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.
End of explanation
"""
|
gsentveld/lunch_and_learn | notebooks/Unzip_Files_Keep_CSV_Files.ipynb | mit | # Get the project folders that we are interested in
PROJECT_DIR = os.path.dirname(dotenv_path)
EXTERNAL_DATA_DIR = PROJECT_DIR + os.environ.get("EXTERNAL_DATA_DIR")
RAW_DATA_DIR = PROJECT_DIR + os.environ.get("RAW_DATA_DIR")
# Get the list of filenames
files=os.environ.get("FILES").split()
print("Project directory is : {0}".format(PROJECT_DIR))
print("External directory is : {0}".format(EXTERNAL_DATA_DIR))
print("Raw data directory is : {0}".format(RAW_DATA_DIR))
print("Base names of files : {0}".format(" ".join(files)))
"""
Explanation: Dealing with ZIP files
The ZIP files contain a CSV file and a fixed-width file. We only want the CSV file. We will store those in the RAW directory.
Let's get the variables for the EXTERNAL and the RAW directories.
End of explanation
"""
import zipfile
print("Extracting files to: {}".format(RAW_DATA_DIR))
for file in files:
    # format the full zip filename in the EXTERNAL DATA DIR
    fn = EXTERNAL_DATA_DIR + '/' + file + '.zip'
    # and format the csv member name in that zip file
    member = file + '.csv'
    print("{0} extract {1}.".format(fn, member))
    # To make it easier to deal with files, use the with <> as <>: construction.
    # It will deal with opening and closing handlers for you.
    with zipfile.ZipFile(fn) as zfile:
        zfile.extract(member, path=RAW_DATA_DIR)
"""
Explanation: zipfile package
While some python packages that read files can handle compressed files, the zipfile package can deal with more complex zip files. The files we downloaded from have 2 files as their content. We just want the CSV files.
<br/>
File objects are a bit more complex than other data structures. Opening, reading from, writing to them can all raise exceptions due to the permissions you may or may not have.
<br/>Access to the file is done via a file handler and not directly. You need to properly close them once you are done, otherwise your program keeps that file open as far as the operating system is concerned, potentially blocking other programs from accessing it.
<br/>
To deal with that, you want to use the <b><code>with zipfile.ZipFile() as zfile</code></b> construction. Once the program leaves that scope, Python will close any handles to the object it created. This also works great for database connections and other constructions with these characteristics.
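A minimal, self-contained illustration of extracting a single member from an archive is shown below. The archive here is built in memory and the member names are made up for the example; the real notebook extracts from files on disk instead.

```python
import io
import zipfile

# build a two-member archive in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", "a,b\n1,2\n")
    zf.writestr("data.fwf", "a  b  ")   # the member we do not want

# read back just the member we care about
with zipfile.ZipFile(buf) as zf:
    csv_bytes = zf.read("data.csv")
```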
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ncc/cmip6/models/sandbox-2/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high top? High-top atmospheres have a fully resolved stratosphere, with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether different cloud schemes are used for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISCCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISCCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
malogrisard/NTDScourse | algorithms/02_ass_clustering.ipynb | mit | # Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import construct_kernel
from lib.utils import compute_kernel_kmeans_EM
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Assignment 1 : Unsupervised Clustering with the Normalized Association
End of explanation
"""
# Load dataset: W is the Adjacency Matrix and Cgt is the ground truth clusters
mat = scipy.io.loadmat('datasets/mnist_2000_graph.mat')
W = mat['W']
n = W.shape[0]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of nodes =',n)
print('Number of classes =',nc);
# Degree Matrix
d = scipy.sparse.csr_matrix.sum(W,axis=-1)
# Compute D^(-0.5)
d_sqrt = np.sqrt(d)
d_sqrt_inv = 1./d_sqrt
D_sqrt_inv = scipy.sparse.diags(d_sqrt_inv.A.squeeze(), 0)
# Create Identity matrix
I = scipy.sparse.identity(d.size, dtype=W.dtype)
# Construct A
A = I - D_sqrt_inv*W*D_sqrt_inv
# Perform EVD on A
U = scipy.sparse.linalg.eigsh(A, k=4, which='SM')
fig = plt.figure(1)
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is deprecated in recent matplotlib
ax.scatter(U[1][:,1], U[1][:,2], U[1][:,3], c=Cgt)
plt.title('$Y^*$')
"""
Explanation: Question 1: Write down the mathematical relationship between Normalized Cut (NCut) and Normalized Association (NAssoc) for K clusters. It is not necessary to provide details.
The Normalized Cut problem is defined as:<br><br>
$$
\min_{\{S_k\}}\ NCut(\{S_k\}) := \sum_{k=1}^K \frac{Cut(S_k,S_k^c)}{Vol(S_k)} \ \textrm{ s.t. } \ \cup_{k=1}^{K} S_k = V, \ S_k \cap S_{k'}=\emptyset, \ \forall k \not= k' \quad\quad\quad(1)
$$
and the Normalized Association problem is defined as:<br><br>
$$
\max_{\{S_k\}}\ NAssoc(\{S_k\}) := \sum_{k=1}^K \frac{Assoc(S_k,S_k)}{Vol(S_k)} \ \textrm{ s.t. } \ \cup_{k=1}^{K} S_k = V, \ S_k \cap S_{k'}=\emptyset, \ \forall k \not= k' .
$$
We may rewrite the Cut operator and the Volume operator with the Assoc operator as:<br><br>
$$
Vol(S_k) = \sum_{i\in S_k, j\in V} W_{ij} \\
Assoc(S_k,S_k) = \sum_{i\in S_k, j\in S_k} W_{ij} \\
Cut(S_k,S_k^c) = \sum_{i\in S_k, j\in S_k^c=V\setminus S_k} W_{ij} = \sum_{i\in S_k, j\in V} W_{ij} - \sum_{i\in S_k, j\in S_k} W_{ij} = Vol(S_k) - Assoc(S_k,S_k)
$$
Answer to Q1:
$$
NCut(\{S_k\}) = K - NAssoc(\{S_k\})
$$
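The identity follows directly from the operators rewritten above, by substituting $Cut(S_k,S_k^c) = Vol(S_k) - Assoc(S_k,S_k)$ into the definition of NCut:
$$
NCut(\{S_k\}) = \sum_{k=1}^K \frac{Vol(S_k) - Assoc(S_k,S_k)}{Vol(S_k)} = \sum_{k=1}^K \left( 1 - \frac{Assoc(S_k,S_k)}{Vol(S_k)} \right) = K - NAssoc(\{S_k\}).
$$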
Question 2: Using the relationship between NCut and NAssoc from Q1, it is therefore equivalent to maximize NAssoc by minimizing or maximizing NCut? That is
$$
\max_{\{S_k\}}\ NAssoc(\{S_k\}) \ \textrm{ s.t. } \cup_{k=1}^{K} S_k = V, \quad S_k \cap S_{k'}=\emptyset, \ \forall k \not= k'
$$
$$
\Updownarrow
$$
$$
\min_{\{S_k\}}\ NCut(\{S_k\}) \ \textrm{ s.t. } \cup_{k=1}^{K} S_k = V, \quad S_k \cap S_{k'}=\emptyset, \ \forall k \not= k'
$$
or
$$
\max_{\{S_k\}}\ NCut(\{S_k\}) \ \textrm{ s.t. } \cup_{k=1}^{K} S_k = V, \quad S_k \cap S_{k'}=\emptyset, \ \forall k \not= k'
$$
It is not necessary to provide details.
Answer to Q2: We need to minimize NCut: since $NAssoc(\{S_k\}) = K - NCut(\{S_k\})$ with $K$ fixed, maximizing NAssoc is equivalent to minimizing NCut.
Question 3: Solving the NCut problem in Q2 is NP-hard => let us consider a spectral relaxation of NCut. Write down the Spectral Matrix A of NCut that satisfies the equivalent functional optimization problem of Q2:
$$
\min_{Y}\ tr( Y^\top A Y) \ \textrm{ s.t. } \ Y^\top Y = I_K \textrm{ and } Y \in Ind_S, \quad\quad\quad(3)
$$
where
$$
Y \in Ind_S \ \textrm{ reads as } \ Y_{ik} =
\left\{
\begin{array}{ll}
\big(\frac{D_{ii}}{Vol(S_k)}\big)^{1/2} & \textrm{if} \ i \in S_k\\
0 & \textrm{otherwise}
\end{array}
\right..
$$
and
$$
A=???
$$
It is not necessary to provide details.
Hint: Let us introduce the indicator matrix $F$ of the clusters $S_k$ such that:
$$
F_{ik} =
\left\{
\begin{array}{ll}
1 & \textrm{if} \ i \in S_k\\
0 & \textrm{otherwise}
\end{array}
\right..
$$
We may rewrite the Cut operator and the Volume operator with $F$ as:
$$
Vol(S_k) = \sum_{i\in S_k, j\in V} W_{ij} = F_{\cdot,k}^\top D F_{\cdot,k} \\
Cut(S_k,S_k^c) = \sum_{i\in S_k, j\in V} W_{ij} - \sum_{i\in S_k, j\in S_k} W_{ij} = F_{\cdot,k}^\top D F_{\cdot,k} - F_{\cdot,k}^\top W F_{\cdot,k} = F_{\cdot,k}^\top (D - W) F_{\cdot,k}
$$
We thus have
$$
\frac{Cut(S_k,S_k^c)}{Vol(S_k)} = \frac{ F_{\cdot,k}^\top (D - W) F_{\cdot,k} }{ F_{\cdot,k}^\top D F_{\cdot,k} }
$$
Set $\hat{F}_{\cdot,k}=D^{1/2}F_{\cdot,k}$ and observe that
$$
\frac{ F_{\cdot,k}^\top (D - W) F_{\cdot,k} }{ F_{\cdot,k}^\top D F_{\cdot,k} } = \frac{ \hat{F}_{\cdot,k}^\top D^{-1/2}(D - W)D^{-1/2} \hat{F}_{\cdot,k} }{ \hat{F}_{\cdot,k}^\top \hat{F}_{\cdot,k} } = \frac{ \hat{F}_{\cdot,k}^\top (I - D^{-1/2}WD^{-1/2}) \hat{F}_{\cdot,k} }{ \hat{F}_{\cdot,k}^\top \hat{F}_{\cdot,k} } ,
$$
where $L_N = I - D^{-1/2}WD^{-1/2}$ is the normalized graph Laplacian. Set $Y_{\cdot,k}=\frac{\hat{F}_{\cdot,k}}{\|\hat{F}_{\cdot,k}\|_2}$:
$$
\frac{ \hat{F}_{\cdot,k}^\top L_N \hat{F}_{\cdot,k} }{ \hat{F}_{\cdot,k}^\top \hat{F}_{\cdot,k} } = Y_{\cdot,k}^\top L_N Y_{\cdot,k} \quad\quad\quad(2)
$$
Using (2), we can rewrite (1) as a functional optimization problem:
$$
\min_{Y}\ tr( Y^\top A Y) \ \textrm{ s.t. } \ Y^\top Y = I_K \textrm{ and } Y \in Ind_S,
$$
where
$$
Y \in Ind_S \ \textrm{ reads as } \ Y_{ik} =
\left\{
\begin{array}{ll}
\big(\frac{D_{ii}}{Vol(S_k)}\big)^{1/2} & \textrm{if} \ i \in S_k\\
0 & \textrm{otherwise}
\end{array}
\right..
$$
and
$$
A=???
$$
Answer to Q3:
$$
A=L_N
$$
Question 4: Drop the cluster indicator constraint $Y\in Ind_S$ in Q3, how do you compute the solution $Y^\star$ of (3)? Why the first column of $Y^\star$ is not relevant for clustering?
Answer to Q4:
We compute $Y^\star$ as the eigenvectors of $A = L_N$ associated with its $K$ smallest eigenvalues, obtained by an eigenvalue decomposition (EVD) of $A$; `eigsh` returns the eigenvalues in increasing order (the smallest first).
The first column corresponds to the smallest eigenvalue $0$, whose eigenvector is $D^{1/2}\mathbf{1}$ up to scaling: it is constant after the $D^{-1/2}$ rescaling and thus carries no clustering information, so it is not relevant for clustering.
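This can be checked numerically on a toy graph. The sketch below uses a hypothetical 4-node path graph (the adjacency matrix `W` and all variable names are made up for this check): the eigenvector of $L_N$ for the smallest eigenvalue $0$ is proportional to $D^{1/2}\mathbf{1}$.

```python
import numpy as np

# Hypothetical 4-node path graph (adjacency matrix)
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)                                  # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_N = np.eye(4) - D_inv_sqrt @ W @ D_inv_sqrt      # normalized graph Laplacian

lam, V = np.linalg.eigh(L_N)                       # eigenvalues in increasing order
v0 = V[:, 0]                                       # eigenvector of the smallest eigenvalue
u = np.sqrt(d) / np.linalg.norm(np.sqrt(d))        # D^{1/2} * 1, L2-normalized
```

Up to sign, `v0` matches `u`: a constant vector after the $D^{-1/2}$ rescaling, hence no cluster information.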
Question 5: Plot in 3D the 2nd, 3rd, 4th columns of $Y^\star$. <br>
Hint: Compute the degree matrix $D$.<br>
Hint: You may use function D_sqrt_inv = scipy.sparse.diags(d_sqrt_inv.A.squeeze(), 0) for creating $D^{-1/2}$.<br>
Hint: You may use function I = scipy.sparse.identity(d.size, dtype=W.dtype) for creating a sparse identity matrix.<br>
Hint: You may use function lamb, U = scipy.sparse.linalg.eigsh(A, k=4, which='SM') to perform the eigenvalue decomposition of A.<br>
Hint: You may use function ax.scatter(Xdisp, Ydisp, Zdisp, c=Cgt) for 3D visualization.
End of explanation
"""
# Your code here
#lamb, Y_star = scipy.sparse.linalg.eigsh(A, k=4, which='SM')
# Normalize the rows of Y* with the L2 norm, i.e. ||y_i||_2 = 1
#Y_star = Y_star/np.sqrt(np.sum((Y_star)**2))
Y_star = U[1]
Y_star = ( Y_star.T / np.sqrt(np.sum(Y_star**2,axis=1)+1e-10) ).T
# Your code here
# Run standard K-Means
Ker=construct_kernel(Y_star,'linear')
n = Y_star.shape[0]
Theta= np.ones(n)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
accuracy = compute_purity(C_kmeans,Cgt,nc)
print('accuracy = ',accuracy,'%')
fig = plt.figure(2)
ax = fig.add_subplot(projection='3d')
ax.scatter(Y_star[:,1], Y_star[:,2], Y_star[:,3], c=Cgt)
plt.title('$Y^*$')
"""
Explanation: Question 6: Solve the unsupervised clustering problem for MNIST following the popular technique of [Ng, Jordan, Weiss, “On Spectral Clustering: Analysis and an algorithm”, 2002], i.e. <br>
(1) Compute $Y^\star$? solution of Q4. <br>
(2) Normalize the rows of $Y^\star$? with the L2-norm. <br>
Hint: You may use function X = ( X.T / np.sqrt(np.sum(X**2,axis=1)+1e-10) ).T for the L2-normalization of the rows of X.<br>
(3) Run standard K-Means on normalized $Y^\star$? to get the clusters, and compute the clustering accuracy. You should get more than 50% accuracy.
End of explanation
"""
|
bigdata-i523/hid335 | project/BDA-Project-Data.ipynb | gpl-3.0 | import requests, zipfile, io
import pandas as pd
URL = 'http://samhda.s3-us-gov-west-1.amazonaws.com/s3fs-public/field-uploads-protected/studies/NSDUH-2015/NSDUH-2015-datasets/NSDUH-2015-DS0001/NSDUH-2015-DS0001-bundles-with-study-info/NSDUH-2015-DS0001-bndl-data-tsv.zip'
def get_data():
    r = requests.get(URL)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
file = pd.read_table('~/NSDUH-2015-DS0001-bndl-data-tsv/NSDUH-2015-DS0001-data/NSDUH-2015-DS0001-data-excel.tsv', low_memory=False)
data = pd.DataFrame(file)
print(data.shape)
data.to_csv('nsduh15-dataset.csv', sep=',', encoding='utf-8')
get_data()
"""
Explanation: BDA_Fall17: Data for Final Project
Sean M. Shiverick, IU-Bloomington
2015 National Survey on Drug Abuse and Health (NSDUH)
Substance Abuse and Mental Health Services Administration
Center for Behavioral Health Statistics and Quality, October 27, 2016
http://datafiles.samhsa.gov/study/national-survey-drug-use-and-health-nsduh-2015-nid16893
Data Cleaning and Preparation
Step 1. Download data from URL, unzip files, write data to csv
get_data() function retrieves datafiles from URL, unzips files, extracts data
Reads NSDUH-2015-DS0001-data-excel.tsv file, converts to dataFrame object
Print data frame shape, and exports dataframe to CSV file as nsduh15-dataset.csv
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
file = pd.read_csv('nsduh15-dataset.csv', low_memory=False)
data = pd.DataFrame(file)
data.shape
df = pd.DataFrame(data, columns=['QUESTID2', 'CATAG6', 'IRSEX','IRMARITSTAT',
'EDUHIGHCAT', 'IRWRKSTAT18', 'COUTYP2', 'HEALTH2','STDANYYR1',
'HEPBCEVER1','HIVAIDSEV1','CANCEREVR1','INHOSPYR','AMDELT',
'AMDEYR','ADDPR2WK1','ADWRDST1','DSTWORST1','IMPGOUTM1',
'IMPSOCM1','IMPRESPM1','SUICTHNK1','SUICPLAN1','SUICTRY1',
'PNRNMLIF','PNRNM30D','PNRWYGAMT','PNRNMFLAG','PNRNMYR',
'PNRNMMON','OXYCNNMYR','DEPNDPYPNR','ABUSEPYPNR','PNRRSHIGH',
'HYDCPDAPYU','OXYCPDAPYU','OXCNANYYR2','TRAMPDAPYU','MORPPDAPYU',
'FENTPDAPYU','BUPRPDAPYU','OXYMPDAPYU','DEMEPDAPYU','HYDMPDAPYU',
'HERFLAG','HERYR','HERMON','ABODHER', 'MTDNPDAPYU',
'IRHERFY','TRBENZAPYU','ALPRPDAPYU','LORAPDAPYU','CLONPDAPYU',
'DIAZPDAPYU','SVBENZAPYU','TRIAPDAPYU','TEMAPDAPYU','BARBITAPYU',
'SEDOTANYR2','COCFLAG','COCYR','COCMON','CRKFLAG',
'CRKYR','AMMEPDAPYU','METHAMFLAG','METHAMYR','METHAMMON',
'HALLUCFLAG','LSDFLAG','ECSTMOFLAG','DAMTFXFLAG','KETMINFLAG',
'TXYRRESOV1','TXYROUTPT1','TXYRMHCOP1','TXYREMRGN1','TXCURRENT1',
'TXLTYPNRL1','TXYRNOSPIL','AUOPTYR1','MHLMNT3','MHLTHER3',
'MHLDOC3','MHLCLNC3','MHLDTMT3','AUINPYR1','AUALTYR1'])
df.shape
df.head()
df.tail()
"""
Explanation: Step 2. Use Pandas to Subset dataset as data frame
Import python modules
load data file and save as DataFrame object
Subset dataframe by column
End of explanation
"""
df.replace([83, 85, 91, 93, 94, 97, 98, 99, 991, 993], np.nan, inplace=True)
df.fillna(0, inplace=True)
df.head()
"""
Explanation: Step 3. Remove Missing Values and Recode Values
Recode null and NaN missing values
Replace values for Bad Data, Don't know, Refused, Blank, Skip with NaN
Replace NaN with 0
End of explanation
"""
df['STDANYYR1'].replace(2,0,inplace=True)
df['HEPBCEVER1'].replace(2,0,inplace=True)
df['HIVAIDSEV1'].replace(2,0,inplace=True)
df['CANCEREVR1'].replace(2,0,inplace=True)
df['INHOSPYR'].replace(2,0,inplace=True)
df['AMDELT'].replace(2,0,inplace=True)
df['AMDEYR'].replace(2,0,inplace=True)
df['ADDPR2WK1'].replace(2,0,inplace=True)
df['DSTWORST1'].replace(2,0,inplace=True)
df['IMPGOUTM1'].replace(2,0,inplace=True)
df['IMPSOCM1'].replace(2,0,inplace=True)
df['IMPRESPM1'].replace(2,0,inplace=True)
df['SUICTHNK1'].replace(2,0,inplace=True)
df['SUICPLAN1'].replace(2,0,inplace=True)
df['SUICTRY1'].replace(2,0,inplace=True)
df['PNRNMLIF'].replace(2,0,inplace=True)
df['PNRNM30D'].replace(2,0,inplace=True)
df['PNRWYGAMT'].replace(2,0,inplace=True)
df['PNRRSHIGH'].replace(2,0,inplace=True)
df['TXYRRESOV1'].replace(2,0,inplace=True)
df['TXYROUTPT1'].replace(2,0,inplace=True)
df['TXYRMHCOP1'].replace(2,0,inplace=True)
df['TXYREMRGN1'].replace(2,0,inplace=True)
df['TXCURRENT1'].replace(2,0,inplace=True)
df['TXLTYPNRL1'].replace(2,0,inplace=True)
df['AUOPTYR1'].replace(2,0,inplace=True)
df['AUINPYR1'].replace(2,0,inplace=True)
df['AUALTYR1'].replace(2,0,inplace=True)
df.head()
df['PNRRSHIGH'].replace(3,1,inplace=True)
df['TXLTYPNRL1'].replace(3,1,inplace=True)
df['TXYREMRGN1'].replace(3,1,inplace=True)
df['AUOPTYR1'].replace(3,1,inplace=True)
df['AUALTYR1'].replace(3,1,inplace=True)
df.head()
df['SEX'] = df['IRSEX'].replace([1,2], [0,1])
df['MARRIED'] = df['IRMARITSTAT'].replace([1,2,3,4], [4,3,2,1])
df['EDUCAT'] = df['EDUHIGHCAT'].replace([1,2,3,4,5], [2,3,4,5,1])
df['EMPLOY18'] = df['IRWRKSTAT18'].replace([1,2,3,4], [2,1,0,0])
df['CTYMETRO'] = df['COUTYP2'].replace([1,2,3],[3,2,1])
df['EMODSWKS'] = df['ADWRDST1'].replace([1,2,3,4], [0,1,2,3])
df['TXLTPNRL'] = df['TXLTYPNRL1'].replace(6,0)
df['TXYRRESOV'] = df['TXYRRESOV1'].replace(5,1)
df['TXYROUTPT'] = df['TXYROUTPT1'].replace(5,1)
df['TXYRMHCOP'] = df['TXYRMHCOP1'].replace(5,1)
df.head()
df.shape
df.columns
"""
Explanation: 3.2 Recode values for selected features:
Order matters here, because some variables were recoded into new variables
* Recode 2=0:
['STDANYYR1','HEPBCEVER1', 'HIVAIDSEV1', 'CANCEREVR1', 'INHOSPYR ',
'AMDELT','AMDEYR','ADDPR2WK1','DSTWORST1', 'IMPGOUTM1',
'IMPSOCM1','IMPRESPM1','SUICTHNK1','SUICPLAN1','SUICTRY1',
'PNRNMLIF','PNRNM30D','PNRWYGAMT','PNRRSHIGH'
'TXYRRESOV1','TXYROUTPT1','TXYRMHCOP1','TXYREMRGN1', 'TXCURRENT1',
'TXLTYPNRL1','AUOPTYR1','AUINPYR1','AUALTYR1']
* Recode ['PNRRSHIGH', 'TXLTYPNRL1','TXYREMRGN1', 'AUOPTYR1','AUALTYR1']: 3=1
* Recode ['TXYRRESOV1', 'TXYROUTPT1','TXYRMHCOP1']: 5=1
* Recode TXLTYPNRL: 6=0
* Recode IRSEX to male=0, female=1
* Recode IRMARITSTAT: 1=4, 2=3, 3=2, 4=1
* Recode EDUHIGHCAT: 1=2, 2=3, 3=4, 4=5, 5=1
* Recode IRWRKSTAT18: 1=2, 2=1, 3=0, 4=0
* Recode COUTYP2: 1=3, 3=1
* Recode ADWRDST1: 1=0, 2=1, 3=2, 4=3
End of explanation
"""
df = df.rename(columns={'QUESTID2':'QID','CATAG6':'AGECAT',
'STDANYYR1':'STDPYR','HEPBCEVER1':'HEPEVR','CANCEREVR1':'CANCEVR','INHOSPYR':'HOSPYR',
'AMDELT':'DEPMELT','AMDEYR':'DEPMEYR','ADDPR2WK1':'DEPMWKS','DSTWORST1':'DEPWMOS',
'IMPGOUTM1':'EMOPGOUT','IMPSOCM1':'EMOPSOC','IMPRESPM1':'EMOPWRK',
'SUICTHNK1':'SUICTHT','SUICPLAN1':'SUICPLN','SUICTRY1':'SUICATT',
'PNRNMLIF':'PRLUNDR','PNRNM30D':'PRLUNDR30','PNRWYGAMT':'PRLGRTYR',
'PNRNMFLAG':'PRLMISEVR','PNRNMYR':'PRLMISYR','PNRNMMON':'PRLMISMO',
'OXYCNNMYR':'PRLOXYMSYR','DEPNDPYPNR':'PRLDEPYR','ABUSEPYPNR':'PRLABSRY',
'PNRRSHIGH':'PRLHIGH','HYDCPDAPYU':'HYDRCDYR','OXYCPDAPYU':'OXYCDPRYR',
'OXCNANYYR2':'OXYCTNYR','TRAMPDAPYU':'TRMADLYR','MORPPDAPYU':'MORPHPRYR',
'FENTPDAPYU':'FENTNYLYR','BUPRPDAPYU':'BUPRNRPHN','OXYMPDAPYU':'OXYMORPHN',
'DEMEPDAPYU':'DEMEROL','HYDMPDAPYU':'HYDRMRPHN','HERFLAG':'HEROINEVR',
'HERYR':'HEROINYR', 'HERMON':'HEROINMO','ABODHER':'HEROINAB',
'MTDNPDAPYU':'METHADONE','IRHERFY':'HEROINFQY',
'TRBENZAPYU':'TRQBENZODZ','ALPRPDAPYU':'TRQALPRZM','LORAPDAPYU':'TRQLRZPM',
'CLONPDAPYU':'TRQCLNZPM','DIAZPDAPYU':'TRQDIAZPM','SVBENZAPYU':'SDBENZDPN',
'TRIAPDAPYU':'SDTRZLM','TEMAPDAPYU':'SDTMZPM','BARBITAPYU':'SDBARBTS',
'SEDOTANYR2':'SDOTHYR','COCFLAG':'COCNEVR','COCYR':'COCNYR','COCMON':'COCNMO',
'CRKFLAG':'CRACKEVR','CRKYR':'CRACKYR','AMMEPDAPYU':'AMPHTMNYR',
'METHAMFLAG':'METHEVR','METHAMYR':'METHYR','METHAMMON':'METHMO',
'HALLUCFLAG':'HLCNEVR','LSDFLAG':'LSDEVR','ECSTMOFLAG':'MDMAEVR',
'DAMTFXFLAG':'DMTEVR','KETMINFLAG':'KETMNEVR',
'TXYRRESOV':'TRTRHBOVN','TXYROUTPT':'TRTRHBOUT','TXYRMHCOP':'TRTMHCTR',
'TXYREMRGN1':'TRTERYR','TXCURRENT1':'TRTCURRCV','TXLTPNRL':'TRTCURPRL',
'TXYRNOSPIL':'TRTGAPYR','AUOPTYR1':'MHTRTOYR','MHLMNT3':'MHTRTCLYR',
'MHLTHER3':'MHTRTTHPY','MHLDOC3':'MHTRTDRYR', 'MHLCLNC3':'MHTRTMDOUT',
'MHLDTMT3':'MHTRTHPPGM','AUINPYR1':'MHTRTHSPON','AUALTYR1':'MHTRTALT'})
df.shape
"""
Explanation: Step 4. Rename Select Features for Description
End of explanation
"""
df1 = df[['QID','AGECAT','SEX', 'MARRIED', 'EDUCAT',
'EMPLOY18','CTYMETRO','HEALTH2','STDPYR','HEPEVR','CANCEVR','HOSPYR',
'DEPMELT','DEPMEYR','DEPMWKS','DEPWMOS','EMODSWKS','EMOPGOUT',
'EMOPSOC','EMOPWRK','SUICTHT','SUICPLN','SUICATT',
'PRLUNDR','PRLUNDR30','PRLGRTYR','PRLMISEVR','PRLMISYR',
'PRLMISMO','PRLOXYMSYR','PRLDEPYR','PRLABSRY','PRLHIGH',
'HYDRCDYR','OXYCDPRYR','OXYCTNYR','TRMADLYR','MORPHPRYR',
'FENTNYLYR','BUPRNRPHN','OXYMORPHN','DEMEROL','HYDRMRPHN',
'HEROINEVR','HEROINYR','HEROINMO','HEROINAB','METHADONE','HEROINFQY',
'TRQBENZODZ','TRQALPRZM','TRQLRZPM','TRQCLNZPM','TRQDIAZPM',
'SDBENZDPN','SDTRZLM','SDTMZPM','SDBARBTS','SDOTHYR',
'COCNEVR','COCNYR','COCNMO','CRACKEVR','CRACKYR',
'AMPHTMNYR','METHEVR','METHYR','METHMO',
'HLCNEVR','LSDEVR','MDMAEVR','DMTEVR','KETMNEVR',
'TRTRHBOVN','TRTRHBOUT','TRTMHCTR','TRTERYR','TRTCURRCV',
'TRTCURPRL','TRTGAPYR','MHTRTOYR','MHTRTCLYR','MHTRTTHPY',
'MHTRTDRYR','MHTRTMDOUT','MHTRTHPPGM','MHTRTHSPON','MHTRTALT']]
df1.shape
df1.head()
"""
Explanation: Step 5. Revised Data Frame with updated features
End of explanation
"""
df1.to_csv('nsduh-2015.csv', sep=',', encoding='utf-8')
"""
Explanation: Step 6. Export data file to CSV
End of explanation
"""
df1['HEALTH'] = df1['HEALTH2']+df1['STDPYR']+df1['HEPEVR']+df1['CANCEVR']+df1['HOSPYR']
df1['MENTHLTH'] = df1[['DEPMELT', 'DEPMEYR', 'DEPMWKS', 'DEPWMOS', 'EMODSWKS',
'EMOPGOUT','EMOPSOC', 'EMOPWRK','SUICTHT', 'SUICPLN']].sum(axis=1)
df1['PRLMISAB'] = df1[['PRLUNDR', 'PRLUNDR30', 'PRLGRTYR', 'PRLMISEVR', 'PRLMISYR',
'PRLMISMO', 'PRLOXYMSYR','PRLDEPYR', 'PRLABSRY','PRLHIGH']].sum(axis=1)
df1['PRLANY'] = df1[['HYDRCDYR', 'OXYCDPRYR', 'OXYCTNYR', 'TRMADLYR', 'MORPHPRYR',
'FENTNYLYR','BUPRNRPHN', 'OXYMORPHN','DEMEROL', 'HYDRMRPHN']].sum(axis=1)
df1['HEROINUSE'] = df1[['HEROINEVR', 'HEROINYR', 'HEROINMO', 'HEROINAB', 'METHADONE']].sum(axis=1)
df1['TRQLZRS'] = df1[['TRQBENZODZ', 'TRQALPRZM', 'TRQLRZPM', 'TRQCLNZPM', 'TRQDIAZPM']].sum(axis=1)
df1['SEDATVS'] = df1[['SDBENZDPN','SDTRZLM', 'SDTMZPM','SDBARBTS', 'SDOTHYR', 'SDOTHYR']].sum(axis=1)
df1['COCAINE'] = df1[['COCNEVR', 'COCNYR', 'COCNMO', 'CRACKEVR', 'CRACKYR']].sum(axis=1)
df1['AMPHETMN'] = df1[['AMPHTMNYR','METHEVR', 'METHYR','METHMO']].sum(axis=1)
df1['HALUCNG'] = df1[['HLCNEVR', 'LSDEVR','MDMAEVR', 'DMTEVR', 'KETMNEVR']].sum(axis=1)
df1['TRTMENT'] = df1[['TRTRHBOVN', 'TRTRHBOUT', 'TRTMHCTR','TRTERYR',
'TRTCURRCV', 'TRTCURPRL', 'TRTGAPYR']].sum(axis=1)
df1['MHTRTMT'] = df1[['MHTRTOYR','MHTRTCLYR', 'MHTRTTHPY', 'MHTRTDRYR',
'MHTRTMDOUT', 'MHTRTHPPGM','MHTRTHSPON', 'MHTRTALT']].sum(axis=1)
df1.shape
df1.keys()
"""
Explanation: Step 7. Sum selected columns to create aggregate variables
Several ways to create new variables based on sum of related columns:
1. Simple way to add columns in new variables: df['C'] = df['A'] + df['B']
2. Use sum function to sum columns: df['C'] = df[['A', 'B']].sum(axis=1)
3. Use lambda function across rows, using axis=1 for columns:
df['C'] = df.apply(lambda row: row['A']+row['B'], axis=1)
End of explanation
"""
df1['HEALTH'] = df1['HEALTH'].replace([0,1,2,3,4,5], [5,4,3,2,1,0])
"""
Explanation: 7.1 Recode health variable: higher score == better health
End of explanation
"""
df2 = pd.DataFrame(df1, columns=['QID', 'AGECAT', 'SEX', 'MARRIED',
'EDUCAT', 'EMPLOY18', 'CTYMETRO','HEALTH', 'MENTHLTH','SUICATT',
'PRLMISEVR','PRLMISAB','PRLANY','HEROINEVR','HEROINUSE','HEROINFQY',
'TRQLZRS', 'SEDATVS', 'COCAINE', 'AMPHETMN','TRTMENT','MHTRTMT'
])
df2.shape
df2.keys()
"""
Explanation: Step 8. Save Data Subset as Data Frame
End of explanation
"""
df2.to_csv('project-data.csv', sep=',', encoding='utf-8')
"""
Explanation: Step 9. Export data frame to CSV file
End of explanation
"""
|
cdt15/lingam | examples/DrawGraph.ipynb | mit | import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
"""
Explanation: Draw Causal Graph
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
And to draw the causal graph, we need to import make_dot method from lingam.utils.
End of explanation
"""
x3 = np.random.uniform(size=10000)
x0 = 3.0*x3 + np.random.uniform(size=10000)
x2 = 6.0*x3 + np.random.uniform(size=10000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=10000)
x5 = 4.0*x0 + np.random.uniform(size=10000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=10000)
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
model = lingam.DirectLiNGAM()
model.fit(X)
make_dot(model.adjacency_matrix_)
"""
Explanation: Draw the result of LiNGAM
First, we can draw a simple graph that is the result of LiNGAM.
End of explanation
"""
labels = [f'var{i}' for i in range(X.shape[1])]
make_dot(model.adjacency_matrix_, labels=labels)
"""
Explanation: If we want to change the variable name, we can use labels.
End of explanation
"""
dot = make_dot(model.adjacency_matrix_, labels=labels)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
"""
Explanation: Save graph
The created dot data can be saved as an image file in addition to being displayed in Jupyter Notebook.
End of explanation
"""
from sklearn.linear_model import LinearRegression
target = 0
features = [i for i in range(X.shape[1]) if i != target]
reg = LinearRegression()
reg.fit(X.iloc[:, features], X.iloc[:, target])
"""
Explanation: Draw the result of LiNGAM with prediction model
For example, we create a linear regression model with x0 as the target variable.
End of explanation
"""
make_dot(model.adjacency_matrix_, prediction_feature_indices=features, prediction_coefs=reg.coef_)
"""
Explanation: By specifying prediction_feature_indices and prediction_coefs, which can be obtained from the prediction model, we can draw the prediction model together with the causal structure.
End of explanation
"""
make_dot(model.adjacency_matrix_, prediction_feature_indices=features, prediction_target_label='Target', prediction_line_color='#0000FF')
"""
Explanation: Also, we can change the label of the target variable with prediction_target_label, omit the coefficients of the prediction model by not passing prediction_coefs, and change the edge color with prediction_line_color.
End of explanation
"""
import lightgbm as lgb
target = 0
features = [i for i in range(X.shape[1]) if i != target]
reg = lgb.LGBMRegressor(random_state=0)
reg.fit(X.iloc[:, features], X.iloc[:, target])
reg.feature_importances_
make_dot(model.adjacency_matrix_, prediction_feature_indices=features, prediction_feature_importance=reg.feature_importances_)
"""
Explanation: In addition to the above, we can use prediction_feature_importance to draw the feature importances of the prediction model as edge labels.
End of explanation
"""
|
leriomaggio/numpy_euroscipy2015 | 04_sparse_matrices.ipynb | mit | import numpy as np
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)
X[X < 0.7] = 0 # note: fancy indexing
print(X)
from scipy import sparse
# turn X into a csr (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
# convert the sparse matrix to a dense array
print(X_csr.toarray())
# Sparse matrices support linear algebra:
y = np.random.random(X_csr.shape[1])
z1 = X_csr.dot(y)
z2 = X.dot(y)
np.allclose(z1, z2)
"""
Explanation: Scipy Sparse Matrices
Sparse Matrices are very nice in some situations.
For example, in some machine learning tasks, especially those associated
with textual analysis, the data may be mostly zeros.
Storing all these zeros is very inefficient.
We can create and manipulate sparse matrices as follows:
End of explanation
"""
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
X_lil[i, j] = i + j
print(X_lil)
print(X_lil.toarray())
"""
Explanation: The CSR representation can be very efficient for computations, but it is not as good for adding elements.
For that, the LIL (List of Lists) representation is better:
End of explanation
"""
X_csr = X_lil.tocsr()
print(X_csr)
"""
Explanation: Often, once an LIL matrix is created, it is useful to convert it to a CSR format
Note: many scikit-learn algorithms require CSR or CSC format
End of explanation
"""
from scipy.sparse import bsr_matrix
indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, 2)
bsr_matrix((data,indices,indptr), shape=(6, 6)).toarray()
"""
Explanation: There are several other sparse formats that can be useful for various problems:
CSC (compressed sparse column)
BSR (block sparse row)
COO (coordinate)
DIA (diagonal)
DOK (dictionary of keys)
CSC - Compressed Sparse Column
Advantages of the CSC format
* efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
* efficient column slicing
* fast matrix vector products (CSR, BSR may be faster)
Disadvantages of the CSC format
* slow row slicing operations (consider CSR)
* changes to the sparsity structure are expensive (consider LIL or DOK)
BSR - Block Sparse Row
The Block Compressed Row (BSR) format is very similar to the Compressed Sparse Row (CSR) format.
BSR is appropriate for sparse matrices with dense sub matrices like the example below.
Block matrices often arise in vector-valued finite element discretizations.
In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations.
End of explanation
"""
from scipy.sparse import dok_matrix
S = dok_matrix((5, 5), dtype=np.float32)
for i in range(5):
for j in range(i, 5):
S[i,j] = i+j
S.toarray()
"""
Explanation: COO - Coordinate Sparse Matrix
Advantages of the COO format
* facilitates fast conversion among sparse formats
* permits duplicate entries (see example)
* very fast conversion to and from CSR/CSC formats
Disadvantages of the COO format
* does not directly support arithmetic operations and slicing
Intended Usage
* COO is a fast format for constructing sparse matrices
* Once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix vector
operations
* By default when converting to CSR or CSC format, duplicate (i,j) entries will be summed together.
This facilitates efficient construction of finite element matrices and the like.
DOK - Dictionary of Keys
Sparse matrices can be used in arithmetic operations: they support addition, subtraction, multiplication, division, and matrix power.
Allows for efficient O(1) access of individual elements. Duplicates are not allowed. Can be efficiently converted to a coo_matrix once constructed.
End of explanation
"""
|
malogrisard/NTDScourse | toolkit/04_ex_visualization.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Random time series.
n = 1000
rs = np.random.RandomState(42)
data = rs.randn(n, 4).cumsum(axis=0)
plt.figure(figsize=(15,5))
plt.plot(data)
# df = pd.DataFrame(...)
# df.plot(...)
"""
Explanation: A Python Tour of Data Science: Data Visualization
Michaël Defferrard, PhD student, EPFL LTS2
Exercise
Data visualization is a key aspect of exploratory data analysis.
During this exercise we'll gradually build more and more complex visualizations by replicating plots. Try to reproduce not only the lines but also the axis labels, legends, and titles.
Goal of data visualization: clearly and efficiently communicate information through visual representations. While tables are generally used to look up a specific measurement, charts are used to show patterns or relationships.
Means: mainly statistical graphics for exploratory analysis, e.g. scatter plots, histograms, probability plots, box plots, residual plots, but also infographics for communication.
Data visualization is both an art and a science. It should combine both aesthetic form and functionality.
1 Time series
To start slowly, let's make a static line plot from some time series. Reproduce the plots below using:
1. The procedural API of matplotlib, the main data visualization library for Python. Its procedural API is similar to MATLAB's and convenient for interactive work.
2. Pandas, which wraps matplotlib around his DataFrame format and makes many standard plots easy to code. It offers many helpers for data visualization.
Hint: to plot with pandas, you first need to create a DataFrame, pandas' tabular data format.
End of explanation
"""
data = [10, 40, 25, 15, 10]
categories = list('ABCDE')
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Right plot.
# axes[1].
# axes[1].
# Left plot.
# axes[0].
# axes[0].
"""
Explanation: 2 Categories
Categorical data is best represented by bar or pie charts. Reproduce the plots below using the object-oriented API of matplotlib, which is recommended for programming.
Question: What are the pros / cons of each plot ?
Tip: the matplotlib gallery is a convenient starting point.
End of explanation
"""
import seaborn as sns
import os
df = sns.load_dataset('iris', data_home=os.path.join('..', 'data'))
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Your code for Seaborn: distplot() and boxplot().
import ggplot
# Your code for ggplot.
import altair
# altair.Chart(df).mark_bar(opacity=.75).encode(
# x=...,
# y=...,
# color=...
# )
"""
Explanation: 3 Frequency
A frequency plot is a graph that shows the pattern in a set of data by plotting how often particular values of a measure occur. They often take the form of an histogram or a box plot.
Reproduce the plots with the following three libraries, which provide high-level declarative syntax for statistical visualization as well as a convenient interface to pandas:
* Seaborn is a statistical visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Its advantage is that you can modify the produced plots with matplotlib, so you lose nothing.
* ggplot is a (partial) port of the popular ggplot2 for R. It has its roots in the influential book The Grammar of Graphics. Convenient if you know ggplot2 already.
* Vega is a declarative format for statistical visualization based on D3.js, a low-level javascript library for interactive visualization. Vincent (discontinued) and altair are Python interfaces to Vega. Altair is quite new and does not provide all the needed functionality yet, but it is promising!
Hints:
* Seaborn, look at distplot() and boxplot().
* ggplot, we are interested by the geom_histogram geometry.
End of explanation
"""
# One line with Seaborn.
"""
Explanation: 4 Correlation
Scatter plots are widely used to assess the correlation between two variables. Pair plots are then a useful way of displaying the pairwise relationships between variables in a dataset.
Use the seaborn pairplot() function to analyze how separable is the iris dataset.
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# df['pca1'] =
# df['pca2'] =
# df['tsne1'] =
# df['tsne2'] =
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
sns.swarmplot(x='pca1', y='pca2', data=df, hue='species', ax=axes[0])
sns.swarmplot(x='tsne1', y='tsne2', data=df, hue='species', ax=axes[1]);
"""
Explanation: 5 Dimensionality reduction
Humans can only comprehend up to 3 spatial dimensions (plus encodings such as color or size), so dimensionality reduction is often needed to explore high-dimensional datasets. Analyze how separable the iris dataset is by visualizing it in a 2D scatter plot after reduction from 4 to 2 dimensions with two popular methods:
1. The classical principal componant analysis (PCA).
2. t-distributed stochastic neighbor embedding (t-SNE).
Hints:
* t-SNE is a stochastic method, so you may want to run it multiple times.
* The easiest way to create the scatter plot is to add columns to the pandas DataFrame, then use the Seaborn swarmplot().
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/d25fdfa446b06c82b756855681845935/plot_mne_dspm_source_localization.ipynb | bsd-3-clause | # sphinx_gallery_thumbnail_number = 10
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
"""
Explanation: Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
exclude='bads')
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, reject=reject)
"""
Explanation: Process MEG data
End of explanation
"""
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
"""
Explanation: Compute regularized noise covariance
For more details see tut_compute_covariance.
End of explanation
"""
evoked = epochs.average().pick_types(meg=True)
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
# Show whitening
evoked.plot_white(noise_cov, time_unit='s')
del epochs # to save memory
"""
Explanation: Compute the evoked response
Let's just use MEG channels for simplicity.
End of explanation
"""
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
del fwd
# You can write it to disk with::
#
# >>> from mne.minimum_norm import write_inverse_operator
# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
# inverse_operator)
"""
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
End of explanation
"""
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None,
return_residual=True, verbose=True)
"""
Explanation: Compute inverse solution
End of explanation
"""
plt.figure()
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
"""
Explanation: Visualization
View activation time-series
End of explanation
"""
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
"""
Explanation: Examine the original data and the residual after fitting:
End of explanation
"""
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path + '/subjects'
surfer_kwargs = dict(
hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5)
brain = stc.plot(**surfer_kwargs)
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6, alpha=0.5)
brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',
font_size=14)
"""
Explanation: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation
"""
# setup source morph
morph = mne.compute_source_morph(
src=inverse_operator['src'], subject_from=stc.subject,
subject_to='fsaverage', spacing=5, # to ico-5
subjects_dir=subjects_dir)
# morph data
stc_fsaverage = morph.apply(stc)
brain = stc_fsaverage.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=20)
del stc_fsaverage
"""
Explanation: Morph data to average brain
End of explanation
"""
stc_vec = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori='vector')
brain = stc_vec.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20)
del stc_vec
"""
Explanation: Dipole orientations
The pick_ori parameter of the
:func:mne.minimum_norm.apply_inverse function controls
the orientation of the dipoles. One useful setting is pick_ori='vector',
which will return an estimate that does not only contain the source power at
each dipole, but also the orientation of the dipoles.
End of explanation
"""
for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),
('sLORETA', [3, 5, 7]),
('eLORETA', [0.75, 1.25, 1.75]),)):
surfer_kwargs['clim']['lims'] = lims
stc = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None)
brain = stc.plot(figure=mi, **surfer_kwargs)
brain.add_text(0.1, 0.9, method, 'title', font_size=20)
del stc
"""
Explanation: Note that there is a relationship between the orientation of the dipoles and
the surface of the cortex. For this reason, we do not use an inflated
cortical surface for visualization, but the original surface used to define
the source space.
For more information about dipole orientations, see
sphx_glr_auto_tutorials_plot_dipole_orientations.py.
Now let's look at each solver:
End of explanation
"""
|
AllenDowney/ThinkBayes2 | notebooks/chap18.ipynb | mit | # If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
"""
Explanation: Conjugate Priors
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
from scipy.stats import gamma
alpha = 1.4
dist = gamma(alpha)
"""
Explanation: In the previous chapters we have used grid approximations to solve a variety of problems.
One of my goals has been to show that this approach is sufficient to solve many real-world problems.
And I think it's a good place to start because it shows clearly how the methods work.
However, as we saw in the previous chapter, grid methods will only get you so far.
As we increase the number of parameters, the number of points in the grid grows (literally) exponentially.
With more than 3-4 parameters, grid methods become impractical.
So, in the remaining three chapters, I will present three alternatives:
In this chapter we'll use conjugate priors to speed up some of the computations we've already done.
In the next chapter, I'll present Markov chain Monte Carlo (MCMC) methods, which can solve problems with tens of parameters, or even hundreds, in a reasonable amount of time.
And in the last chapter we'll use Approximate Bayesian Computation (ABC) for problems that are hard to model with simple distributions.
We'll start with the World Cup problem.
The World Cup Problem Revisited
In <<_PoissonProcesses>>, we solved the World Cup problem using a Poisson process to model goals in a soccer game as random events that are equally likely to occur at any point during a game.
We used a gamma distribution to represent the prior distribution of $\lambda$, the goal-scoring rate. And we used a Poisson distribution to compute the probability of $k$, the number of goals scored.
Here's a gamma object that represents the prior distribution.
End of explanation
"""
import numpy as np
from utils import pmf_from_dist
lams = np.linspace(0, 10, 101)
prior = pmf_from_dist(dist, lams)
"""
Explanation: And here's a grid approximation.
End of explanation
"""
from scipy.stats import poisson
k = 4
likelihood = poisson(lams).pmf(k)
"""
Explanation: Here's the likelihood of scoring 4 goals for each possible value of lam.
End of explanation
"""
posterior = prior * likelihood
posterior.normalize()
"""
Explanation: And here's the update.
End of explanation
"""
def make_gamma_dist(alpha, beta):
"""Makes a gamma object."""
dist = gamma(alpha, scale=1/beta)
dist.alpha = alpha
dist.beta = beta
return dist
"""
Explanation: So far, this should be familiar.
Now we'll solve the same problem using the conjugate prior.
The Conjugate Prior
In <<_TheGammaDistribution>>, I presented three reasons to use a gamma distribution for the prior and said there was a fourth reason I would reveal later.
Well, now is the time.
The other reason I chose the gamma distribution is that it is the "conjugate prior" of the Poisson distribution, so-called because the two distributions are connected or coupled, which is what "conjugate" means.
In the next section I'll explain how they are connected, but first I'll show you the consequence of this connection, which is that there is a remarkably simple way to compute the posterior distribution.
However, in order to demonstrate it, we have to switch from the one-parameter version of the gamma distribution to the two-parameter version. Since the first parameter is called alpha, you might guess that the second parameter is called beta.
The following function takes alpha and beta and makes an object that represents a gamma distribution with those parameters.
End of explanation
"""
alpha = 1.4
beta = 1
prior_gamma = make_gamma_dist(alpha, beta)
prior_gamma.mean()
"""
Explanation: Here's the prior distribution with alpha=1.4 again and beta=1.
End of explanation
"""
def update_gamma(prior, data):
"""Update a gamma prior."""
k, t = data
alpha = prior.alpha + k
beta = prior.beta + t
return make_gamma_dist(alpha, beta)
"""
Explanation: Now I claim without proof that we can do a Bayesian update with k goals just by making a gamma distribution with parameters alpha+k and beta+1.
End of explanation
"""
data = 4, 1
posterior_gamma = update_gamma(prior_gamma, data)
"""
Explanation: Here's how we update it with k=4 goals in t=1 game.
End of explanation
"""
posterior_conjugate = pmf_from_dist(posterior_gamma, lams)
"""
Explanation: After all the work we did with the grid, it might seem absurd that we can do a Bayesian update by adding two pairs of numbers.
So let's confirm that it works.
I'll make a Pmf with a discrete approximation of the posterior distribution.
End of explanation
"""
from utils import decorate
def decorate_rate(title=''):
decorate(xlabel='Goal scoring rate (lam)',
ylabel='PMF',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', linestyle='dotted')
decorate_rate('Posterior distribution')
"""
Explanation: The following figure shows the result along with the posterior we computed using the grid algorithm.
End of explanation
"""
np.allclose(posterior, posterior_conjugate)
"""
Explanation: They are the same other than small differences due to floating-point approximations.
End of explanation
"""
from utils import make_uniform
xs = np.linspace(0, 1, 101)
uniform = make_uniform(xs, 'uniform')
"""
Explanation: What the Actual?
To understand how that works, we'll write the PDF of the gamma prior and the PMF of the Poisson likelihood, then multiply them together, because that's what the Bayesian update does.
We'll see that the result is a gamma distribution, and we'll derive its parameters.
Here's the PDF of the gamma prior, which is the probability density for each value of $\lambda$, given parameters $\alpha$ and $\beta$:
$$\lambda^{\alpha-1} e^{-\lambda \beta}$$
I have omitted the normalizing factor; since we are planning to normalize the posterior distribution anyway, we don't really need it.
Now suppose a team scores $k$ goals in $t$ games.
The probability of this data is given by the PMF of the Poisson distribution, which is a function of $k$ with $\lambda$ and $t$ as parameters.
$$\lambda^k e^{-\lambda t}$$
Again, I have omitted the normalizing factor, which makes it clearer that the gamma and Poisson distributions have the same functional form.
When we multiply them together, we can pair up the factors and add up the exponents.
The result is the unnormalized posterior distribution,
$$\lambda^{\alpha-1+k} e^{-\lambda(\beta + t)}$$
which we can recognize as an unnormalized gamma distribution with parameters $\alpha + k$ and $\beta + t$.
This derivation provides insight into what the parameters of the posterior distribution mean: $\alpha$ reflects the number of events that have occurred; $\beta$ reflects the elapsed time.
Binomial Likelihood
As a second example, let's look again at the Euro problem.
When we solved it with a grid algorithm, we started with a uniform prior:
End of explanation
"""
from scipy.stats import binom
k, n = 140, 250
xs = uniform.qs
likelihood = binom.pmf(k, n, xs)
"""
Explanation: We used the binomial distribution to compute the likelihood of the data, which was 140 heads out of 250 attempts.
End of explanation
"""
posterior = uniform * likelihood
posterior.normalize()
"""
Explanation: Then we computed the posterior distribution in the usual way.
End of explanation
"""
import scipy.stats
def make_beta(alpha, beta):
"""Makes a beta object."""
dist = scipy.stats.beta(alpha, beta)
dist.alpha = alpha
dist.beta = beta
return dist
"""
Explanation: We can solve this problem more efficiently using the conjugate prior of the binomial distribution, which is the beta distribution.
The beta distribution is bounded between 0 and 1, so it works well for representing the distribution of a probability like x.
It has two parameters, called alpha and beta, that determine the shape of the distribution.
SciPy provides an object called beta that represents a beta distribution.
The following function takes alpha and beta and returns a new beta object.
End of explanation
"""
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
"""
Explanation: It turns out that the uniform distribution, which we used as a prior, is the beta distribution with parameters alpha=1 and beta=1.
So we can make a beta object that represents a uniform distribution, like this:
End of explanation
"""
def update_beta(prior, data):
"""Update a beta distribution."""
k, n = data
alpha = prior.alpha + k
beta = prior.beta + n - k
return make_beta(alpha, beta)
"""
Explanation: Now let's figure out how to do the update. As in the previous example, we'll write the PDF of the prior distribution and the PMF of the likelihood function, and multiply them together. We'll see that the product has the same form as the prior, and we'll derive its parameters.
Here is the PDF of the beta distribution, which is a function of $x$ with $\alpha$ and $\beta$ as parameters.
$$x^{\alpha-1} (1-x)^{\beta-1}$$
Again, I have omitted the normalizing factor, which we don't need because we are going to normalize the distribution after the update.
And here's the PMF of the binomial distribution, which is a function of $k$ with $n$ and $x$ as parameters.
$$x^{k} (1-x)^{n-k}$$
Again, I have omitted the normalizing factor.
Now when we multiply the beta prior and the binomial likelihood, the result is
$$x^{\alpha-1+k} (1-x)^{\beta-1+n-k}$$
which we recognize as an unnormalized beta distribution with parameters $\alpha+k$ and $\beta+n-k$.
So if we observe k successes in n trials, we can do the update by making a beta distribution with parameters alpha+k and beta+n-k.
That's what this function does:
End of explanation
"""
data = 140, 250
posterior_beta = update_beta(prior_beta, data)
"""
Explanation: Again, the conjugate prior gives us insight into the meaning of the parameters; $\alpha$ is related to the number of observed successes; $\beta$ is related to the number of failures.
Here's how we do the update with the observed data.
End of explanation
"""
posterior_conjugate = pmf_from_dist(posterior_beta, xs)
"""
Explanation: To confirm that it works, I'll evaluate the posterior distribution for the possible values of xs and put the results in a Pmf.
End of explanation
"""
def decorate_euro(title):
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', linestyle='dotted')
decorate_euro(title='Posterior distribution of x')
"""
Explanation: And we can compare the posterior distribution we just computed with the results from the grid algorithm.
End of explanation
"""
np.allclose(posterior, posterior_conjugate)
"""
Explanation: They are the same other than small differences due to floating-point approximations.
The examples so far are problems we have already solved, so let's try something new.
End of explanation
"""
from scipy.stats import multinomial
data = 3, 2, 1
n = np.sum(data)
ps = 0.4, 0.3, 0.3
multinomial.pmf(data, n, ps)
"""
Explanation: Lions and Tigers and Bears
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see 3 lions, 2 tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, what is the probability that the next animal we see is a bear?
To answer this question, we'll use the data to estimate the prevalence of each species, that is, what fraction of the animals belong to each species.
If we know the prevalences, we can use the multinomial distribution to compute the probability of the data.
For example, suppose we know that the fraction of lions, tigers, and bears is 0.4, 0.3, and 0.3, respectively.
In that case the probability of the data is:
End of explanation
"""
from scipy.stats import dirichlet
alpha = 1, 2, 3
dist = dirichlet(alpha)
"""
Explanation: Now, we could choose a prior for the prevalences and do a Bayesian update using the multinomial distribution to compute the probability of the data.
But there's an easier way, because the multinomial distribution has a conjugate prior: the Dirichlet distribution.
The Dirichlet Distribution
The Dirichlet distribution is a multivariate distribution, like the multivariate normal distribution we used in <<_MultivariateNormalDistribution>> to describe the distribution of penguin measurements.
In that example, the quantities in the distribution are pairs of flipper length and culmen length, and the parameters of the distribution are a vector of means and a matrix of covariances.
In a Dirichlet distribution, the quantities are vectors of probabilities, $\mathbf{x}$, and the parameter is a vector, $\mathbf{\alpha}$.
An example will make that clearer. SciPy provides a dirichlet object that represents a Dirichlet distribution.
Here's an instance with $\mathbf{\alpha} = 1, 2, 3$.
End of explanation
"""
dist.rvs()
dist.rvs().sum()
"""
Explanation: Since we provided three parameters, the result is a distribution of three variables.
If we draw a random value from this distribution, like this:
End of explanation
"""
sample = dist.rvs(1000)
sample.shape
"""
Explanation: The result is an array of three values.
They are bounded between 0 and 1, and they always add up to 1, so they can be interpreted as the probabilities of a set of outcomes that are mutually exclusive and collectively exhaustive.
Let's see what the distributions of these values look like. I'll draw 1000 random vectors from this distribution, like this:
End of explanation
"""
from empiricaldist import Cdf
cdfs = [Cdf.from_seq(col)
for col in sample.transpose()]
"""
Explanation: The result is an array with 1000 rows and three columns. I'll compute the Cdf of the values in each column.
End of explanation
"""
for i, cdf in enumerate(cdfs):
label = f'Column {i}'
cdf.plot(label=label)
decorate()
"""
Explanation: The result is a list of Cdf objects that represent the marginal distributions of the three variables. Here's what they look like.
End of explanation
"""
def marginal_beta(alpha, i):
"""Compute the ith marginal of a Dirichlet distribution."""
total = np.sum(alpha)
return make_beta(alpha[i], total-alpha[i])
"""
Explanation: Column 0, which corresponds to the lowest parameter, contains the lowest probabilities.
Column 2, which corresponds to the highest parameter, contains the highest probabilities.
As it turns out, these marginal distributions are beta distributions.
The following function takes a sequence of parameters, alpha, and computes the marginal distribution of variable i:
End of explanation
"""
marginals = [marginal_beta(alpha, i)
for i in range(len(alpha))]
"""
Explanation: We can use it to compute the marginal distribution for the three variables.
End of explanation
"""
xs = np.linspace(0, 1, 101)
for i in range(len(alpha)):
label = f'Column {i}'
pmf = pmf_from_dist(marginals[i], xs)
pmf.make_cdf().plot(color='C5')
cdf = cdfs[i]
cdf.plot(label=label, style=':')
decorate()
"""
Explanation: The following plot shows the CDF of these distributions as gray lines and compares them to the CDFs of the samples.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: This confirms that the marginals of the Dirichlet distribution are beta distributions.
And that's useful because the Dirichlet distribution is the conjugate prior for the multinomial likelihood function.
If the prior distribution is Dirichlet with parameter vector alpha and the data is a vector of observations, data, the posterior distribution is Dirichlet with parameter vector alpha + data.
As an exercise at the end of this chapter, you can use this method to solve the Lions and Tigers and Bears problem.
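The update rule itself is one line of array arithmetic. Here's a minimal sketch with hypothetical prior parameters and observed counts (the specific numbers are made up for illustration):

```python
import numpy as np

alpha = np.array([1, 2, 3])     # hypothetical Dirichlet prior parameters
data = np.array([3, 2, 1])      # hypothetical observed counts per category
posterior_alpha = alpha + data  # conjugate update: posterior is Dirichlet(alpha + data)
print(posterior_alpha)          # [4 4 4]
```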
Summary
After reading this chapter, if you feel like you've been tricked, I understand. It turns out that many of the problems in this book can be solved with just a few arithmetic operations. So why did we go to all the trouble of using grid algorithms?
Sadly, there are only a few problems we can solve with conjugate priors; in fact, this chapter includes most of the ones that are useful in practice.
For the vast majority of problems, there is no conjugate prior and no shortcut to compute the posterior distribution.
That's why we need grid algorithms and the methods in the next two chapters, Approximate Bayesian Computation (ABC) and Markov chain Monte Carlo methods (MCMC).
Exercises
Exercise: In the second version of the World Cup problem, the data we use for the update is not the number of goals in a game, but the time until the first goal.
So the probability of the data is given by the exponential distribution rather than the Poisson distribution.
But it turns out that the gamma distribution is also the conjugate prior of the exponential distribution, so there is a simple way to compute this update, too.
The PDF of the exponential distribution is a function of $t$ with $\lambda$ as a parameter.
$$\lambda e^{-\lambda t}$$
Multiply the PDF of the gamma prior by this likelihood, confirm that the result is an unnormalized gamma distribution, and see if you can derive its parameters.
Write a few lines of code to update prior_gamma with the data from this version of the problem, which was a first goal after 11 minutes and a second goal after an additional 12 minutes.
Remember to express these quantities in units of games, which are approximately 90 minutes.
End of explanation
"""
from empiricaldist import Pmf
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
xs = uniform.qs
triangle = Pmf(a, xs, name='triangle')
triangle.normalize()
"""
Explanation: Exercise: For problems like the Euro problem where the likelihood function is binomial, we can do a Bayesian update with just a few arithmetic operations, but only if the prior is a beta distribution.
If we want a uniform prior, we can use a beta distribution with alpha=1 and beta=1.
But what can we do if the prior distribution we want is not a beta distribution?
For example, in <<_TrianglePrior>> we also solved the Euro problem with a triangle prior, which is not a beta distribution.
In these cases, we can often find a beta distribution that is a good-enough approximation for the prior we want.
See if you can find a beta distribution that fits the triangle prior, then update it using update_beta.
Use pmf_from_dist to make a Pmf that approximates the posterior distribution and compare it to the posterior we just computed using a grid algorithm. How big is the largest difference between them?
Here's the triangle prior again.
End of explanation
"""
k, n = 140, 250
likelihood = binom.pmf(k, n, xs)
posterior = triangle * likelihood
posterior.normalize()
"""
Explanation: And here's the update.
End of explanation
"""
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
prior_beta.mean()
"""
Explanation: To get you started, here's the beta distribution that we used as a uniform prior.
End of explanation
"""
prior_pmf = pmf_from_dist(prior_beta, xs)
triangle.plot(label='triangle')
prior_pmf.plot(label='beta')
decorate_euro('Prior distributions')
"""
Explanation: And here's what it looks like compared to the triangle prior.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Now you take it from there.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: 3Blue1Brown is a YouTube channel about math; if you are not already aware of it, I recommend it highly.
In this video the narrator presents this problem:
You are buying a product online and you see three sellers offering the same product at the same price. One of them has a 100% positive rating, but with only 10 reviews. Another has a 96% positive rating with 50 total reviews. And yet another has a 93% positive rating, but with 200 total reviews.
Which one should you buy from?
Let's think about how to model this scenario. Suppose each seller has some unknown probability, x, of providing satisfactory service and getting a positive rating, and we want to choose the seller with the highest value of x.
This is not the only model for this scenario, and it is not necessarily the best. An alternative would be something like item response theory, where sellers have varying ability to provide satisfactory service and customers have varying difficulty of being satisfied.
But the first model has the virtue of simplicity, so let's see where it gets us.
As a prior, I suggest a beta distribution with alpha=8 and beta=2. What does this prior look like and what does it imply about sellers?
Use the data to update the prior for the three sellers and plot the posterior distributions. Which seller has the highest posterior mean?
How confident should we be about our choice? That is, what is the probability that the seller with the highest posterior mean actually has the highest value of x?
Consider a beta prior with alpha=0.7 and beta=0.5. What does this prior look like and what does it imply about sellers?
Run the analysis again with this prior and see what effect it has on the results.
Note: When you evaluate the beta distribution, you should restrict the range of xs so it does not include 0 and 1. When the parameters of the beta distribution are less than 1, the probability density goes to infinity at 0 and 1. From a mathematical point of view, that's not a problem; it is still a proper probability distribution. But from a computational point of view, it means we have to avoid evaluating the PDF at 0 and 1.
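One way to build such a grid is to nudge the endpoints inward (the exact bounds and number of points here are a choice, not prescribed by the exercise):

```python
import numpy as np

# A grid on (0, 1) that excludes the endpoints, so the beta PDF stays finite
xs = np.linspace(0.001, 0.999, 999)
```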
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: Use a Dirichlet prior with parameter vector alpha = [1, 1, 1] to solve the Lions and Tigers and Bears problem:
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see three lions, two tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, estimate the prevalence of each species.
What is the probability that the next animal we see is a bear?
End of explanation
"""
|
andreaaraldo/BROKEN-PJ | anomaly_detection.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
# We resort to a third party library to plot silhouette diagrams
! pip install yellowbrick
from yellowbrick.cluster import SilhouetteVisualizer
"""
Explanation: <a href="https://colab.research.google.com/github/andreaaraldo/BROKEN-PJ/blob/master/anomaly_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
! wget https://datahub.io/machine-learning/creditcard/r/creditcard.csv
df = pd.read_csv('creditcard.csv')
df.head()
df.info(verbose=True)
df['Class'].value_counts()
"""
Explanation: Goal:
Find fraudulent credit card transactions
Dataset:
* From DataHub
* Anonymized transactions
* Features have no precise meaning: obtained via Principal Component Analysis (PCA)
* Ground truth: transactions labeled as normal/anomaly
End of explanation
"""
df = df.drop('Time', axis=1)
"""
Explanation: The anomalies are the minority.
We drop the Time column, since it has no bearing on detecting anomalies here.
End of explanation
"""
X = df.drop('Class', axis=1)
"""
Explanation: In unsupervised approaches, the label is not used
End of explanation
"""
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
"""
Explanation: All the methods we will use, except isolation forests (iForests), perform best if the dataset is scaled
End of explanation
"""
K = 3
model = KMeans(n_clusters=K, random_state=3)
clusters = model.fit_predict(X_scaled)
"""
Explanation: K-means clustering
End of explanation
"""
clusters[0:5]
"""
Explanation: The array clusters contains the cluster id of each sample
End of explanation
"""
plt.hist(clusters)
# Inspired by https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py
fig, (ax1) = plt.subplots()
# The silhouette coefficient can range from -1, 1
ax1.set_xlim([-1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (K + 1) * 10])
"""
Explanation: Check how many elements per cluster
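Besides a histogram, the per-cluster counts can be computed directly. A sketch with a hypothetical assignment array standing in for the real `clusters` output:

```python
import numpy as np

clusters = np.array([0, 2, 1, 0, 0, 2])  # hypothetical fit_predict output
sizes = np.bincount(clusters)            # number of samples per cluster id
print(sizes)                             # [3 1 2]
```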
End of explanation
"""
print("Distances to be computed: ", "{:e}".format( X_scaled.shape[0]**2) )
"""
Explanation: If you try to compute the silhouette score with ordinary sklearn functions, it is extremely slow.
Recall that you need to compute the distances between all samples, i.e.
End of explanation
"""
! wget https://gist.githubusercontent.com/AlexandreAbraham/5544803/raw/221aa797cdbfa9e9f75fc0aabb2322dcc11c8991/unsupervised_alt.py
import unsupervised_alt
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X_scaled, clusters)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X_scaled, clusters)
"""
Explanation: We will thus use an alternative implementation by Alexandre Abraham.
End of explanation
"""
|
eggie5/UCSD-MAS-DSE230 | hmwk1/HW-1.ipynb | mit | import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext()
textRDD = sc.newAPIHadoopFile('Data/Moby-Dick.txt',
'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',
'org.apache.hadoop.io.LongWritable',
'org.apache.hadoop.io.Text',
conf={'textinputformat.record.delimiter': "\r\n\r\n"}) \
.map(lambda x: x[1])
sentences=textRDD.flatMap(lambda x: x.split(". ")).map(lambda x: x.encode('utf-8'))
def find_ngrams(input_list, n):
return zip(*[input_list[i:] for i in range(n)])
import string
replace_punctuation = string.maketrans(string.punctuation, ' '*len(string.punctuation))
sentences.map(lambda x: ' '.join(x.split()).lower())\
.map(lambda x: x.translate(None, string.punctuation))\
.flatMap(lambda x: find_ngrams(x.split(" "), 5))\
.map(lambda x: (x,1))\
.reduceByKey(lambda x,y: x+y)\
.map(lambda x:(x[1],x[0])) \
.sortByKey(False)\
.take(100)
"""
Explanation: HomeWork 1
Unigrams, bigrams, and in general n-grams are 1,2 or n words that appear consecutively in a single sentence. Consider the sentence:
"to know you is to love you."
This sentence contains:
Unigrams (single words): to(2 times), know(1 time), you(2 times), is(1 time), love(1 time)
Bigrams: "to know","know you","you is", "is to","to love", "love you" (all 1 time)
Trigrams: "to know you", "know you is", "you is to", "is to love", "to love you" (all 1 time)
The goal of this HW is to find the most common n-grams in the text of Moby Dick.
Your task is to:
Convert all text to lower case and remove all punctuation. (Finally, the text should contain only letters, numbers and spaces.)
Count the occurrence of each word and of each 2-, 3-, 4- and 5-gram
List the 5 most common elements for each order (word, bigram, trigram, ...). For each element, list the sequence of words and the number of occurrences.
Basically, you need to change all punctuation to a space and define as a word anything that is between whitespace or at the beginning or the end of a sentence and does not consist of whitespace (strings consisting only of whitespace should not be considered words). The important thing here is to be simple, not to be 100% correct in terms of parsing English. Evaluation will be primarily based on identifying the 5 most frequent n-grams in the correct order for all values of n. Some slack will be allowed in the n-gram frequency values to allow flexibility in text processing.
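As a quick sanity check of the n-gram definition, the notebook's `find_ngrams` helper can be run on the example sentence (Python 3 shown, where `zip` returns an iterator, hence the `list()` call):

```python
def find_ngrams(input_list, n):
    return zip(*[input_list[i:] for i in range(n)])

words = "to know you is to love you".split()
bigrams = list(find_ngrams(words, 2))
print(bigrams)  # six bigrams, starting with ('to', 'know')
```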
This text is short enough to process on a single core using standard python. However, you are required to solve it using RDD's for the whole process. At the very end you can use .take(5) to bring the results to the central node for printing.
The code for reading the file and splitting it into sentences is shown below:
End of explanation
"""
def printOutput(n,freq_ngramRDD):
top=freq_ngramRDD.take(5)
print '\n============ %d most frequent %d-grams'%(5,n)
print '\nindex\tcount\tngram'
for i in range(5):
print '%d.\t%d: \t"%s"'%(i+1,top[i][0],' '.join(top[i][1]))
"""
Explanation: Note: For running the file on cluster, change the file path to '/data/Moby-Dick.txt'
Let freq_ngramRDD be the final result RDD containing the n-grams sorted by their frequency in descending order. Use the following function to print your final output:
End of explanation
"""
for n in range(1,6):
# Put your logic for generating the sorted n-gram RDD here and store it in freq_ngramRDD variable
freq_ngramRDD = sentences.map(lambda x: x.lower())\
.map(lambda x: x.translate(replace_punctuation))\
.flatMap(lambda x: find_ngrams(' '.join(x.split()).split(" "), n))\
.map(lambda x: (x,1))\
.reduceByKey(lambda x,y: x+y)\
.map(lambda x:(x[1],x[0])) \
.sortByKey(False)
printOutput(n,freq_ngramRDD)
"""
Explanation: Your output for unigrams should look like:
```
============ 5 most frequent 1-grams
index count ngram
1. 40: "a"
2. 25: "the"
3. 21: "and"
4. 16: "to"
5. 9: "of"
```
Note: This is just a sample output and does not resemble the actual results in any manner.
Your final program should generate an output using the following code:
End of explanation
"""
|
oasis-open/cti-python-stix2 | docs/guide/environment.ipynb | bsd-3-clause | from stix2 import Environment, MemoryStore
env = Environment(store=MemoryStore())
"""
Explanation: Using Environments
An Environment object makes it easier to use STIX 2 content as part of a larger application or ecosystem. It allows you to abstract away the nasty details of sending and receiving STIX data, and to create STIX objects with default values for common properties.
Storing and Retrieving STIX Content
An Environment can be set up with a DataStore if you want to store and retrieve STIX content from the same place.
End of explanation
"""
from stix2 import CompositeDataSource, FileSystemSink, FileSystemSource, MemorySource
src = CompositeDataSource()
src.add_data_sources([MemorySource(), FileSystemSource("/tmp/stix2_source")])
env2 = Environment(source=src,
sink=FileSystemSink("/tmp/stix2_sink"))
"""
Explanation: If desired, you can instead set up an Environment with different data sources and sinks. In the following example we set up an environment that retrieves objects from memory and a directory on the filesystem, and stores objects in a different directory on the filesystem.
End of explanation
"""
from stix2 import Indicator
indicator = Indicator(id="indicator--a740531e-63ff-4e49-a9e1-a0a3eed0e3e7",
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
env.add(indicator)
"""
Explanation: Once you have an Environment you can store some STIX content in its DataSinks with add():
End of explanation
"""
print(env.get("indicator--a740531e-63ff-4e49-a9e1-a0a3eed0e3e7").serialize(pretty=True))
"""
Explanation: You can retrieve STIX objects from the DataSources in the Environment with get(), query(), all_versions(), creator_of(), related_to(), and relationships() just as you would for a DataSource.
End of explanation
"""
from stix2 import Indicator, ObjectFactory
factory = ObjectFactory(created_by_ref="identity--311b2d2d-f010-4473-83ec-1edf84858f4c")
"""
Explanation: Creating STIX Objects With Defaults
To create STIX objects with default values for certain properties, use an ObjectFactory. For instance, say we want all objects we create to have a created_by_ref property pointing to the Identity object representing our organization.
End of explanation
"""
ind = factory.create(Indicator,
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
print(ind.serialize(pretty=True))
"""
Explanation: Once you've set up the ObjectFactory, use its create() method, passing in the class for the type of object you wish to create, followed by the other properties and their values for the object.
End of explanation
"""
factory2 = ObjectFactory(created_by_ref="identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
created="2017-09-25T18:07:46.255472Z")
env2 = Environment(factory=factory2)
ind2 = env2.create(Indicator,
created_by_ref=None,
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
print(ind2.serialize(pretty=True))
ind3 = env2.create(Indicator,
created_by_ref="identity--962cabe5-f7f3-438a-9169-585a8c971d12",
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
print(ind3.serialize(pretty=True))
"""
Explanation: All objects we create with that ObjectFactory will automatically get the default value for created_by_ref. These are the properties for which defaults can be set:
created_by_ref
created
external_references
object_marking_refs
These defaults can be bypassed. For example, say you have an Environment with multiple default values but want to create an object with a different value for created_by_ref, or none at all.
End of explanation
"""
environ = Environment(ObjectFactory(created_by_ref="identity--311b2d2d-f010-4473-83ec-1edf84858f4c"),
MemoryStore())
i = environ.create(Indicator,
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
environ.add(i)
print(environ.get(i.id).serialize(pretty=True))
"""
Explanation: For the full power of the Environment layer, create an Environment with both a DataStore/Source/Sink and an ObjectFactory:
End of explanation
"""
|
donaghhorgan/COMP9033 | labs/04b - Extracting features from text data.ipynb | gpl-3.0 | import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
"""
Explanation: Lab 04b: Extracting text features
Introduction
This lab demonstrates feature extraction with text data. At the end of the lab, you should be able to use pandas and scikit-learn to:
Extract TF-IDF features from text data.
Getting started
Let's start by importing the packages we'll need. As usual, we'll import pandas for exploratory analysis, but this week we're also going to use scikit-learn (sklearn), a modelling and machine learning library for Python.
End of explanation
"""
data_file = 'data/sms.csv'
"""
Explanation: Next, let's load the data. Write the path to your sms.csv file in the cell below:
End of explanation
"""
sms = pd.read_csv(data_file, sep='\t', header=None, names=['label', 'message'])
sms.head()
"""
Explanation: Execute the cell below to load the CSV data into a pandas data frame with the columns label and message.
Note: This week, the CSV file is not comma separated, but instead tab separated. We can tell pandas about the different format using the sep argument, as shown in the cell below. For more information, see the read_csv documentation.
End of explanation
"""
tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(sms['message'])
"""
Explanation: Extracting text features
As can be seen, our data is in the form of raw text. To make it work with machine learning algorithms, we'll need to transform the data into a numerical representation. One popular way to do this with text data is to compute term frequency (TF) and inverse document frequency (IDF) measures:
Term frequency is a measure of how often a given term appears in a given document, e.g. how often the word "free" appears in a given SMS message. The more often a word appears in a document, the higher its term frequency.
Inverse document frequency is a measure of how rare a word is in a set of documents, e.g. the word "the" appears commonly in many SMS messages and so its presence (or absence) provides little information when it comes to distinguishing spam from ham. The higher the inverse document frequency of a word, the rarer it is (and, therefore, the more distinguishing power it has).
Typically, term frequency and inverse document frequency are combined as a single metric, term frequency-inverse document frequency (TF-IDF), which is simply the multiple of the individual values. Consequently, if a term has a high TF-IDF score, its presence across a set of documents (e.g. SMS messages) is low, while its number of occurrences in a given document (e.g. a candidate SMS message under evaluation) is high. If a term has a low TF-IDF score, this is an indicator that it doesn't appear very frequently in a given document, occurs very frequently across the set of documents, or both. We can exploit this information to find terms that can distinguish a certain set of documents (e.g. spam) from a larger set of documents (more on this in later labs!).
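The arithmetic behind the combined score can be sketched by hand. This is a simplified textbook variant — scikit-learn's TfidfVectorizer uses a smoothed IDF and normalization, so its numbers will differ — and the messages below are made up:

```python
import math

docs = ["free prize call now", "see you at lunch", "call me later"]
term = "call"
words = docs[0].split()

tf = words.count(term) / len(words)        # term frequency in message 0
df = sum(term in d.split() for d in docs)  # how many messages contain the term
idf = math.log(len(docs) / df)             # inverse document frequency
tfidf = tf * idf                           # combined TF-IDF score
```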
We can compute the TF-IDF score for each word in each message using the TfidfVectorizer class from scikit-learn:
End of explanation
"""
matrix.shape
"""
Explanation: The resulting matrix has the same number of rows as the input SMS data, but it has thousands of columns - each one corresponding to a new feature:
End of explanation
"""
tfidf.vocabulary_
"""
Explanation: This might seem a bit confusing at first, but it makes sense when you think about it: the rows of the matrix correspond to our original messages, while the columns of the matrix correspond to the words in those messages, and so the values in the cells of the matrix are the TF-IDF scores for each word. As not every word appears in every message, most entries are zero and are not stored explicitly - this is known as a sparse matrix.
We can take a look at the corresponding word feature indices via the vocabulary_ attribute of TfidfVectorizer:
End of explanation
"""
len(tfidf.vocabulary_)
"""
Explanation: As can be seen below, the vocabulary has the same number of items as there are columns in the matrix:
End of explanation
"""
row = 0
col = tfidf.vocabulary_['only']
print('Message: "%s"' % sms.loc[row, 'message'])
print('TF-IDF score: %f' % matrix[row, col])
"""
Explanation: Finally, we can examine the TF-IDF score for any combination of message and word by checking the corresponding entry in the matrix. For instance, to see the TF-IDF score for the word "only" in the first message in our data frame, we can write:
End of explanation
"""
row = 1
col = tfidf.vocabulary_['only']
print('Message: "%s"' % sms.loc[row, 'message'])
print('TF-IDF score: %f' % matrix[row, col])
"""
Explanation: If a word isn't in a message, its TF-IDF score will be zero:
End of explanation
"""
|
MTG/sms-tools | notebooks/E1-Python-and-sounds.ipynb | agpl-3.0 | import sys
import os
import numpy as np
# to use this notebook with colab uncomment the next line
# !git clone https://github.com/MTG/sms-tools.git
# and change the next line to sys.path.append('sms-tools/software/models/')
sys.path.append('../software/models/')
from utilFunctions import wavread, wavwrite
# E1 - 1.1: Complete the read_audio_samples() function
def read_audio_samples(input_file, first_sample=50001, num_samples=10):
"""Read num_samples samples from an audio file starting at sample first_sample
Args:
input_file (str): path of a wav file
Returns:
np.array: numpy array containing the selected samples
"""
### Your code here
"""
Explanation: Exercise 1: Python and sounds
This exercise aims to get familiar with some basic audio operations using Python. There are four parts to it: 1) Reading an audio file, 2) Basic operations with audio, 3) Python array indexing, and 4) Downsampling audio - Changing the sampling rate.
Before doing the exercise, please go through the general information for all the exercises given in README.txt of the notebooks directory.
Relevant concepts
Python: Python is a powerful and easy to learn programming language, which is used in a wide variety of application areas. More information in https://www.python.org/. We will use python in all the exercises and in this first one you will start learning about it by performing some basic operations with sound files.
Jupyter notebooks: Jupyter notebooks are interactive documents containing live code, equations, visualizations and narrative text. More information in https://jupyter.org/. They support Python, and all the exercises here use them.
Wav file: The wav file format is a lossless format to store sounds on a hard drive. Each audio sample is stored as a 16 bit integer number (sometimes also as 24 bit integer or 32 bit float). In this course we will work with only one type of audio files. All the sound files we use in the assignments should be wav files that are mono (one channel), in which the samples are stored in 16 bits, and that use (most of the time) the sampling rate of 44100 Hz. Once read into python, the samples will be converted to floating point values with a range from -1 to 1, resulting in a one-dimensional array of floating point values.
Part 1 - Reading in an audio file
The read_audio_samples() function bellow should read an audio file and return a specified number of consecutive samples of the file starting at a given sample.
The input to the function is the file name (including the path), the location of the first sample, and the number of consecutive samples to take; the output should be a numpy array.
If you use the wavread() function from the utilFunctions module available in the software/models directory, the input samples will be automatically converted to a numpy array of floating point numbers with a range from -1 to 1, which is what we want.
Remember that in python, the index of the first sample of an array is 0 and not 1.
End of explanation
"""
# E1 - 1.2: Call read_audio_samples() with the proposed input sound and default arguments
### Your code here
"""
Explanation: You can use as input the sound files from the sounds directory, thus using a relative path to it. If you run the read_audio_samples() function using the piano.wav sound file as input, with the default arguments, it should return the following samples:
array([-0.06213569, -0.04541154, -0.02734458, -0.0093997, 0.00769066, 0.02319407, 0.03503525, 0.04309214, 0.04626606, 0.0441908], dtype=float32)
End of explanation
"""
# E1 - 2.1: Complete the function min_max_audio()
def min_max_audio(input_file):
"""Compute the minimum and maximum values of the audio samples in the input file
Args:
        input_file (str): file name of the wav file (including path)
Returns:
tuple: minimum and maximum value of the audio samples, like: (min_val, max_val)
"""
### Your code here
"""
Explanation: Part 2 - Basic operations with audio
The function min_max_audio() should read an audio file and return the minimum and maximum values of the audio samples in that file. The input to the function is the wav file name (including the path), and the output should be two floating point values returned as a tuple.
End of explanation
"""
# E1 - 2.2: Plot input sound with x-axis in seconds, and call min_max_audio() with the proposed sound file
### Your code here
"""
Explanation: If you run min_max_audio() using oboe-A4.wav as input, it should return the following output:
(-0.83486432, 0.56501967)
End of explanation
"""
# E1 - 3.1: Complete the function hop_samples()
def hop_samples(x, M):
"""Return every Mth element of the input array
Args:
x(np.array): input numpy array
M(int): hop size (positive integer)
Returns:
np.array: array containing every Mth element in x, starting from the first element in x
"""
### Your code here
"""
Explanation: Part 3 - Python array indexing
The function hop_samples() should, given a numpy array x, return every Mth element of x, starting from the first element. The input arguments to this function are a numpy array x and a positive integer M such that M < the number of elements in x. The output of this function should be a numpy array.
End of explanation
"""
# E1 - 3.2: Plot input array, call hop_samples() with proposed input, and plot output array
### Your code here
"""
Explanation: If you run the functionhop_samples() with x = np.arange(10) and M = 2 as inputs, it should return:
array([0, 2, 4, 6, 8])
End of explanation
"""
# E1 - 4.1: Complete function down_sample_audio()
def down_sample_audio(input_file, M):
"""Downsample by a factor of M the input signal
Args:
input_file(str): file name of the wav file (including path)
M(int): downsampling factor (positive integer)
Returns:
tuple: input samples (np.array), original sampling rate (int), down-sampled signal (np.array),
and new sampling rate (int), like: (x, fs, y, fs_new)
"""
### Your code here
"""
Explanation: Part 4 - Downsampling
One of the required processes to represent an analog signal inside a computer is sampling. The sampling rate is the number of samples obtained in one second when sampling a continuous analog signal to a discrete digital signal. As mentioned we will be working with wav audio files that have a sampling rate of 44100 Hz, which is a typical value. Here you will learn a simple way of changing the original sampling rate of a sound to a lower sampling rate, and will learn the implications it has in the audio quality.
The function down_sample_audio() has as input an audio file with a given sampling rate, it should apply downsampling by a factor of M and return a down-sampled version of the input samples. The sampling rates and downsampling factors to use have to be integer values.
From the output samples if you need to create a wav audio file from an array, you can use the wavwrite() function from the utilFunctions.py module. However, in this exercise there is no need to write an audio file, we will be able to hear the sound without creating a file, just playing the array of samples.
End of explanation
"""
import IPython.display as ipd
import matplotlib.pyplot as plt
# E1 - 4.2: Plot and play input sounds, call the function down_sample_audio() for the two test cases,
# and plot and play the output sounds.
### Your code here
# E1 - 4.3: Explain the results of part 4. What happened to the output signals compared to the input ones?
# Is there a difference between the 2 cases? Why? How could we avoid damaging the signal when downsampling it?
"""
"""
"""
Explanation: Test cases for down_sample_audio():
Test Case 1: Use the file from the sounds directory vibraphone-C6.wav and a downsampling factor of M=14.
Test Case 2: Use the file from the sounds directory sawtooth-440.wav and a downsampling factor of M=14.
To play the output samples, import the IPython.display package and use ipd.display(ipd.Audio(data=y, rate=fs_new)). To visualize the output samples, import the matplotlib.pyplot package and use plt.plot(x).
You can find some related information in https://en.wikipedia.org/wiki/Downsampling_(signal_processing)
End of explanation
"""
|
JanetMatsen/Neo4j_meta4 | jupyter/old/java_calls_from_python_and_plotting.ipynb | gpl-3.0 | import subprocess
import pandas as pd

subprocess.check_output(['echo', 'hello'])
! pwd
! ls -l ../ConnectedComponents.jar
example_result = subprocess.check_output(['java', '-jar', '../ConnectedComponents.jar', '0.03'])
example_result
type(example_result)
print(example_result)
result_string = str(example_result,'utf-8')
type(result_string)
import re
re.findall(r'There are \d+ different connected components for cutoff \d+.\d+', result_string)
"""
Explanation: mv Untitled.jar ConnectedComponents.jar
worked:
badger:Neo4j_meta4 janet$ pwd
/Users/janet/Neo4j_meta4
badger:Neo4j_meta4 janet$ java -jar ConnectedComponents.jar 0.03
End of explanation
"""
example_result = None # wipe out pre-existing if exists
example_result = subprocess.check_output(['java', '-jar', '../ConstructBinaryNetwork.jar', '0.03'])
example_result
def connected_components(cutoff = 0.03):
print('find connected components for edges with magnitude greater than {}'.format(cutoff))
example_result = subprocess.check_output(
['java', '-jar', '../ConnectedComponents.jar', str(cutoff)])
results = str(example_result,'utf-8')
result_sentence = re.findall(r'There are \d+ different connected '
'components for cutoff \d+.\d+', results)[0]
print(result_sentence)
cc = re.findall('(\d+) different', result_sentence)
cutoff = re.findall('for cutoff (\d+.\d+)', result_sentence)
return {'cutoff': cutoff, 'connected components':cc}
connected_components()
connected_components(0.05)
results = pd.DataFrame()
#import pdb; pdb.set_trace()
for i in [0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07]:
#result = pd.DataFrame({'connected components': ['6'], 'cutoff': [i]})
#print(result)
result = pd.DataFrame(connected_components(i))
results = pd.concat([results, result], axis=0)
print(results)
results
"""
Explanation: TODO: wipe the database before each call.
End of explanation
"""
|
ajdawson/python_for_climate_scientists | course_content/notebooks/object_oriented_programming.ipynb | gpl-3.0 | class A(object):
pass
"""
Explanation: Object Oriented Programming
What is an Object?
First some semantics:
- An object is essentially a container which holds some data, and crucially some associated methods for working with that data.
- We define objects, and their behaviours, using something called a class.
- We create objects by instantiating classes, so objects are instances of classes.
Note, these are very similar to structures, with associated functions attached.
Why do we need objects?
This is all very nice, but why bother with the overhead and confusion of objects and classes? People have been working with functional programs for decades and they seem to work!
A few core ideas:
Modularity
Separation of concerns
Abstraction over complex mechanisms
We've used a lot of objects already!
Most of the code we've been using already has made heavy use of object-oriented programming:
NumPy arrays are objects (with attributes like shape and methods like mean())
Iris cubes are objects
CIS datasets are objects
Matplotlib axes/figures/lines etc. are all objects
Object-Oriented Programming in Python
In many languages we're forced into using classes and objects for everything (e.g. Java and C#), but some languages don't support objects at all (e.g. R and Fortran 77).
In python we have (in my opinion) a nice half-way house, we have a full OO implementation when we need it (including multiple inheritance, abstract classes etc), but we can use functional code when it's more desirable to do so.
Defining a class in Python is easy:
End of explanation
"""
a_object = A()
print(type(a_object))
"""
Explanation: Note the reference to object; this means that our new class inherits from object. We won't be going into too much detail about inheritance, but for now you should always inherit from object when defining a class.
Once a class is defined you can create an instance of that class, which is an object. In Python we do this by calling the class name as if it were a function:
End of explanation
"""
class B(object):
value = 1
"""
Explanation: A class can store some data (after all, an empty class isn't very interesting!):
End of explanation
"""
b_object = B()
print(b_object.value)
"""
Explanation: We can access variables stored in a class by writing the name of the instance followed by a dot and then the name of the variable:
End of explanation
"""
class B(object):
value = 1
def show_value(self):
print('self.value is {}'.format(self.value))
"""
Explanation: Classes can also contain functions. Functions attached to classes are called methods:
End of explanation
"""
b1 = B()
b1.show_value()
b1.value = 999
b1.show_value()
"""
Explanation: The first argument to every method automatically refers to the object we're calling the method on, by convention we call that argument self.
End of explanation
"""
class C(object):
def __init__(self, value):
self.var = value
"""
Explanation: Notice we don't have to pass the self argument, Python's object system does this for you.
Some methods are called special methods. Their names start and end with a double underscore. A particularly useful special method is __init__, which initializes an object.
End of explanation
"""
c1 = C("Python!")
c2 = C("Hello")
print(c1.var)
print(c2.var)
"""
Explanation: The __init__ method is called when we create an instance of a class. Now when we call the class name we can pass the arguments required by __init__:
End of explanation
"""
class Counter(object):
def __init__(self, start=0):
self.value = start
def increment(self):
self.value += 1
counter1 = Counter()
print(counter1.value)
counter1.increment()
print(counter1.value)
counter2 = Counter(start=10)
counter2.increment()
counter2.increment()
print(counter2.value)
"""
Explanation: Methods on an object have access to the variables defined on the object:
End of explanation
"""
|
arongdari/almc | notebooks/Growth_Rate_of_Knowledge_Graph.ipynb | gpl-2.0 | def construct_freebase(shuffle = True):
e_file = '../data/freebase/entities.txt'
r_file = '../data/freebase/relations.txt'
datafile = '../data/freebase/train_single_relation.txt'
with open(e_file, 'r') as f:
e_list = [line.strip() for line in f.readlines()]
with open(r_file, 'r') as f:
r_list = [line.strip() for line in f.readlines()]
n_e = len(e_list) # number of entities
n_r = len(r_list) # number of relations
if shuffle:
np.random.shuffle(e_list)
np.random.shuffle(r_list)
entities = {e_list[i]:i for i in range(n_e)}
relations = {r_list[i]:i for i in range(n_r)}
row_list = defaultdict(list)
col_list = defaultdict(list)
with open(datafile, 'r') as f:
for line in f.readlines():
start, relation, end = line.split('\t')
rel_no = relations[relation.strip()]
en1_no = entities[start.strip()]
en2_no = entities[end.strip()]
row_list[rel_no].append(en1_no)
col_list[rel_no].append(en2_no)
rowT = list()
colT = list()
for k in range(n_r):
mat = csr_matrix((np.ones(len(row_list[k])), (row_list[k], col_list[k])), shape=(n_e, n_e))
rowT.append(mat)
mat = csc_matrix((np.ones(len(row_list[k])), (row_list[k], col_list[k])), shape=(n_e, n_e))
colT.append(mat)
return n_e, n_r, rowT, colT
"""
Explanation: Load Freebase Datafile
Construct a tensor (a list of sparse matrices, where each matrix represents a certain relation between entities) from the triple dataset.
Maintaining the same tensor as both a collection of csr matrices and a collection of csc matrices helps to optimise time complexity.
End of explanation
"""
n_triple = defaultdict(list)
n_sample = 10 # repeat counting n_sample times
for s in range(n_sample):
tic = time.time()
n_triple[0].append(0)
n_e, n_r, _rowT, _colT = construct_freebase()
for i in range(1, n_e):
# counting triples by expanding tensor
cnt = 0
for k in range(n_r):
cnt += _rowT[k].getrow(i)[:,:i].nnz
cnt += _colT[k].getcol(i)[:i-1,:].nnz
n_triple[i].append(n_triple[i-1][-1] + cnt)
print(time.time()-tic)
avg_cnt = [np.mean(n_triple[i]) for i in range(n_e)]
"""
Explanation: Growth of the number of triples with respect to the number of entities
First, we will see how the number of triples changes as we randomly add entities into the tensor, starting from zero entities.
End of explanation
"""
print(n_e**2*n_r)
plt.figure(figsize=(8,6))
plt.plot(avg_cnt)
plt.title('# of entities vs # of triples')
plt.xlabel('# entities')
plt.ylabel('# triples')
import pickle
pickle.dump(n_triple, open('growth_freebase.pkl', 'wb'))
"""
Explanation: Size of tensor:
End of explanation
"""
|
ELind77/gensim | docs/notebooks/sklearn_wrapper.ipynb | lgpl-2.1 | from gensim.sklearn_integration import SklLdaModel
"""
Explanation: Using wrappers for Scikit learn API
This tutorial is about using gensim models as a part of your scikit learn workflow with the help of wrappers found at gensim.sklearn_integration
The wrappers available (as of now) are:
* LdaModel (gensim.sklearn_integration.sklearn_wrapper_gensim_ldamodel.SklLdaModel), which implements gensim's LDA Model in a scikit-learn interface
LsiModel (gensim.sklearn_integration.sklearn_wrapper_gensim_lsimodel.SklLsiModel), which implements gensim's LSI Model in a scikit-learn interface
RpModel (gensim.sklearn_integration.sklearn_wrapper_gensim_rpmodel.SklRpModel), which implements gensim's Random Projections Model in a scikit-learn interface
LDASeq Model (gensim.sklearn_integration.sklearn_wrapper_gensim_ldaseqmodel.SklLdaSeqModel), which implements gensim's LdaSeqModel in a scikit-learn interface
LDA Model
To use LdaModel begin with importing LdaModel wrapper
End of explanation
"""
from gensim.corpora import Dictionary
texts = [
['complier', 'system', 'computer'],
['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
['graph', 'flow', 'network', 'graph'],
['loading', 'computer', 'system'],
['user', 'server', 'system'],
['tree', 'hamiltonian'],
['graph', 'trees'],
['computer', 'kernel', 'malfunction', 'computer'],
['server', 'system', 'computer']
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Next we will create a dummy set of texts and convert it into a corpus
End of explanation
"""
model = SklLdaModel(num_topics=2, id2word=dictionary, iterations=20, random_state=1)
model.fit(corpus)
model.transform(corpus)
"""
Explanation: Then to run the LdaModel on it
End of explanation
"""
import numpy as np
from gensim import matutils
from gensim.models.ldamodel import LdaModel
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from gensim.sklearn_integration.sklearn_wrapper_gensim_ldamodel import SklLdaModel
rand = np.random.mtrand.RandomState(1) # set seed for getting same result
cats = ['rec.sport.baseball', 'sci.crypt']
data = fetch_20newsgroups(subset='train', categories=cats, shuffle=True)
"""
Explanation: Integration with Sklearn
To provide a better example of how it can be used with sklearn, let's use the CountVectorizer method of sklearn. For this example we will use the 20 Newsgroups data set. We will only use the categories rec.sport.baseball and sci.crypt and use them to generate topics.
End of explanation
"""
vec = CountVectorizer(min_df=10, stop_words='english')
X = vec.fit_transform(data.data)
vocab = vec.get_feature_names() # vocab to be converted to id2word
id2word = dict([(i, s) for i, s in enumerate(vocab)])
"""
Explanation: Next, we use CountVectorizer to convert the collection of text documents to a matrix of token counts.
End of explanation
"""
obj = SklLdaModel(id2word=id2word, num_topics=5, iterations=20)
lda = obj.fit(X)
"""
Explanation: Next, we just need to fit X and id2word to our Lda wrapper.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
from gensim.models.coherencemodel import CoherenceModel
def scorer(estimator, X, y=None):
goodcm = CoherenceModel(model=estimator.gensim_model, texts= texts, dictionary=estimator.gensim_model.id2word, coherence='c_v')
return goodcm.get_coherence()
obj = SklLdaModel(id2word=dictionary, num_topics=5, iterations=20)
parameters = {'num_topics': (2, 3, 5, 10), 'iterations': (1, 20, 50)}
model = GridSearchCV(obj, parameters, scoring=scorer, cv=5)
model.fit(corpus)
model.best_params_
"""
Explanation: Example for Using Grid Search
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn import linear_model
def print_features_pipe(clf, vocab, n=10):
''' Better printing for sorted list '''
coef = clf.named_steps['classifier'].coef_[0]
    print(coef)
    print('Positive features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[::-1][:n] if coef[j] > 0])))
    print('Negative features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[:n] if coef[j] < 0])))
id2word = Dictionary([_.split() for _ in data.data])
corpus = [id2word.doc2bow(i.split()) for i in data.data]
model = SklLdaModel(num_topics=15, id2word=id2word, iterations=10, random_state=37)
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline((('features', model,), ('classifier', clf)))
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
"""
Explanation: Example of Using Pipeline
End of explanation
"""
from gensim.sklearn_integration import SklLsiModel
"""
Explanation: LSI Model
To use LsiModel begin with importing LsiModel wrapper
End of explanation
"""
model = SklLsiModel(num_topics=15, id2word=id2word)
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline((('features', model,), ('classifier', clf)))
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
"""
Explanation: Example of Using Pipeline
End of explanation
"""
from gensim.sklearn_integration import SklRpModel
"""
Explanation: Random Projections Model
To use RpModel begin with importing RpModel wrapper
End of explanation
"""
model = SklRpModel(num_topics=2)
np.random.seed(1)  # set seed for getting same result
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline((('features', model,), ('classifier', clf)))
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
"""
Explanation: Example of Using Pipeline
End of explanation
"""
from gensim.sklearn_integration import SklLdaSeqModel
"""
Explanation: LDASeq Model
To use LdaSeqModel begin with importing LdaSeqModel wrapper
End of explanation
"""
test_data = data.data[0:2]
test_target = data.target[0:2]
id2word = Dictionary(map(lambda x: x.split(), test_data))
corpus = [id2word.doc2bow(i.split()) for i in test_data]
model = SklLdaSeqModel(id2word=id2word, num_topics=2, time_slice=[1, 1, 1], initialize='gensim')
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline((('features', model,), ('classifier', clf)))
pipe.fit(corpus, test_target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, test_target))
"""
Explanation: Example of Using Pipeline
End of explanation
"""
|
JoseGuzman/myIPythonNotebooks | MachineLearning/NaiveBayesanClassifier.ipynb | gpl-2.0 | %pylab inline
import pandas as pd
# first row contains units
df = pd.read_excel(io='../data/Cell_types.xlsx', sheetname='PFC', skiprows=1)
del df['CellID'] # remove column with cell IDs
df.head() # show first elements
"""
Explanation: <H1> Naive Bayesian classifier</H1>
<H2>Bayesian theorem</H2>
We will try to compute the probability of having a type of Strain $P(Y=y)$ given a feature vector X (i.e. a vector containing the Input resistance, sag ratio, etc...). We will use the “naive” assumption of independence between every pair of features.
Given a class variable $Y$ and a dependent feature vector $X_1$ through $X_n$, Bayes’ theorem states the following relationship:
$$P(Y | X_i) = \frac{P(Y) P(X_i|Y)}{P(X_i)}$$
End of explanation
"""
pd.Categorical(df.Strain).codes # CB57BL is zero, GAD67 is one
df['Gender'] = pd.Categorical(df.Gender).codes
df['Strain'] = pd.Categorical(df.Strain).codes
df.head()
df.shape # as with NumPy the number of rows first
df.iloc[[0]].values[0] # get a row as NumPy array
# create X and Y
Y = df['Strain'].values
del df['Strain'] # remove Strain
X = [ df.iloc[[i]].values[0] for i in range(df.shape[0]) ]
len(X)==len(Y)
X[0] # data from CB57BL
"""
Explanation: We use pandas to split up the matrix into the feature vectors we're interested in. We will also convert textual category data (Strain, Gender) into an ordinal number that we can work with.
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
myclassifier = GaussianNB()
myclassifier.fit(X,Y)
df.iloc[[-2]].values # this is a GAD67 mice
df.iloc[[-1]].values # this is a GAD67 mice
"""
Explanation: <H2> Gaussian naive Bayesian classifier </H2>
End of explanation
"""
def predict(idx):
if myclassifier.predict( X[idx]):
print('Cell %2d is GAD67 mice'%idx)
else:
print('Cell %2d is CB57BL mice'%idx)
# test with training data (similar to myclassifier.score(X,Y) )
for i in range(df.shape[0]):
predict(i)
"""
Explanation: We now test the classifier with the training data
End of explanation
"""
d = np.array([[ -75.50, 98.25, 1.49, 24.75, 90. ,21.5, 24.5 ,60, 1, 85.95, -48.6,
430.95, 0.5, 385.55]])
test_df = pd.DataFrame(d, columns=df.columns)
test_df
if myclassifier.predict( test_df.iloc[[0]].values[0]):
print('Test is GAD67 mice')
else:
print('Test is CB57BL mice')
"""
Explanation: We test with some fictitious data
End of explanation
"""
|
tensorflow/docs | site/en/tutorials/images/cnn.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
"""
Explanation: Convolutional Neural Network (CNN)
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/cnn">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images. Because this tutorial uses the Keras Sequential API, creating and training your model will take just a few lines of code.
Import TensorFlow
End of explanation
"""
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
"""
Explanation: Download and prepare the CIFAR10 dataset
The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.
End of explanation
"""
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
# The CIFAR labels happen to be arrays,
# which is why you need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
"""
Explanation: Verify the data
To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image:
End of explanation
"""
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
"""
Explanation: Create the convolutional base
The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.
As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure your CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to your first layer.
End of explanation
"""
model.summary()
"""
Explanation: Let's display the architecture of your model so far:
End of explanation
"""
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
"""
Explanation: Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.
Add Dense layers on top
To complete the model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) into one or more Dense layers to perform classification. Dense layers take vectors as input (which are 1D), while the current output is a 3D tensor. First, you will flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs.
End of explanation
"""
model.summary()
"""
Explanation: Here's the complete architecture of your model:
End of explanation
"""
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
validation_data=(test_images, test_labels))
"""
Explanation: The network summary shows that (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.
Compile and train the model
End of explanation
"""
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
"""
Explanation: Evaluate the model
End of explanation
"""
|
mjbrodzik/ipython_notebooks | charis/Calculate_ti_model_overall_variability.ipynb | apache-2.0 | from __future__ import print_function
%pylab notebook
# import datetime as dt
import glob
import matplotlib.pyplot as plt
#import matplotlib.dates as md
#from nose.tools import set_trace
import pandas as pd
import re
import seaborn as sns
import os
import sys
sns.set()
sns.axes_style("darkgrid")
"""
Explanation: Using the 20 best SA models, plot overall variability in melt data that we generated
End of explanation
"""
dir = "/Users/brodzik/projects/CHARIS/derived_hypsometries"
drainageIDs = ["IN_Hunza_at_DainyorBridge",
"AM_Vakhsh_at_Komsomolabad",
"SY_Naryn_at_NarynTown",
"GA_SaptaKosi_at_Chatara",
"GA_Karnali_at_Benighat"]
alldf = pd.DataFrame([])
for drainageID in drainageIDs:
#file = "%s/REECv0_CycleSummary/%s.annual_melt.last20.dat" % (dir, drainageID)
#print("last20 file %s" % file, file=sys.stderr)
file = "%s/REECv0_ModelRankSummary/%s.annual_melt.best20.dat" % (dir, drainageID)
print("best20 file %s" % file, file=sys.stderr)
df = pd.read_pickle(file)
melt = df.copy()
melt.drop(['Snow_on_land_min_ddf','Snow_on_land_max_ddf',
'Snow_on_ice_min_ddf','Snow_on_ice_max_ddf',
'Exposed_glacier_ice_min_ddf','Exposed_glacier_ice_max_ddf'], axis=1, inplace=True)
# This idiotic step is necessary for seaborn to work in the plots
melt["Snow_on_land_melt_km3"] = melt["Snow_on_land_melt_km3"].astype(float)
melt["Snow_on_ice_melt_km3"] = melt["Snow_on_ice_melt_km3"].astype(float)
melt["Exposed_glacier_ice_melt_km3"] = melt["Exposed_glacier_ice_melt_km3"].astype(float)
alldf = alldf.append(melt)
alldf["ID"] = alldf.drainageID.str.extract(r"_(.+)_at")
alldf
fig, axes = plt.subplots(3, 1, figsize=(7,10))
alldf.boxplot(ax=axes[0],
column='Snow_on_ice_melt_km3',
by='ID',
rot=0)
axes[0].set_title("Melt from Snow on Ice")
alldf.boxplot(ax=axes[1],
column='Exposed_glacier_ice_melt_km3',
by='ID',
rot=0)
axes[1].set_title("Melt from Exposed Glacier Ice")
alldf.boxplot(ax=axes[2],
column='Snow_on_land_melt_km3',
by='ID',
rot=0)
axes[2].set_title("Melt from Snow on Land")
for ax in axes:
ax.set_ylabel('Melt ($km^3$)')
fig.suptitle("Variability in Melt from 20 Best Models (2001-2014)")
fig.tight_layout()
fig.subplots_adjust(top=0.95)
fig, axes = plt.subplots(3, 1, figsize=(7,10))
soi_color = '#%02x%02x%02x' % (141, 160, 203)
egi_color = '#%02x%02x%02x' % (252, 141, 98)
sol_color = '#%02x%02x%02x' % (102, 194, 165)
order=['Naryn','Vakhsh','Hunza','Karnali','SaptaKosi']
axes[0] = sns.boxplot(ax=axes[0],
x='ID',
y='Snow_on_ice_melt_km3',
order=order,
color=soi_color,
data=alldf)
axes[0].set_title("Melt from Snow on Ice")
axes[0].set_xlabel("")
#axes[0].set_xticklabels([])
axes[1] = sns.boxplot(ax=axes[1],
x='ID',
y='Exposed_glacier_ice_melt_km3',
order=order,
color=egi_color,
data=alldf)
axes[1].set_title("Melt from Exposed Glacier Ice")
axes[1].set_xlabel("")
#axes[1].set_xticklabels([])
axes[2] = sns.boxplot(ax=axes[2],
x='ID',
y='Snow_on_land_melt_km3',
order=order,
color=sol_color,
data=alldf)
axes[2].set_title("Melt from Snow on Land")
#axes[2].set_xticklabels(['Naryn (SY)','Vakhsh (AM)','Hunza (IN)','Karnali (GA)','SaptaKosi (BR)'])
axes[2].set_xlabel('Calibration Basin (Used for Major Basin)')
#ymax = 1.1 * alldf[['Snow_on_land_melt_km3', 'Snow_on_ice_melt_km3', 'Exposed_glacier_ice_melt_km3']].max().max()
for ax in axes:
ax.set_ylabel('Melt ($km^3$)')
# ax.set_ylim([0., ymax])
fig.suptitle("Variability in Melt from 20 Best Models (2001-2014)")
fig.tight_layout()
fig.subplots_adjust(top=0.93)
"""
Explanation: Make a plot of overall variability by basin and surface type
End of explanation
"""
test = alldf.copy()
#test.drop(['year','cycle','drainageID'],inplace=True,axis=1)
test.drop(['year','rank','drainageID'],inplace=True,axis=1)
test.set_index('ID', inplace=True)
multicol = pd.MultiIndex.from_tuples([('Melt', 'Snow on land melt'),
('Melt', 'Snow on ice melt'),
('Melt', 'Exposed glacier ice melt')])
test.columns = multicol
test = test.stack()
test = test.reset_index()
test.columns = ['ID', 'Surface', 'Melt']
test
plt.rcParams
params = {'legend.fontsize': 14,
'legend.handlelength': 2}
plt.rcParams.update(params)
fig, ax = plt.subplots(figsize=(9,6))
my_palette = {"Snow on land melt": sol_color,
"Exposed glacier ice melt": egi_color,
"Snow on ice melt": soi_color}
order=['Naryn','Vakhsh','Hunza','Karnali','SaptaKosi']
sns.boxplot(ax=ax,
x="ID",
hue="Surface",
y="Melt",
order=order,
data=test,
palette=my_palette,
width=0.6)
ax.set_ylabel('Melt ($km^3$)')
ax.set_title("Variability in Melt from 20 Best Models (2001-2014)")
ax.set_xticklabels(['Naryn (SY)','Vakhsh (AM)','Hunza (IN)','Karnali (GA)','SaptaKosi (BR)'])
ax.set_xlabel('Calibration Basin (Used for Major Basin)')
for item in ([ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(14)
ax.title.set_fontsize(20)
lg = ax.legend(title="Surface", fontsize=14)
title = lg.get_title()
title.set_fontsize(14)
file = "%s/REECv0_ModelRankSummary/Calibration_basins.model_variability.v2.pdf" % (dir)
fig.savefig(file)
"""
Explanation: How to combine all 3 columns of data into a single melt column with another column as a label
End of explanation
"""
|
uliang/First-steps-with-the-Python-language | Day 1 - Unit 1.3.ipynb | mit | # Our first function
def my_first_function():
pass
"""
Explanation: 9. Python functions
Sometimes, a portion of code is reused over and over again in the entire script. To prevent repetitive coding, we are able to define our own custom functions using the def keyword. When invoked, functions will instruct the computer to perform a set list of instructions, possibly returning an output at the end. Functions can also be pre-defined and saved in another file (with extension .py) so that they can be used for another project.
You will have encountered many pre-defined functions in Python up to now. Functions like print, len, etc... But there are many other pre-built functions in Python which are extremely useful and makes code more transparent and readable.
We will also encounter what is known as anonymous functions, which are defined using the lambda keyword. These are short functions and can only be defined in one line of code. While technically def does what lambda does, lambda allows for a less pedantic and more natural style of coding. You will encounter anonymous functions a lot when working with pandas, especially when cleaning up data frames.
Finally, we introduce what is perhaps one of the more important concepts in Python programming: list comprehensions. These are syntactical shortcuts for the for loop which translate not only to better code style, but also faster scripts.
9.1 Learning objectives
The objectives of this unit are:
To use the def keyword to define functions.
Recall the terms arguments, keyword arguments, the signature of a function.
To use lambda to define anonymous functions.
To use built-in functions: zip and enumerate.
To learn how to refactor for loops into list comprehension statements.
9.2 Let's build our own functions
All functions are defined using the def keyword. In general, the format of function definition looks like this:
def my_function_name(arg_1, arg_2, ..., arg_n) :
code
return <something, or None>
First, we tell Python that we are going to define a function by typing out def. Then we proceed by naming our function. The normal rules for naming variables apply to naming functions as well - one cannot start a name with a number or use special characters or use words which have been reserved for Python.
Then after the name, we describe the signature of the function by writing down every argument to the function separated by commas and enclosed in round braces ( ). An argument to a function is an input to the code which will be executed when the function is called. It is not mandatory that a function receive inputs. Sometimes, a function just needs to run a set of instructions, without any input. We end the def statement with a :. The newline under this marks the beginning of the function code.
Every line of code meant for the function must be indented. There are no enclosing { } which mark the "body" of the function. In Python, the "body" of the function is denoted with indentation only. Thus, every line of code meant for the function must be on the same indentation level. Finally, at the end of the function we return an output or None. While this last statement is not mandatory, it is good practice to end every function definition with a return statement.
End of explanation
"""
def my_first_function():
print("Hello world!")
"""
Explanation: For our first function, my_first_function takes no input. Had we wanted a stub that does nothing yet, we could have written only the pass keyword in its body — pass is a temporary placeholder that does nothing, and we would need it because one cannot leave a function "body" without any code at all.
Here, we have already coded something into my_first_function so that it does something useful.
End of explanation
"""
my_first_function
"""
Explanation: my_first_function will print the string "Hello world!" whenever it is called. Calling a function basically means instructing Python to run the code contained in the function. Notice that after defining a function and running the cell, there is no output. But that doesn't mean nothing has happened. In fact, Python has populated the global namespace with a new name, my_first_function and is ready to do what ever has been coded into this function when it is called.
End of explanation
"""
my_first_function()
"""
Explanation: It is good to understand what happens when we type my_first_function and execute a cell. Notice that the output says <function ... This means that the variable my_first_function represents an object of type function. The rest of the output indicates that this function is represented by the name my_first_function in the module __main__. We will not describe what modules are in this course, but it suffices for our purposes to think of __main__ as a file containing all the functions that we will define in this Jupyter Notebook session.
To actually execute the instructions in my_first_function, we must type my_first_function().
End of explanation
"""
def my_first_function(name):
print("Hello %s" % (name))
return None
my_first_function("Tang U-Liang")
# Passing two arguments
def special_product(x,y):
prod = x-y+x*y
return prod
"""
Explanation: 9.3 Functions with arguments
Functions won't be very useful if we are unable to pass input into them. Most of the time, the set of instructions will act on the input we have supplied to the function and produce some output, which is then passed to a variable to be stored. Let's modify my_first_function to print out a name supplied as input to it.
End of explanation
"""
answer = special_product(1,3)
print(answer)
"""
Explanation: When defining functions with arguments, the same variable name used in the signature must be used in the body of the function. Now there is nothing inherently special about using name to represent the argument for names to my_first_function. After all, the computer doesn't "understand" that we intend to print out a name when calling my_first_function. However, we should use recognizable variable names to improve readability of our code and to make our intentions transparent.
In the function special_product, I passed two arguments named x and y. Inside the function, it performs the operation and assigns the result to a variable named prod. Then the function uses the keyword return to send the answer out from the function environment to the global environment.
End of explanation
"""
special_product(1,3)
"""
Explanation: What happened is that the function special_product performs the said operation on inputs 1 and 3. It then outputs the answer, in this case 1. We assign the output 1 to a variable named answer and print it.
Note that we do not need to explicitly declare a variable to "capture" the answer. The following works too.
End of explanation
"""
# x = 1, and y =3
print(special_product(1,3))
# x =3 and y = 1
print(special_product(3,1))
"""
Explanation: Passing arguments in the correct sequence matters. Python will pass values to arguments according to the sequence in which they were declared in the signature.
End of explanation
"""
print(prod)
"""
Explanation: What will happen if we try to display the variable prod directly?
End of explanation
"""
special_product(1,)
"""
Explanation: 9.3.1 Function scope
But isn't the variable prod defined already when we defined the function special_product? This happens because the variable prod is only available in the scope of the function. The global environment is another scope. In general, variables from one scope are not accessible in another scope, with exceptions given by scoping rules (which are programming-language dependent). For this course, it suffices to know that variables defined in the function scope will NOT be accessible from the global scope.
(This can be overridden using the global keyword. But this is not encouraged.)
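To see this scoping rule in action, here is a minimal sketch (the function and variable names here are hypothetical, not from the notebook):

```python
def make_value():
    prod = 42  # `prod` lives only inside this function's scope
    return prod

result = make_value()   # the *value* leaves the function via return
print(result)           # 42
# print(prod)           # would raise NameError: `prod` is not defined in the global scope
```

The value 42 escapes the function through return, but the name prod itself never does.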
9.4 Function defaults
When we call a function, every argument declared in the signature must be supplied a value. We cannot leave any out.
End of explanation
"""
def special_product(x, y=1): # default value of y is 1
return x-y+x*y
# We don't have to pass any value to arguments with default values
print(special_product(2))
# Default values can be overriden
print(special_product(2,9))
"""
Explanation: Therefore, it becomes quite a hassle if we have to call the function in various places in our code with the same input for one of the arguments. To avoid that, we can assign default values to particular arguments in the following manner.
def my_function(arg_1, arg_2 = default_value, ...):
code
Note that arguments assigned default values must come after arguments without default values. Also, don't worry that you cannot input values other than defaults. You are still able to override default values when you need to.
End of explanation
"""
special_product(1,3) == special_product(3,1)
special_product(x=1, y=3) == special_product( y=3, x=1)
"""
Explanation: 9.5 Passing arguments to functions by keyword
The arguments to a function have names, just as variables have names. The names of arguments to a function are called keywords. We can pass arguments to functions by assigning values explicitly to keywords like so:
my_function(keyword_1 = value_1, keyword_2 = value_2,...)
This gives enormous flexibility in using Python interactively. Most functions in the matplotlib and seaborn libraries have many arguments, almost all of which have default values. However, we often use only a few of these keywords, and it is quite a pain to remember the exact sequence of arguments in the function signature. Passing arguments by keyword allows us to pass arguments in any order convenient to us.
End of explanation
"""
import math
def is_prime(p):
"""
This function determines if p is prime or not.
Returns:
bool, True if p is prime.
"""
m = int(math.floor(math.sqrt(p)))
for d in range(2, m+1):
if p%d == 0:
return False
return True
for p in range(2, 101):
if is_prime(p):
print(p)
"""
Explanation: 9.5.1 An application: A function to search for primes
To end this section, below is a function to determine whether a number is prime or not. We use this to refactor our prime listing code from the previous unit.
End of explanation
"""
def my_first_function(name):
print("Hello %s" % (name))
"""
Explanation: 10. Lambda expressions
Lambda expressions are used to define short functions that may be written in one line of code. This is more than just a convenience. Most arguments to pandas and seaborn functions are intended to take in callables (functions) and lambda expressions provide a good syntactical way of passing functions as arguments to other functions.
Recall our definition of my_first_function:
End of explanation
"""
printer = lambda name: print("Hello %s" % (name))
"""
Explanation: Notice that this function essentially consists of one line, namely the print statement. Using lambda expressions, this can be shortened to:
End of explanation
"""
printer
"""
Explanation: We use the lambda keyword to define lambda expressions. After lambda we type in the arguments to the function but without enclosing them in ( ). All arguments must be separated by commas. Once that is done, type a : and follow it with one line of code which does whatever you want it to do. In this case here, I simply want to print a name. Functionally, this lambda expression is equivalent to my_first_function. However, as you can see below, they are different objects.
End of explanation
"""
printer("Joe")
"""
Explanation: Notice that printer is of class function but is given a name <lambda>. However, we can call printer just as we called my_first_function, by passing arguments to it.
End of explanation
"""
special_product = lambda x, y: x-y+x*y
special_product(10,9)
"""
Explanation: Lambda expressions can take on more than one argument. Here is the function special_product refactored as a lambda expression.
End of explanation
"""
import pandas as pd # Importing the pandas library
wine = pd.read_csv("winequality-red.csv", sep=';')
wine.sample(5)
"""
Explanation: Notice that I did not need to put a return to indicate which output to pass to the global environment. This is because lambda expressions are meant to be written in one line, hence it is understood that the result of that single line of code is the output.
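Another place where such one-liners shine, beyond the pandas use case below, is as the key argument to sorting functions. A small sketch (the data here is made up):

```python
people = [("Ada", 36), ("Grace", 45), ("Alan", 41)]

# Sort by age: the lambda extracts the second element of each tuple
by_age = sorted(people, key=lambda person: person[1])
print(by_age)  # [('Ada', 36), ('Alan', 41), ('Grace', 45)]
```

Defining a named one-line function just to extract person[1] would clutter the namespace; the anonymous lambda says the same thing inline.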
10.1 Use cases
Lambda expressions are also known as anonymous functions because we rarely assign lambda expressions to variables. Instead, they are passed directly to keywords or as arguments to most pandas functions or methods. Here is an example to how this is used in a pandas dataframe.
In what follows, we intend to calculate the ratio of sulphates to alchohol content for each sample (row) and assign it as a new column to the data frame. The data frame is displayed below and has been assigned to variable named wine.
End of explanation
"""
def ratio(df):
""" This function calculates the ratio of sulphates to alcohol content in the wine dataframe
Returns
Series, shape (n_samples, ) Array containing the ratio of sulphate to alcohol content for each sample
"""
ratio_col = df.sulphates/df.alcohol
return ratio_col
(wine.assign(ratio_sul_to_alc=ratio)
.head(5))
"""
Explanation: Here's how this could be achieved. We first define the function that calculates the ratio and then proceed to create the new calculated column.
End of explanation
"""
(wine.assign(ratio_sul_to_alc=lambda df: df.sulphates/df.alcohol)
.head(5))
"""
Explanation: As you can see, a new column has been added with the calculated column named ratio_sul_to_alc. However, we had to define a function named ratio which we may or may not use again. We would like to achieve the same thing, but without populating the global namespace with unnecessary variables.
So let's do the same thing but with lambda expressions.
End of explanation
"""
pair = (1,4)
print(pair)
"""
Explanation: Notice that they give the same answer. We will learn how to do this in detail in the next unit. For now, the purpose of this example is to illustrate how lambda expressions are a great help in simplifying and making code more compact and readable.
11. Built in functions
Python has some built in functions for coding purposes. While there are quite a few of them, the following two will be used quite often in handling dataframes and visualization. These are
zip is a utility function that produces tuples from two lists of equal length.
enumerate also creates a tuple in the form $ (i, item_i)$ for $0\leq i < $ len(items).
We will also cover the concept of list comprehension in this section as it is an important programming concept and syntax in Python.
11.1 zip
To understand what zip does, we need to describe a rather simple data structure called the tuple. A tuple is like a list, with the difference being that its elements are immutable. Tuples are created by enclosing a list of objects separated by commas within round braces ( )
End of explanation
"""
pair[0] = 2
"""
Explanation: As with lists, tuples can also be indexed and sliced. However, once assigned, individual components of a tuple cannot be changed. For example, the following code will raise an error
End of explanation
"""
zipped = list() # This creates an empty list
my_colleagues = ['Andy', 'Lisa', 'Dayton']
ages = [29, 24, 50]
for i in range(0,3):
zipped.append((my_colleagues[i], ages[i]))
print(zipped)
"""
Explanation: Think of tuples as lists which you wish to protect from accidental reassignment. Another way of thinking about tuples is as constant lists, or as "coordinates" in $\mathbb{R}^n$.
Given two lists with items $$ x_1, \ldots, x_n$$ and $$ y_1, \ldots, y_n$$ zip produces a new list of tuples in the following form $$ (x_1, y_1), \ldots, (x_n, y_n)$$
To understand how zip works, let's try to replicate its function using a for loop.
End of explanation
"""
for tup in zip(my_colleagues, ages):
name = tup[0]
age = tup[1]
print("%s's age is %d" % (name, age))
"""
Explanation: Imagine having to write such a snippet of code every time we need to do something with elements from two lists! As you can imagine, it can bloat the code and distract from the main logic of the program.
Here's how zip is typically used in a program.
End of explanation
"""
for name, age in zip(my_colleagues, ages): # The syntax name, age is what is known as list unpacking
print("%s's age is %d" % (name, age))
"""
Explanation: In fact, we can do even better in terms of readability. We can utilize what is known as list unpacking to rewrite this for loop.
End of explanation
"""
staff_id = dict()
for i, name in enumerate(my_colleagues):
id_no = 's2017-'+str(i) # The str function coerces an integer i into the string 'i'
staff_id[id_no] = name
print("A list of staff id numbers")
print(staff_id.keys())
print("and the respective staff names")
print(staff_id.values())
"""
Explanation: Of course, zip is used in many other context other than to simplify for loops. Can you think of any other situations where you might need to use zip?
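One frequent answer to that question: building a dictionary from two parallel lists. A sketch (reusing the same toy names and ages as above):

```python
my_colleagues = ['Andy', 'Lisa', 'Dayton']
ages = [29, 24, 50]

# zip pairs the two lists; dict consumes the (key, value) pairs directly
age_lookup = dict(zip(my_colleagues, ages))
print(age_lookup['Lisa'])  # 24
```

This is often tidier than a loop that fills the dictionary entry by entry.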
11.2 enumerate
As the name of this function suggests, enumerate is useful when we wish to produce a count of items in the list. This is one of the most useful functions you will ever use in Python. Its utility cannot be overstated.
enumerate works by producing from a list $$ x_0, x_1, \ldots, x_n$$ the following list of tuples (note the 0 indexing) $$(0, x_0), (1, x_1), \ldots, (n, x_n) $$.
We can use list unpacking to capture both the enumerated index and the object itself. You will use enumerate most often in for loops. Below is an example where we wish to assign staff names to staff id numbers based on a running serial number.
End of explanation
"""
%%timeit
serial_numbers = list()
for i in range(0,5000): # 5000 staff, so we need 5000 int's
serial_numbers.append('s'+str(i)) # our serial numbers are prefixed with 's'
"""
Explanation: 11.3 List comprehension
One of the great things about for loops in Python is that they are easy to write and understand. However, this comes at a cost: time. for loops in Python are slow. Thus, if the code is iterated over a large number of loops, it will take time.
Let's see this. The scenario here is that we want to assign a running serial number to staff id. This will involve coercing int into str type. The thing is, we have 5000 staff. So let's see how much computer time it takes by using %timeit.
End of explanation
"""
%%timeit
serial_numbers = ['s'+str(i) for i in range(5000)]
"""
Explanation: Notice that the entire script needed about 1.8 ms to execute. This isn't exactly a short amount of time as far as computers go. Just imagine having to do this 10 times in a row!
Fortunately Python implements what is known as list comprehension which is a way of writing for loops in a more compact and abstracted way. If we think about a for loop to create a list element by element, the code will look something like this:
for i in iterable:
do code and return result_i
append(result_i) to list[i]
Python list comprehension provides an alternative syntax which does the exact same thing but faster. The syntax is:
[do code for i in iterable]
The code is enclosed in [ ] because we are using list comprehension to create a list using a set instruction for each item in the iterable (think of iterables as a list). If one is familiar with set notation from calculus courses, list comprehension is syntactically similar to the following $$\{\, f(x_n) \mid x_n \in A \,\}$$ where $f$ is some function meant to be evaluated element-wise on each element in a set $A$.
Now let's refactor the for loop above using list comprehension and time the script.
End of explanation
"""
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October",
"November", "December"]
# multiline statements are allowed in Python as long as they are enclosed in some sort of braces.
short_name = []
mk_list = short_name.append # Here's a neat trick, assign the append method to a variable mk_list.
# mk_list is now a function
for month in months:
mk_list(month[0:3].upper()) # .upper() is a string method that simply capitalizes all letters in a string.
print(short_name)
"""
Explanation: That's an improvement of about 10.2 %!
Let's see another example to really familiarize ourselves with list comprehensions. In the following example, we wish to extract the first three letters of the months in a year and capitalize them
End of explanation
"""
short_name = [month[0:3].upper() for month in months]
print(short_name)
"""
Explanation: To refactor this into a list comprehension statement, we first identify the code that is being looped over. That is
mk_list(month[0:3].upper())
However, this composite statement can be broken down into steps:
month[0:3] just extracts the first 3 letters from month
Calling the .upper() method on the string of 3 letters capitalizes them.
Calling mk_list is essentially the task of appending the result of the previous two steps to the list short_name.
In list comprehension, the last step is taken care of. Thus, the essential part of the code is month[0:3].upper(). Now we identify the iterable: This is simply the list months (note plural).
What is the variable to indicate the particular month as we iterate over the lists of months? This is simply denoted by the name month (note singular). There is nothing particularly special about using month to name each object in the list months. I could have easily used mon as well. In that case, the essential part of the code which is being looped should be written mon[0:3].upper(). With that clarified, the list comprehension statement is
End of explanation
"""
from datetime import datetime
DAY_OF_WEEK = {1: "MONDAY", 2: "TUESDAY", 3:"WEDNESDAY", 4:"THURSDAY", 5:"FRIDAY", 6:"SATURDAY", 0:"SUNDAY"}
def todays_date():
t0 = datetime.today()
return t0.isoweekday(), t0.day, t0.month, t0.year
# Returns day difference if target date is within same month and year
def day_diff(start_date, end_date):
return (end_date[0] - start_date[0])
# Returns day difference if target date may be in differing months but within same year.
# Remember to account for leap years!
def month_diff(start_date, end_date):
start_month, end_month, end_year = start_date[1], end_date[1], end_date[2]
total_days = 0
for m in range(min(start_month, end_month), max(start_month, end_month)):
# Enter your answer here
# End of answer
# It is quite possible that start_month exceeds end_month. In this case,
# we are actually counting days "backwards"! We then have to actually return
# the negative value so that this number of days is subtracted from the total.
if start_month < end_month:
return total_days
else:
return -1*total_days
# Returns day difference across different years
def year_diff(start_date, end_date):
start_year, end_year = start_date[2], end_date[2]
total_days = 0
# Adjusting for the fact that in a leap year, the extra day occurs on the last day of Feb.
leap_year_adj = 0
if end_date[1] >= 3 and end_date[2]%4==0:
leap_year_adj += 1
if start_date[1] >= 3 and start_date[2]%4==0:
leap_year_adj += -1
for y in range(start_year, end_year):
if y%4==0:
total_days += 366
else:
total_days += 365
return total_days + leap_year_adj
# Returns day of week for given date
def weekday_from_date(day, month, year):
curr_date = todays_date()
# Checking whether the target_date is in the future (relative to the current date)
# or not
conds = [curr_date[3] < year,
curr_date[3] == year and curr_date[2] < month,
curr_date[3] == year and curr_date[2] == month and curr_date[1] < day]
if any(conds):
start_date, end_date = curr_date[1:], (day, month, year)
is_future = True
else:
start_date, end_date = (day, month, year), curr_date[1:]
is_future = False
# Getting the difference in days between the current date and the target date
number_days = (year_diff(start_date, end_date)
+ month_diff(start_date, end_date)
+ day_diff(start_date, end_date))
if is_future:
target_weekday = curr_date[0] + number_days
else:
target_weekday = curr_date[0] - number_days
return DAY_OF_WEEK[target_weekday%7]
weekday_from_date(15,10,1984)
"""
Explanation: 12. A concluding demonstration
In this last section, I want to pose a challenge for us to solve.
I want you to create a function which will return the day of the week for a given date input. For example, this function should return THURSDAY for an input of 14-09-2017 (in DD-MM-YYYY format). The function must be able to accept any date in the past or the future. It must retain its validity even when the date is in the past century. Your function signature can be the following:
def weekday_from_date(day=1, month=1, year=2017):
<code>
return <weekday as an int or str>
When returning and int to represent a weekday, we use 1=Monday, 2=Tuesday, ...,6=Saturday, 0=Sunday.
I have actually prepared most of the coding already. You will need to just code in one small section to complete this
assignment.
How this function works
In order to determine the day of the week from a given date, the most straightforward way is to count the number of days starting from today to the target date. This is complicated only by the fact that months can have 28-31 days.
We first have to determine whether the target date is in the future or past. This tells us whether to add or subtract the day difference to the current week day.
Next of course is to determine the day difference between target date and current date.
An example will serve to illustrate the idea: Days between 15-09-2017 and 18-10-2020 = days between 15-09-2017 to 15-09-2020 + days between 15-09-2020 to 15-10-2020 + days between 15-10-2020 to 18-10-2020.
The correct day of week in terms of its integer code is just the remainder of current day of week $\pm$ day difference divided by 7 since there are seven days in a week.
The assignment
You will need to fill in the empty section for the function month_diff which is meant to calculate the number of days between two dates which differs only by month, e.g. it calculates the number of days between 1st April 2017 to 1st September 2017 and not 4th April 2017 to 7th May 2017.
What code will achieve the correct answer?
You may check the correctness of your code with this website. Run the final function weekday_from_date with a choice of dates as you like and use the website to counter check the returned value. If the answers match, more likely than not, you've succeeded.
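As a quick sanity check of the overall "shift the weekday by the day difference, modulo 7" idea, here is a sketch using Python's own datetime (the dates are chosen for illustration only — this is not a substitute for completing month_diff yourself):

```python
from datetime import date

start = date(2017, 9, 14)   # a Thursday, so isoweekday() == 4
target = date(2017, 9, 18)

shift = (target - start).days      # 4 days in the future
print((start.isoweekday() + shift) % 7)  # 1 -> Monday, matching the weekday code above
```

Since the target is in the future, we add the difference; for a past date we would subtract it, exactly as weekday_from_date does.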
End of explanation
"""
|
materialsvirtuallab/ceng114 | lectures/Lecture 14 - Climate Change analysis.ipynb | bsd-2-clause | from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
#Some initial imports
from __future__ import division
import matplotlib.pyplot as plt
import matplotlib as mpl
import prettyplotlib as ppl
import brewer2mpl
import numpy as np
import math
from pandas import read_table, Series, DataFrame
%matplotlib inline
# Here, we customize the various matplotlib parameters for font sizes and define a color scheme.
# As mentioned in the lecture, the typical defaults in most software are not optimal from a
# data presentation point of view. You need to work hard at selecting these parameters to ensure
# an effective data presentation.
colors = brewer2mpl.get_map('Set1', 'qualitative', 3).mpl_colors
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.color'] = 'r'
mpl.rcParams['axes.titlesize'] = 32
mpl.rcParams['axes.labelsize'] = 24
mpl.rcParams['axes.labelsize'] = 24
mpl.rcParams['xtick.labelsize'] = 24
mpl.rcParams['ytick.labelsize'] = 24
# Loading the data. This was downloaded from NASA's website with some minimal preprocessing
co2 = read_table("CO2.txt", sep="\s+")
temps = read_table("Temperatures.txt", sep="\s+")
co2['decade'] = Series(np.floor(co2['year']/10) * 10, index=co2.index)
temps['decade'] = Series(np.floor(temps['Year']/10) * 10, index=temps.index)
"""
Explanation: Introduction
This notebook performs the analysis of the climate change model in lecture 14 of NANO114.
End of explanation
"""
# Plot for 1965-2010. Previous data is not that reliable.
co2_yr = co2.groupby('year')['average'].mean()
co2_series = co2_yr.loc[1965:2010]
temp_data = temps.set_index('Year')
temp_series = temp_data.loc[1965:2010]["J-D"]
plt.figure(figsize=(12,8))
ppl.plot(co2_series, temp_series / 100, 'o', color=colors[0], markersize=10)
plt.ylabel("Annual Avg. Global Temp.\nrelative to 1951-1980 ($^o$C)")
plt.xlabel("CO2 (ppm)")
annotation = plt.annotate("r = %.3f" % temp_series.corr(co2_series), xy=(300, 0.4), fontsize=24)
"""
Explanation: Plotting CO2 vs Temperature on an Annual Basis
In this section, we plot the average global temperature (in the NASA dataset, these are reported as differences from the 1951-1980 average temperatures) versus CO2 levels in parts per million (ppm). Even from the scatter plot, we can see that there is a clear relationship between CO2 levels and global temperatures.
End of explanation
"""
co2_decade = co2.groupby('decade')['average'].mean()
co2_series = co2_decade
temp_decade = temps.groupby('decade')['J-D'].mean()
temp_series = temp_decade / 100
df = DataFrame({"CO2 in ppm (X)": co2_series, "Avg. Temp. Diff. (Y)": temp_series})
print df
"""
Explanation: CO2 vs Temperature by Decade
For the purposes of illustration, we will now use data that is averaged by decade. That reduces the number of data points, which makes it far easier to work through the numbers by hand in a lecture. Here are the numbers by decade.
End of explanation
"""
df["X^2"] = df["CO2 in ppm (X)"] ** 2
df["Y^2"] = df["Avg. Temp. Diff. (Y)"] ** 2
df["XY"] = df["CO2 in ppm (X)"] * df["Avg. Temp. Diff. (Y)"]
import pandas
pandas.set_option('display.precision', 5)
print df
"""
Explanation: Let us now compute the various products and sums needed.
End of explanation
"""
SS_x = df["X^2"].sum() - df["CO2 in ppm (X)"].sum() ** 2 / 7
SS_y = df["Y^2"].sum() - df["Avg. Temp. Diff. (Y)"].sum() ** 2 / 7
SP = df["XY"].sum() - df["CO2 in ppm (X)"].sum() * df["Avg. Temp. Diff. (Y)"].sum() / 7
print "SS_x = %.3f" % (SS_x)
print "SS_y = %.3f" % (SS_y)
print "SP = %.3f" % (SP)
"""
Explanation: We can now compute our various sums of squares.
End of explanation
"""
print "r = %.3f" % (SP / np.sqrt(SS_x * SS_y))
"""
Explanation: Finally, we then have the correlation coefficient as $$r = \frac{SP}{\sqrt{SS_x SS_y}}$$
End of explanation
"""
plt.figure(figsize=(12,8))
ppl.plot(co2_series, temp_series, 'o', color=colors[0], markersize=10)
plt.ylabel("Annual Avg. Global Temp.\nrelative to 1951-1980 ($0.01 ^o$C)")
plt.xlabel("CO2 (ppm)")
#plt.annotate("r = %.3f" % temp_series.corr(co2_series), xy=(330, 0.4), fontsize=24)
from pandas import ols
model = ols(y=temp_series, x=co2_series)
x = np.arange(280, 400, 1)
ppl.plot(x, model.beta['x'] * x + model.beta['intercept'], 'k--')
plt.show()
print model
"""
Explanation: Using the Python Data Analysis (pandas) library, we can also do a ordinary least squares regression and print out the associated statistics.
End of explanation
"""
|
ToqueWillot/M2DAC | FDMS/TME3/Model_V7.ipynb | gpl-2.0 | # from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
# Standard imports
%matplotlib inline
import os
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score
from sklearn import grid_search
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
#from sklearn.preprocessing import Imputer # get rid of nan
from sklearn.decomposition import NMF # to add features based on the latent representation
from sklearn.decomposition import ProjectedGradientNMF
# Faster gradient boosting
import xgboost as xgb
# For neural networks models
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
"""
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Notes
We tried different regressor models, such as GBR, SVM, MLP, Random Forest and KNN, as recommended by the winning team of the Kaggle competition on taxi trajectories. So far GBR seems to be the best, slightly better than the RF.
The new features we extracted only made a small impact on predictions but still improved them consistently.
We tried to use an LSTM to take advantage of the sequential structure of the data, but it didn't work too well, probably because there is not enough data (13M lines divided by the average sequence length of 15, minus the ~30% of fully empty data)
End of explanation
"""
%%time
#filename = "data/train.csv"
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_train_1000000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw.columns
raw['Expected'].describe()
"""
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Few words about the dataset
Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months)
The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable
The test set is not a extracted from the same data as the training set however, which make the evaluation trickier
Load the dataset
End of explanation
"""
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 300] #1000
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
"""
Explanation: Per Wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meters per hour, we should probably start building Noah's ark
Therefore, it seems reasonable to drop values that are too large, considering them outliers
End of explanation
"""
raw.head(10)
raw.describe()
"""
Explanation: Our data look like this:
End of explanation
"""
# We select all features except for the minutes past,
# because we ignore the temporal distribution of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
"""
Explanation: Numerous features, with a lot of variability.
We regroup the data by ID
End of explanation
"""
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()
"""
Explanation: We prepare 3 subsets
Fully filled data only
Used at first to evaluate the models without worrying about feature completion
End of explanation
"""
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
"""
Explanation: Partly filled data
Constitutes most of the dataset; used to train the models
End of explanation
"""
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
print("Complete dataset:",len(raw))
print("Fully filled: ",len(noAnyNan))
print("Partly filled: ",len(noFullNan))
print("Fully empty: ",len(fullNan))
"""
Explanation: Fully empty data
Can't do much with those, therefore we isolate them and decide what value to predict using the analysis we made during the Data Visualization step
End of explanation
"""
%%time
#X,y=getXy(noAnyNan)
X,y=getXy(noFullNan)
"""
Explanation: Features engineering
Here we add the appropriate features, which we will detail below
First we get the data and labels:
End of explanation
"""
X[0][:10,:8]
"""
Explanation: Our data still look the same:
End of explanation
"""
# used to fill fully empty data
global_means = np.nanmean(noFullNan,0)
XX=[]
for t in X:
nm = np.nanmean(t,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(nm)
XX=np.array(XX)
# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)
%%time
nmf = NMF(max_iter=5000)
W = nmf.fit_transform(XX_rescaled)
#H = nn.components_
"""
Explanation: Now let's add some features.
We start by averaging each sequence of measures row-wise, and we use a Non-negative Matrix Factorization to produce new features indicating the location of each measurement in a latent space. Hopefully this will allow us to predict similar sequences the same way.
("RuntimeWarning: Mean of empty slice" is normal)
End of explanation
"""
# reduce the sequence structure of the data and produce
# new hopefully informatives features
def addFeatures(X,mf=0):
    # used to fill fully empty data
#global_means = np.nanmean(X,0)
XX=[]
nbFeatures=float(len(X[0][0]))
for idxt,t in enumerate(X):
# compute means, ignoring nan when possible, marking it when fully filled with nan
nm = np.nanmean(t,0)
tt=[]
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
tt.append(1)
else:
tt.append(0)
tmp = np.append(nm,np.append(tt,tt.count(0)/nbFeatures))
# faster if working on fully filled data:
#tmp = np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) )
# add the percentiles
tmp = np.append(tmp,np.nanpercentile(t,10,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,50,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,90,axis=0))
for idx,i in enumerate(tmp):
if np.isnan(i):
tmp[idx]=0
# adding the dbz as a feature
test = t
try:
taa=test[:,0]
except TypeError:
taa=[test[0][0]]
valid_time = np.zeros_like(taa)
valid_time[0] = taa[0]
        for n in range(1, len(taa)):  # range, not xrange (Python 2 only)
valid_time[n] = taa[n] - taa[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
        rain_sum = 0  # renamed from `sum` to avoid shadowing the built-in
        try:
            column_ref = test[:, 2]
        except TypeError:
            column_ref = [test[0][2]]
        for dbz, hours in zip(column_ref, valid_time):
            # See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
            if np.isfinite(dbz):
                mmperhr = pow(pow(10, dbz / 10) / 200, 0.625)
                rain_sum = rain_sum + mmperhr * hours
        if not (mf is 0):
            tmp = np.append(tmp, mf[idxt])
        XX.append(np.append(np.array(rain_sum), tmp))
        #XX.append(np.array([rain_sum]))
        #XX.append(tmp)
return np.array(XX)
%%time
XX=addFeatures(X,mf=W)
#XX=addFeatures(X)
"""
Explanation: Now we add other simpler features:
* mean
* number of NaNs
* flag for each row only filled with NaN
* percentiles (10%, 50%, 90%)
* dBZ (not Dragon Ball Z)
End of explanation
"""
def splitTrainTest(X, y, split=0.2):
tmp1, tmp2 = [], []
ps = int(len(X) * (1-split))
    index_shuf = list(range(len(X)))  # list() so it can be shuffled in place
    random.shuffle(index_shuf)
for i in index_shuf:
tmp1.append(X[i])
tmp2.append(y[i])
return tmp1[:ps], tmp2[:ps], tmp1[ps:], tmp2[ps:]
X_train,y_train, X_test, y_test = splitTrainTest(XX,y)
"""
Explanation: We are now ready to train our models, so we prepare training and evaluation datasets.
End of explanation
"""
# used for the cross-validation
def manualScorer(estimator, X, y):
    # score on the fold passed in by the grid search; referencing the
    # global X_test/y_test here would leak the held-out set into tuning
    err = (estimator.predict(X) - y)**2
    return -err.sum()/len(err)
"""
Explanation:
End of explanation
"""
svr = SVR(kernel='rbf', C=600.0)
"""
Explanation: Here we try a few models.
Support Vector Regression
End of explanation
"""
%%time
parameters = {'C':range(600,1001,200)}
grid_svr = grid_search.GridSearchCV(svr, parameters,scoring=manualScorer)
grid_svr.fit(X_train,y_train)
print(grid_svr.grid_scores_)
print("Best: ",grid_svr.best_params_)
"""
Explanation: We tried a lot of parameters, but here, for demonstration purposes and speed, we focus on one.
End of explanation
"""
%%time
srv = svr.fit(X_train,y_train)
print(svr.score(X_train,y_train))
print(svr.score(X_test,y_test))
#np.shape(svr.support_vectors_)
#svr.support_vectors_.mean(0)
%%time
svr_score = cross_val_score(svr, X_train, y_train, cv=5)
print("Score: %s\nMean: %.03f"%(svr_score,svr_score.mean()))
"""
Explanation: We use the best parameters
End of explanation
"""
knn = KNeighborsRegressor(n_neighbors=6,weights='distance',algorithm='ball_tree')
#parameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')}
parameters = {'n_neighbors':range(1,10,1)}
grid_knn = grid_search.GridSearchCV(knn, parameters,scoring=manualScorer)
%%time
grid_knn.fit(X_train,y_train)
print(grid_knn.grid_scores_)
print("Best: ",grid_knn.best_params_)
knn = grid_knn.best_estimator_
knn= knn.fit(X_train,y_train)
print(knn.score(X_train,y_train))
print(knn.score(X_test,y_test))
"""
Explanation: Knn
We tried this one because of the Kaggle challenge on taxi trajectories, as the analysis of sequences may share some similarities
End of explanation
"""
etreg = ExtraTreesRegressor(n_estimators=200, max_depth=None, min_samples_split=1, random_state=0)
parameters = {'n_estimators':range(100,200,20)}
grid_rf = grid_search.GridSearchCV(etreg, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_rf.fit(X_train,y_train)
print(grid_rf.grid_scores_)
print("Best: ",grid_rf.best_params_)
#etreg = grid_rf.best_estimator_
%%time
etreg = etreg.fit(X_train,y_train)
print(etreg.score(X_train,y_train))
print(etreg.score(X_test,y_test))
"""
Explanation: Extra Trees Regressor
Similar to random forest, slightly faster and performed better here
End of explanation
"""
rfr = RandomForestRegressor(n_estimators=200, criterion='mse', max_depth=None, min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto',
max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=-1,
random_state=None, verbose=0, warm_start=False)
%%time
rfr = rfr.fit(X_train,y_train)
print(rfr.score(X_train,y_train))
print(rfr.score(X_test,y_test))
"""
Explanation: Random Forest
Ladies and gentlemen, the winning solution of 80% of Kaggle's challenges
End of explanation
"""
# the dbz feature does not influence xgbr so much
xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True,
objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1,
max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5,
seed=0, missing=None)
%%time
xgbr = xgbr.fit(X_train,y_train)
# without the nmf features
# print(xgbr.score(X_train,y_train))
## 0.993948231144
# print(xgbr.score(X_test,y_test))
## 0.613931733332
# with nmf features
print(xgbr.score(X_train,y_train))
print(xgbr.score(X_test,y_test))
"""
Explanation: Gradient Boosting Regressor
End of explanation
"""
gbr = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=900,
subsample=1.0, min_samples_split=2, min_samples_leaf=1, max_depth=4, init=None,
random_state=None, max_features=None, alpha=0.5,
verbose=0, max_leaf_nodes=None, warm_start=False)
%%time
gbr = gbr.fit(X_train,y_train)
#os.system('say "終わりだ"') #its over!
#parameters = {'max_depth':range(2,5,1),'alpha':[0.5,0.6,0.7,0.8,0.9]}
#parameters = {'subsample':[0.2,0.4,0.5,0.6,0.8,1]}
#parameters = {'subsample':[0.2,0.5,0.6,0.8,1],'n_estimators':[800,1000,1200]}
#parameters = {'max_depth':range(2,4,1)}
parameters = {'n_estimators':[400,800,1100]}
#parameters = {'loss':['ls', 'lad', 'huber', 'quantile'],'alpha':[0.3,0.5,0.8,0.9]}
#parameters = {'learning_rate':[0.1,0.5,0.9]}
grid_gbr = grid_search.GridSearchCV(gbr, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_gbr = grid_gbr.fit(X_train,y_train)
print(grid_gbr.grid_scores_)
print("Best: ",grid_gbr.best_params_)
print(gbr.score(X_train,y_train))
print(gbr.score(X_test,y_test))
"""
Explanation:
End of explanation
"""
modelList = [svr,knn,etreg,rfr,xgbr,gbr]
reducedModelList = [knn,etreg,xgbr,gbr]
score_train = [[str(f).split("(")[0],f.score(X_train,y_train)] for f in modelList]
score_test = [[str(f).split("(")[0],f.score(X_test,y_test)] for f in modelList]
for idx,i in enumerate(score_train):
print(i[0])
print(" train: %.03f"%i[1])
print(" test: %.03f"%score_test[idx][1])
#reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
#globalPred.mean(1)
globalPred[0]
y[0]
err = (globalPred.mean(1)-y)**2
print(err.sum()/len(err))
err = (globalPred.mean(1)-y)**2
print(err.sum()/len(err))
for f in modelList:
print(str(f).split("(")[0])
err = (f.predict(XX)-y)**2
print(err.sum()/len(err))
err = (XX[:,0]-y)**2
print(err.sum()/len(err))
for f in modelList:
print(str(f).split("(")[0])
print(f.score(XX,y))
XX[:10,0] # feature 0 is marshall-palmer
svrMeta = SVR()
%%time
svrMeta = svrMeta.fit(globalPred,y)
err = (svrMeta.predict(globalPred)-y)**2
print(err.sum()/len(err))
"""
Explanation:
End of explanation
"""
in_dim = len(XX[0])
out_dim = 1
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, init='uniform'))
model.add(Activation('linear'))
#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='mean_squared_error', optimizer=sgd)
rms = RMSprop()
model.compile(loss='mean_squared_error', optimizer=rms)
#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)
prep = []
for i in y_train:
prep.append(min(i,20))
prep=np.array(prep)
mi,ma = prep.min(),prep.max()
fy = (prep-mi) / (ma-mi)
#my = fy.max()
#fy = fy/fy.max()
model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1)
pred = model.predict(np.array(X_test))*ma+mi
err = (pred-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r]))
r = random.randrange(len(X_test))
print("(Test) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r]))
"""
Explanation: A simple Keras neural network, kept here for legacy reference
End of explanation
"""
%%time
filename = "data/reduced_test_5000.csv"
#filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
#%%time
#X=getX(test)
#tmp = []
#for i in X:
# tmp.append(len(i))
#tmp = np.array(tmp)
#sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
#plt.title("Number of ID per number of observations\n(On test dataset)")
#plt.plot()
testFull = test.dropna()  # needed below when computing testLeft
testNoFullNan = test.loc[test[features_columns].dropna(how='all').index.unique()]
%%time
X=getX(testNoFullNan) # 1min
#XX = [np.array(t).mean(0) for t in X] # 10s
XX=[]
for t in X:
nm = np.nanmean(t,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(nm)
XX=np.array(XX)
# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)
%%time
W = nmf.transform(XX_rescaled)
XX=addFeatures(X,mf=W)
pd.DataFrame(xgbr.predict(XX)).describe()
reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
predTest = globalPred.mean(1)
predFull = zip(testNoFullNan.index.unique(),predTest)
testNan = test.drop(test[features_columns].dropna(how='all').index)
#pred = predFull + predNan  # premature: predNan is only defined below
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # 50th percentile of the remaining (partly filled) dataset
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
#reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
#globalPred.mean(1)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.loc[submission['Expected']<0,'Expected'] = 0.445
submission.to_csv("submit4.csv",index=False)
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)
sol
ss = np.array(sol)
%%time
for a,b in predFull:
ss[a-1][1]=b
ss
sub = pd.DataFrame(pred)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()
sub.to_csv("submit3.csv",index=False)
"""
Explanation: Predict on the test set
End of explanation
"""
|
deeplook/notebooks | meetups/meetup_analysis.ipynb | mit | %matplotlib inline
import re
import os
import json
import requests
import pandas as pd
server = 'https://api.meetup.com'
group_urlname = 'Python-Users-Berlin-PUB'
from meetup_api_key import key
"""
Explanation: Analysing Public Member Info on Meetup.com
In this notebook we do some simple analysis of information about members registered on meetup.com. We extract the info using the official meetup API where you can also get your API key as a registered member.
N.B. This is work in progress.
Getting Started
End of explanation
"""
requests.get("https://api.meetup.com/%s?key=%s" % (group_urlname, key)).json()
"""
Explanation: Get information about a group on Meetup.com:
End of explanation
"""
url = server + "/2/members?offset=1&page=2&order=name&group_urlname=%s&key=%s" % (group_urlname, key)
info = requests.get(url).json()
# hide key so it doesn't show up in some repository:
for f in ('next', 'url'):
info['meta'][f] = re.sub('key=\w+', 'key=******', info['meta'][f])
info
def get_all_members(group_urlname, verbose=False):
"Read members info from a sequence of pages."
total = []
offset = 1
page = 200
url = "{server}/2/members?offset={offset}&format=json&group_urlname={group_urlname}&page={page}&key={key}&order=name"
url = url.format(server=server, offset=offset, page=page, group_urlname=group_urlname, key=key)
info = requests.get(url).json()
total += info['results']
if verbose:
print(url)
print(len(total), info['meta']['count'])
    while True:
        next_url = info['meta']['next']
        if not next_url:
            break
        # re-fetch and rebind info, otherwise the cursor never advances
        info = requests.get(next_url).json()
        total += info['results']
        if verbose:
            print(len(total), info['meta']['count'])
if verbose:
print('found %d members' % len(total))
return total
path = 'pub-members.json'
if os.path.exists(path):
members = json.load(open(path))
else:
members = get_all_members('Python-Users-Berlin-PUB')
json.dump(members, open(path, 'w'))
members[0]
"""
Explanation: Get information about two members of that group:
End of explanation
"""
members[0]['topics']
pd.DataFrame(members[0]['topics'])
"""
Explanation: PUB Members' Interests
End of explanation
"""
df = pd.concat([pd.DataFrame(m['topics']) for m in members])
len(df)
s = df.groupby('name').size().sort_values(ascending=True)[-20:]
s.plot.barh(title='Most cited topics people are interested in', figsize=(10, 5))
"""
Explanation: Now build a dataframe with this information for all members:
End of explanation
"""
path = 'pydata-members.json'
if os.path.exists(path):
members = json.load(open(path))
else:
members = get_all_members('PyData-Berlin')
json.dump(members, open(path, 'w'))
df = pd.concat([pd.DataFrame(m['topics']) for m in members])
s = df.groupby('name').size().sort_values(ascending=True)[-20:]
s.plot.barh(title='Most cited topics people are interested in', figsize=(10, 5))
"""
Explanation: PyData Members' Interests
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/Jupyter_Notebooks/Plotting and Interactivity.ipynb | mit | # Import matplotlib as use the inline magic so plots show up in the notebook
import matplotlib.pyplot as plt
%matplotlib inline
# Make some "data"
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Plotting and Jupyter Notebooks</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
One of the most common tasks we face as scientists is making plots. Visually assessing data is one of the best ways to explore it - who can look at a wall of tabular data and tell anything? In this lesson we'll show how to make some basic plots in notebooks and introduce interactive widgets.
Matplotlib has many more features than we could possibly talk about - this is just a taste of making a basic plot. Be sure to browse the matplotlib gallery for ideas, inspiration, and a sampler of what's possible.
End of explanation
"""
# Make a simple line plot
plt.plot(x, y)
# Play with the line style
plt.plot(x, y, color='tab:red', linestyle='--')
# Make a scatter plot
plt.plot(x, y, color='tab:orange', linestyle='None', marker='o')
"""
Explanation: Basic Line and Scatter Plots
End of explanation
"""
# Let's make some more complicated "data" using a sine wave with some
# noise superimposed. This gives us lots of things to manipulate - the
# amplitude, frequency, noise amplitude, and DC offset.
import numpy as np
x = np.linspace(0, 2*np.pi, 100)
y = 10 * np.sin(x) + np.random.random(100)*5 + 20
# Have a look at the basic form of the data
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title('My Temperature Data')
# Let's add some interactive widgets
from ipywidgets import interact
def plot_pseudotemperature(f, A, An, offset):
x = np.linspace(0, 2*np.pi, 100)
y = A * np.sin(f * x) + np.random.random(100) * An + offset
fig = plt.figure()
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title('My Temperature Data')
plt.show()
interact(plot_pseudotemperature,
f = (0, 10),
A = (1, 5),
An = (1, 10),
offset = (10, 40))
# We can specify the type of slider, range, and defaults as well
from ipywidgets import FloatSlider, IntSlider
def plot_pseudotemperature2(f, A, An, offset, title):
x = np.linspace(0, 2*np.pi, 100)
y = A * np.sin(f * x) + np.random.random(100) * An + offset
fig = plt.figure()
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title(title)
plt.show()
interact(plot_pseudotemperature2,
f = IntSlider(min=1, max=7, value=3),
A = FloatSlider(min=1, max=10, value=5),
An = IntSlider(min=1, max=10, value=1),
offset = FloatSlider(min=1, max=40, value=20),
title = 'My Improved Temperature Plot')
"""
Explanation: Adding Interactivity to Plots
End of explanation
"""
|
google-aai/sc17 | cats/step_4_to_4_part1.ipynb | apache-2.0 | # Enter your username:
YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address
# Libraries for this section:
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import pandas as pd
import cv2
import warnings
warnings.filterwarnings('ignore')
# Grab the filenames:
TRAINING_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_small/')
files = os.listdir(TRAINING_DIR) # Grab all the files in the VM images directory
print(files[0:5]) # Let's see some filenames
"""
Explanation: Exploring the Training Set
Author(s): kozyr@google.com, bfoo@google.com
In this notebook, we gather exploratory data from our training set to do feature engineering and model tuning. Before running this notebook, make sure that:
You have already run steps 2 and 3 to collect and split your data into training, validation, and test.
Your training data is in a Google storage folder such as gs://[your-bucket]/[dataprep-dir]/training_images/
In the spirit of learning to walk before learning to run, we'll write this notebook in a more basic style than you'll see in a professional setting.
Setup
TODO for you: In Screen terminal 1 (to begin with Screen in the VM, first type
screen and Ctrl+a c), go to the VM shell and type Ctrl+a 1,
create a folder to store your training and debugging images, and then copy a small
sample of training images from Cloud Storage:
mkdir -p ~/data/training_small
gsutil -m cp gs://$BUCKET/catimages/training_images/000*.png ~/data/training_small/
gsutil -m cp gs://$BUCKET/catimages/training_images/001*.png ~/data/training_small/
mkdir -p ~/data/debugging_small
gsutil -m cp gs://$BUCKET/catimages/training_images/002*.png ~/data/debugging_small
echo "done!"
Note that we only take the images starting with those IDs to limit the total number we'll copy over to under 3 thousand images.
End of explanation
"""
def show_pictures(filelist, dir, img_rows=2, img_cols=3, figsize=(20, 10)):
"""Display the first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
"""
plt.close('all')
fig = plt.figure(figsize=figsize)
for i in range(img_rows * img_cols):
a=fig.add_subplot(img_rows, img_cols,i+1)
img = mpimg.imread(os.path.join(dir, filelist[i]))
plt.imshow(img)
plt.show()
show_pictures(files, TRAINING_DIR)
"""
Explanation: Eyes on the data!
End of explanation
"""
# What does the actual image matrix look like? There are three channels:
img = cv2.imread(os.path.join(TRAINING_DIR, files[0]))
print('\n***Colors in the middle of the first image***\n')
print('Blue channel:')
print(img[63:67,63:67,0])
print('Green channel:')
print(img[63:67,63:67,1])
print('Red channel:')
print(img[63:67,63:67,2])
def show_bgr(filelist, dir, img_rows=2, img_cols=3, figsize=(20, 10)):
"""Make histograms of the pixel color matrices of first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
"""
plt.close('all')
fig = plt.figure(figsize=figsize)
color = ('b','g','r')
for i in range(img_rows * img_cols):
a=fig.add_subplot(img_rows, img_cols, i + 1)
img = cv2.imread(os.path.join(TRAINING_DIR, files[i]))
for c,col in enumerate(color):
histr = cv2.calcHist([img],[c],None,[256],[0,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
plt.ylim([0,500])
plt.show()
show_bgr(files, TRAINING_DIR)
"""
Explanation: Check out the colors at rapidtables.com/web/color/RGB_Color, but don't forget to flip order of the channels to BGR.
End of explanation
"""
# Pull in blue channel for each image, reshape to vector, count unique values:
unique_colors = []
landscape = []
for f in files:
img = np.array(cv2.imread(os.path.join(TRAINING_DIR, f)))[:,:,0]
# Determine if landscape is more likely than portrait by comparing
#amount of zero channel in 3rd row vs 3rd col:
landscape_likely = (np.count_nonzero(img[:,2]) > np.count_nonzero(img[2,:])) * 1
# Count number of unique blue values:
col_count = len(set(img.ravel()))
# Append to array:
unique_colors.append(col_count)
landscape.append(landscape_likely)
unique_colors = pd.DataFrame({'files': files, 'unique_colors': unique_colors,
'landscape': landscape})
unique_colors = unique_colors.sort_values(by=['unique_colors'])
print(unique_colors[0:10])
# Plot the pictures with the lowest diversity of unique color values:
suspicious = unique_colors['files'].tolist()
show_pictures(suspicious, TRAINING_DIR, 1)
"""
Explanation: Do some sanity checks
For example:
* Do we have blank images?
* Do we have images with very few colors?
End of explanation
"""
def get_label(str):
"""
Split out the label from the filename of the image, where we stored it.
Args:
str: filename string.
Returns:
label: an integer 1 or 0
"""
split_filename = str.split('_')
label = int(split_filename[-1].split('.')[0])
return(label)
# Example:
get_label('12550_0.1574_1.png')
"""
Explanation: Get labels
Extract labels from the filename and create a pretty dataframe for analysis.
End of explanation
"""
df = unique_colors[:]
df['label'] = df['files'].apply(lambda x: get_label(x))
df['landscape_likely'] = df['landscape']
df = df.drop(['landscape', 'unique_colors'], axis=1)
df[:10]
"""
Explanation: Create DataFrame
End of explanation
"""
def general_img_features(band):
"""
Define a set of features that we can look at for each color band
Args:
band: array which is one of blue, green, or red
Returns:
features: unique colors, nonzero count, mean, standard deviation,
min, and max of the channel's pixel values
"""
return [len(set(band.ravel())), np.count_nonzero(band),
np.mean(band), np.std(band),
band.min(), band.max()]
def concat_all_band_features(file, dir):
"""
Extract features from a single image.
Args:
file - single image filename
dir - directory where the files are stored
Returns:
features - descriptive statistics for pixels
"""
img = cv2.imread(os.path.join(dir, file))
features = []
blue = np.float32(img[:,:,0])
green = np.float32(img[:,:,1])
red = np.float32(img[:,:,2])
features.extend(general_img_features(blue)) # indices 0-4
features.extend(general_img_features(green)) # indices 5-9
features.extend(general_img_features(red)) # indices 10-14
return features
# Let's see an example:
print(files[0] + '\n')
example = concat_all_band_features(files[0], TRAINING_DIR)
print(example)
# Apply it to our dataframe:
feature_names = ['blue_unique', 'blue_nonzero', 'blue_mean', 'blue_sd', 'blue_min', 'blue_max',
'green_unique', 'green_nonzero', 'green_mean', 'green_sd', 'green_min', 'green_max',
'red_unique', 'red_nonzero', 'red_mean', 'red_sd', 'red_min', 'red_max']
# Compute a series holding all band features as lists
band_features_series = df['files'].apply(lambda x: concat_all_band_features(x, TRAINING_DIR))
# Loop through lists and distribute them across new columns in the dataframe
for i in range(len(feature_names)):
df[feature_names[i]] = band_features_series.apply(lambda x: x[i])
df[:10]
# Are these features good for finding cats?
# Let's look at some basic correlations.
df.corr().round(2)
"""
Explanation: Basic Feature Engineering
Below, we show an example of a very simple set of features that can be derived from an image. This function simply pulls the mean, standard deviation, min, and max of pixel values in one image band (red, green, or blue)
End of explanation
"""
THRESHOLD = 0.05
def show_harris(filelist, dir, band=0, img_rows=4, img_cols=4, figsize=(20, 10)):
"""
Display Harris corner detection for the first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
band: 0 = 'blue', 1 = 'green', 2 = 'red'
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
"""
plt.close('all')
fig = plt.figure(figsize=figsize)
def plot_bands(src, band_img):
a=fig.add_subplot(img_rows, img_cols, i + 1)
dst = cv2.cornerHarris(band_img, 2, 3, 0.04)
dst = cv2.dilate(dst,None) # dilation makes the marks a little bigger
# Threshold for an optimal value, it may vary depending on the image.
new_img = src.copy()
new_img[dst > THRESHOLD * dst.max()]=[0, 0, 255]
# Note: openCV reverses the red-green-blue channels compared to matplotlib,
# so we have to flip the image before showing it
imgplot = plt.imshow(cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB))
for i in range(img_rows * img_cols):
img = cv2.imread(os.path.join(dir, filelist[i]))
plot_bands(img, img[:,:,band])
plt.show()
show_harris(files, TRAINING_DIR)
"""
Explanation: These coarse features look pretty bad individually. Most of this is due to features capturing absolute pixel values. But photo lighting could vary significantly between different image shots. What we end up with is a lot of noise.
Are there some better feature detectors we can consider? Why yes, there are! Several common features involve finding corners in pictures, and looking for pixel gradients (differences in pixel values between neighboring pixels in different directions).
Harris Corner Detector
The following snippet runs code to visualize harris corner detection for a few sample images. Configuring the threshold determines how strong of a signal we need to determine if a pixel corresponds to a corner (high pixel gradients in all directions).
Note that because a Harris corner detector returns another image map with values corresponding to the likelihood of a corner at that pixel, it can also be fed into general_img_features() to extract additional features. What do you notice about corners on cat images?
End of explanation
"""
|
JackDi/phys202-2015-work | assignments/assignment07/AlgorithmsEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    # YOUR CODE HERE
    s = []  # must be local: a module-level list accumulates across calls
    if a[0] > a[1]:  # the first number is a peak if bigger than the second
        s.append(0)
    for x in range(1, len(a) - 1):
        if a[x] > a[x - 1] and a[x] > a[x + 1]:  # bigger than both neighbours
            s.append(x)
    if a[-1] > a[-2]:  # the last number is a peak if bigger than the second to last
        s.append(len(a) - 1)
    return np.array(s)  # a Numpy array of integer indices, as the spec asks
# below here was used for manual testing; the original asserts failed because `s` was a module-level list that kept accumulating results across calls
# p2 = find_peaks(np.array([0,1,2,3]))
# p2
p1 = find_peaks([2,0,1,0,2,0,1])
p1
# p3 = find_peaks([3,2,1,0])
# p3
# np.shape(p1)
# y=np.array([0,2,4,6])
# np.shape(y)
# print(s)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
# YOUR CODE HERE
# num=[]
# pi_digits_str[0]
# for i in range(len(pi_digits_str)):
# num[i]=pi_digits_str[i]
f=plt.figure(figsize=(12,8))
plt.title("Histogram of Distances between Peaks in Pi")
plt.ylabel("Number of Occurrences")
plt.xlabel("Distance from Previous Peak")
plt.tick_params(direction='out')
plt.box(True)
plt.grid(False)
test = np.array(list(pi_digits_str), dtype=int)  # np.int is deprecated; use the builtin int
peaks=find_peaks(test)
dist=np.diff(peaks)
plt.hist(dist,bins=range(15));
assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_cluster_stats_time_frequency_repeated_measures_anova.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.time_frequency import single_trial_power
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
"""
Explanation: .. _tut_stats_cluster_sensor_rANOVA_tfr:
Mass-univariate two-way repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to a single channel which we know
exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude by accounting for multiple comparisons,
performing a permutation clustering test using the ANOVA as the
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = raw.info['ch_names'][picks[0]]
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject)
# make sure all conditions have the same counts, as the ANOVA expects a
# fully balanced data matrix and does not forgive imbalances that generously
# (risk of type-I error)
epochs.equalize_event_counts(event_id, copy=False)
# Time vector
times = 1e3 * epochs.times # change unit to ms
# Factor to down-sample the temporal dimension of the PSD computed by
# single_trial_power.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
n_cycles = frequencies / frequencies[0]
baseline_mask = times[::decim] < 0
# now create TFR representations for all conditions
epochs_power = []
for condition in [epochs[k].get_data()[:, 97:98, :] for k in event_id]:
this_power = single_trial_power(condition, sfreq=sfreq,
frequencies=frequencies, n_cycles=n_cycles,
decim=decim)
this_power = this_power[:, 0, :, :] # we only have one channel.
# Compute ratio with baseline power (be sure to correct time vector with
# decimation factor)
epochs_baseline = np.mean(this_power[:, :, baseline_mask], axis=2)
this_power /= epochs_baseline[..., np.newaxis]
epochs_power.append(this_power)
"""
Explanation: Set parameters
End of explanation
"""
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions  # integer division
# we will tell the ANOVA how to interpret the data matrix in terms of
# factors. This is done via the factor_levels argument, which is a list
# of the number of factor levels for each factor.
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
n_times = len(times[::decim])
# Now we'll assemble the data matrix and swap axes so the trial replications
# are the first dimension and the conditions are the second dimension
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
# while the iteration scheme used above for assembling the data matrix
# makes sure the first two dimensions are organized as expected (with A =
# modality and B = location):
#
#               A1B1 A1B2 A2B1 A2B2
# trial 1 1.34 2.53 0.97 1.74
# trial ... .... .... .... ....
# trial 56 2.45 7.90 3.09 4.76
#
# Now we're ready to run our repeated measures ANOVA.
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
# Note. As we treat trials as subjects, the test only accounts for
# time locked responses despite the 'induced' approach.
# For analysis of induced power at the group level, averaged TFRs
# are required.
"""
Explanation: Setup repeated measures ANOVA
End of explanation
"""
# First we need to slightly modify the ANOVA function to be suitable for
# the clustering procedure. Also want to set some defaults.
# Let's first override effects to confine the analysis to the interaction
effects = 'A:B'
# A stat_fun must deal with a variable number of input arguments.
def stat_fun(*args):
# Inside the clustering function each condition will be passed as
# flattened array, necessitated by the clustering procedure.
# The ANOVA however expects an input array of dimensions:
# subjects X conditions X observations (optional).
# The following expression catches the list input and swaps the first and
# the second dimension and finally calls the ANOVA function.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
# Create new stats image with only significant clusters
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
                                np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' cluster-level corrected (p <= 0.05)' % ch_name)
plt.show()
# now using FDR
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' FDR corrected (p <= 0.05)' % ch_name)
plt.show()
# Both cluster-level and FDR correction help to get rid of the
# putative spots we saw in the naive f-images.
"""
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
End of explanation
"""
|
karlstroetmann/Formal-Languages | Python/Regexp-2-NFA.ipynb | gpl-2.0 | class RegExp2NFA:
def __init__(self, Sigma):
self.Sigma = Sigma
self.StateCount = 0
"""
Explanation: From Regular Expressions to <span style="font-variant:small-caps;">Fsm</span>s
This notebook shows how a given regular expression $r$ can be transformed into an equivalent finite state machine.
It implements the theory that is outlined in section 4.4. of the
lecture notes.
The class RegExp2NFA maintains two member variables:
- Sigma is the <em style="color:blue">alphabet</em>, i.e. the set of characters used.
- StateCount is a counter that is needed to create <em style="color:blue">unique</em> state names.
End of explanation
"""
def toNFA(self, r):
if r == 0:
return self.genEmptyNFA()
if r == '':
return self.genEpsilonNFA()
if isinstance(r, str) and len(r) == 1:
return self.genCharNFA(r)
if r[0] == 'cat':
return self.catenate(self.toNFA(r[1]), self.toNFA(r[2]))
if r[0] == 'or':
return self.disjunction(self.toNFA(r[1]), self.toNFA(r[2]))
if r[0] == 'star':
return self.kleene(self.toNFA(r[1]))
raise ValueError(f'{r} is not a proper regular expression.')
RegExp2NFA.toNFA = toNFA
del toNFA
"""
Explanation: The member function toNFA takes an object self of class RegExp2NFA and a regular expression r and returns a finite state machine
that accepts the same language as described by r. The regular expression is represented in Python as follows:
- The regular expression $\emptyset$ is represented as the number 0.
- The regular expression $\varepsilon$ is represented as the empty string ''.
- The regular expression $c$ that matches the character $c$ is represented by the character $c$.
- The regular expression $r_1 \cdot r_2$ is represented by the triple $\bigl(\texttt{'cat'}, \texttt{repr}(r_1), \texttt{repr}(r_2)\bigr)$.
Here, and in the following, for a given regular expression $r$ the expression $\texttt{repr}(r)$ denotes the Python representation of the regular
expressions $r$.
- The regular expression $r_1 + r_2$ is represented by the triple $\bigl(\texttt{'or'}, \texttt{repr}(r_1), \texttt{repr}(r_2)\bigr)$.
- The regular expression $r^*$ is represented by the pair $\bigl(\texttt{'star'}, \texttt{repr}(r)\bigr)$.
End of explanation
"""
def genEmptyNFA(self):
q0 = self.getNewState()
q1 = self.getNewState()
delta = {}
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genEmptyNFA = genEmptyNFA
del genEmptyNFA
"""
Explanation: The <span style="font-variant:small-caps;">Fsm</span> genEmptyNFA() is defined as
$$\bigl\langle \{ q_0, q_1 \}, \Sigma, \{\}, q_0, \{ q_1 \} \bigr\rangle. $$
Note that this <span style="font-variant:small-caps;">Fsm</span> has no transitions at all.
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def genEpsilonNFA(self):
q0 = self.getNewState()
q1 = self.getNewState()
delta = { (q0, ''): {q1} }
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genEpsilonNFA = genEpsilonNFA
del genEpsilonNFA
"""
Explanation: The <span style="font-variant:small-caps;">Fsm</span> genEpsilonNFA is defined as
$$ \bigl\langle \{ q_0, q_1 \}, \Sigma,
     \bigl\{ \langle q_0, \varepsilon\rangle \mapsto \{q_1\} \bigr\}, q_0, \{ q_1 \} \bigr\rangle.
$$
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def genCharNFA(self, c):
q0 = self.getNewState()
q1 = self.getNewState()
delta = { (q0, c): {q1} }
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genCharNFA = genCharNFA
del genCharNFA
"""
Explanation: For a letter $c \in \Sigma$ the <span style="font-variant:small-caps;">Fsm</span> genCharNFA$(c)$ is defined as
$$ A(c) =
   \bigl\langle \{ q_0, q_1 \}, \Sigma,
     \bigl\{ \langle q_0, c \rangle \mapsto \{q_1\}\bigr\}, q_0, \{ q_1 \} \bigr\rangle.
$$
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def catenate(self, f1, f2):
M1, Sigma, delta1, q1, A1 = f1
M2, Sigma, delta2, q3, A2 = f2
q2, = A1
delta = delta1 | delta2
delta[q2, ''] = {q3}
return M1 | M2, Sigma, delta, q1, A2
RegExp2NFA.catenate = catenate
del catenate
"""
Explanation: Given two <span style="font-variant:small-caps;">Fsm</span>s f1 and f2, the function catenate(f1, f2)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it can be written
in the form
$$ s = s_1s_2 $$
and $s_1$ is recognized by f1 and $s_2$ is recognized by f2.
Assume that $f_1$ and $f_2$ have the following form:
- $f_1 = \langle Q_1, \Sigma, \delta_1, q_1, \{ q_2 \}\rangle$,
- $f_2 = \langle Q_2, \Sigma, \delta_2, q_3, \{ q_4 \}\rangle$,
- $Q_1 \cap Q_2 = \{\}$.
Then $\texttt{catenate}(f_1, f_2)$ is defined as:
$$ \bigl\langle Q_1 \cup Q_2, \Sigma,
     \bigl\{ \langle q_2,\varepsilon\rangle \mapsto \{q_3\} \bigr\}
     \cup \delta_1 \cup \delta_2, q_1, \{ q_4 \} \bigr\rangle.
$$
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def disjunction(self, f1, f2):
M1, Sigma, delta1, q1, A1 = f1
M2, Sigma, delta2, q2, A2 = f2
q3, = A1
q4, = A2
q0 = self.getNewState()
q5 = self.getNewState()
delta = delta1 | delta2
delta[q0, ''] = { q1, q2 }
delta[q3, ''] = { q5 }
delta[q4, ''] = { q5 }
return { q0, q5 } | M1 | M2, Sigma, delta, q0, { q5 }
RegExp2NFA.disjunction = disjunction
del disjunction
"""
Explanation: Given two <span style="font-variant:small-caps;">Fsm</span>s f1 and f2, the function disjunction(f1, f2)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it is either
is recognized by f1 or by f2.
Assume again that the states of
$f_1$ and $f_2$ are different and that $f_1$ and $f_2$ have the following form:
- $f_1 = \langle Q_1, \Sigma, \delta_1, q_1, \{ q_3 \}\rangle$,
- $f_2 = \langle Q_2, \Sigma, \delta_2, q_2, \{ q_4 \}\rangle$,
- $Q_1 \cap Q_2 = \{\}$.
Then $\texttt{disjunction}(f_1, f_2)$ is defined as follows:
$$ \bigl\langle \{ q_0, q_5 \} \cup Q_1 \cup Q_2, \Sigma,
     \bigl\{ \langle q_0,\varepsilon\rangle \mapsto \{q_1, q_2\},
             \langle q_3,\varepsilon\rangle \mapsto \{q_5\},
             \langle q_4,\varepsilon\rangle \mapsto \{q_5\} \bigr\}
     \cup \delta_1 \cup \delta_2, q_0, \{ q_5 \} \bigr\rangle
$$
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def kleene(self, f):
M, Sigma, delta0, q1, A = f
q2, = A
q0 = self.getNewState()
q3 = self.getNewState()
delta = delta0
delta[q0, ''] = { q1, q3 }
delta[q2, ''] = { q1, q3 }
return { q0, q3 } | M, Sigma, delta, q0, { q3 }
RegExp2NFA.kleene = kleene
del kleene
"""
Explanation: Given an <span style="font-variant:small-caps;">Fsm</span> f, the function kleene(f)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it can be written as
$$ s = s_1 s_2 \cdots s_n $$
and all $s_i$ are recognized by f. Note that $n$ might be $0$.
If f is defined as
$$ f = \langle Q, \Sigma, \delta, q_1, \{ q_2 \} \rangle,
$$
then kleene(f) is defined as follows:
$$ \bigl\langle \{ q_0, q_3 \} \cup Q, \Sigma,
     \bigl\{ \langle q_0,\varepsilon\rangle \mapsto \{q_1, q_3\},
             \langle q_2,\varepsilon\rangle \mapsto \{q_1, q_3\} \bigr\}
     \cup \delta, q_0, \{ q_3 \} \bigr\rangle.
$$
Graphically, this <span style="font-variant:small-caps;">Fsm</span> looks as follows:
End of explanation
"""
def getNewState(self):
self.StateCount += 1
return self.StateCount
RegExp2NFA.getNewState = getNewState
del getNewState
"""
Explanation: The function getNewState returns a new number that has not yet been used as a state.
End of explanation
"""
|
drvinceknight/gt | nbs/chapters/08-Evolutionary-Game-Theory.ipynb | mit | import numpy as np
import nashpy as nash
import matplotlib.pyplot as plt
"""
Explanation: Evolutionary Game Theory
In the previous chapter, we considered the case of fitness being independant of the distribution of the whole population (the rates of increase of 1 type just depended on the quantity of that type). That was a specific case of Evolutionary game theory which considers frequency dependent selection.
Frequency dependent selection
Video
Consider a population with two types. Let $x=(x_1, x_2)$ correspond to the population sizes of both types. The fitness functions are given by:
$$f_1(x)\qquad f_2(x)$$
As before we ensure a constant population size: $x_1 + x_2 = 1$. We have:
$$
\frac{dx_1}{dt}=x_1(f_1(x)-\phi) \qquad \frac{dx_2}{dt}=x_2(f_2(x)-\phi)
$$
we again have:
$$
\frac{dx_1}{dt} + \frac{dx_2}{dt}=x_1(f_1(x)-\phi) + x_2(f_2(x)-\phi)=0
$$
So $\phi=x_1f_1(x)+x_2f_2(x)$ (the average fitness).
We can substitute: $x_2=1-x_1$ to obtain:
$$
\frac{dx_1}{dt}=x_1(f_1(x)-x_1f_1(x)-x_2f_2(x))=x_1((1-x_1)f_1(x)-(1-x_1)f_2(x))
$$
$$
\frac{dx_1}{dt}=x_1(1-x_1)(f_1(x)-f_2(x))
$$
We see that we have 3 equilibria:
$x_1=0$
$x_1=1$
Whatever distribution of $x$ that ensures: $f_1(x)=f_2(x)$
Evolutionary Game Theory
Now we will consider potential differences of these equilibria. First we will return to considering Normal form games:
$$
A =
\begin{pmatrix}
a & b\
c & d
\end{pmatrix}
$$
Evolutionary game theory assigns strategies as types in a population, and individuals randomly encounter other individuals and play their corresponding strategy. The matrix $A$ corresponds to the utility of a row player in a game where the row player is a given individual and the column player is the population.
This gives:
$$f_1=ax_1+bx_2\qquad f_2=cx_1+dx_2$$
or equivalently:
$$f=Ax\qquad \phi=fx$$
thus we have the same equation as before but in matrix notation:
$$\frac{dx}{dt}=x(f-\phi)$$
In this case, the 3 stable distributions correspond to:
An entire population playing the first strategy;
An entire population playing the second strategy;
A population playing a mixture of first and second (such that there is indifference between the fitness).
We now consider the utility of a stable population in a mutated population.
Mutated population
Given a strategy vector $x=(x_1, x_2)$, some $\epsilon>0$ and another strategy $y=(y_1, y_2)$, the post entry population $x_{\epsilon}$ is given by:
$$
x_{\epsilon} = (x_1 + \epsilon(y_1 - x_1), x_2 + \epsilon(y_2 - x_2))
$$
Evolutionary Stable Strategies
Video
Given a stable population distribution $x$, it represents an Evolutionarily Stable Strategy (ESS) if and only if there exists $\bar\epsilon>0$ such that:
$$u(x, x_{\epsilon})>u(y, x_{\epsilon})\text{ for all }0<\epsilon<\bar\epsilon, y$$
where $u(x, y)$ corresponds to the fitness of strategy $x$ in population $y$ which is given by:
$$xAy^T$$
For the first type to be an ESS this corresponds to:
$$a(1-\epsilon)+b\epsilon > c(1-\epsilon) + d\epsilon$$
For small values of $\epsilon$ this corresponds to:
$$a>c$$
However if $a=c$, this corresponds to:
$$b>d$$
Thus the first strategy is an ESS (ie resists invasion) iff one of the two hold:
$a > c$
$a=c$ and $b > d$
End of explanation
"""
A = np.array([[4, 3], [2, 1]])
game = nash.Game(A)
timepoints = np.linspace(0, 10, 1000)
epsilon = 10 ** -1
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
"""
Explanation: The case of $a>c$:
End of explanation
"""
A = np.array([[4, 3], [4, 1]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
"""
Explanation: The case of $a=c$ and $b>d$:
End of explanation
"""
A = np.array([[4, 3], [4, 5]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
"""
Explanation: $a=c$ and $b < d$:
End of explanation
"""
A = np.array([[1, 3], [4, 1]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
"""
Explanation: $a < c$:
End of explanation
"""
import nashpy as nash
game = nash.Game(A, A.transpose())
list(game.support_enumeration())
"""
Explanation: We see in the above case that the population seems to stabilise at a mixed strategy. This leads to the general definition of the fitness of a mixed strategy: $x=(x_1, x_2)$:
$$u(x,x) = x_1f_1(x)+x_2f_2(x)$$
General condition for ESS
Video
If $x$ is an ESS, then for all $y\ne x$, either:
$u(x,x)>u(y,x)$
$u(x,x)=u(y,x)$ and $u(x,y)>u(y,y)$
Conversely, if either (1) or (2) holds for all $y\ne x$ then $x$ is an ESS.
Proof
If $x$ is an ESS, then by definition:
$$u(x,x_{\epsilon})>u(y,x_{\epsilon})$$
which corresponds to:
$$(1-\epsilon)u(x,x)+\epsilon u(x,y)>(1-\epsilon)u(y,x)+\epsilon u(y,y)$$
If condition 1 of the theorem holds then the above inequality can be satisfied for $\epsilon$ sufficiently small. If condition 2 holds then the inequality is satisfied.
Conversely:
If $u(x,x) < u(y,x)$ then we can find $\epsilon$ sufficiently small such that the inequality is violated.
If $u(x, x) = u(y,x)$ and $u(x,y) \leq u(y,y)$ then the inequality is violated.
This result gives us an efficient way of computing ESS. The first condition is in fact almost a condition for Nash Equilibrium (with a strict inequality), the second is thus a stronger condition that removes certain Nash equilibria from consideration. This becomes particularly relevant when considering Nash equilibrium in mixed strategies.
To find ESS in a pairwise context population game we:
Write down the associated two-player game $(A, A^T)\in{\mathbb{R}^{m\times n}}^2$;
Identify all symmetric Nash equilibria of the game;
Test the Nash equilibrium against the two conditions of the above Theorem.
Let us apply it to the one example that seemed to stabilise at a mixed strategy:
$$
A =\begin{pmatrix}
1 & 3\
4 & 1
\end{pmatrix}
$$
End of explanation
"""
import sympy as sym
sym.init_printing()
A = sym.Matrix(A)
y_1, y_2 = sym.symbols("y_1, y_2")
y = sym.Matrix([y_1, y_2])
A, y
rhs = sym.expand((y.transpose() * A * y)[0].subs({y_2: 1 - y_1}))
rhs
lhs = sym.expand((sym.Matrix([[.4, .6]]) * A * y)[0].subs({y_2: 1-y_1}))
lhs
sym.factor(lhs - rhs)
"""
Explanation: Looking at $x=(.4, .6)$ (which is the only symmetric Nash equilibrium), we have
$$u(x, x)=u(y, x)$$
and (recall $y_1 + y_2 = 1$):
$$
u(x, y)=2.8y_1 + 1.8y_2=2.8y_1 + 1.8(1-y_1)=y_1+1.8
$$
\begin{align}
u(y, y)&=y_1^2+3y_1y_2+4y_1y_2+y_2^2\
&=y_1^2+7y_1-7y_1^2+1 - 2y_1 + y_1^2\
&=5y_1-5y_1^2+1
\end{align}
Thus:
$$u(x, y) - u(y, y) = -4y_1+5y_1^2+.8 = 5(y_1 - .4)^2$$
however $y_1\ne.4$ thus $x=(.4, .6)$ is an ESS.
Here is some code to verify the above calculations:
End of explanation
"""
|
mercybenzaquen/foundations-homework | foundations_hw/08/Homework8_benzaquen_police_killings.ipynb | mit | !pip install pandas
!pip install matplotlib
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: The Counted (project by The Guardian to count the people killed by police in the US)
Why is this necessary?
From The Guardian's http://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database
"The US government has no comprehensive record of the number of people killed by law enforcement. This lack of basic data has been glaring amid the protests, riots and worldwide debate set in motion by the fatal police shooting of Michael Brown, an unarmed 18-year-old, in Ferguson, Missouri, in August 2014."
End of explanation
"""
df = pd.read_csv('the-counted-2015.csv', encoding = "ISO-8859-1")
"""
Explanation: # Open your dataset up using pandas in a Jupyter notebook
End of explanation
"""
df.head()
"""
Explanation: Do a .head() to get a feel for your data
End of explanation
"""
df.tail(1)  # one row per incident, so the last row's index (+1) gives the total; len(df) also works
"""
Explanation: Write down 12 questions to ask your data, or 12 things to hunt for in the data
1) How many people were killed by police in 2015?
End of explanation
"""
# 'age' holds strings, so a plain sort is lexicographic ('9' > '87');
# convert to numbers before sorting
df.assign(age_num=pd.to_numeric(df['age'], errors='coerce')) \
  .sort_values('age_num', ascending=False).head(10)
"""
Explanation: 2)Who was/were the oldest person killed?
End of explanation
"""
df.assign(age_num=pd.to_numeric(df['age'], errors='coerce')) \
  .sort_values('age_num', ascending=True).head(5)  # numeric sort, as above
"""
Explanation: 3)Who was/were the youngest person killed?
End of explanation
"""
# 'age' contains the string 'Unknown', so the column has object dtype and
# .describe() treats it as categorical rather than numeric. Converting first
# (coercing 'Unknown' to NaN) gives count/mean/std/min/quartiles/max:
pd.to_numeric(df['age'], errors='coerce').describe()
"""
Explanation: 4)What was the age average of people killed?
End of explanation
"""
df['state'].value_counts()
"""
Explanation: 5)What was the state with more killings by police in 2015?
End of explanation
"""
df['city'].value_counts()
"""
Explanation: 6)What was the city with more killings by police in 2015?
End of explanation
"""
los_angeles = df['city'] == 'Los Angeles'
# To order the incidents chronologically, build a real date column.
# Assigning the tuple (df['day'], df['year']) raised "ValueError: Length of
# values does not match length of index" because pandas expects one value
# per row, not a pair of whole columns.
df['complete_date'] = pd.to_datetime(
    df['month'] + ' ' + df['day'].astype(str) + ' ' + df['year'].astype(str))
df[los_angeles].sort_values('complete_date')
"""
Explanation: 7)List all the incidents in Los Angeles
End of explanation
"""
df['month'].value_counts()
"""
Explanation: 8) What was the month with more police killings in 2015?
End of explanation
"""
# Filter down to July first, then count killings per day; grouping the whole
# frame by 'month' mixes every month into one long table, which is why July
# was hard to see in the output.
july_count = df[df['month'] == 'July']['day'].value_counts()
july_count.head()
"""
Explanation: 9) What was the day with more police killings in July?
End of explanation
"""
df['raceethnicity'].value_counts()
# these results do not align with those on the Guardian's website. ???
"""
Explanation: 10) How are these killings distributed by race?
End of explanation
"""
df['gender'].value_counts()
"""
Explanation: 11) And by gender?
End of explanation
"""
df['armed'].value_counts()
df.head(20)
"""
Explanation: 12) How many of the people killed where carrying a firearm?
End of explanation
"""
df['lawenforcementagency'].value_counts()
"""
Explanation: 13) Which was the law enforcement agency with more killings?
End of explanation
"""
df['classification'].value_counts()
"""
Explanation: 13) How many people were killed in custody?
End of explanation
"""
male = df['gender'] == 'Male'
latino = df['raceethnicity'] == 'Hispanic/Latino'
knife = df['armed'] == 'Knife'
df[male & latino & knife]
# To count the matching rows use len(df[male & latino & knife]),
# or equivalently (male & latino & knife).sum().
plt.style.use('ggplot')
# df['age'].hist() does not work directly because the column holds strings
# ('Unknown' among them), so we first build a numeric list -- histograms
# need numbers, and .value_counts().hist() would plot the wrong thing
# (a histogram of the counts, not of the ages).
age2 = []
for point in df['age']:
if point != 'Unknown':
age2.append(float(point))
else:
age2.append(0)
df['age_2'] = age2
df['age_2'].hist()
df['age'].sort_values()
#we still have unknown values plotted
# filtering on the value is more robust than dropping hard-coded positions
no_unknowns = df[df['age'] != 'Unknown']
no_unknowns['age'].sort_values()
no_unknowns = no_unknowns.copy()  # avoid pandas' SettingWithCopyWarning
no_unknowns['age2'] = no_unknowns['age'].astype(float)
no_unknowns['age2'].hist()
df['state'].value_counts().hist()
df['state'].value_counts()
"""
Explanation: 14) List the people killed who were male, Hispanic/Latino and armed with a knife?
End of explanation
"""
|
ajdawson/python_for_climate_scientists | course_content/notebooks/matplotlib_intro.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
"""
Explanation: An introduction to matplotlib
Matplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (Graphical User Interface) toolkits.
Matplotlib comes with a convenience sub-package called pyplot. It is a general convention to import this module as plt:
End of explanation
"""
fig = plt.figure()
plt.show()
"""
Explanation: The matplotlib figure
At the heart of every matplotlib plot is the "Figure" object. The "Figure" object is the top level concept that can be drawn to one of the many output formats, or simply just to screen. Any object that can be drawn in this way is known as an "Artist" in matplotlib.
Let's create our first artist using pyplot, and then show it:
End of explanation
"""
ax = plt.axes()
plt.show()
"""
Explanation: On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
By far the most useful artist in matplotlib is the "Axes" artist. The Axes artist represents the "data space" of a typical plot. A rectangular axes (the most common axes, but not the only axes, e.g. polar plots) will have two Axis Artists with tick labels and tick marks.
There is no limit on the number of Axes artists that can exist on a Figure artist. Let's go ahead and create a figure with a single Axes Artist, and show it using pyplot:
End of explanation
"""
ax = plt.axes()
line1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
"""
Explanation: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.
Under the hood matplotlib still had to create a Figure artist; we just didn't need to capture it into a variable. We can access the created object with the "state" functions found in pyplot: plt.gcf() and plt.gca().
Exercise 1
Go to matplotlib.org and search for what these strangely named functions do.
Hint: you will find multiple results so remember we are looking for the pyplot versions of these functions.
Working with the axes
Most of your time building a graphic in matplotlib will be spent on the Axes artist. Whilst the matplotlib documentation for the Axes artist is very detailed, it is also rather difficult to navigate (though this is an area of ongoing improvement).
As a result, it is often easier to find new plot types by looking at the pyplot module's documentation.
The first and most common Axes method is plot. Go ahead and look at the plot documentation from the following sources:
http://matplotlib.org/api/pyplot_summary.html
http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
http://matplotlib.org/api/axes_api.html?#matplotlib.axes.Axes.plot
Plot can be used to draw one or more lines in axes data space:
End of explanation
"""
plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
"""
Explanation: Notice how the axes view limits (ax.viewLim) have been updated to include the whole of the line.
Should we want to add some spacing around the edges of our axes we could set the axes margin using the Axes artist's margins method. Alternatively, we could manually set the limits with the Axes artist's set_xlim and set_ylim methods.
Exercise 2
Modify the previous example to produce three different figures that control the limits of the axes.
1. Manually set the x and y limits to [0.5, 2] and [1, 5] respectively.
2. Define a margin such that there is 10% whitespace inside the axes around the drawn line (Hint: numbers to margins are normalised such that 0% is 0.0 and 100% is 1.0).
3. Set a 10% margin on the axes with the lower y limit set to 0. (Note: order is important here)
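As a starting sketch for the first variant (assuming a non-interactive backend so it runs as a standalone script; the data are the same line as above):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs as a plain script
import matplotlib.pyplot as plt

ax = plt.axes()
ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
# Variant 1: manually fix the view limits.
ax.set_xlim([0.5, 2])
ax.set_ylim([1, 5])
# Variant 2 would instead call ax.margins(0.1) for 10% whitespace.
```

Note that set_xlim/set_ylim override matplotlib's autoscaling entirely, whereas margins only adjusts it.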
The previous example can be simplified to be even shorter. We are not using the line artist returned by ax.plot() so we don't need to store it in a variable. In addition, in exactly the same way that we didn't need to manually create a Figure artist when using the pyplot.axes method, we can remove the plt.axes if we use the plot function from pyplot. Our simple line example then becomes:
End of explanation
"""
top_right_ax = plt.subplot(2, 3, 3)
bottom_left_ax = plt.subplot(2, 3, 4)
plt.show()
"""
Explanation: The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.
Exercise 3
By calling plot multiple times, create a single axes showing the line plots of $y=sin(x)$ and $y=cos(x)$ in the interval $[0, 2\pi]$ with 200 linearly spaced $x$ samples.
Multiple axes on the same figure (aka subplot)
Matplotlib makes it relatively easy to add more than one Axes artist to a figure. The add_subplot method on a Figure artist, which is wrapped by the subplot function in pyplot, adds an Axes artist in the grid position specified. To compute the position, we must tell matplotlib the number of rows and columns to separate the figure into, and which number the axes to be created is (1 based).
For example, to create axes at the top right and bottom left of a notional $2 \times 3$ grid of Axes artists, the grid specifications would be 2, 3, 3 and 2, 3, 4 respectively:
End of explanation
"""
import numpy as np
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.show()
plt.imshow(data, extent=[-180, 180, -90, 90],
interpolation='nearest', origin='lower')
plt.show()
plt.pcolormesh(x, y, data)
plt.show()
plt.scatter(x2d, y2d, c=data, s=15)
plt.show()
plt.bar(x, data.sum(axis=0), width=np.diff(x)[0])
plt.show()
plt.plot(x, data.sum(axis=0), linestyle='--',
marker='d', markersize=10, color='red', alpha=0.5)
plt.show()
"""
Explanation: Exercise 3 continued: Copy the answer from the previous task (plotting $y=sin(x)$ and $y=cos(x)$) and add the appropriate plt.subplot calls to create a figure with two rows of Axes artists, one showing $y=sin(x)$ and the other showing $y=cos(x)$.
Further plot types
Matplotlib comes with a huge variety of different plot types. Here is a quick demonstration of the more common ones.
End of explanation
"""
fig = plt.figure()
ax = plt.axes()
# Adjust the created axes so its topmost extent is 0.8 of the figure.
fig.subplots_adjust(top=0.8)
fig.suptitle('Figure title', fontsize=18, fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
plt.show()
"""
Explanation: Titles, Legends, colorbars and annotations
Matplotlib has convenience functions for the addition of plot elements such as titles, legends, colorbars and text based annotation.
The suptitle pyplot function allows us to set the title of a figure, and the set_title method on an Axes artist allows us to set the title of an individual axes. Additionally Axes artists have methods named set_xlabel and set_ylabel to label the respective x and y Axis artists (that's Axis, not Axes). Finally, we can add text, located by data coordinates, with the text method on an Axes artist.
End of explanation
"""
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5 * x ** 3 - 3 * x ** 2, linewidth=2,
label='$f(x)=0.5x^3-3x^2$')
plt.plot(x, 1.5 * x ** 2 - 6 * x, linewidth=2, linestyle='--',
label='Gradient of $f(x)$', )
plt.legend(loc='lower right')
plt.grid()
plt.show()
"""
Explanation: The creation of a legend is as simple as adding a "label" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend:
End of explanation
"""
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.colorbar(orientation='horizontal')
plt.show()
"""
Explanation: Colorbars are created with the plt.colorbar function:
End of explanation
"""
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)
plt.annotate('Local minimum',
xy=(4, -18),
xytext=(-2, -40), fontsize=15,
arrowprops={'facecolor': 'black', 'headlength': 10})
plt.grid()
plt.show()
"""
Explanation: Matplotlib comes with powerful annotation capabilities, which are described in detail at http://matplotlib.org/users/annotations_intro.html.
The annotation's power can mean that the syntax is a little harder to read, which is demonstrated by one of the simplest examples of using annotate.
End of explanation
"""
plt.plot(range(10))
plt.savefig('my_plot.png')
from IPython.display import Image
Image(filename='my_plot.png')
"""
Explanation: Saving your plots
You can save a figure using plt.savefig. This function accepts a filename as input, and saves the current figure to the given file. The format of the file is inferred from the file extension:
End of explanation
"""
plt.gcf().canvas.get_supported_filetypes_grouped()
"""
Explanation: Matplotlib supports many output file formats, including most commonly used ones. You can see a list of the supported file formats including the filename extensions they are recognised by with:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(1234)
n_steps = 500
t = np.arange(n_steps)
# Probability distribution:
mu = 0.002 # Mean
sigma = 0.01 # Standard deviation
# Generate a random walk, with position X as a function of time:
S = mu + sigma * np.random.randn(n_steps)
X = S.cumsum()
# Calculate the 1 sigma upper and lower analytic population bounds:
lower_bound = mu * t - sigma * np.sqrt(t)
upper_bound = mu * t + sigma * np.sqrt(t)
"""
Explanation: Further steps
Matplotlib has extremely comprehensive documentation at http://matplotlib.org/. Particularly useful parts for beginners are the pyplot summary and the example gallery:
pyplot summary: http://matplotlib.org/api/pyplot_summary.html
example gallery: http://matplotlib.org/examples/index.html
Exercise 4: random walks
This exercise requires the use of many of the elements we've discussed (and a few extra ones too, remember the documentation for matplotlib is comprehensive!). We'll start by defining a random walk and some statistical population data for us to plot:
End of explanation
"""
|
zhmz90/DeepLearningCourseFromGoogle | udacity/2_fullyconnected.ipynb | mit | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import cPickle as pickle
import numpy as np
import tensorflow as tf
"""
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
  # These are the parameters that we are going to be training. The weight
  # matrix will be initialized using random values following a (truncated)
  # normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
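As an aside, the softmax and cross-entropy computation that the graph above fuses into one operation can be sketched in plain NumPy (an illustration only, not part of the assignment code; the example logits and labels are arbitrary):

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(logits, one_hot_labels):
    # Mean over examples of -sum(labels * log(softmax(logits))).
    probs = softmax(logits)
    return -np.mean(np.sum(one_hot_labels * np.log(probs), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
loss = cross_entropy_loss(logits, labels)
```

Fusing the two steps, as TensorFlow does, avoids numerical issues from computing log(softmax) naively.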
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
"""
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print 'Initialized'
for step in xrange(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print 'Loss at step', step, ':', l
print 'Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :])
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print 'Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels)
print 'Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)
"""
Explanation: Let's run this computation and iterate:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
"""
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print "Initialized"
for step in xrange(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print "Minibatch loss at step", step, ":", l
print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)
print "Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels)
print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)
"""
Explanation: Let's run it:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(
tf.truncated_normal([1024,10]))
biases2 = tf.Variable(tf.zeros([10]))
#tf.nn.relu_layer
# Training computation.
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
logits = tf.matmul(hidden, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),weights2)+biases2)
test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),weights2)+biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print "Initialized"
for step in xrange(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print "Minibatch loss at step", step, ":", l
print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)
print "Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels)
print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)
"""
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
End of explanation
"""
|
DeepLearningUB/EBISS2017 | 1. Learning from data and optimization.ipynb | mit | # numerical derivative at a point x
def f(x):
return x**2
def fin_dif(x, f, h = 0.00001):
'''
This method returns the derivative of f at x
by using the finite difference method
'''
return (f(x+h) - f(x))/h
x = 2.0
print "{:2.4f}".format(fin_dif(x,f))
"""
Explanation: Basic Concepts
What is "learning from data"?
In general Learning from Data is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning).
This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data.
Most of these algorithms are based on the iterative solution of a mathematical problem that involves data and a model. If there were an analytical solution to the problem, it should be the one adopted, but for most problems there is none.
So, the most common strategy for learning from data is based on solving a system of equations as a way to find the parameters of the model that minimize a mathematical objective. This is called optimization.
The most important technique for solving optimization problems is gradient descent.
Preliminary: Nelder-Mead method for function minimization.
The simplest thing we could try in order to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. This simple algorithm has a severe limitation: it can't get closer to the true minimum than the step size.
The Nelder-Mead method dynamically adjusts the step size based on the loss at the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise, if the new point is worse, it contracts the step size to converge around the minimum. The usual settings are to halve the step size when contracting and double it when expanding.
This method can be easily extended to higher-dimensional examples; all that's required is taking one more point than there are dimensions. Then, the simplest approach is to replace the worst point with a point reflected through the centroid of the remaining $n$ points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the step towards a better point.
See "An Interactive Tutorial on Numerical Optimization": http://www.benfrederickson.com/numerical-optimization/
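A minimal 1-D sketch of this expand/contract idea (not the full $n$-dimensional simplex method; the test function and starting point are arbitrary choices):

```python
def adaptive_step_minimize(f, x=10.0, step=1.0, tol=1e-8, max_iter=10000):
    """Walk downhill, doubling the step on success, halving it on failure."""
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        # Try a step in each direction and keep the best candidate.
        best = min([x + step, x - step], key=f)
        if f(best) < fx:
            x, fx = best, f(best)
            step *= 2.0   # expand: accelerate towards the minimum
        else:
            step *= 0.5   # contract: converge around the minimum
    return x

x_min = adaptive_step_minimize(lambda x: (x - 3.0) ** 2)
```

Unlike a fixed-step search, the final precision here is governed by the tolerance, not the initial step size.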
Gradient descent (for hackers): 1-D
Let's suppose that we have a function $f: \Re \rightarrow \Re$. For example:
$$f(x) = x^2$$
Our objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative.
The derivative of $f$ of a variable $x$, $f'(x)$ or $\frac{\mathrm{d}f}{\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable. It is defined as the following limit:
$$ f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$
The derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output:
$$ f(x + h) \approx f(x) + h f'(x)$$
End of explanation
"""
old_min = 0
temp_min = 15
step_size = 0.01
precision = 0.0001
def f(x):
return x**2 - 6*x + 5
def f_derivative(x):
import math
return 2*x -6
mins = []
cost = []
while abs(temp_min - old_min) > precision:
old_min = temp_min
move = f_derivative(old_min) * step_size
temp_min = old_min - move
cost.append((3-temp_min)**2)
mins.append(temp_min)
# rounding the result to 2 digits because of the step size
print "Local minimum occurs at {:3.6f}.".format(round(temp_min,2))
"""
Explanation: It can be shown that the “centered difference formula" is better when computing numerical derivatives:
$$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h} $$
The error in the "finite difference" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of “centered difference" the error is $O(h^2)$.
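A quick sketch comparing the two formulas on $f(x)=x^2$, whose exact derivative at $x=2$ is $4$:

```python
def f(x):
    return x ** 2

def forward_diff(f, x, h=1e-3):
    # "finite difference": O(h) error
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h=1e-3):
    # "centered difference": O(h^2) error
    return (f(x + h) - f(x - h)) / (2 * h)

err_forward = abs(forward_diff(f, 2.0) - 4.0)    # about 1e-3
err_centered = abs(centered_diff(f, 2.0) - 4.0)  # far smaller
```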
The derivative tells us how to change $x$ in order to make a small improvement in $f$.
Minimization
Then, we can follow these steps to decrease the value of the function:
Start from a random $x$ value.
Compute the derivative $f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h}$.
Walk a small step (possibly weighted by the magnitude of the derivative) in the opposite direction of the derivative, because we know that $f(x - h \mbox{ sign}(f'(x)))$ is less than $f(x)$ for small enough $h$.
The search for the minimum ends when the derivative is zero, because we have no more information about which direction to move. $x$ is a critical or stationary point if $f'(x)=0$.
A minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points.
There is a third class of critical points: saddle points.
If $f$ is a convex function, this should be the minimum (maximum) of our function. In other cases it could be a local minimum (maximum) or a saddle point.
There are two problems with numerical derivatives:
+ They are approximate.
+ They are slow to evaluate (two function evaluations: $f(x + h)$, $f(x - h)$).
Step size
Usually, we multiply the derivative by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or may even fail to converge.
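To make this concrete, here is a small sketch on $f(x)=x^2$, whose derivative is $2x$ (the step sizes 0.1 and 1.1 are illustrative choices):

```python
def minimize_1d(f_prime, x0=5.0, alpha=0.1, n_steps=100):
    # Plain gradient descent with a fixed step size (alpha).
    x = x0
    for _ in range(n_steps):
        x = x - alpha * f_prime(x)
    return x

f_prime = lambda x: 2 * x                  # derivative of f(x) = x**2
good = minimize_1d(f_prime, alpha=0.1)     # converges towards the minimum at 0
too_big = minimize_1d(f_prime, alpha=1.1)  # overshoots and diverges
```

With alpha = 0.1 each step multiplies $x$ by $0.8$, so it shrinks geometrically; with alpha = 1.1 each step multiplies it by $-1.2$, so it oscillates and blows up.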
Analytical derivative
Let's suppose now that we know the analytical derivative. This is only one function evaluation!
End of explanation
"""
# your solution
"""
Explanation: Exercise
What happens if step_size=1.0?
End of explanation
"""
# plot the cost recorded at each iteration
x, y = zip(*enumerate(cost))
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-', alpha=0.7)
plt.ylim([-10,150])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.plot(mins,cost,'o', alpha=0.3)
ax.text(mins[-1],
cost[-1]+20,
'End (%s steps)' % len(mins),
ha='center',
color=sns.xkcd_rgb['blue'],
)
plt.show()
"""
Explanation: An important feature of gradient descent is that there should be a visible improvement over time.
In the following example, we simply plotted the
change in the value of the minimum against the iteration during which it was calculated. As we can see, the distance gets smaller over time, but barely changes in later iterations.
End of explanation
"""
def f(x):
return sum(x_i**2 for x_i in x)
def fin_dif_partial_centered(x, f, i, h=1e-6):
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def gradient_centered(x, f, h=1e-6):
return[round(fin_dif_partial_centered(x,f,i,h), 10)
for i,_ in enumerate(x)]
x = [1.0,1.0,1.0]
print '{:.6f}'.format(f(x)), gradient_centered(x,f)
"""
Explanation: From derivatives to gradient: $n$-dimensional function minimization.
Let's consider a $n$-dimensional function $f: \Re^n \rightarrow \Re$. For example:
$$f(\mathbf{x}) = \sum_{n} x_n^2$$
Our objective is to find the argument $\mathbf{x}$ that minimizes this function.
The gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function.
The gradient points in the direction of the greatest rate of increase of the function.
$$\nabla {f} = (\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n})$$
End of explanation
"""
def euc_dist(v1,v2):
import numpy as np
import math
v = np.array(v1)-np.array(v2)
return math.sqrt(sum(v_i ** 2 for v_i in v))
"""
Explanation: The function we have evaluated, $f({\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$.
Then, we can follow this steps to maximize (or minimize) the function:
Start from a random $\mathbf{x}$ vector.
Compute the gradient vector.
Walk a small step in the opposite direction of the gradient vector.
It is important to be aware that this gradient computation is very expensive: if $\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2*n$ points.
How to use the gradient.
$f(x) = \sum_i x_i^2$, takes its mimimum value when all $x$ are 0.
Let's check it for $n=3$:
End of explanation
"""
# choosing a random vector
import random
import numpy as np
x = [random.randint(-10,10) for i in range(3)]
x
def step(x,grad,alpha):
return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]
tol = 1e-15
alpha = 0.01
while True:
grad = gradient_centered(x,f)
next_x = step(x,grad,alpha)
if euc_dist(next_x,x) < tol:
break
x = next_x
print [round(i,10) for i in x]
"""
Explanation: Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference (in $\mathbf x$) between the new solution and the old solution is less than a tolerance value.
End of explanation
"""
step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
"""
Explanation: Choosing Alpha
The step size, alpha, is a slippery concept: if it is too small we will converge slowly to the solution; if it is too large we can diverge from the solution.
There are several policies to follow when selecting the step size:
Constant size steps. In this case, the size step determines the precision of the solution.
Decreasing step sizes.
At each step, select the optimal step (the one that get the lower $f(\mathbf x)$).
The last policy is good, but too expensive. In this case we would consider a fixed set of values:
End of explanation
"""
import numpy as np
import random
# f = 2x
x = np.arange(10)
y = np.array([2*i for i in x])
# f_target = 1/n Sum (y - wx)**2
def target_f(x,y,w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x,y,w):
return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size
def step(w,grad,alpha):
return w - alpha * grad
def BGD_multi_step(target_f,
gradient_f,
x,
y,
toler = 1e-6):
alphas = [100, 10, 1, 0.1, 0.001, 0.00001]
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_ws = [step(w, gradient, alpha) for alpha in alphas]
        # distinct loop variable: a Python 2 list comprehension leaks its
        # variable and would otherwise silently overwrite w
        next_vals = [target_f(x, y, w_i) for w_i in next_ws]
min_val = min(next_vals)
next_w = next_ws[next_vals.index(min_val)]
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
print '{:.6f}'.format(BGD_multi_step(target_f, gradient_f, x, y))
%%timeit
BGD_multi_step(target_f, gradient_f, x, y)
def BGD(target_f, gradient_f, x, y, toler = 1e-6, alpha=0.01):
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_w = step(w, gradient, alpha)
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
print '{:.6f}'.format(BGD(target_f, gradient_f, x, y))
%%timeit
BGD(target_f, gradient_f, x, y)
"""
Explanation: Learning from data
In general, we have:
A dataset ${(\mathbf{x},y)}$ of $n$ examples.
A target function $f_\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\mathbf{w}$.
The gradient of the target function, $g_f$.
In the most common case $f$ represents the errors from a data representation model $M$.
For example, to fit the model clould be to find the optimal parameters $\mathbf{w}$ that minimize the following expression:
$$ f_\mathbf{w} = \frac{1}{n} \sum_{i} (y_i - M(\mathbf{x}_i,\mathbf{w}))^2 $$
For example, $(\mathbf{x},y)$ can represent:
$\mathbf{x}$: the behavior of a "Candy Crush" player; $y$: monthly payments.
$\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.
$\mathbf{x}$: financial data of a bank customer; $y$: customer rating.
If $y$ is a real value, it is called a regression problem.
If $y$ is binary/categorical, it is called a classification problem.
Let's suppose that our model is a one-dimensional linear model $M(\mathbf{x},\mathbf{w}) = w \cdot x $.
Batch gradient descend
We can implement gradient descend in the following way (batch gradient descend):
End of explanation
"""
def in_random_order(data):
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
import numpy as np
import random
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
data = list(zip(x, y))
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
epoch = 0
iteration_no_increase = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.9
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
epoch += 1
return min_w
print('w: {:.6f}'.format(SGD(target_f, gradient_f, x, y)))
"""
Explanation: Stochastic Gradient Descent
The last function evaluates the whole dataset $(\mathbf{x}_i,y_i)$ at every step.
If the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descent).
When learning from data, the cost function is additive: it is computed by adding sample reconstruction errors.
Then, we can estimate the gradient (and move towards the minimum) by using only one data sample (or a small batch of samples).
Thus, we will find the minimum by iterating this gradient estimation over the dataset.
A full iteration over the dataset is called an epoch. During an epoch, data must be used in a random order.
If we apply this method we have some theoretical guarantees to find a good minimum:
+ SGD essentially uses an inexact gradient estimate at each iteration. Since there is no free lunch, what is the cost of using an approximate gradient? The answer is that the convergence rate is slower than that of the gradient descent algorithm.
+ The convergence of SGD has been analyzed using the theories of convex minimization and of stochastic approximation: it converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.
End of explanation
"""
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
# x: input data
# y: noisy output data
x = np.random.uniform(0,1,20)
# f = 2x + 0
def f(x): return 2*x + 0
noise_variance =0.1
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$f(x)$', fontsize=15)
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
# f_target = 1/n Sum (y - wx)**2
def target_f(x,y,w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x,y,w):
return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size
def in_random_order(data):
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
data = list(zip(x, y))
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
iteration_no_increase = 0
w_cost = []
epoch = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.9
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
w_cost.append(target_f(x,y,w))
epoch += 1
return min_w, np.array(w_cost)
w, target_value = SGD(target_f, gradient_f, x, y)
print('w: {:.6f}'.format(w))
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.title('input vs. target')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(np.arange(target_value.size), target_value, 'o', alpha = 0.2)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.grid()
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
"""
Explanation: Example: Stochastic Gradient Descent and Linear Regression
The linear regression model assumes a linear relationship between data:
$$ y_i = w_1 x_i + w_0 $$
Let's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$.
The bias trick. It is a little cumbersome to keep track separately of $w_i$, the feature weights, and $w_0$, the bias. A commonly used trick is to combine these parameters into a single structure that holds both of them, by extending the vector $x$ with one additional dimension that always holds the constant $1$. With this dimension the model simplifies to a single multiply $f(\mathbf{x},\mathbf{w}) = \mathbf{w} \cdot \mathbf{x}$.
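A minimal numeric illustration of the bias trick (the values here are made up for the sketch):

```python
import numpy as np

# Append a constant-1 column so f(x) = w1*x + w0 becomes one dot product
x = np.array([0.2, 0.5, 0.9])
X_ext = np.column_stack([x, np.ones_like(x)])  # shape (3, 2)
w = np.array([2.0, 0.0])                       # [w1, w0]
print(X_ext.dot(w))                            # → [0.4 1.  1.8]
```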
End of explanation
"""
def get_batches(iterable, n = 1):
current_batch = []
for item in iterable:
current_batch.append(item)
if len(current_batch) == n:
yield current_batch
current_batch = []
if current_batch:
yield current_batch
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
# x: input data
# y: noisy output data
x = np.random.uniform(0,1,2000)
# f = 2x + 0
def f(x): return 2*x + 0
noise_variance =0.1
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$t$', fontsize=15)
plt.ylim([0,2])
plt.title('inputs (x) vs targets (y)')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,3))
plt.show()
# f_target = 1/n Sum (y - wx)**2
def target_f(x,y,w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x,y,w):
return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size
def in_random_order(data):
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
def get_batches(iterable, n = 1):
current_batch = []
for item in iterable:
current_batch.append(item)
if len(current_batch) == n:
yield current_batch
current_batch = []
if current_batch:
yield current_batch
def SGD_MB(target_f, gradient_f, x, y, epochs=100, alpha_0=0.01):
data = list(zip(x, y))
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
epoch = 0
while epoch < epochs:
val = target_f(x, y, w)
if val < min_val:
min_w, min_val = w, val
alpha = alpha_0
else:
alpha *= 0.9
np.random.shuffle(data)
for batch in get_batches(data, n = 100):
x_batch = np.array([pair[0] for pair in batch])
y_batch = np.array([pair[1] for pair in batch])
gradient = gradient_f(x_batch, y_batch, w)
w = w - (alpha * gradient)
epoch += 1
return min_w
w = SGD_MB(target_f, gradient_f, x, y)
print('w: {:.6f}'.format(w))
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.ylim([0,2])
plt.title('input vs. target')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,3))
plt.show()
"""
Explanation: Mini-batch Gradient Descent
In code, general batch gradient descent looks something like this:
python
nb_epochs = 100
for i in range(nb_epochs):
grad = evaluate_gradient(target_f, data, w)
w = w - learning_rate * grad
For a pre-defined number of epochs, we first compute the gradient vector of the target function for the whole dataset w.r.t. our parameter vector.
Stochastic gradient descent (SGD) in contrast performs a parameter update for each training example and label:
python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for sample in data:
grad = evaluate_gradient(target_f, sample, w)
w = w - learning_rate * grad
Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:
python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for batch in get_batches(data, batch_size=50):
grad = evaluate_gradient(target_f, batch, w)
w = w - learning_rate * grad
Minibatch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the minibatch size increases, the number of updates per unit of computation decreases (eventually it becomes very inefficient, like batch gradient descent).
There is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.
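The batching helper can be exercised on its own; this sketch reproduces the generator so it is self-contained:

```python
def get_batches(iterable, n=1):
    # Yield consecutive chunks of size n; the final chunk may be shorter
    current_batch = []
    for item in iterable:
        current_batch.append(item)
        if len(current_batch) == n:
            yield current_batch
            current_batch = []
    if current_batch:
        yield current_batch

print(list(get_batches(range(7), n=3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```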
End of explanation
"""
|
vravishankar/Jupyter-Books | pandas/01.Pandas - Series Object.ipynb | mit | import numpy as np
import pandas as pd
pd.__version__
np.__version__
# set some options to control output display
pd.set_option('display.notebook_repr_html',False)
pd.set_option('display.max_columns',10)
pd.set_option('display.max_rows',10)
"""
Explanation: Pandas
Pandas is a high-performance python library that provides a comprehensive set of data structures for manipulating tabular data, providing high-performance indexing, automatic alignment, reshaping, grouping, joining and statistical analyses capabilities.
The two primary data structures in pandas are the Series and the DataFrame objects.
Series Object
The Series object is the fundamental building block of pandas. A Series represents a one-dimensional array based on the NumPy ndarray, but with a labeled index that significantly simplifies access to the elements.
A Series always has an index even if one is not specified; by default pandas will create an index that consists of sequential integers starting from zero. Elements can then be accessed not only by integer position but also by the values in the index, referred to as labels.
Importing pandas into the application is simple. It is common to import both pandas and numpy with their objects mapped into the pd and np namespaces respectively.
End of explanation
"""
# create one item series
s1 = pd.Series(1)
s1
"""
Explanation: Creating Series
A Series can be created and initialised by passing either a scalar value, a NumPy nd array, a Python list or a Python Dict as the data parameter of the Series constructor.
End of explanation
"""
# get value with label 0
s1[0]
# create from list
s2 = pd.Series([1,2,3,4,5])
s2
# get the values in the series
s2.values
# get the index of the series
s2.index
"""
Explanation: '0' is the index and '1' is the value. The data type (dtype) is also shown. We can also retrieve the value using the associated index.
End of explanation
"""
# explicitly create an index
# index is alpha, not an integer
s3 = pd.Series([1,2,3], index=['a','b','c'])
s3
s3.index
"""
Explanation: Creating Series with named index
Pandas will create different index types based on the type of data identified in the index parameter. These different index types are optimized to perform indexing operations for that specific data type. To specify the index at the time of creation of the Series, use the index parameter of the constructor.
End of explanation
"""
# look up by label value and not object position
s3['b']
# position also works
s3[2]
# create series from an existing index
# scalar value will be copied at each index label
s4 = pd.Series(2,index=s2.index)
s4
"""
Explanation: Please note the type of the index items. It is not string but 'object'.
End of explanation
"""
np.random.seed(123456)
pd.Series(np.random.randn(5))
# 0 through 9
pd.Series(np.linspace(0,9,10))
# 0 through 8
pd.Series(np.arange(0,9))
"""
Explanation: It is a common practice to initialize the Series objects using NumPy ndarrays, and with various NumPy functions that create arrays. The following code creates a Series from five normally distributed values:
End of explanation
"""
s6 = pd.Series({'a':1,'b':2,'c':3,'d':4})
s6
"""
Explanation: A Series can also be created from a Python dictionary. The keys of the dictionary are used as the index labels for the Series:
End of explanation
"""
# example series which also contains a NaN
s = pd.Series([0,1,1,2,3,4,5,6,7,np.NaN])
s
# length of the Series
len(s)
s.size
# shape is a tuple with one value
s.shape
# number of non-NaN values can be found using the count() method
s.count()
# all unique values
s.unique()
# counts of the unique non-NaN values, returned in descending order
s.value_counts()
"""
Explanation: Size, Shape, Count and Uniqueness of Values
End of explanation
"""
# first five
s.head()
# first three
s.head(3)
# last five
s.tail()
# last 2
s.tail(n=2) # equivalent to s.tail(2)
"""
Explanation: Peeking at data with heads, tails and take
pandas provides the .head() and .tail() methods to examine just the first few or last records in a Series. By default, these return the first or last five rows respectively, but you can use the n parameter or just pass an integer to specify the number of rows:
End of explanation
"""
# only take specific items
s.take([0,3,9])
"""
Explanation: The .take() method will return the rows in a series that correspond to the zero-based positions specified in a list:
End of explanation
"""
# single item lookup
s3['a']
# lookup by position since index is not an integer
s3[2]
# multiple items
s3[['a','c']]
# series with an integer index but not starting with 0
s5 = pd.Series([1,2,3], index =[11,12,13])
s5[12] # by value as value passed and index are both integer
"""
Explanation: Looking up values in Series
Values in a Series object can be retrieved using the [] operator and passing either a single index label or a list of index labels.
End of explanation
"""
# force lookup by index label
s5.loc[12]
"""
Explanation: To alleviate the potential confusion between label-based lookups and position-based lookups, label-based lookup can be enforced using the .loc[] accessor:
End of explanation
"""
# force lookup by position or location
s5.iloc[1]
# multiple items by index label
s5.loc[[12,10]]
# multiple items by position or location
s5.iloc[[1,2]]
"""
Explanation: Lookup by position can be enforced using the iloc[] accessor:
End of explanation
"""
s5.loc[[12,-1,15]]
"""
Explanation: If a location / position passed to .iloc[] in a list is out of bounds, an exception will be thrown. This is different from .loc[], which, if passed a label that does not exist, will return NaN as the value for that label:
End of explanation
"""
s3
# label based lookup
s3.ix[['a','b']]
# position based lookup
s3.ix[[1,2]]
"""
Explanation: A Series also has a property .ix that can be used to look up items either by label or by zero-based array position.
End of explanation
"""
# this looks by label and not position
# note that 1,2 have NaN as those labels do not exist in the index
s5.ix[[1,2,10,11]]
"""
Explanation: This can become complicated if the indexes are integers and you pass a list of integers to ix. Since they are of the same type, the lookup will be by index label instead of position:
End of explanation
"""
s6 = pd.Series([1,2,3,4], index=['a','b','c','d'])
s6
s7 = pd.Series([4,3,2,1], index=['d','c','b','a'])
s7
s6 + s7
"""
Explanation: Alignment via index labels
A fundamental difference between a NumPy ndarray and a pandas Series is the ability of a Series to automatically align data from another Series based on label values before performing an operation.
End of explanation
"""
a1 = np.array([1,2,3,4,5])
a2 = np.array([5,4,3,2,1])
a1 + a2
"""
Explanation: This is a very different result than what it would have been if these were two pure NumPy arrays being added. A NumPy ndarray would add the items in identical positions of each array, resulting in different values.
End of explanation
"""
# multiply all values in s3 by 2
s3 * 2
# scalar series using s3's index
# not efficient as it will not use vectorisation
t = pd.Series(2,s3.index)
s3 * t
"""
Explanation: The process of adding two Series objects differs from the addition of arrays, as it first aligns data based on index label values instead of simply applying the operation to elements in the same position. This becomes especially powerful when using pandas Series to combine data based on labels instead of having to first order the data manually.
Arithmetic Operations
Arithmetic operations (+, -, *, /) can be applied either to a Series or between two Series objects.
End of explanation
"""
# we will add this to s9
s8 = pd.Series({'a':1,'b':2,'c':3,'d':5})
s8
s9 = pd.Series({'b':6,'c':7,'d':9,'e':10})
s9
# NaN's result for a and e demonstrates alignment
s8 + s9
s10 = pd.Series([1.0,2.0,3.0],index=['a','a','b'])
s10
s11 = pd.Series([4.0,5.0,6.0], index=['a','a','c'])
s11
# will result in four 'a' index labels
s10 + s11
"""
Explanation: To reinforce the point that alignment is being performed when applying arithmetic operations across two Series objects, look at the following two Series as examples:
End of explanation
"""
nda = np.array([1,2,3,4,5])
nda.mean()
# mean of numpy array values with a NaN
nda = np.array([1,2,3,4,np.NaN])
nda.mean()
# Series object ignores NaN values - does not get factored
s = pd.Series(nda)
s.mean()
# handle NaN values like Numpy
s.mean(skipna=False)
"""
Explanation: The reason for the above result is that, during alignment, pandas actually performs a cartesian product of the sets of all the unique index labels in both Series objects, and then applies the specified operation on all items in the product.
To explain why there are four 'a' index labels: s10 contains two 'a' labels and s11 also contains two 'a' labels. Every combination of the 'a' labels in each will be calculated, resulting in four 'a' labels. There is one 'b' label from s10 and one 'c' label from s11. Since there is no matching label for either in the other Series object, they each result in only a single row in the resulting Series object.
Each combination of values for 'a' in both Series is computed, resulting in the four values: 1+4, 1+5, 2+4 and 2+5.
So remember that an index can have duplicate labels, and during alignment this will result in a number of index labels equal to the product of the numbers of matching labels in each Series.
The special case of Not-A-Number (NaN)
pandas mathematical operators and functions handle NaN in a special manner (compared to a NumPy ndarray) that does not break the computations. pandas is lenient with missing data, assuming that it is a common situation.
End of explanation
"""
# which rows have values that are > 5
s = pd.Series(np.arange(0,10))
s > 5
# select rows where values are > 5
# overloading the Series object [] operator
logicalResults = s > 5
s[logicalResults]
# a little shorter version
s[s > 5]
# using & operator
s[(s>5)&(s<9)]
# using | operator
s[(s > 3) | (s < 5)]
# are all items >= 0?
(s >=0).all()
# are any items < 2
s[s < 2].any()
"""
Explanation: Boolean selection
Items in a Series can be selected, based on the value instead of index labels, via the utilization of a Boolean selection.
End of explanation
"""
(s < 2).sum()
"""
Explanation: The result of these logical expressions is a Boolean selection, a Series of True and False values. The .sum() method of a Series, when given a series of Boolean values, will treat True as 1 and False as 0. The following demonstrates using this to determine the number of items in a Series that satisfy a given expression:
End of explanation
"""
# sample series of five items
s = pd.Series(np.random.randn(5))
s
# change the index
s.index = ['a','b','c','d','e']
s
# concat copies index values verbatim
# potentially making duplicates
np.random.seed(123456)
s1 = pd.Series(np.random.randn(3))
s2 = pd.Series(np.random.randn(3))
combined = pd.concat([s1,s2])
combined
# reset the index
combined.index = np.arange(0,len(combined))
combined
"""
Explanation: Reindexing a Series
Reindexing in pandas is a process that makes the data in a Series or DataFrame match a given set of labels.
This process of performing a reindex includes the following steps:
1. Reordering existing data to match a set of labels.
2. Inserting NaN markers where no data exists for a label.
3. Possibly, filling missing data for a label using some type of logic
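These steps can be seen in a minimal sketch (the values are illustrative only):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
# reorder to ['c', 'a'] and insert NaN for the unknown label 'z'
r = s.reindex(['c', 'a', 'z'])
print(r.isnull().tolist())  # → [False, False, True]
```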
End of explanation
"""
np.random.seed(123456)
s1 = pd.Series(np.random.randn(4),['a','b','c','d'])
# reindex with different number of labels
# results in dropped rows and/or NaN's
s2 = s1.reindex(['a','c','g'])
s2
"""
Explanation: Greater flexibility in creating a new index is provided using the .reindex() method. An example of the flexibility of .reindex() over assigning the .index property directly is that the list provided to .reindex() can be of a different length than the number of rows in the Series:
End of explanation
"""
# s2 is a different series than s1
s2['a'] = 0
s2
# this did not modify s1
s1
"""
Explanation: There are several things here that are important to point out about the .reindex() method.
First is that the result of the .reindex() method is a new Series. This new Series has an index with the labels provided as the parameter to .reindex().
For each item in the given parameter list, if the original Series contains that label, then the value is assigned to that label.
If that label does not exist in the original Series, pandas assigns a NaN value.
Rows in the Series without a label specified in the parameter of .reindex() are not included in the result.
To demonstrate that the result of .reindex() is a new Series object, changing a value in s2 does not change the values in s1:
End of explanation
"""
# different types for the same values of labels causes big issue
s1 = pd.Series([0,1,2],index=[0,1,2])
s2 = pd.Series([3,4,5],index=['0','1','2'])
s1 + s2
"""
Explanation: Reindex is also useful when you want to align two Series to perform an operation on matching elements from each series; however, for some reason, the two Series have index labels that will not initially align.
End of explanation
"""
# reindex by casting the label types and we will get the desired result
s2.index = s2.index.values.astype(int)
s1 + s2
"""
Explanation: The reasons why this happens in pandas are as follows:
pandas first tries to align by the indexes and finds no matches, so it copies the index labels from the first Series and tries to append the indexes from the second Series.
However, since they are of different types, it defaults back to a zero-based integer sequence, which results in duplicate values.
Finally, all values are NaN because the operation tries to add the item in the first Series with the integer label 0, which has a value of 0, but cannot find the item in the other Series; therefore the result is NaN.
End of explanation
"""
# fill with 0 instead of NaN
s2 = s.copy()
s2.reindex(['a','f'],fill_value=0)
"""
Explanation: The default action of inserting NaN as a missing value during reindexing can be changed by using the fill_value parameter of the method.
End of explanation
"""
# create example to demonstrate fills
s3 = pd.Series(['red','green','blue'],index=[0,3,5])
s3
# forward fill using ffill method
s3.reindex(np.arange(0,7), method='ffill')
# backward fill using bfill method
s3.reindex(np.arange(0,7),method='bfill')
"""
Explanation: When performing a reindex on ordered data such as a time series, it is possible to perform interpolation or filling of values. The following example demonstrates forward filling, often referred to as "last known value".
End of explanation
"""
np.random.seed(123456)
s = pd.Series(np.random.randn(3),index=['a','b','c'])
s
# change a value in the Series
# this is done in-place
# a new Series is not returned that has a modified value
s['d'] = 100
s
# value at a specific index label can be changed by assignment:
s['d'] = -100
s
"""
Explanation: Modifying a Series in-place
There are several ways that an existing Series can be modified in-place having either its values changed or having rows added or deleted.
A new item can be added to a Series by assigning a value to an index label that does not already exist.
End of explanation
"""
del(s['a'])
s
"""
Explanation: Items can be removed from a Series using the del() function and passing the index label(s) to be removed.
End of explanation
"""
# a series to use for slicing
# using index labels not starting at 0 to demonstrate
# position based slicing
s = pd.Series(np.arange(100,110),index=np.arange(10,20))
s
# items at position 0,2,4
s[0:6:2]
# equivalent to
s.iloc[[0,2,4]]
# first five by slicing, same as .head(5)
s[:5]
# fourth position to the end
s[4:]
# every other item in the first five positions
s[:5:2]
# every other item starting at the fourth position
s[4::2]
# reverse the series
s[::-1]
# every other starting at position 4, in reverse
s[4::-2]
# :-2 means positions 0 through 7, i.e. all but the last two items
s[:-2]
# last 3 items
# equivalent to tail(3)
s[-3:]
# equivalent to s.tail(4).head(3)
s[-4:-1]
"""
Explanation: Slicing a Series
End of explanation
"""
# preserve s
# slice with first 2 rows
copy = s.copy()
slice = copy[:2]
slice
"""
Explanation: An important thing to keep in mind when using slicing is that the result of the slice is actually a view into the original Series. Modification of values through the result of the slice will modify the original Series.
End of explanation
"""
slice[11] = 1000
copy
"""
Explanation: Now the assignment of a value to an element of a slice will change the value in the original Series:
End of explanation
"""
# used to demonstrate the next two slices
s = pd.Series(np.arange(0,5),index=['a','b','c','d','e'])
s
# slicing with integer values will extract items based on position:
s[1:3]
# with a non-integer index, it is also possible to slice with values of the same type as the index:
s['b':'d']
"""
Explanation: Slicing can be performed on Series objects with a non-integer index.
End of explanation
"""
|
ARM-software/lisa | ipynb/deprecated/examples/trace_analysis/TraceAnalysis_FunctionsProfiling.ipynb | apache-2.0 | import logging
from conf import LisaLogging
LisaLogging.setup()
"""
Explanation: Trace Analysis Examples
Kernel Functions Profiling
Details on functions profiling are given in Plot Functions Profiling Data below.
End of explanation
"""
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
from executor import Executor
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Support for trace events analysis
from trace import Trace
"""
Explanation: Import required modules
End of explanation
"""
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_FunctionsProfiling",
# Define devlib modules to load
"modules": ['cpufreq'],
"exclude_modules" : [ 'hwmon' ],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"functions" : [
"pick_next_task_fair",
"select_task_rq_fair",
"enqueue_task_fair",
"update_curr_fair",
"dequeue_task_fair",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
"""
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
"""
def experiment(te):
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
"""
Explanation: Workload Execution and Functions Profiling Data Collection
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
End of explanation
"""
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace = Trace(res_dir, platform=platform)
"""
Explanation: Parse Trace and Profiling Data
End of explanation
"""
# Get the DataFrame for the specified list of kernel functions
df = trace.data_frame.functions_stats(['enqueue_task_fair', 'dequeue_task_fair'])
df
# Get the DataFrame for the single specified kernel function
df = trace.data_frame.functions_stats('select_task_rq_fair')
df
"""
Explanation: Report Functions Profiling Data
End of explanation
"""
# Plot Average and Total execution time for the specified
# list of kernel functions
trace.analysis.functions.plotProfilingStats(
functions = [
'select_task_rq_fair',
'enqueue_task_fair',
'dequeue_task_fair'
],
metrics = [
# Average completion time per CPU
'avg',
# Total execution time per CPU
'time',
]
)
# Plot Average execution time for the single specified kernel function
trace.analysis.functions.plotProfilingStats(
functions = 'update_curr_fair',
)
"""
Explanation: Plot Functions Profiling Data
The only method of the FunctionsAnalysis class used for functions profiling is plotProfilingStats. This method plots functions profiling metrics for the specified kernel functions. For each specified metric, a barplot is generated which reports the value of the metric when the kernel function has been executed on each CPU.
The default metric is avg if not otherwise specified. A list of kernel functions to plot can also be passed to plotProfilingStats; otherwise, by default, all the kernel functions are plotted.
End of explanation
"""
|
jseabold/statsmodels | examples/notebooks/recursive_ls.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
np.set_printoptions(suppress=True)
"""
Explanation: Recursive least squares
Recursive least squares is an expanding window version of ordinary least squares. In addition to the availability of recursively computed regression coefficients, the recursively computed residuals allow the construction of statistics to investigate parameter instability.
The RecursiveLS class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
Finally, the RecursiveLS model allows imposing linear restrictions on the parameter vectors, and can be constructed using the formula interface.
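As a rough illustrative sketch (synthetic residuals; this is not the statsmodels implementation), the CUSUM statistic is essentially a cumulative sum of standardized recursive residuals:

```python
import numpy as np

rng = np.random.RandomState(42)
recursive_resid = rng.randn(100)            # stand-in for recursive residuals
sigma = recursive_resid.std(ddof=1)
cusum = np.cumsum(recursive_resid) / sigma  # path of the CUSUM statistic
print(cusum.shape)  # → (100,)
```

Under stable parameters this path should wander near zero and stay inside the significance bands.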
End of explanation
"""
print(sm.datasets.copper.DESCRLONG)
dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range('1951-01-01', '1975-01-01', freq='AS')
endog = dta['WORLDCONSUMPTION']
# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])
"""
Explanation: Example 1: Copper
We first consider parameter stability in the copper dataset (description below).
End of explanation
"""
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
"""
Explanation: First, construct and fit the model, and print a summary. Although the RLS model computes the regression parameters recursively, so that there are as many estimates as there are datapoints, the summary table only presents the regression parameters estimated on the entire sample; except for small effects from the initialization of the recursions, these estimates are equivalent to OLS estimates.
End of explanation
"""
print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10,6));
"""
Explanation: The recursive coefficients are available in the recursive_coefficients attribute. Alternatively, plots can be generated using the plot_recursive_coefficient method.
End of explanation
"""
print(res.cusum)
fig = res.plot_cusum();
"""
Explanation: The CUSUM statistic is available in the cusum attribute, but usually it is more convenient to visually check for parameter stability using the plot_cusum method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""
res.plot_cusum_squares();
"""
Explanation: Another related statistic is the CUSUM of squares. It is available in the cusum_squares attribute, but it is similarly more convenient to check it visually, using the plot_cusum_squares method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""
start = '1959-12-01'
end = '2015-01-01'
m2 = DataReader('M2SL', 'fred', start=start, end=end)
cpi = DataReader('CPIAUCSL', 'fred', start=start, end=end)
def ewma(series, beta, n_window):
nobs = len(series)
scalar = (1 - beta) / (1 + beta)
ma = []
k = np.arange(n_window, 0, -1)
weights = np.r_[beta**k, 1, beta**k[::-1]]
for t in range(n_window, nobs - n_window):
window = series.iloc[t - n_window:t + n_window+1].values
ma.append(scalar * np.sum(weights * window))
return pd.Series(ma, name=series.name, index=series.iloc[n_window:-n_window].index)
m2_ewma = ewma(np.log(m2['M2SL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)
cpi_ewma = ewma(np.log(cpi['CPIAUCSL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)
"""
Explanation: Example 2: Quantity theory of money
The quantity theory of money suggests that "a given change in the rate of change in the quantity of money induces ... an equal change in the rate of price inflation" (Lucas, 1980). Following Lucas, we examine the relationship between double-sided exponentially weighted moving averages of money growth and CPI inflation. Although Lucas found the relationship between these variables to be stable, more recently it appears that the relationship is unstable; see e.g. Sargent and Surico (2010).
End of explanation
"""
fig, ax = plt.subplots(figsize=(13,3))
ax.plot(m2_ewma, label='M2 Growth (EWMA)')
ax.plot(cpi_ewma, label='CPI Inflation (EWMA)')
ax.legend();
endog = cpi_ewma
exog = sm.add_constant(m2_ewma)
exog.columns = ['const', 'M2']
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
res.plot_recursive_coefficient(1, alpha=None);
"""
Explanation: After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for part of the sample, after 1990 they appear to diverge.
End of explanation
"""
res.plot_cusum();
"""
Explanation: The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""
res.plot_cusum_squares();
"""
Explanation: Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""
endog = dta['WORLDCONSUMPTION']
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])
mod = sm.RecursiveLS(endog, exog, constraints='COPPERPRICE = ALUMPRICE')
res = mod.fit()
print(res.summary())
"""
Explanation: Example 3: Linear restrictions and formulas
Linear restrictions
It is not hard to implement linear restrictions, using the constraints parameter in constructing the model.
End of explanation
"""
mod = sm.RecursiveLS.from_formula(
'WORLDCONSUMPTION ~ COPPERPRICE + INCOMEINDEX + ALUMPRICE + INVENTORYINDEX', dta,
constraints='COPPERPRICE = ALUMPRICE')
res = mod.fit()
print(res.summary())
"""
Explanation: Formula
One could fit the same model using the class method from_formula.
End of explanation
"""
|
ahwillia/RecNetLearn | tutorials/FORCE_Learning.ipynb | mit | from __future__ import division
from scipy.integrate import odeint,ode
from numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,cos,pi,hstack
from numpy.random import uniform,normal,choice
import pylab as plt
import numpy as np
%matplotlib inline
"""
Explanation: FORCE Learning Tutorial
Exercises by: Larry Abbott<br>
MIT tutorial organized by: Emily Mackevicius<br>
Notebook by: Alex Williams<br>
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br>This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
Overview
This notebook provides solutions to Larry Abbott's exercises on recurrent neural networks, as part of the MIT computational neuroscience tutorial series. Click here to see the video of Larry's lecture. In contrast to Larry's notes, I've written all the equations below in matrix form. Comparing across these notations might be helpful. Any errors below are undoubtedly my own, so please bring them to my attention.
Relevant Papers
Original FORCE learning paper
Sussillo D, Abbott LF (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron. 63(4):544-57
Application of recurrent neural network learning to model dynamics in primate visual cortex
Mante V, Sussillo D, Shenoy KV, Newsome WT (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 503(7474):78-84.
How recurrent neural networks respond to sinusoidal input (exercise 2)
Rajan K, Abbott LF, Sompolinsky H (2010). Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks. Phys. Rev. E 82:011903.
End of explanation
"""
def f1(x,t0):
return -x + g*dot(J,tanh(x))
N = 1000
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-0.5,0.5,N)
t = linspace(0,50,500)
plt.figure(figsize=(10,5))
for s,g in enumerate(linspace(0.5,1.5,3)):
plt.subplot(1,3,s+1)
x = odeint(f1,x0,t)
plt.plot(t,x[:,choice(N,10)])
plt.title('g = '+str(g),fontweight='bold')
plt.show()
"""
Explanation: Exercise 1
Simulate the model:
$$\frac{d\mathbf{x}}{dt} = -\mathbf{x} + g J \tanh{[\mathbf{x}]} $$
with $\mathbf{x} \in \mathcal{R}^N$ (vector), $J \in \mathcal{R}^{N \times N}$ (matrix), $g \in \mathcal{R}$ (scalar). Randomly draw each element of $J$ from a Gaussian distribution with zero mean and variance $1/N$. Characterize the output of the system for increasing values of $g$.
End of explanation
"""
def f2(x,t0):
return -x + g*dot(J,tanh(x)) + u*A*cos(o*t0)
plt.figure(figsize=(10,15))
g = 1.5
u = uniform(-1,1,N)
s = 1 # subplot index
for o in [1,2,4]:
for A in [0.5,2,5]:
plt.subplot(3,3,s)
x = odeint(f2,x0,t)
plt.plot(t,x[:,choice(N,3)])
plt.title('A = '+str(A)+', omega = '+str(o),fontweight='bold')
s += 1
plt.show()
"""
Explanation: Exercise 2
Simulate the model with $g=1.5$ and a sinusoidal input:
$$\frac{d\mathbf{x}}{dt} = -\mathbf{x} + g J \tanh{[\mathbf{x}]} + A \cos (\omega t) \mathbf{u} $$
with $\mathbf{u} \in \mathcal{R}^N$ and each $u_i \in [-1,1]$. Vary the scalar parameters $A$ and $\omega$.
End of explanation
"""
target = lambda t0: cos(2*pi*t0/50) # target pattern
def f3(t0,x):
return -x + g*dot(J,tanh_x) + dot(w,tanh_x)*u
dt = 1 # time step
tmax = 800 # simulation length
w = uniform(-1/sqrt(N),1/sqrt(N),N) # initial weights
P = eye(N) # Running estimate of the inverse correlation matrix
lr = 1.0 # learning rate
# simulation data: state, output, time, weight updates
x,z,t,wu = [x0],[],[0],[0]
# Set up ode solver
solver = ode(f3)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < tmax:
tanh_x = tanh(x[-1]) # cache
z.append(dot(w,tanh_x))
error = target(t[-1]) - z[-1]
q = dot(P,tanh_x)
c = lr / (1 + dot(q,tanh_x))
P = P - c*outer(q,q)
w = w + c*error*q
if t[-1]>300: lr = 0
wu.append(np.sum(np.abs(c*error*q)))
solver.integrate(solver.t+dt)
x.append(solver.y)
t.append(solver.t)
# last update for readout neuron
z.append(dot(w,tanh_x))
x = np.array(x)
t = np.array(t)
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
plt.plot(t,target(t),'-r',lw=2)
plt.plot(t,z,'-b')
plt.legend(('target','output'))
plt.ylim([-1.1,3])
plt.xticks([])
plt.subplot(2,1,2)
plt.plot(t,wu,'-k')
plt.yscale('log')
plt.ylabel('$|\Delta w|$',fontsize=20)
plt.xlabel('time',fontweight='bold',fontsize=16)
plt.show()
"""
Explanation: Exercise 3
Model an output or readout neuron for the network as:
$$z = \mathbf{w}^T \tanh[\mathbf{x}]$$
The output $z$ is a scalar formed by the dot product of two N-dimensional vectors ($\mathbf{w}^T$ denotes the transpose of $\mathbf{w}$). We will implement the FORCE learning rule (Sussillo & Abbott, 2009) by adjusting the readout weights, $w_i$, so that $z$ matches a target function:
$$f(t) = \cos\left(\frac{2 \pi t}{50} \right)$$
The rule works by implementing recursive least-squares:
$$\mathbf{w} \rightarrow \mathbf{w} + c(f-z) \mathbf{q}$$
$$\mathbf{q} = P \tanh [\mathbf{x}]$$
$$c = \frac{1}{1+ \mathbf{q}^T \tanh(\mathbf{x})}$$
$$P_{ij} \rightarrow P_{ij} - c q_i q_j$$
End of explanation
"""
|
xgcm/xmitgcm | doc/demo_read_input_grid.ipynb | mit | #We're going to download a sample grid from figshare
!wget https://ndownloader.figshare.com/files/14072594
!tar -xf 14072594
import xmitgcm
# We generate the extra metadata needed for multi-faceted grids
llc90_extra_metadata = xmitgcm.utils.get_extra_metadata(domain='llc', nx=90)
# Then we read the grid from the input files
grid = xmitgcm.utils.get_grid_from_input('./grid_llc90/tile<NFACET>.mitgrid',
geometry='llc',
extra_metadata=llc90_extra_metadata)
"""
Explanation: Use case: reading the full grid from MITgcm input files
In some configurations (llc, cube-sphere, ...) the model's grid is provided in a set of input binary files rather than being defined in the namelist. In such configurations, the elimination of land processors results in blank areas in the XC, YC, ... fields output by the model. In some cases (regridding, plotting, ...), it can be useful to retrieve the original fields directly. To do so, we can use the xmitgcm.utils.get_grid_from_input function.
Example 1: LLC90
End of explanation
"""
grid
"""
Explanation: grid is a xarray dataset that contains lat/lon (XC, YC, XG, YG) and the grid's scale factors:
End of explanation
"""
%matplotlib inline
grid['XC'].sel(face=4).plot()
"""
Explanation: The resulting dataset is then ready for use in the desired application:
End of explanation
"""
!wget https://ndownloader.figshare.com/files/14072591
!tar -xf 14072591
# We generate the extra metadata needed for multi-faceted grids
aste_extra_metadata = xmitgcm.utils.get_extra_metadata(domain='aste', nx=270)
# Then we read the grid from the input files
grid_aste = xmitgcm.utils.get_grid_from_input('./grid_aste270/tile<NFACET>.mitgrid',
geometry='llc',
extra_metadata=aste_extra_metadata)
grid_aste
grid_aste['YC'].sel(face=2).plot()
"""
Explanation: Example 2: ASTE
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_make_inverse_operator.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
write_inverse_operator)
print(__doc__)
data_path = sample.data_path()
fname_fwd_meeg = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_fwd_eeg = data_path + '/MEG/sample/sample_audvis-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
forward_meeg = mne.read_forward_solution(fname_fwd_meeg, surf_ori=True)
noise_cov = mne.read_cov(fname_cov)
# Restrict forward solution as necessary for MEG
forward_meg = mne.pick_types_forward(forward_meeg, meg=True, eeg=False)
# Alternatively, you can just load a forward solution that is restricted
forward_eeg = mne.read_forward_solution(fname_fwd_eeg, surf_ori=True)
# make an M/EEG, MEG-only, and EEG-only inverse operators
info = evoked.info
inverse_operator_meeg = make_inverse_operator(info, forward_meeg, noise_cov,
loose=0.2, depth=0.8)
inverse_operator_meg = make_inverse_operator(info, forward_meg, noise_cov,
loose=0.2, depth=0.8)
inverse_operator_eeg = make_inverse_operator(info, forward_eeg, noise_cov,
loose=0.2, depth=0.8)
write_inverse_operator('sample_audvis-meeg-oct-6-inv.fif',
inverse_operator_meeg)
write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
inverse_operator_meg)
write_inverse_operator('sample_audvis-eeg-oct-6-inv.fif',
inverse_operator_eeg)
# Compute inverse solution
stcs = dict()
stcs['meeg'] = apply_inverse(evoked, inverse_operator_meeg, lambda2, "dSPM",
pick_ori=None)
stcs['meg'] = apply_inverse(evoked, inverse_operator_meg, lambda2, "dSPM",
pick_ori=None)
stcs['eeg'] = apply_inverse(evoked, inverse_operator_eeg, lambda2, "dSPM",
pick_ori=None)
# Save result in stc files
names = ['meeg', 'meg', 'eeg']
for name in names:
stcs[name].save('mne_dSPM_inverse-%s' % name)
"""
Explanation: Assemble inverse operator and compute MNE-dSPM inverse solution
Assemble M/EEG, MEG, and EEG inverse operators and compute dSPM
inverse solution on MNE evoked dataset and stores the solution
in stc files for visualisation.
End of explanation
"""
plt.close('all')
plt.figure(figsize=(8, 6))
for ii in range(len(stcs)):
name = names[ii]
stc = stcs[name]
plt.subplot(len(stcs), 1, ii + 1)
plt.plot(1e3 * stc.times, stc.data[::150, :].T)
plt.ylabel('%s\ndSPM value' % str.upper(name))
plt.xlabel('time (ms)')
plt.show()
"""
Explanation: View activation time-series
End of explanation
"""
|
pligor/predicting-future-product-prices | 04_time_series_prediction/07_price_history_varlen_rnn_cells.ipynb | agpl-3.0 | from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from IPython.display import Image
from IPython.core.display import HTML
from mylibs.tf_helper import getDefaultGPUconfig
from data_providers.binary_shifter_varlen_data_provider import \
BinaryShifterVarLenDataProvider
from data_providers.price_history_varlen_data_provider import PriceHistoryVarLenDataProvider
from models.model_05_price_history_rnn_varlen import PriceHistoryRnnVarlen
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from cost_functions.huber_loss import huber_loss
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
from common import get_or_run_nn
"""
Explanation: https://r2rt.com/recurrent-neural-networks-in-tensorflow-iii-variable-length-sequences.html
End of explanation
"""
num_epochs = 10
series_max_len = 60
num_features = 1 #just one here, the function we are predicting is one-dimensional
state_size = 400
target_len = 30
batch_size = 47
"""
Explanation: Step 0 - hyperparams
End of explanation
"""
csv_in = '../price_history_03a_fixed_width.csv'
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
# XX, YY, sequence_lens, seq_mask = PriceHistoryVarLenDataProvider.createAndSaveDataset(
# csv_in=csv_in,
# npz_out=npz_path,
# input_seq_len=60, target_seq_len=30)
# XX.shape, YY.shape, sequence_lens.shape, seq_mask.shape
dp = PriceHistoryVarLenDataProvider(filteringSeqLens = lambda xx : xx >= target_len,
npz_path=npz_path)
dp.inputs.shape, dp.targets.shape, dp.sequence_lengths.shape, dp.sequence_masks.shape
"""
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
"""
model = PriceHistoryRnnVarlen(rng=random_state, dtype=dtype, config=config)
graph = model.getGraph(batch_size=batch_size, state_size=state_size,
rnn_cell= PriceHistoryRnnVarlen.RNN_CELLS.GRU,
target_len=target_len, series_max_len=series_max_len)
show_graph(graph)
"""
Explanation: Step 2 - Build model
End of explanation
"""
rnn_cell = PriceHistoryRnnVarlen.RNN_CELLS.GRU
num_epochs, state_size, batch_size
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
rnn_cell=rnn_cell,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
from os.path import isdir
data_folder = '../../../../Dropbox/data'
assert isdir(data_folder)
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='002_rnn_gru_60to30', nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
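The huber_loss used above is imported from this repository's cost_functions module; as a point of reference, a standard NumPy version of the Huber loss (delta = 1 here, which may differ from the repository's choice) looks like:

```python
import numpy as np

def huber_loss_np(y_true, y_pred, delta=1.0):
    """Quadratic near zero, linear in the tails -- less sensitive
    to outliers than plain squared error."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    quadratic = 0.5 * err**2
    linear = delta * (err - 0.5 * delta)
    return np.where(err <= delta, quadratic, linear)

print(huber_loss_np([0.0, 0.0], [0.5, 3.0]))  # [0.125 2.5]
```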
"""
Explanation: Step 3 training the network
GRU cell
End of explanation
"""
rnn_cell = PriceHistoryRnnVarlen.RNN_CELLS.GRU
num_epochs = 50
state_size, batch_size
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
rnn_cell=rnn_cell,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='002_rnn_gru_60to30_50epochs',
nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
"""
Explanation: Conclusion
The GRU has performed much better than the basic RNN
GRU cell - 50 epochs
End of explanation
"""
|
ThunderShiviah/code_guild | interactive-coding-challenges/sorting_searching/selection_sort/selection_sort_challenge.ipynb | mit | def selection_sort(data, start=0):
# TODO: Implement me (recursive)
pass
def selection_sort_iterative(data):
# TODO: Implement me (iterative)
pass
"""
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement selection sort.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Is a naiive solution sufficient (ie not stable, not based on a heap)?
Yes
Test Cases
Empty input -> []
One element -> [element]
Two or more elements
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
"""
# %load test_selection_sort.py
from nose.tools import assert_equal
class TestSelectionSort(object):
def test_selection_sort(self, func):
print('Empty input')
data = []
func(data)
assert_equal(data, [])
print('One element')
data = [5]
func(data)
assert_equal(data, [5])
print('Two or more elements')
data = [5, 1, 7, 2, 6, -3, 5, 7, -1]
func(data)
assert_equal(data, sorted(data))
print('Success: test_selection_sort\n')
def main():
test = TestSelectionSort()
test.test_selection_sort(selection_sort)
try:
test.test_selection_sort(selection_sort_iterative)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|