karolaya/PDI | PS-05/.ipynb_checkpoints/problem_set_5-checkpoint.ipynb | mit

'''This is a definition script, so we do not have to rewrite code'''
import numpy as np
import os
import cv2
import matplotlib.pyplot as mplt
import random
import json
# set matplotlib to print inline (Jupyter)
%matplotlib inline
# path prefix
pth = '../data/'
# files to be used as samples
# list *files* holds the names of the test images
files = sorted(os.listdir(pth))
print(files)
# Useful function
def rg(img_path):
    return cv2.imread(pth+img_path, cv2.IMREAD_GRAYSCALE)
"""
Explanation: <center>Digital Image Processing - Problem Set 5</center>
Student Names:
Karolay Ardila Salazar
Julián Sibaja García
Andrés Simancas Mateus
Definitions
End of explanation
"""
image = rg(files[-11])
def huMoments(image):
    ret, threshold = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
    # OpenCV 3 API; in OpenCV 4, findContours returns (contours, hierarchy)
    _, contours, _ = cv2.findContours(threshold, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    print("Number of contours detected = " + str(len(contours)))
    for i in range(len(contours)):
        Hu = cv2.HuMoments(cv2.moments(contours[i])).flatten()
        print(str(i) + ' = ' + str(Hu))
    mplt.figure()
    mplt.imshow(image, cmap='gray')
    mplt.title(files[-11])
huMoments(image)
"""
Explanation: Problem 1
Write a function that describes <i>each</i> object in a binary image using the Hu statistical moments. The Hu moments are invariant to rotation, scale and translation. These moments can be defined for <i>each</i> region in a binary image. The OpenCV function to compute these moments is <tt>cv2.HuMoments</tt>. Write down the equations that compute the seven Hu moments for a region.
Analysis
The goal of the following function is to find the Hu moments that describe each object in the image <tt>shapes.png</tt>. First we store the image matrix in a variable. Then we define our function, which takes an image as a parameter. We apply a threshold to the image and use the result to find the objects with the function <tt>cv2.findContours</tt>. Once we have the contour values, we can report how many were found from the length of the list and confirm that it matches the number of objects in the image. Then, for each of these objects, we apply <tt>cv2.HuMoments</tt> and print the results.
Seven Hu moments
$ I_1 = n_{20} + n_{02} $ <br>
$ I_2 = (n_{20} - n_{02})^2 + 4n_{11}^2 $ <br>
$ I_3 = (n_{30} - 3n_{12})^2 + (3n_{21} - n_{03})^2 $ <br>
$ I_4 = (n_{30} + n_{12})^2 + (n_{21} + n_{03})^2 $ <br>
$ I_5 = (n_{30} - 3n_{12})(n_{30} + n_{12})[(n_{30} + n_{12})^2 - 3(n_{21} + n_{03})^2] + (3n_{21} - n_{03})(n_{21} + n_{03})(3(n_{30} + n_{12})^2 - (n_{21} + n_{03})^2) $ <br>
$ I_6 = (n_{20} - n_{02})[(n_{30} + n_{12})^2 - (n_{21} + n_{03})^2] + 4n_{11}(n_{30} + n_{12})(n_{21} + n_{03}) $ <br>
$ I_7 = (3n_{21} - n_{03})(n_{30} + n_{12})[(n_{30} + n_{12})^2 - 3(n_{21} + n_{03})^2] - (n_{30} - 3n_{12})(n_{21} + n_{03})(3(n_{30} + n_{12})^2 - (n_{21} + n_{03})^2) $ <br>
End of explanation
"""
img = rg(files[-11])
img_2 = rg(files[18])
def cornerDetection(img):
    corners = cv2.goodFeaturesToTrack(img, 100, 0.01, 10)
    corners = np.int0(corners)
    for k in corners:
        x, y = k.ravel()
        cv2.circle(img, (x, y), 3, 255, -1)
        print(x, y)
    mplt.figure()
    mplt.imshow(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR))
cornerDetection(img)
cornerDetection(img_2)
"""
Explanation: Problem 2
Write a function that detects corners on an image using the Harris corner detection method. You can use the OpenCV built-in functions. Your function should output the $N$ detected corner locations in a $2 \times N$ matrix. Visualize your results by plotting the corners on top of the input image. Apply your function to the binary image <tt> shapes.png</tt> and to the grayscale image <tt>face.tif</tt>.
Analysis
First we load the two images of interest. Then we create the function cornerDetection, which takes an image as an argument. We store the corners found by the function <tt>cv2.goodFeaturesToTrack</tt> in a variable and then iterate over them, using <tt>.ravel</tt> to get the (x, y) positions, where we draw them as circles on the image. Finally we display the resulting image.
End of explanation
"""
# Images to test
MIN_DIST = 30
LEVEL_OFFSET_INV = 10
LEVEL_OFFSET = 90
imgs = [files[i] for i in [5, -3, -20]]
# Bad bottle detector
def badBottleDetector(img):
    h, w = img.shape
    # Smooth
    kernel = np.ones((3,3), np.float32)/9
    simg = cv2.filter2D(img, -1, kernel)
    simg_c = simg.copy()
    _, simg_ct = cv2.threshold(simg_c, np.mean(simg_c)+LEVEL_OFFSET, 1, cv2.THRESH_BINARY)
    _, simg_cn = cv2.threshold(simg_c, LEVEL_OFFSET_INV, 1, cv2.THRESH_BINARY)
    kernel = np.ones((5,5), np.uint8)
    simg_ct = cv2.erode(simg_ct, kernel, iterations=1)
    bottles = list()
    sizes = list()
    indices = list()
    centers = list()
    for i in range(h):
        ruler = np.zeros([1, w]) + simg_cn[i: i+1]
        ruler = ruler[0]
        seed = 0
        break_points = list()
        for j in range(len(ruler)):
            if (ruler[j] != seed) or (j == len(ruler)-1 and ruler[j] == 1):
                break_points.append(j)
                seed = int(not seed)
        dist = list()
        center = list()
        for j in range(0, len(break_points)-1, 2):
            if break_points[j+1] - break_points[j] > MIN_DIST:
                dist.append(break_points[j+1] - break_points[j])
                center.append(int((break_points[j+1] + break_points[j])/2))
        if dist:
            bottles.append(len(dist))
            sizes.append(np.max(dist))
            indices.append(i)
            centers.append(center)
    counts = np.bincount(bottles)
    num_bottles = np.argmax(counts)
    base_size = int(np.mean(sizes))
    centers = [c for c in centers if len(c) == num_bottles]
    center = list()
    for i in range(num_bottles):
        bt = [c[i] for c in centers]
        center.append(int(np.mean(bt)))
    index = 0
    for i in range(len(sizes)):
        if sizes[i] > base_size:
            index = i
            break
    liquid_min_limit = indices[index]
    # Check which bottle has air at index liquid_min_limit
    ruler = np.zeros([1, w]) + simg_ct[liquid_min_limit: liquid_min_limit+1]
    ruler = ruler[0]
    seed = 0
    break_points = list()
    for j in range(len(ruler)):
        if (ruler[j] != seed) or (j == len(ruler)-1 and ruler[j] == 1):
            break_points.append(j)
            seed = int(not seed)
    dist = list()
    centers = list()
    for j in range(0, len(break_points)-1, 2):
        if break_points[j+1] - break_points[j] > MIN_DIST:
            dist.append(break_points[j+1] - break_points[j])
            centers.append(int((break_points[j+1] + break_points[j])/2))
    final = [0]*len(center)
    for c in centers:
        ct = [np.abs(cc - c) for cc in center]
        final[np.argmin(ct)] = 1
    print('Final decision. Bottles not correct are marked with 1s: ')
    print(final)
    printer([img, simg_cn, simg_ct], ['Original image', 'inv', 'liquid'])
def printer(iss, des):
    # Printing
    f, ax = mplt.subplots(1, len(iss), figsize=(10,10))
    for i in range(len(iss)):
        ax[i].imshow(iss[i], cmap='gray')
        ax[i].set_title(des[i])
for i in imgs:
    badBottleDetector(rg(i))
"""
Explanation: Problem 3
A company that bottles a variety of industrial chemicals has heard
of your success solving imaging problems and hires you to design an approach
for detecting when bottles are not full. The bottles appear as shown below
as they move along a conveyor line past an automatic
filling and capping station. A bottle is considered imperfectly filled when the
level of the liquid is below the midway point between the bottom of the neck and
the shoulder of the bottle. The shoulder is defined as the region of the bottle
where the sides and slanted portion of the bottle intersect. The bottles are
moving, but the company has an imaging system equipped with an illumination
flash front end that effectively stops motion, so you will be given images that
look very close to the sample shown below.
<img src="../data/files/bottles.png" />
Propose a solution for detecting
bottles that are not filled properly. State clearly all assumptions that you
make and that are likely to impact the solution you propose. Implement your
solution and apply it to the images <tt>bottles.tif, new_bottles.jpg</tt> and <tt> three_bottles.jpg</tt>. Visualize the results
of your algorithm by highlighting with false colors
the regions that are detected as correctly
filled bottles and the regions that are detected as not properly filled bottles.
Comment
The idea is to find the bottles that are not filled properly. No especially complex OpenCV calls were required; in fact nothing beyond threshold, filter2D and erode. The idea can be summarized as follows:
Find the average width of the bottles; this average width is approximately the minimum possible liquid level of a correctly filled bottle,
Find the image height at which the average width first occurs (this will be the liquid level limit),
Find the number of bottles in the image,
Find the centers of these bottles,
At the minimum level, check which bottle still has air and index it.
To start, the image is smoothed with a 2D filter and thresholded to separate the bottles from the background; this result is stored. The next step is to find the number of bottles and their width; for this we create a ruler (a vector of zeros) that we add to each row of the image. The result of this operation is a vector with intervals of zeros and ones (ones are bottle regions); we measure the regions of ones and count how many there are. The mode of the region count is the number of bottles, and the mean of the maximum of each measurement is the width of the bottles. Because this measured width is somewhat smaller than the real width, it serves as the bottle width at which the minimum liquid height occurs.
Note that the region measurement above also computed the centers of the bottles (the center of each region) and estimated the minimum liquid height from the average width.
After this, the initial image was thresholded to obtain only the air regions (erosion was necessary to remove small elements). This image was evaluated at the row of minimum water height and the air regions at that height were extracted; the centers of these regions were compared against the centers of the bottles to determine which bottle each one belonged to, and in this way identify them.
End of explanation
"""
img_origin = rg('hubble-original.tif')
BIG_OBJECT_COUNT = 80
def printer(iss, des):
    # Printing
    f, ax = mplt.subplots(1, len(iss), figsize=(15,15))
    for i in range(len(iss)):
        ax[i].imshow(iss[i], cmap='gray')
        ax[i].set_title(des[i])
def connectedLabeling(mat, i, j, h, w):
    # Recursively visit all 8-connected neighbors with intensity 1,
    # zeroing them out and returning the size of the connected set.
    # (The original elif chain followed only one neighbor per call,
    # which undercounted component sizes.)
    mat[i][j] = 0
    count = 1
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mat[ni][nj] == 1:
                count += connectedLabeling(mat, ni, nj, h, w)
    return count
def countObjects(img, avg_size, th):
    kernel = np.ones((avg_size, avg_size), np.float32)/(avg_size*avg_size)
    fimg = cv2.filter2D(img, -1, kernel)
    _, timg = cv2.threshold(fimg, 255*th, 1, cv2.THRESH_BINARY)
    masked = img * timg
    # Count the objects in the masked image
    timgc = timg.copy()
    h, w = timgc.shape
    for i in range(h):
        for j in range(w):
            if timgc[i][j] == 1:
                neighbors = connectedLabeling(timgc, i, j, h, w)
                if neighbors > BIG_OBJECT_COUNT:
                    # mark one pixel per large object so the sum counts them
                    timgc[i][j] = 1
    count = np.sum(timgc)
    print('Large objects found: ' + str(count))
    print('A large object is one whose connected set has more than ' + str(BIG_OBJECT_COUNT) + ' members')
    print('This parameter can be changed')
    printer([img, timg, timgc, masked], ['Original', 'Threshold', 'Counting', 'Masked'])
sizes = [1, 5, 15, 25]
ths = [0.5, 0.5, 0.25, 0.3]
for i in range(len(sizes)):
    countObjects(img_origin, sizes[i], ths[i])
"""
Explanation: Problem 4
Suppose that you are observing objects in the night sky. Suppose that only ‘big’ objects are important to your observation. In this scenario, ‘small’ objects are considered noise. Write a python function that processes the image as follows:
Use a 15x15 averaging filter to blur the image.
Apply a threshold of 0.25 to binarize the resulting blurred image.
Use the binary image to ‘mask’ the noise of the original image: simply perform an element-wise multiplication of the binary image and the original image.
Use connected component analysis on the binary image to count the number of ‘big’ objects found.
The function should take three inputs: an image matrix, the size of the averaging filter and threshold value. Make sure your function displays the intermediary results of each step outlined above.
Apply your function to the input image ‘hubble-original.tif’. Try different values of smoothing kernel size and threshold value. Analyze the relationship between number of objects found and smoothing kernel size and threshold value. In particular, you might want to observe the result when using an averaging filter of size n=1 (i.e. no smoothing).
Comments
The idea of the following program is to find large bodies in the images. For this, the procedures requested above are performed: the 15x15 filter, thresholding to create the mask, removal of noise from the original image, and connected-component analysis. These steps are simple and have already been implemented before, except for the connected-component analysis. To check the connectivity of a body, a seed point is chosen (a point with intensity 1); starting from this point the neighbors are analyzed recursively, that is, the function is called again on each neighbor with intensity 1. This yields the number of connected bodies in the image; however, there is no guarantee that these bodies are large. To ensure that, the number of members of each body is also counted, and if it exceeds a set threshold it is counted as a large body.
End of explanation
"""
def getPointsAndDescriptors(img, show_img=False):
    # Getting Keypoint structure object and Descriptor Array
    sift = cv2.SIFT()  # old OpenCV 2 API; newer versions use cv2.SIFT_create()
    kp, D = sift.detectAndCompute(img, None)
    # Getting the array of points (x, y, s)
    points = np.zeros((3, len(kp)))
    for i in range(len(kp)):
        points[0][i] = kp[i].pt[0]
        points[1][i] = kp[i].pt[1]
        points[2][i] = kp[i].size
    if show_img == True:
        img_s = cv2.drawKeypoints(img, kp, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
        mplt.imshow(img_s), mplt.xticks([]), mplt.yticks([]), mplt.figure()
    return points, D
# indices 25-45 are the car images
for i in range(25, 45):
    img_name = files[i]
    img = rg(img_name)
    points, D = getPointsAndDescriptors(img)
    f = open("data_image_"+str(i-24)+".json", "w")
    data = {"points": points.tolist(), "Descriptors": D.tolist()}
    json.dump(data, f, sort_keys=True, indent=4)
    f.close()
    print("Points and SIFT descriptors for image "+str(i-24)+" extracted")
print("Done!")
"""
Explanation: Problem 5
Write a function that extracts local interest points and computes
their descriptors using the SIFT transform. You can find implementations of
the SIFT transform in OpenCV.
Your function should return two matrices: A first matrix of size $3 \times N$, where $N$ is the number of detected points in the image, and the 3 elements correspond to the $x$, $y$ locations and $s$ size of the detected points. A second matrix of size $128 \times N$ that contains the SIFT descriptor of each interest point.
Apply your function to all car images <tt>image_00XX.jpg</tt>.
Store the results of each image in a separate data file.
Comments
The following function uses OpenCV's implementation of the SIFT transform; the function used was detectAndCompute. The function we wrote takes the array of a grayscale image and returns the (x, y) points and the scale at which they were found as a 3xN numpy matrix, along with the corresponding SIFT descriptors for each point in a second matrix. A boolean can also be passed as the second parameter to decide whether or not to display the image with the points drawn on it; by default (False) nothing is displayed. The data is saved to text files in JSON format.
End of explanation
"""
WomensCodingCircle/CodingCirclePython | Lesson10_Regexs/RegularExpressions.ipynb | mit

import re
# To run the examples we are going to use some of the logs from the
# django project, a web framework for python
django_logs = '''commit 722344ee59fb89ea2cd5b906d61b35f76579de4e
Author: Simon Charette <charette.s@gmail.com>
Date: Thu May 19 09:31:49 2016 -0400
Refs #24067 -- Fixed contenttypes rename tests failures on Oracle.
Broke the initial migration in two to work around #25530 and added
'django.contrib.auth' to the available_apps to make sure its tables are also
flushed as Oracle doesn't implement cascade deletion in sql_flush().
Thanks Tim for the report.
commit 9fed4ec418a4e391a3af8790137ab147efaf17c2
Author: Simon Charette <charette.s@gmail.com>
Date: Sat May 21 13:18:22 2016 -0400
Removed an obsolete comment about a fixed ticket.
commit 94486fb005e878d629595942679ba6d23401bc22
Author: Markus Holtermann <info@markusholtermann.eu>
Date: Sat May 21 13:20:40 2016 +0200
Revert "Disable patch coverage checks"
Mistakenly pushed to django/django instead of another repo
This reverts commit 6dde884c01156e36681aa51a5e0de4efa9575cfd.
commit 6dde884c01156e36681aa51a5e0de4efa9575cfd
Author: Markus Holtermann <info@markusholtermann.eu>
Date: Sat May 21 13:18:18 2016 +0200
Disable patch coverage checks
commit 46a38307c245ab7ed0b4d5d5ebbaf523a81e3b75
Author: Tim Graham <timograham@gmail.com>
Date: Fri May 20 10:50:51 2016 -0400
Removed versionadded/changed annotations for 1.9.
commit 1915a7e5c56d996b0e98decf8798c7f47ff04e76
Author: Tim Graham <timograham@gmail.com>
Date: Fri May 20 09:18:55 2016 -0400
Increased the default PBKDF2 iterations.
commit 97c3dfe12e095005dad9e6750ad5c5a54eee8721
Author: Tim Graham <timograham@gmail.com>
Date: Thu May 19 22:28:24 2016 -0400
Added stub 1.11 release notes.
commit 8df083a3ce21ca73ff77d3844a578f3da3ae78d7
Author: Tim Graham <timograham@gmail.com>
Date: Thu May 19 22:20:21 2016 -0400
Bumped version; master is now 1.11 pre-alpha.'''
"""
Explanation: Regexs
Up until now, to search in text we have used string methods find, startswith, endswith, etc. But sometimes you need more power.
Regular expressions are their own little language that allows you to search through text and find matches with incredibly complex patterns.
A regular expression, also referred to as "regex" or "regexp", provides a concise and flexible means for matching strings of text, such as particular characters, words, or patterns of characters.
To use regular expressions you need to import Python's regex library, re
https://docs.python.org/2/library/re.html
End of explanation
"""
print(re.match('a', 'abcde'))
print(re.match('c', 'abcde'))
print(re.search('a', 'abcde'))
print(re.search('c', 'abcde'))
print(re.match('version', django_logs))
print(re.search('version', django_logs))
if re.search('commit', django_logs):
    print("Someone has been doing work.")
"""
Explanation: Searching
The simplest thing you can do with regexs in python is search through text to see if there is a match. To do this you use the methods search or match. match only checks for a match at the beginning of the string, while search checks the whole string.
re.match(pattern, string)
re.search(pattern, string)
End of explanation
"""
# Start simple, match any character 2 times
print(re.search('..', django_logs))
# just to prove it works
print(re.search('..', 'aa'))
print(re.search('..', 'a'))
print(re.search('..', '^%'))
# to match a commit hash (numbers and letters a-f repeated) we can use a regex
commit_pattern = '[0-9a-f]+'
print(re.search(commit_pattern, django_logs))
# Let's match the time syntax
time_pattern = '\d\d:\d\d:\d\d'
time_pattern = '\d{2}:\d{2}:\d{2}'
print(re.search(time_pattern, django_logs))
"""
Explanation: TRY IT
Search for the word May in the django logs
Special Characters
So far we can't do anything that you couldn't do with find, but don't worry. Regexs have many special characters to allow you to look for things like the beginning of a word, whitespace, or classes of characters.
You include the character in the pattern.
^ Matches the beginning of a line
$ Matches the end of the line
. Matches any character
\s Matches whitespace
\S Matches any non-whitespace character
* Repeats a character zero or more times
*? Repeats a character zero or more times (non-greedy)
+ Repeats a character one or more times
+? Repeats a character one or more times (non-greedy)
? Repeats a character 0 or one time
[aeiou] Matches a single character in the listed set
[^XYZ] Matches a single character not in the listed set
[a-z0-9] The set of characters can include a range
{10} Specifics a match the preceding character(s) {num} number or times
\d Matches any digit
\b Matches a word boundary
Hint if you want to match the literal character (like $) as opposed to its special meaning, you would escape it with a \
End of explanation
"""
print(re.search('markus holtermann', django_logs))
print(re.search('markus holtermann', django_logs, re.IGNORECASE))
"""
Explanation: TRY IT
Match anything between angled brackets < >
Ignoring case
match and search both take an optional third argument that allows you to include flags. The most common flag is ignore case.
re.search(pattern, string, re.IGNORECASE)
re.match(pattern, string, re.IGNORECASE)
End of explanation
"""
# Let's match the time syntax
time_pattern = '\d\d:\d\d:\d\d'
m = re.search(time_pattern, django_logs)
print(m.group(0))
"""
Explanation: TRY IT
search for 'django' in 'Both Django and Flask are very useful python frameworks' ignoring case
Extracting Matches
Finding is only half the battle. You can also extract what you match.
To get the string that your regex matched you can store the match object in a variable and run the group method on that
m = re.search(pattern, string)
print(m.group(0))
End of explanation
"""
time_pattern = '\d\d:\d\d:\d\d'
print(re.findall(time_pattern, django_logs))
"""
Explanation: If you want to find all the matches, not just the first, you can use the findall method. It returns a list of all the matches
re.findall(pattern, string)
End of explanation
"""
time_pattern = '(\d\d):\d\d:\d\d'
hours = re.findall(time_pattern, django_logs)
print(sorted(hours))
# you can capture more than one match
time_pattern = '(\d\d):(\d\d):\d\d'
times = re.findall(time_pattern, django_logs)
print(times)
# Unpacking the tuple in the first line
for hours, mins in times:
    print("{} hr {} min".format(hours, mins))
"""
Explanation: If you want to have only part of the match returned to you in findall, you can use parenthesis to set a capture point
pattern = 'sads (part to capture) asdjklajsd'
print(re.findall(pattern, string))  # prints part to capture
End of explanation
"""
# Lets try some now
"""
Explanation: TRY IT
Capture the host of the email address (alphanumerics between @ and .com) Hint remember to escape the . in .com
Practice
There is a lot more that you can do, but it can feel overwhelming. The best way to learn is with practice. A great way to experiment is this website http://www.regexr.com/ You can put a section of text and see what regexs match patterns in your text. The site also has a cheatsheet for special characters.
End of explanation
"""
ES-DOC/esdoc-jupyterhub | notebooks/nims-kma/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0

# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but assume a distribution and compute fluxes accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
skdaccess/skdaccess | skdaccess/examples/Demo_SRTM.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
import numpy as np
from getpass import getpass
from skdaccess.geo.srtm.cache import DataFetcher as SDF
"""
Explanation: The MIT License (MIT)<br>
Copyright (c) 2017 Massachusetts Institute of Technology<br>
Author: Cody Rude<br>
This software has been created in projects supported by the US National<br>
Science Foundation and NASA (PI: Pankratius)<br>
Permission is hereby granted, free of charge, to any person obtaining a copy<br>
of this software and associated documentation files (the "Software"), to deal<br>
in the Software without restriction, including without limitation the rights<br>
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell<br>
copies of the Software, and to permit persons to whom the Software is<br>
furnished to do so, subject to the following conditions:<br>
The above copyright notice and this permission notice shall be included in<br>
all copies or substantial portions of the Software.<br>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR<br>
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,<br>
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE<br>
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER<br>
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,<br>
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN<br>
THE SOFTWARE.<br>
End of explanation
"""
username='Enter username'
password = getpass()
"""
Explanation: Supply Earth Data credentials
End of explanation
"""
sdf = SDF(lat_tile_start=37,lat_tile_end=37,lon_tile_start=-119,lon_tile_end=-119,
username=username,password=password)
sdw = sdf.output()
"""
Explanation: Create data fetcher for elevation data from Shuttle Radar Topography
End of explanation
"""
label, data = next(sdw.getIterator())
plt.imshow(data,cmap='terrain',vmin=-1300);
plt.colorbar()
plt.axis('off');
"""
Explanation: Access data
End of explanation
"""
|
cougarTech2228/Scouting-2016 | notebooks/robocop.ipynb | mit | # Object oriented approach, would have to feed csv data into objects
# maybe get rid of this and just use library analysis tools
class Robot(object):
def __init__(self, name, alliance, auto_points, points):
self.name = name
self.alliance = alliance
self.auto_points = auto_points
self.points = points
    def points_per_sec(self):
        # Use a float divisor so Python 2 does not truncate the result;
        # the teleoperated period is 150 seconds.
        return self.points / 150.0
    def auto_points_per_sec(self):
        # The autonomous period is 15 seconds.
        return self.auto_points / 15.0
def get_name(self):
return self.name
def get_alliance(self):
return self.alliance
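The per-second figures follow from the match structure: a 15-second autonomous period and a 150-second teleoperated period. A quick worked check with made-up scores (float literals avoid Python 2 integer division):

```python
# Hypothetical scores for illustration only.
points, auto_points = 120, 10

pps = points / 150.0           # teleop period is 150 seconds
auto_pps = auto_points / 15.0  # autonomous period is 15 seconds
```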
data
def analyze(dataframe, team):
    total_points = dataframe[team]['Points'] + dataframe[team]['Auto Points']
    # TODO: compute a real cumulative success rate; 4 is a placeholder.
    cumulative_success_rate = 4
    pps = dataframe[team]['Points'] / 150.0       # teleop period is 150 seconds
    auto_pps = dataframe[team]['Auto Points'] / 15.0  # autonomous period is 15 seconds
    return (total_points, pps, auto_pps)
stuff = analyze(data, 'Cougar Tech')
print stuff
"""
Explanation: 9 defenses
Low Bar
ALLIANCE selected
Audience selected
ALLIANCE selected
ALLIANCE selected
Data structure choices include:
- Pandas dataframes
- Numpy Arrays
- Object oriented
- Dictionary
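A rough sketch of the dictionary option (the team names, field names, and numbers below are invented for illustration, not taken from the real scouting data):

```python
# Hypothetical sketch of the dictionary choice: match data keyed by team name.
teams = {
    "Cougar Tech": {"alliance": "red", "points": 75, "auto_points": 10},
    "Team X": {"alliance": "blue", "points": 45, "auto_points": 5},
}

def points_per_sec(team):
    # the teleop period of a 2016 FRC match lasts 150 seconds
    return teams[team]["points"] / 150.0

print(points_per_sec("Cougar Tech"))  # 0.5
```

Lookups stay simple, at the cost of the built-in analysis tools a pandas dataframe would give us.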
End of explanation
"""
# these imports normally live in an earlier cell of the notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("robodummy.csv")
fig, axs = plt.subplots(1, 4, sharey = True)
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[0], figsize = (16, 8))
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[1])
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[2])
a = np.array(([1, 4], [6, 5], [9, 3]))
np.sort(a)
"""
Explanation: Analysis Functions:
End of explanation
"""
|
california-civic-data-coalition/python-calaccess-notebooks | tutorials/first-python-notebook.ipynb | mit | 2+2
"""
Explanation: First Python Notebook: Scripting your way to the story
By Ben Welsh
A step-by-step guide to analyzing data with Python and the Jupyter Notebook.
This tutorial will teach you how to use computer programming tools to analyze data by exploring contributors to campaigns for and against Proposition 64, a ballot measure asking California voters to decide if recreational marijuana should be legalized.
This guide was developed by Ben Welsh for a Oct. 2, 2016, "watchdog workshop" organized by Investigative Reporters and Editors at San Diego State University's school of journalism. The class is designed for beginners who have zero Python experience.
Prelude: Prequisites
Before you can begin, your computer needs the following tools installed and working to participate.
A command-line interface to interact with your computer
Version 2.7 of the Python programming language
The pip package manager and virtualenv environment manager for Python
Command-line interface
Unless something is wrong with your computer, there should be a way to open a window that lets you type in commands. Different operating systems give this tool slightly different names, but they all have some form of it, and there are alternative programs you can install as well.
On Windows you can find the command-line interface by opening the "command prompt." Here are instructions for Windows 10 and for Windows 8 and earlier versions. On Apple computers, you open the "Terminal" application. Ubuntu Linux comes with a program of the same name.
Python
If you are using Mac OSX or a common flavor of Linux, Python version 2.7 is probably already installed and you can test to see what version, if any, is already available by typing the following into your terminal.
bash
python -V
Even if you find it already on your machine, Mac users should install it separately by following these instructions offered by The Hitchhikers Guide to Python.
Windows people can find a similar guide here which will have them try downloading and installing Python from here.
pip and virtualenv
The pip package manager makes it easy to install open-source libraries that expand what you're able to do with Python. Later, we will use it to install everything needed to create a working web application.
If you don't have it already, you can get pip by following these instructions. In Windows, it's necessary to make sure that the Python Scripts directory is available on your system's PATH so it can be called from anywhere on the command line. This screencast can help.
Verify pip is installed with the following.
bash
pip -V
The virtualenv environment manager makes it possible to create an isolated corner of your computer where all the different tools you use to build an application are sealed off.
It might not be obvious why you need this, but it quickly becomes important when you need to juggle different tools
for different projects on one computer. By developing your applications inside separate virtualenv environments, you can use different versions of the same third-party Python libraries without a conflict. You can also more easily recreate your project on another machine, handy when you want to copy your code to a server that publishes pages on the Internet.
You can check if virtualenv is installed with the following.
bash
virtualenv --version
If you don't have it, install it with pip.
```bash
pip install virtualenv
If you're on a Mac or Linux and get an error saying you lack permissions, try again as a superuser.
sudo pip install virtualenv
```
If that doesn't work, try following this advice.
Act 1: Hello Jupyter Notebook
Start by creating a new development environment with virtualenv in your terminal. Name it after our application.
bash
virtualenv first-python-notebook
Jump into the directory it created.
bash
cd first-python-notebook
Turn on the new virtualenv, which will instruct your terminal to only use those libraries installed
inside its sealed space. You only need to create the virtualenv once, but you'll need to repeat these
"activation" steps each time you return to working on this project.
```bash
In Linux or Mac OSX try this...
. bin/activate
In Windows it might take something more like...
cd Scripts
activate
cd ..
```
Use pip on the command line to install Jupyter Notebook, an open-source tool for writing and sharing Python scripts.
bash
pip install jupyter
Start up the notebook from your terminal.
bash
jupyter notebook
That will open up a new tab in your default web browser that looks something like this:
Click the "New" button in the upper right and create a new Python 2 notebook. Now you're all set up and ready to start writing code.
Act 2: Hello Python
You are now ready to roll within the Jupyter Notebook's framework for writing Python. Don't stress. There's nothing too fancy about it. You can start by just doing a little simple math. Type the following into the first box, then hit the play button in the toolbox (or hit SHIFT+ENTER on your keyboard).
End of explanation
"""
san = 2
"""
Explanation: There. You've just written your first Python code. You've entered two integers (the 2's) and added them together using the plus sign operator. Not so bad, right?
Next, let's introduce one of the basics of computer programming, a variable.
Variables are like containers that hold different types of data so you can go back and refer to them later. They’re fundamental to programming in any language, and you’ll use them all the time when you're writing Python.
Move down to the next box. Now let's put that number two into our first variable.
End of explanation
"""
print san
"""
Explanation: In this case, we’ve created a variable called san and assigned it the integer value 2.
In Python, variable assignment is done with the = sign. On the left is the name of the variable you want to create (it can be anything) and on the right is the value that you want to assign to that variable.
If we use the print command on the variable, Python will output its contents to the terminal because that value is stored in the variable. Let's try it.
End of explanation
"""
diego = 2
"""
Explanation: We can do the same thing again with a different variable name
End of explanation
"""
san + diego
"""
Explanation: Then add those two together the same way we added the numbers at the top.
End of explanation
"""
string = "Hello"
decimal = 1.2
list_of_strings = ["a", "b", "c", "d"]
list_of_integers = [1, 2, 3, 4]
list_of_whatever = ["a", 2, "c", 4]
my_phonebook = {'Mom': '713-555-5555', 'Chinese Takeout': '573-555-5555'}
"""
Explanation: Variables can contain many different kinds of data types. There are integers, strings, floating point numbers (decimals), lists and dictionaries.
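For instance, values inside a list or dictionary are pulled back out with square brackets (the variables are repeated here so the example stands on its own):

```python
# Repeating the example variables so this snippet is self-contained
list_of_strings = ["a", "b", "c", "d"]
my_phonebook = {'Mom': '713-555-5555', 'Chinese Takeout': '573-555-5555'}

print(list_of_strings[0])   # lists are indexed by position, starting at 0
print(my_phonebook['Mom'])  # dictionaries are indexed by key
```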
End of explanation
"""
data_file = open("./first-python-notebook.csv", "r")
"""
Explanation: Playing with data we invent can be fun, but it's a long way from investigative journalism.
Now's the time for us to get our hands on some real data and get some real work done.
Your assignment: Proposition 64.
The use and sale of marijuana for recreational purposes is illegal in California. Proposition 64, scheduled to appear on the November 2016 ballot, asked voters if it ought to be legalized. A "yes" vote would support legalization. A "no" vote would oppose it. A similar measure, Proposition 19, was defeated in 2010.
According to California's Secretary of State, more than $16 million had been raised to campaign in support of Prop. 64 as of September 20. Just over $2 million had been raised to oppose it.
Your mission, should you choose to accept it, is to download a list of campaign contributors and figure out the biggest donors both for and against the measure.
Click here to download the file as a list of comma-separated values. This is known as a CSV file. It is the most common way you will find data published online. Save the file with the name first-python-notebook.csv in the same directory where you made this notebook.
Python can read files using the built-in open function. You feed two things into it: 1) The path to the file; 2) What type of operation you'd like it to execute on the file. "r" stands for read.
End of explanation
"""
print data_file
"""
Explanation: Print that variable and you see that open has created a file "object" that offers a number of different ways to interact with the contents of the file.
End of explanation
"""
data = data_file.read()
print data
"""
Explanation: One thing a file object can do is read in all of the data from the file. Let's do that next and store the contents in a new variable.
End of explanation
"""
import pandas
"""
Explanation: That's all good, but the data is printing out as one big long string. If we're going to do some real analysis, we need Python to recognize and respect the structure of our data, in the way an Excel spreadsheet would.
To do that, we're going to need something smarter than open. We're going to need something like pandas.
Act 3: Hello pandas
Lucky for us, Python already has tools filled with functions to do pretty much anything you’d ever want to do with a programming language: navigate the web, parse data, interact with a database, run fancy statistics, build a pretty website and so much more.
Some of those tools are included a toolbox that comes with the language, known as the standard library. Others have been built by members of Python's developer community and need to be downloaded and installed from the web.
For this exercise, we're going to install and use pandas, a tool developed by a financial investment firm that has become the leading open-source tool for accessing and analyzing data.
There are several others we could use instead (like agate) but we're picking pandas here because it's the most popular and powerful.
We'll install pandas the same way we installed the Jupyter Notebook earlier: Our friend pip. Save your notebook, switch to your window/command prompt and hit CTRL-C. That will kill your notebook and return you to the command line. There we'll install pandas.
bash
pip install pandas
Now let's restart our notebook and get back to work.
bash
jupyter notebook
Use the next open box to import pandas into our script, so we can use all its fancy methods here in our script.
End of explanation
"""
pandas.read_csv("./first-python-notebook.csv")
"""
Explanation: Opening our CSV isn't any harder than with open, you just need to know the right trick to make it work.
End of explanation
"""
table = pandas.read_csv("./first-python-notebook.csv")
"""
Explanation: Great. Now let's do it again and assign it to a variable this time
End of explanation
"""
table.info()
"""
Explanation: Now let's see what we've got. The info method prints a summary of the table's rows, columns and data types.
End of explanation
"""
table.head()
"""
Explanation: Here's how you can see the first few rows
End of explanation
"""
print len(table)
"""
Explanation: How many rows are there? Here's how to find out.
End of explanation
"""
table.sort_values("AMOUNT")
"""
Explanation: Even with that simple question and answer, we've begun the process of interviewing our data.
In some ways, your database is no different from a human source. Getting a good story requires careful, thorough questioning.
In the next section we will move ahead by conducting an interview with pandas to pursue our quest of finding out the biggest donors to Proposition 64.
Act 4: Hello analysis
Let's start with something easy. What are the ten biggest contributions?
That will require a sort using the column with the money in it.
End of explanation
"""
table.sort_values("AMOUNT", ascending=False)
"""
Explanation: We've got it sorted the wrong way. Let's reverse it.
End of explanation
"""
table.sort_values("AMOUNT", ascending=False).head(10)
"""
Explanation: Now let's limit it to the top 10.
End of explanation
"""
table['AMOUNT']
"""
Explanation: What is the total sum of contributions that have been reported?
First, let's get our hands on the column with our numbers in it. In pandas you can do that like so.
End of explanation
"""
table['AMOUNT'].sum()
"""
Explanation: Now adding it up is this easy.
End of explanation
"""
table['COMMITTEE_POSITION']
"""
Explanation: There's our big total. Why is it lower than the ones I quoted above? That's because campaigns are only required to report the names of donors over $200, so our data is missing all of the donors who gave smaller amounts of money.
The overall totals are reported elsewhere in lump sums and cannot be replicated by adding up the individual contributions. Understanding this is crucial to understanding not just this data, but all campaign finance data, which typically has this limitation.
Filtering
Adding up a big total is all well and good. But we're aiming for something more nuanced. We want to separate the money for the proposition from the money against it. To do that, we'll need to learn how to filter.
First let's look at the column we're going to filter by
End of explanation
"""
table[table['COMMITTEE_POSITION'] == 'SUPPORT']
"""
Explanation: Now let's filter on that column using pandas' admittedly oddball bracket syntax
End of explanation
"""
support_table = table[table['COMMITTEE_POSITION'] == 'SUPPORT']
"""
Explanation: Stick that in a variable
End of explanation
"""
print len(support_table)
"""
Explanation: So now we can ask: How many contributions does the supporting side have?
End of explanation
"""
support_table.sort_values("AMOUNT", ascending=False).head(10)
"""
Explanation: Next: What are the 10 biggest supporting contributions?
End of explanation
"""
oppose_table = table[table['COMMITTEE_POSITION'] == 'OPPOSE']
print len(oppose_table)
oppose_table.sort_values("AMOUNT", ascending=False).head(10)
"""
Explanation: Now let's ask the same questions of the opposing side.
End of explanation
"""
support_table['AMOUNT'].sum()
oppose_table['AMOUNT'].sum()
"""
Explanation: How about the sum total of contributions for each?
End of explanation
"""
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum()
"""
Explanation: Grouping
One thing we noticed as we explored the data is that there are a lot of different committees. A natural question follows: Which ones have raised the most money?
To figure that out, we'll need to group the data by that column and sum up the amount column for each. Here's how pandas does that.
End of explanation
"""
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index()
"""
Explanation: Wow. That's pretty ugly. Why? Because groupby hands back a Series indexed by the group keys rather than a table.
To convert a raw dump like that into a clean table (known in pandas slang as a "dataframe") you can add the reset_index method to the end of your code, which moves the group keys out of the index and back into an ordinary column.
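Here's a minimal sketch with toy data (not our contribution file) showing what reset_index does to a groupby result:

```python
import pandas

# Toy frame just to illustrate what reset_index does to a groupby result
df = pandas.DataFrame({"NAME": ["a", "a", "b"], "AMOUNT": [1, 2, 3]})
summed = df.groupby("NAME")["AMOUNT"].sum()  # a Series indexed by NAME
clean = summed.reset_index()                 # a two-column DataFrame again
print(clean.columns.tolist())  # ['NAME', 'AMOUNT']
```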
End of explanation
"""
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
"""
Explanation: Now let's sort it by size
End of explanation
"""
table.groupby(["FIRST_NAME", "LAST_NAME"])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
"""
Explanation: Okay. Committees are good. But what about something a little more interesting. Who has given the most money?
To do that, we'll group by the two name columns at the same time.
End of explanation
"""
table.groupby([
    "FIRST_NAME",
    "LAST_NAME",
    "COMMITTEE_POSITION"
])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
"""
Explanation: But which side were they on? Add in the position column to see that too.
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: Pretty cool, right? Now now that we've got this interesting list of people, let's see if we can make a chart out of it.
Act 5: Hello viz
Python has a number of charting tools that can work hand in hand with pandas. The most popular is matplotlib. It isn't the prettiest thing in the world, but it offers some reasonably straightfoward tools for making quick charts. And, best of all, it can display right here in our Jupyter Notebook.
Before we start, we'll need to make sure matplotlib is installed. Return to your terminal and try installing it with our buddy pip, as we installed other things before.
bash
pip install matplotlib
Once you've got it installed, you can import it just as we would anything else. By adding the optional as keyword at the end, we can create a shorter alias for accessing its tools.
End of explanation
"""
%matplotlib inline
"""
Explanation: Before we get started, let's run one more trick to configure matplotlib to show its charts in our notebook.
End of explanation
"""
top_supporters = support_table.groupby(
    ["FIRST_NAME", "LAST_NAME"]
)['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False).head(10)
top_supporters
"""
Explanation: Now let's save the data we want to chart into a variable
End of explanation
"""
top_supporters['AMOUNT'].plot.bar()
top_supporters['AMOUNT'].plot.barh()
"""
Explanation: Making a quick bar chart is as easy as this.
End of explanation
"""
top_supporters.head(5)['AMOUNT'].plot.barh()
"""
Explanation: It's really those first five that are the most interesting, so let's trim our chart.
End of explanation
"""
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['LAST_NAME'])
"""
Explanation: What are those y axis labels? Those are the row numbers (pandas calls them indexes). We don't want that. We want the names.
End of explanation
"""
top_supporters.head(5)
"""
Explanation: Okay, but what if I want to combine the first and last name?
To do that, we'll make a new column. But first let's look at what we have now.
End of explanation
"""
print string
"""
Explanation: In plain old Python, we created a string at the start of our lesson. Remember this?
End of explanation
"""
print string + "World"
"""
Explanation: Combining strings can be as easy as addition.
End of explanation
"""
print string + " " + "World"
"""
Explanation: And if we want to get a space in there yet we can do something like:
End of explanation
"""
top_supporters['FULL_NAME'] = top_supporters['FIRST_NAME'] + " " + top_supporters['LAST_NAME']
"""
Explanation: And guess what we can do the same thing with two columns in our table, and use a pandas trick that will apply it to every row.
End of explanation
"""
top_supporters.head()
"""
Explanation: Now let's see the results
End of explanation
"""
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['FULL_NAME'])
"""
Explanation: Now let's chart that.
End of explanation
"""
top_supporters.head(5).to_csv("top_supporters.csv")
"""
Explanation: That's all well and good, but this chart is pretty ugly. If you wanted to hand this data off to your graphics department, or try your hand at a simple chart yourself using something like Chartbuilder, you'd need to export this data into a spreadsheet.
It's this easy.
End of explanation
"""
|
henchc/Rediscovering-Text-as-Data | 05-Intro-to-SpaCy/01-Intro-to-SpaCy.ipynb | mit | from datascience import *
import spacy
"""
Explanation: SpaCy: Industrial-Strength NLP
The traditional NLP library has always been NLTK. While NLTK is still very useful for linguistic analysis and exploration, spaCy has become a nice option for easy and fast implementation of the NLP pipeline. What's the NLP pipeline? It's a number of common steps computational linguists perform to help them (and the computer) better understand textual data. Digital Humanists are often fond of the pipeline because it gives us more things to count! Let's see what spaCy can give us that we can count.
End of explanation
"""
my_string = '''
"What are you going to do with yourself this evening, Alfred?" said Mr.
Royal to his companion, as they issued from his counting-house in New
Orleans. "Perhaps I ought to apologize for not calling you Mr. King,
considering the shortness of our acquaintance; but your father and I
were like brothers in our youth, and you resemble him so much, I can
hardly realize that you are not he himself, and I still a young man.
It used to be a joke with us that we must be cousins, since he was a
King and I was of the Royal family. So excuse me if I say to you, as
I used to say to him. What are you going to do with yourself, Cousin
Alfred?"
"I thank you for the friendly familiarity," rejoined the young man.
"It is pleasant to know that I remind you so strongly of my good
father. My most earnest wish is to resemble him in character as much
as I am said to resemble him in person. I have formed no plans for the
evening. I was just about to ask you what there was best worth seeing
or hearing in the Crescent City."'''.replace("\n", " ")
"""
Explanation: Let's start out with a short string from our reading and see what happens.
End of explanation
"""
nlp = spacy.load('en')
# nlp = spacy.load('en', parser=False) # run this instead if you don't have > 1GB RAM
"""
Explanation: We've downloaded the English model, and now we just have to load it. This model will do everything for us, but we'll only get a little taste today.
End of explanation
"""
parsed_text = nlp(my_string)
parsed_text
"""
Explanation: To parse an entire text we just call the model on a string.
End of explanation
"""
sents_tab = Table()
sents_tab.append_column(label="Sentence", values=[sentence.text for sentence in parsed_text.sents])
sents_tab.show()
"""
Explanation: That was quick! So what happened? We've talked a lot about tokenizing, either in words or sentences.
What about sentences?
End of explanation
"""
toks_tab = Table()
toks_tab.append_column(label="Word", values=[word.text for word in parsed_text])
toks_tab.show()
"""
Explanation: Words?
End of explanation
"""
toks_tab.append_column(label="POS", values=[word.pos_ for word in parsed_text])
toks_tab.show()
"""
Explanation: What about parts of speech?
End of explanation
"""
toks_tab.append_column(label="Lemma", values=[word.lemma_ for word in parsed_text])
toks_tab.show()
"""
Explanation: Lemmata?
End of explanation
"""
def tablefy(parsed_text):
    toks_tab = Table()
    toks_tab.append_column(label="Word", values=[word.text for word in parsed_text])
    toks_tab.append_column(label="POS", values=[word.pos_ for word in parsed_text])
    toks_tab.append_column(label="Lemma", values=[word.lemma_ for word in parsed_text])
    toks_tab.append_column(label="Stop Word", values=[word.is_stop for word in parsed_text])
    toks_tab.append_column(label="Punctuation", values=[word.is_punct for word in parsed_text])
    toks_tab.append_column(label="Space", values=[word.is_space for word in parsed_text])
    toks_tab.append_column(label="Number", values=[word.like_num for word in parsed_text])
    toks_tab.append_column(label="OOV", values=[word.is_oov for word in parsed_text])
    toks_tab.append_column(label="Dependency", values=[word.dep_ for word in parsed_text])
    return toks_tab
tablefy(parsed_text).show()
"""
Explanation: What else? Let's just make a function tablefy that will make a table of all this information for us:
End of explanation
"""
parsed_text
"""
Explanation: Challenge
What's the most common verb? Noun? What if you only include lemmata? What if you remove "stop words"?
How would lemmatizing or removing "stop words" help us better understand a text over regular tokenizing?
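One way to start on the challenge, sketched here with a toy stand-in for the parser output so it runs on its own (in the notebook you would build the list from parsed_text itself, e.g. [(w.lemma_, w.pos_, w.is_stop) for w in parsed_text]):

```python
from collections import Counter

# Toy stand-in for the (lemma, POS, stop-word) triples spaCy gives you;
# the values below are invented for illustration.
tokens = [
    ("say", "VERB", False), ("be", "VERB", True), ("resemble", "VERB", False),
    ("say", "VERB", False), ("father", "NOUN", False), ("evening", "NOUN", False),
]

# Count only non-stop-word verb lemmata
verb_counts = Counter(lemma for lemma, pos, is_stop in tokens
                      if pos == "VERB" and not is_stop)
print(verb_counts.most_common(1))  # [('say', 2)]
```

Swapping "VERB" for "NOUN" (or dropping the is_stop test) answers the other parts of the challenge.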
Dependency Parsing
Let's look at our text again:
End of explanation
"""
from spacy.symbols import nsubj, VERB
SV = []
for possible_subject in parsed_text:
    if possible_subject.dep == nsubj and possible_subject.head.pos == VERB:
        SV.append((possible_subject.text, possible_subject.head))
sv_tab = Table()
sv_tab.append_column(label="Subject", values=[x[0] for x in SV])
sv_tab.append_column(label="Verb", values=[x[1] for x in SV])
sv_tab.show()
"""
Explanation: Dependency parsing is one of the most useful and interesting NLP tools. A dependency parser draws a tree of relationships between words. This is how you can find out which adjectives are attributed to a specific person, which verbs are associated with a specific subject, and so on.
spaCy provides an online visualizer named "displaCy" to draw dependency trees. Let's look at the first sentence.
We can collect subject-verb pairs by checking each token's dependency label (nsubj) and the part-of-speech tag (VERB) of its head:
End of explanation
"""
shakespeare = '''
Tush! Never tell me; I take it much unkindly
That thou, Iago, who hast had my purse
As if the strings were thine, shouldst know of this.
'''
shake_parsed = nlp(shakespeare.strip())
tablefy(shake_parsed).show()
huck_finn_jim = '''
“Who dah?” “Say, who is you? Whar is you? Dog my cats ef I didn’ hear sumf’n.
Well, I know what I’s gwyne to do: I’s gwyne to set down here and listen tell I hears it agin.”"
'''
hf_parsed = nlp(huck_finn_jim.strip())
tablefy(hf_parsed).show()
text_speech = '''
LOL where r u rn? omg that's sooo funnnnnny. c u in a sec.
'''
ts_parsed = nlp(text_speech.strip())
tablefy(ts_parsed).show()
old_english = '''
þæt wearð underne eorðbuendum,
þæt meotod hæfde miht and strengðo
ða he gefestnade foldan sceatas.
'''
oe_parsed = nlp(old_english.strip())
tablefy(oe_parsed).show()
"""
Explanation: You can imagine that you could look over a large corpus to analyze first person, second person, and third person characterizations. Dependency parsers are also important for understanding and processing natural language, a question answering system for example. These models help the computer understand what the question is that is being asked.
Limitations
How accurate are the models? What happens if we change the style of English we're working with?
End of explanation
"""
ner_tab = Table()
ner_tab.append_column(label="NER Label", values=[ent.label_ for ent in parsed_text.ents])
ner_tab.append_column(label="NER Text", values=[ent.text for ent in parsed_text.ents])
ner_tab.show()
"""
Explanation: NER and Civil War-Era Novels
Wilkens uses a technique called "NER", or "Named Entity Recognition" to let the computer identify all of the geographic place names. Wilkens writes:
Text strings representing named locations in the corpus were identified using
the named entity recognizer of the Stanford CoreNLP package with supplied training
data. To reduce errors and to narrow the results for human review, only those
named-location strings that occurred at least five times in the corpus and were used
by at least two different authors were accepted. The remaining unique strings were
reviewed by hand against their context in each source volume. [883]
While we don't have the time for a human review right now, spacy does allow us to annotate place names (among other things!) in the same fashion as Stanford CoreNLP (a native Java library):
End of explanation
"""
import requests
text = requests.get("http://www.gutenberg.org/files/10549/10549.txt").text
text = text[1050:].replace('\r\n', ' ') # fix formatting and skip title header
print(text[:5000])
"""
Explanation: Cool! It's identified a few types of things for us. We can check what these mean here. GPE covers countries, cities and states. Seems like that's what Wilkens was using.
Since we don't have his corpus of 1000 novels, let's just take our reading, A Romance of the Republic, as an example. We can use the requests library to get the raw HTML of a web page, and if we take the .text property we can make this a nice string.
End of explanation
"""
parsed = nlp(text)
"""
Explanation: We'll leave the chapter headers for now, it shouldn't affect much. Now we need to parse this with that nlp function:
End of explanation
"""
from collections import Counter
places = []
for ent in parsed.ents:
    if ent.label_ == "GPE":
        places.append(ent.text.strip())
places = Counter(places)
places
"""
Explanation: Challenge
With this larger string, find the most common noun, verb, and adjective. Then explore the other features of spacy and see what you can discover about our reading:
Let's continue in the fashion that Wilkens did and extract the named entities, specifically those for "GPE". We can loop through each entity, and if it is labeled as GPE we'll add it to our places list. We'll then make a Counter object out of that to get the frequency of each place name.
End of explanation
"""
with open('data/us_states.txt', 'r') as f:
    states = f.read().split('\n')
states = [x.strip() for x in states]
states
"""
Explanation: That looks OK, but it's pretty rough! Keep this in mind when using trained models. They aren't 100% accurate. That's why Wilkens went through by hand afterward to weed out the garbage.
If you thought NER was cool, wait for this. Now that we have a list of "places", we can send that to an online database to get back latitude and longitude coordinates (much like Wilkens used Google's geocoder), along with the US state. To make sure it's actually a US state, we'll need a list to compare to. So let's load that:
End of explanation
"""
from geopy.geocoders import Nominatim
from datascience import *
import time
geolocator = Nominatim(timeout=10)
geo_tab = Table(["latitude", "longitude", "name", "state"])
for name in places.keys():  # loop through unique place names so we only call the geocoder once per place
    print("Getting information for " + name + "...")
    # finds the lat and lon of each name in the places counter
    location = geolocator.geocode(name)
    # reset state each iteration so a previous place's value can't leak through
    state = None
    try:
        # index the raw response for lat and lon
        lat = float(location.raw["lat"])
        lon = float(location.raw["lon"])
        # string manipulation to find the state name
        for p in location.address.split(","):
            if p.strip() in states:
                state = p.strip()
                break
        # add one row per occurrence of the place name, as the counter reports
        for i in range(places[name]):
            geo_tab.append(Table.from_records([{"name": name,
                                                "latitude": lat,
                                                "longitude": lon,
                                                "state": state}]).row(0))
    except:
        pass
geo_tab.show()
"""
Explanation: OK, now we're ready. The Nominatim function from the geopy library will return an object that has the properties we want. We'll append a new row to our table for each entry. Importantly, we're using the keys of the places counter because we don't need to ask the database for "New Orleans" 10 times to get the location. So after we get the information we'll just add as many rows as the counter tells us there are.
End of explanation
"""
%matplotlib inline
from scripts.choropleth import us_choropleth
us_choropleth(geo_tab)
"""
Explanation: Now we can plot a nice choropleth.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/examples/legacy.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually build the python bindings in the phoebe-py directory.
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.7
b['requiv@secondary'] = 0.7
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')
"""
Explanation: Adding Datasets and Compute Options
End of explanation
"""
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')
"""
Explanation: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
End of explanation
"""
b.add_compute('legacy', compute='phoebe1', irrad_method='none')
"""
Explanation: Now we add compute options for the 'legacy' backend.
End of explanation
"""
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
"""
Explanation: And set the two RV datasets to use the correct methods (for both compute options)
End of explanation
"""
b.set_value_all('atm', 'extern_planckint')
"""
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
"""
b.set_value_all('gridsize', 30)
"""
Explanation: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
End of explanation
"""
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
"""
Explanation: Let's also disable other special effects such as heating, gravity, and light-time effects.
End of explanation
"""
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
"""
Explanation: Finally, let's compute all of our models
End of explanation
"""
colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}
afig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)
"""
Explanation: Plotting
Light Curve
End of explanation
"""
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.003, 0.003)
"""
Explanation: Now let's plot the residuals between these two models
End of explanation
"""
afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)
"""
Explanation: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
End of explanation
"""
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5e-12, 1.5e-12)
"""
Explanation: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
End of explanation
"""
afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
"""
Explanation: Numerical (flux-weighted) RVs
End of explanation
"""
|
AllenDowney/ModSimPy | notebooks/jump2.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Bungee dunk example, taking into account the mass of the bungee cord
Copyright 2019 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(v_init = 0 * m / s,
g = 9.8 * m/s**2,
M = 75 * kg, # mass of jumper
m_cord = 75 * kg, # mass of cord
area = 1 * m**2, # frontal area of jumper
rho = 1.2 * kg/m**3, # density of air
v_term = 60 * m / s, # terminal velocity of jumper
L = 25 * m, # length of cord
k = 40 * N / m) # spring constant of cord
"""
Explanation: Bungee jumping
In the previous case study, we simulated a bungee jump with a model that took into account gravity, air resistance, and the spring force of the bungee cord, but we ignored the weight of the cord.
It is tempting to say that the weight of the cord doesn't matter, because it falls along with the jumper. But that intuition is incorrect, as explained by Heck, Uylings, and Kędzierska. As the cord falls, it transfers energy to the jumper. They derive a differential equation that relates the acceleration of the jumper to position and velocity:
$a = g + \frac{\mu v^2/2}{\mu(L+y) + 2L}$
where $a$ is the net acceleration of the jumper, $g$ is acceleration due to gravity, $v$ is the velocity of the jumper, $y$ is the position of the jumper relative to the starting point (usually negative), $L$ is the length of the cord, and $\mu$ is the mass ratio of the cord and jumper.
If you don't believe this model is correct, this video might convince you.
Following the example in Chapter 21, we'll model the jump with the following modeling assumptions:
Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.
Until the cord is fully extended, it applies a force to the jumper as explained above.
After the cord is fully extended, it obeys Hooke's Law; that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.
The jumper is subject to drag force proportional to the square of their velocity, in the opposite of their direction of motion.
First I'll create a Params object to contain the quantities we'll need:
Let's assume that the jumper's mass is 75 kg and the cord's mass is also 75 kg, so mu=1.
The jumper's frontal area is 1 square meter, and terminal velocity is 60 m/s. I'll use these values to back out the coefficient of drag.
The length of the bungee cord is L = 25 m.
The spring constant of the cord is k = 40 N / m when the cord is stretched, and 0 when it's compressed.
I adopt the coordinate system and most of the variable names from Heck, Uylings, and Kędzierska.
End of explanation
"""
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
M, m_cord = params.M, params.m_cord
g, rho, area = params.g, params.rho, params.area
v_init, v_term = params.v_init, params.v_term
# back out the coefficient of drag
C_d = 2 * M * g / (rho * area * v_term**2)
mu = m_cord / M
init = State(y=0*m, v=v_init)
t_end = 10 * s
return System(params, C_d=C_d, mu=mu,
init=init, t_end=t_end)
"""
Explanation: Now here's a version of make_system that takes a Params object as a parameter.
make_system uses the given value of v_term to compute the drag coefficient C_d.
It also computes mu and the initial State object.
End of explanation
"""
system = make_system(params)
"""
Explanation: Let's make a System
End of explanation
"""
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
returns: drag force in N
"""
rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
"""
Explanation: drag_force computes drag as a function of velocity:
End of explanation
"""
drag_force(20 * m/s, system)
"""
Explanation: Here's drag force at 20 m/s.
End of explanation
"""
def cord_acc(y, v, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
v: velocity of the jumpter
returns: acceleration in m/s
"""
L, mu = system.L, system.mu
a_cord = -v**2 / 2 / (2*L/mu + (L+y))
return a_cord
"""
Explanation: The following function computes the acceleration of the jumper due to tension in the cord.
$a_{cord} = \frac{\mu v^2/2}{\mu(L+y) + 2L}$
End of explanation
"""
y = -20 * m
v = -20 * m/s
cord_acc(y, v, system)
"""
Explanation: Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.
End of explanation
"""
def slope_func1(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_cord = cord_acc(y, v, system)
dvdt = -g + a_cord + a_drag
return v, dvdt
"""
Explanation: Now here's the slope function:
End of explanation
"""
slope_func1(system.init, 0, system)
"""
Explanation: As always, let's test the slope function with the initial params.
End of explanation
"""
def event_func(state, t, system):
"""Run until y=-L.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: difference between y and -L
"""
y, v = state
return y + system.L
"""
Explanation: We'll need an event function to stop the simulation when we get to the end of the cord.
End of explanation
"""
event_func(system.init, 0, system)
"""
Explanation: We can test it with the initial conditions.
End of explanation
"""
results, details = run_ode_solver(system, slope_func1, events=event_func)
details.message
"""
Explanation: And then run the simulation.
End of explanation
"""
t_final = get_last_label(results)
"""
Explanation: Here's how long it takes to drop 25 meters.
End of explanation
"""
def plot_position(results, **options):
plot(results.y, **options)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
"""
Explanation: Here's the plot of position as a function of time.
End of explanation
"""
min(results.y)
"""
Explanation: We can use min to find the lowest point:
End of explanation
"""
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
"""
Explanation: Here's velocity as a function of time:
End of explanation
"""
min(results.v)
"""
Explanation: Velocity when we reach the end of the cord.
End of explanation
"""
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
"""
Explanation: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver.
We can approximate it by computing the numerical derivative of v:
End of explanation
"""
max_acceleration = max(abs(a)) * m/s**2 / params.g
"""
Explanation: The maximum downward acceleration, as a factor of g
End of explanation
"""
def max_acceleration(system):
mu = system.mu
return 1 + mu * (4+mu) / 8
max_acceleration(system)
"""
Explanation: Using Equation (1) from Heck, Uylings, and Kędzierska, we can compute the peak acceleration due to interaction with the cord, neglecting drag.
End of explanation
"""
def sweep_m_cord(m_cord_array, params):
sweep = SweepSeries()
for m_cord in m_cord_array:
system = make_system(Params(params, m_cord=m_cord))
results, details = run_ode_solver(system, slope_func1, events=event_func)
min_velocity = min(results.v) * m/s
sweep[m_cord.magnitude] = min_velocity
return sweep
m_cord_array = linspace(1, 201, 21) * kg
sweep = sweep_m_cord(m_cord_array, params)
"""
Explanation: If you set C_d=0, the simulated acceleration approaches the theoretical result, although you might have to reduce max_step to get a good numerical estimate.
Sweeping cord weight
Now let's see how velocity at the crossover point depends on the weight of the cord.
End of explanation
"""
plot(sweep)
decorate(xlabel='Mass of cord (kg)',
ylabel='Fastest downward velocity (m/s)')
"""
Explanation: Here's what it looks like. As expected, a heavier cord gets the jumper going faster.
There's a hitch near 25 kg that seems to be due to numerical error.
End of explanation
"""
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
Uses these variables from system:
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
L, k = system.L, system.k
distance_fallen = -y
extension = distance_fallen - L
f_spring = k * extension
return f_spring
"""
Explanation: Phase 2
Once the jumper falls past the length of the cord, acceleration due to energy transfer from the cord stops abruptly. As the cord stretches, it starts to exert a spring force. So let's simulate this second phase.
spring_force computes the force of the cord on the jumper:
End of explanation
"""
spring_force(-25*m, system)
spring_force(-26*m, system)
"""
Explanation: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
End of explanation
"""
def slope_func2(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_spring = spring_force(y, system) / M
dvdt = -g + a_drag + a_spring
return v, dvdt
"""
Explanation: The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.
End of explanation
"""
system1 = make_system(params)
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
print(details1.message)
"""
Explanation: I'll run Phase 1 again so we can get the final state.
End of explanation
"""
t_final = get_last_label(results1)
init2 = results1.row[t_final]
"""
Explanation: Now I need the final time, position, and velocity from Phase 1.
End of explanation
"""
system2 = System(system1, t_0=t_final, init=init2)
"""
Explanation: And that gives me the starting conditions for Phase 2.
End of explanation
"""
event_func.direction=+1
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
print(details2.message)
t_final = get_last_label(results2)
"""
Explanation: Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
End of explanation
"""
plot_position(results1, label='Phase 1')
plot_position(results2, label='Phase 2')
"""
Explanation: We can plot the results on the same axes.
End of explanation
"""
min(results2.y)
"""
Explanation: And get the lowest position from Phase 2.
End of explanation
"""
def simulate_system2(params):
system1 = make_system(params)
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
t_final = get_last_label(results1)
init2 = results1.row[t_final]
system2 = System(system1, t_0=t_final, init=init2)
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
t_final = get_last_label(results2)
return TimeFrame(pd.concat([results1, results2]))
"""
Explanation: To see how big the effect of the cord is, I'll collect the previous code in a function.
End of explanation
"""
results = simulate_system2(params);
plot_position(results)
params_no_cord = Params(params, m_cord=1*kg)
results_no_cord = simulate_system2(params_no_cord);
plot_position(results, label='m_cord = 75 kg')
plot_position(results_no_cord, label='m_cord = 1 kg')
savefig('figs/jump.png')
min(results_no_cord.y)
diff = min(results.y) - min(results_no_cord.y)
"""
Explanation: Now we can run both phases and get the results in a single TimeFrame.
End of explanation
"""
|
diegocavalca/Studies | deep-learnining-specialization/1. neural nets and deep learning/week2/Logistic+Regression+with+a+Neural+Network+mindset+v4.ipynb | cc0-1.0 | import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
"""
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
"""
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
"""
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
"""
# Example of a picture
index = 88
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
"""
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
"""
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
"""
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
"""
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
"""
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $\times$ num_px $\times$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\times$ num_px $\times$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b $\times$ c $\times$ d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
"""
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
"""
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
End of explanation
"""
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
"""
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
End of explanation
"""
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
"""
Explanation: Expected Output:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
"""
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid( np.dot(w.T, X) + b ) # compute activation
cost = -(1/m)*np.sum( Y * np.log(A) + (1-Y) * np.log(1-A) ) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m) * np.dot(X, (A - Y).T)
db = (1/m) * (np.sum(A - Y))
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
"""
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, \ldots, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right)$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
"""
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw # Use broadcasting
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
"""
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
"""
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid( np.dot(w.T, X) + b )
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0,i] = 1 if A[0,i] > 0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
"""
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
"""
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
"""
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
"""
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
"""
Explanation: Run the following cell to train your model.
End of explanation
"""
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
"""
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
"""
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
"""
Explanation: Let's also plot the cost function and the gradients.
End of explanation
"""
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
"""
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free to try values other than the three we have initialized learning_rates with, and see what happens.
End of explanation
"""
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "dog.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
"""
Explanation: Interpretation:
- Different learning rates give different costs and thus different prediction results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation
"""
|
emsi/ml-toolbox | random/catfish/TL_02-1_Fixed feature extraction (CNN Codes vel bottleneck) Max Pooling.ipynb | agpl-3.0 | from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import zipfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
from skimage import color, io
from scipy.misc import imresize
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D, Activation, GlobalMaxPooling2D
from keras.layers import merge, Input, Lambda
from keras.callbacks import EarlyStopping
from keras.models import Model
import h5py
np.random.seed(31337)
NAME="ResNet50-300x300-MaxPooling"
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
"""
Explanation: The purpose of this notebook is to provide scientific proof that cats are stranger than dogs (possibly of alien origin). Cats' features are of enormous variety compared to dogs and simply annoying to our brains.
Let's start by following the procedure to rearrange folders:
https://github.com/daavoo/kaggle_solutions/blob/master/dogs_vs_cats/01_rearrange_folders.ipynb
End of explanation
"""
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
resnet_codes_model = ResNet50(input_shape=(300,300,3), include_top=False, weights='imagenet')
#resnet_codes_model.summary()
"""
Explanation: Load original Keras ResNet50 model without the top layer.
End of explanation
"""
# Final model
model=Model(input=resnet_codes_model.input, output=GlobalMaxPooling2D()(resnet_codes_model.output))
model.summary()
"""
Explanation: Add a pooling layer at the top to extract the CNN codes (aka bottleneck features)
End of explanation
"""
from keras.preprocessing.image import ImageDataGenerator
def img_to_bgr(im):
# the following BGR values should be subtracted: [103.939, 116.779, 123.68]. (VGG)
return (im[:,:,::-1] - np.array([103.939, 116.779, 123.68]))
datagen = ImageDataGenerator(rescale=1., preprocessing_function=img_to_bgr) #(rescale=1./255)
"""
Explanation: The following preprocessing is not strictly proper for ResNet, which uses the mean image rather than a mean pixel (I chose the VGG paper values); still, it yields only small numerical differences, hence works properly and is more than enough for this experiment.
Note that the GitHub version of Keras is required for preprocessing_function to work:
pip install git+https://github.com/fchollet/keras.git --upgrade
End of explanation
"""
train_batches = datagen.flow_from_directory("train", model.input_shape[1:3], shuffle=False, batch_size=32)
valid_batches = datagen.flow_from_directory("valid", model.input_shape[1:3], shuffle=False, batch_size=32)
test_batches = datagen.flow_from_directory("test", model.input_shape[1:3], shuffle=False, batch_size=32, class_mode=None)
"""
Explanation: Get the training, validation, and test DirectoryIterators
End of explanation
"""
train_codes = model.predict_generator(train_batches, train_batches.nb_sample)
valid_codes = model.predict_generator(valid_batches, valid_batches.nb_sample)
test_codes = model.predict_generator(test_batches, test_batches.nb_sample)
"""
Explanation: Obtain the CNN codes for all images (it takes ~10 minutes on GTX 1080 GPU)
End of explanation
"""
from keras.utils.np_utils import to_categorical
with h5py.File(NAME+"_codes-train.h5") as hf:
hf.create_dataset("X_train", data=train_codes)
hf.create_dataset("X_valid", data=valid_codes)
hf.create_dataset("Y_train", data=to_categorical(train_batches.classes))
hf.create_dataset("Y_valid", data=to_categorical(valid_batches.classes))
with h5py.File(NAME+"_codes-test.h5") as hf:
hf.create_dataset("X_test", data=test_codes)
"""
Explanation: Save the CNN codes for further analysis
End of explanation
"""
def get_codes_by_class(X,Y):
l=len(Y)
if (len(X)!=l):
raise Exception("X and Y are of different lengths")
classes=set(Y)
return [[X[i] for i in xrange(l) if Y[i]==c] for c in classes], classes
class_codes, classes=get_codes_by_class(train_codes, train_batches.classes)
cats=np.mean(class_codes[0],0)
dogs=np.mean(class_codes[1],0)
cats=np.abs(cats)
dogs=np.abs(dogs)
# cats=np.log(cats)
# dogs=np.log(dogs)
cats/=cats.max()
dogs/=dogs.max()
"""
Explanation: Compute mean values of codes across all training codes
End of explanation
"""
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 6))
ax[0,0].imshow(cats.reshape(32,64),cmap="Greys")
ax[0,0].set_title('Cats')
ax[0,1].imshow(dogs.reshape(32,64),cmap="Greys")
ax[0,1].set_title('Dogs')
freq = np.fft.fft2(cats.reshape(32,64))
freq = np.abs(freq)
ax[1,0].hist(np.log(freq).ravel(), bins=100)
ax[1,0].set_title('hist(log(freq))')
freq = np.fft.fft2(dogs.reshape(32,64))
freq = np.abs(freq)
ax[1,1].hist(np.log(freq).ravel(), bins=100)
ax[1,1].set_title('hist(log(freq))')
plt.show()
"""
Explanation: Visualize codes as images. As can be clearly seen, cats have many different features (plenty of high-value, dark spots) while dogs highly activate only two neurons (two distinct dark spots).
It can be concluded that cats activate more brain regions or are more annoying than dogs.
It's even more apparent when looking at the histograms in the frequency domain.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ipsl/cmip6/models/sandbox-2/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
minireference/noBSLAnotebooks | chapter04_problems.ipynb | mit | # helper code needed for running in colab
if 'google.colab' in str(get_ipython()):
print('Downloading plot_helpers.py to util/ (only neded for colab')
!mkdir util; wget https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py -P util
# setup SymPy
from sympy import *
init_printing()
# setup plotting
%matplotlib inline
import matplotlib.pyplot as mpl
from util.plot_helpers import plot_plane, plot_line, plot_vec, plot_vecs
# aliases
Vector = Matrix # define alias Vector so I don't have to explain this during video
Point = Vector # define alias Point for Vector since they're the same thing
"""
Explanation: 4/ Problem solutions
End of explanation
"""
# a) x y z | c
A = Matrix([[3, -2, -1, 2],
[1, 2, 1, 0]])
A.rref()
"""
Explanation: P4.2
Find the lines of intersection between these pairs of planes:
a) $P_1$: $3x-2y-z=2$ and $P_2$: $x+2y+z=0$,
b) $P_3$: $2x+y-z=0$ and $P_4$: $x+2y+z=3$.
<!--
\begin{answer}\textbf{a)}~;
\textbf{b)}~.\end{answer}
% A = Matrix([[1,-2,-1,2],[1,2,1,0]])
% A.rref()
%
-->
End of explanation
"""
# b) x y z | c
B = Matrix([[2, 1, -1, 0],
[1, 2, 1, 3]])
B.rref()
"""
Explanation: So $z=s$ is a free variable, and the rest of the equation
can be written as
$$
\begin{array}{rl}
x &= \frac{1}{2}\\
y + \frac{1}{2}s &= -\frac{1}{4}\\
z &= s
\end{array}
$$
The answer is $(x,y,z) = (\frac{1}{2},-\frac{1}{4},0) + s(0,-\frac{1}{2},1), \forall s \in \mathbb{R}$.
End of explanation
"""
v = Vector([3, 4, 1])
normal = Vector([2, -1, 4])
vPperp = (normal.dot(v)/normal.norm()**2)*normal
print('vPperp =', vPperp)
vP = v - vPperp
print('vP =', vP)
plot_plane(normal, 0) # plane P
plot_vec(0.2*normal, color='r') # its normal vec
plot_vecs(v, vPperp, vP)
v = Vector([3, 4, 1])
normal = Vector([2, -1, 4])
D=4
# point on P closest to the origin
alpha = D/normal.norm()**2
p_closest = alpha*normal
# print('len normal', normal.norm())
# print('p_closest', p_closest)
assert p_closest.dot(normal) == 4
vPperp = (normal.dot(v)/normal.norm()**2)*normal
print('vPperp', vPperp)
v_wrong = v - vPperp
print('v_wrong', v_wrong)
plot_plane(normal, D) # plane P
plot_vec(0.2*normal, at=p_closest, color='r') # its normal vec
plot_vecs(v, vPperp, v_wrong)
ax = mpl.gca()
ax.grid(True,which='both')
v = Vector([3, 4, 1])
normal = Vector([2, -1, 4])
D = 4
# some point on P
p0 = Point([2,0,0])
u = v - p0 # vector from p0 to tip of v
uPperp = (normal.dot(u)/normal.norm()**2)*normal
print('uPperp', uPperp)
uInP = u - uPperp
proj_v_on_P = p0 + uInP
print('proj_v_on_P', proj_v_on_P)
plot_plane(normal, D) # plane P
plot_vec(0.2*normal, at=p_closest, color='r') # its normal vec
plot_vec(v)
plot_vec(u, at=p0, color='r')
plot_vec(uPperp, at=p0, color='b')
plot_vec(uInP, at=p0, color='g')
plot_vec(proj_v_on_P, color='y')
ax = mpl.gca()
ax.grid(True,which='both')
"""
Explanation: The free variable is $z=t$.
The answer to b) is ${ (-1,2,0) + t(1,-1,1), \forall t \in \mathbb{R}}$.
P4.11
End of explanation
"""
|
charlesreid1/empirical-model-building | ipython/Factorial - Two-Level Six-Factor Design.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
from numpy.random import rand, seed
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
seed(10)
"""
Explanation: A Two-Level, Six-Factor Full Factorial Design
<br />
<br />
<br />
Table of Contents
Introduction
Factorial Experimental Design:
Two-Level Six-Factor Full Factorial Design
Variables and Variable Labels
Computing Main and Interaction Effects
Analysis of results:
Analyzing Effects
Quantile-Quantile Effects Plot
Utilizing Degrees of Freedom
Ordinary Least Squares Regression Model
Goodness of Fit
Distribution of Error
Aggregating Results
Distribution of Variance
Residual vs. Response Plots
<br />
<br />
<br />
<a name="intro"></a>
Introduction
This notebook roughly follows content from Box and Draper's Empirical Model-Building and Response Surfaces (Wiley, 1984). This content is covered by Chapter 4 of Box and Draper.
In this notebook, we'll carry out an analysis of a full factorial design, and show how we can obtain information about a system and its responses, and a quantifiable range of certainty about those values. This is the fundamental idea behind empirical model-building and allows us to construct cheap and simple models to represent complex, nonlinear systems.
End of explanation
"""
import itertools
# Create the inputs:
encoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1],[-1,1],[-1,1],[-1,1]) )
# Create the experiment design table:
doe = pd.DataFrame(encoded_inputs,columns=['x%d'%(i+1) for i in range(6)])
# "Manufacture" observed data y
doe['y1'] = doe.apply( lambda z : sum([ rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y2'] = doe.apply( lambda z : sum([ 5*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y3'] = doe.apply( lambda z : sum([ 100*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
print(doe[['y1','y2','y3']])
"""
Explanation: <a name="fullfactorial"></a>
Two-Level Six-Factor Full Factorial Design
Let's start with our six-factor factorial design example. Six factors means there are six input variables; this is still a two-level experiment, so this is now a $2^6$-factorial experiment.
Additionally, there are now three response variables, $(y_1, y_2, y_3)$.
To generate a table of the 64 experiments to be run at each factor level, we will use the itertools.product function below. This is all put into a DataFrame.
This example generates some random response data, by multiplying a vector of random numbers by the vector of input variable values. (Nothing too complicated.)
End of explanation
"""
labels = {}
labels[1] = ['x1','x2','x3','x4','x5','x6']
for i in [2,3,4,5,6]:
labels[i] = list(itertools.combinations(labels[1], i))
obs_list = ['y1','y2','y3']
for k in labels.keys():
print(str(k) + " : " + str(labels[k]))
"""
Explanation: <a name="varlablels"></a>
Defining Variables and Variable Labels
Next we'll define some containers for input variable labels, output variable labels, and any interaction terms that we'll be computing:
End of explanation
"""
effects = {}
# Start with the constant effect: this is $\overline{y}$
effects[0] = {'x0' : [doe['y1'].mean(),doe['y2'].mean(),doe['y3'].mean()]}
print(effects[0])
"""
Explanation: Now that we have variable labels for each main effect and interaction effect, we can actually compute those effects.
<a name="computing_effects"></a>
Computing Main and Interaction Effects
We'll start by finding the constant effect, which is the mean of each response:
End of explanation
"""
effects[1] = {}
for key in labels[1]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
result = sum([ zz*effects_df.loc[zz] for zz in effects_df.index ])
effects_result.append(result)
effects[1][key] = effects_result
effects[1]
"""
Explanation: Next, compute the main effect of each variable, which quantifies the amount the response changes by when the input variable is changed from the -1 to +1 level. That is, it computes the average effect of an input variable $x_i$ on each of the three response variables $y_1, y_2, y_3$.
End of explanation
"""
for c in [2,3,4,5,6]:
effects[c] = {}
for key in labels[c]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
result = sum([ np.prod(zz)*effects_df.loc[zz]/(2**(len(zz)-1)) for zz in effects_df.index ])
effects_result.append(result)
effects[c][key] = effects_result
def printd(d):
for k in d.keys():
print("%25s : %s"%(k,d[k]))
for i in range(1,7):
printd(effects[i])
"""
Explanation: Our next step is to crank through each variable interaction level: two-variable, three-variable, and on up to six-variable interaction effects. We compute interaction effects for each two-variable combination, three-variable combination, etc.
End of explanation
"""
print(len(effects))
master_dict = {}
for nvars in effects.keys():
effect = effects[nvars]
for k in effect.keys():
v = effect[k]
master_dict[k] = v
master_df = pd.DataFrame(master_dict).T
master_df.columns = obs_list
y1 = master_df['y1'].copy()
y1.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y1:")
print(y1[:10])
y2 = master_df['y2'].copy()
y2.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y2:")
print(y2[:10])
y3 = master_df['y3'].copy()
y3.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y3:")
print(y3[:10])
"""
Explanation: We've computed the main and interaction effects for every variable combination (whew!), but now we're at a point where we want to start doing things with these quantities.
<a name="analyzing_effects"></a>
Analyzing Effects
The first and most important question is, what variable, or combination of variables, has the strongest effect on the three responses $y_1$? $y_2$? $y_3$?
To figure this out, we'll need to use the data we computed above. Python makes it easy to slice and dice data. In this case, we've constructed a nested dictionary, with the outer keys mapping to the number of variables and inner keys mapping to particular combinations of input variables. It's pretty easy to convert this to a flat data structure that we can use to sort by variable effects. We've got six "levels" of variable combinations, so we'll flatten effects by looping through all six dictionaries of variable combinations (from main effects to six-variable interaction effects), and adding each entry to a master dictionary.
The master dictionary will be a flat dictionary, and once we've populated it, we can use it to make a DataFrame for easier sorting, printing, manipulating, aggregating, and so on.
End of explanation
"""
# Quantify which effects are not normally distributed,
# to assist in identifying important variables
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
stats.probplot(y1, dist="norm", plot=ax1)
ax1.set_title('y1')
stats.probplot(y2, dist="norm", plot=ax2)
ax2.set_title('y2')
stats.probplot(y3, dist="norm", plot=ax3)
ax3.set_title('y3')
"""
Explanation: If we were only to look at the list of rankings of each variable, we would see that each response is affected by different input variables, listed below in order of descending importance:
* $y_1$: 136254
* $y_2$: 561234
* $y_3$: 453216
This is a somewhat mixed message that's hard to interpret - can we get rid of variable 2? We can't eliminate 1, 4, or 5, and probably not 3 or 6 either.
However, looking at the quantile-quantile plot of the effects answers the question in a more visual way.
<a name="quantile_effects"></a>
Quantile-Quantile Effects Plot
We can examine the distribution of the various input variable effects using a quantile-quantile plot of the effects. Quantile-quantile plots arrange the effects in order from least to greatest, and can be applied in several contexts (as we'll see below, when assessing model fits). If the quantities plotted on a quantile-quantile plot are normally distributed, they will fall on a straight line; data that do not fall on the straight line indicate significant deviations from normal behavior.
In the case of a quantile-quantile plot of effects, non-normal behavior means the effect is particularly strong. By identifying the outlier points on these quantile-quantile plots (they're ranked in order, so they correspond to the lists printed above), we can identify the input variables most likely to have a strong impact on the responses.
We need to look both at the top (the variables that have the largest overall positive effect) and the bottom (the variables that have the largest overall negative effect) for significant outliers. When we find outliers, we can add them to a list of variables that we have decided are important and will keep in our analysis.
End of explanation
"""
xlabs = ['x1','x2','x3','x4','x5','x6']
ylabs = ['y1','y2','y3']
ls_data = doe[xlabs+ylabs]
import statsmodels.api as sm
import numpy as np
x = ls_data[xlabs]
x = sm.add_constant(x)
"""
Explanation: Normally, we would use the main effects that were computed, and their rankings, to eliminate any variables that don't have a strong effect on any of our variables. However, this analysis shows that sometimes we can't eliminate any variables.
All six input variables show up as effects falling far from the red line, indicating that each has a statistically meaningful (i.e., not normally distributed) effect on all three response variables. This means we should keep all six factors in our analysis.
There is also a point on the $y_3$ graph that appears significant on the bottom. Examining the output of the lists above, this point represents the effect for the six-way interaction of all input variables. High-order interactions are highly unlikely (and in this case it is a numerical artifact of the way the responses were generated), so we'll keep things simple and stick to a linear model.
Let's continue our analysis without eliminating any of the six factors, since they are important to all of our responses.
<a name="dof"></a>
Utilizing Degrees of Freedom
Our very expensive, 64-experiment full factorial design (the data for which maps $(x_1,x_2,\dots,x_6)$ to $(y_1,y_2,y_3)$) gives us 64 data points, and 64 degrees of freedom. What we do with those 64 degrees of freedom is up to us.
We could fit an empirical model, or response surface, that has 64 independent parameters, and account for many of the high-order interaction terms - all the way up to six-variable interaction effects. However, high-order effects are rarely important, and are a waste of our degrees of freedom.
Alternatively, we can fit an empirical model with fewer coefficients, using up fewer degrees of freedom, and use the remaining degrees of freedom to characterize the error introduced by our approximate model.
To describe a model with the 6 variables listed above and no other variable interaction effects would use only 6 degrees of freedom, plus 1 degree of freedom for the constant term, leaving 57 degrees of freedom available to quantify error, attribute variance, etc.
Our goal is to use least squares to compute model equations for $(y_1,y_2,y_3)$ as functions of $(x_1,x_2,x_3,x_4,x_5,x_6)$.
End of explanation
"""
y1 = ls_data['y1']
est1 = sm.OLS(y1,x).fit()
print(est1.summary())
"""
Explanation: The first ordinary least squares linear model is created to predict values of the first variable, $y_1$, as a function of each of our input variables, the list of which are contained in the xlabs variable. When we perform the linear regression fitting, we see much of the same information that we found in the prior two-level three-factor full factorial design, but here, everything is done automatically.
The model is linear, meaning it's fitting the coefficients of the function:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 + a_6 x_6
$$
(here, the variables $y$ and $x$ are vectors, with one component for each response; in our case, they are three-dimensional vectors.)
Because there are 64 observations and 7 coefficients, the 57 extra observations give us extra degrees of freedom with which to assess how good the model is. That analysis can be done with an ordinary least squares (OLS) model, available through the statsmodel library in Python.
<a name="ols"></a>
Ordinary Least Squares Regression Model
This built-in OLS model will fit an input vector $(x_1,x_2,x_3,x_4,x_5,x_6)$ to an output vector $(y_1,y_2,y_3)$ using a linear model; the OLS model is designed to fit the model with more observations than coefficients, and utilize the remaining data to quantify the fit of the model.
Let's run through one of these, and analyze the results:
End of explanation
"""
y2 = ls_data['y2']
est2 = sm.OLS(y2,x).fit()
print(est2.summary())
y3 = ls_data['y3']
est3 = sm.OLS(y3,x).fit()
print(est3.summary())
"""
Explanation: The StatsModel OLS object prints out quite a bit of useful information, in a nicely-formatted table. Starting at the top, we see a couple of important pieces of information: specifically, the name of the dependent variable (the response) that we're looking at, the number of observations, and the number of degrees of freedom.
We can see an $R^2$ statistic, which indicates how well this data is fit with our linear model, and an adjusted $R^2$ statistic, which accounts for the large number of degrees of freedom. While an adjusted $R^2$ of 0.73 is not great, we have to remember that this linear model is trying to capture a wealth of complexity in six coefficients. Furthermore, the adjusted $R^2$ value is too broad to sum up how good our model actually is.
The table in the middle is where the most useful information is located. The coef column shows the coefficients $a_0, a_1, a_2, \dots$ for the model equation:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 + a_6 x_6
$$
Using the extra degrees of freedom, an estimate $s^2$ of the variance in the regression coefficients is also computed, and reported in the std err column. Each linear term is attributed the same amount of variance, $\pm 0.082$.
End of explanation
"""
%matplotlib inline
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
# Quantify goodness of fit
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
r1 = y1 - est1.predict(x)
r2 = y2 - est2.predict(x)
r3 = y3 - est3.predict(x)
stats.probplot(r1, dist="norm", plot=ax1)
ax1.set_title('Residuals, y1')
stats.probplot(r2, dist="norm", plot=ax2)
ax2.set_title('Residuals, y2')
stats.probplot(r3, dist="norm", plot=ax3)
ax3.set_title('Residuals, y3')
"""
Explanation: <a name="goodness_of_fit"></a>
Quantifying Model Goodness-of-Fit
We can now use these linear models to evaluate each set of inputs and compare the model response $\hat{y}$ to the actual observed response $y$. What we would expect to see, if our model does an adequate job of representing the underlying behavior of the model, is that in each of the 64 experiments, the difference between the model prediction $M$ and the measured data $d$, defined as the residual $r$,
$$
r = \left| d - M \right|
$$
should be comparable across all experiments. If the residuals appear to have functional dependence on the input variables, it is an indication that our model is missing important effects and needs more or different terms. The way we determine this, mathematically, is by looking at a quantile-quantile plot of our errors (that is, a ranked plot of our error magnitudes).
If the residuals are normally distributed, they will follow a straight line; if the plot shows the data have significant wiggle and do not follow a line, it is an indication that the errors are not normally distributed, and are therefore skewed (indicating terms missing from our OLS model).
End of explanation
"""
fig = figure(figsize=(10,12))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
axes = [ax1,ax2,ax3]
colors = sns.xkcd_palette(["windows blue", "amber", "faded green", "dusty purple","aqua blue"])
#resids = [r1, r2, r3]
normed_resids = [r1/y1, r2/y2, r3/y3]
for (dataa, axx, colorr) in zip(normed_resids,axes,colors):
sns.kdeplot(dataa, bw=1.0, ax=axx, color=colorr, shade=True, alpha=0.5);
ax1.set_title('Probability Distribution: Normalized Residual Error, y1')
ax2.set_title('Normalized Residual Error, y2')
ax3.set_title('Normalized Residual Error, y3')
"""
Explanation: Determining whether significant trends are being missed by the model depends on how many points deviate from the red line, and how significantly. If there is a single point that deviates, it does not necessarily indicate a problem; but if there is significant wiggle and most points deviate significantly from the red line, it means that there is something about the relationship between the inputs and the outputs that our model is missing.
There are only a few points deviating from the red line. We saw from the effect quantile for $y_3$ that there was an interaction variable that was important to modeling the response $y_3$, and it is likely this interaction that is leading to noise at the tail end of these residuals. This indicates residual errors (deviations of the model from data) that do not follow a natural, normal distribution, which indicates there is a pattern in the deviations - namely, the interaction effect.
The conclusion about the error from the quantile plots above is that there are only a few points deviation from the line, and no particularly significant outliers. Our model can use some improvement, but it's a pretty good first-pass model.
<a name="distribution_of_error"></a>
Distribution of Error
Another thing we can look at is the normalized error: what are the residual errors (differences between our model prediction and our data)? How are their values distributed?
A kernel density estimate (KDE) plot, which is a smoothed histogram, shows the probability distribution of the normalized residual errors. As expected, they're bunched pretty close to zero. There are some bumps far from zero, corresponding to the outliers on the quantile-quantile plot of the errors above. However, they're pretty close to randomly distributed, and therefore it doesn't look like there is any systematic bias there.
End of explanation
"""
# Our original regression variables
xlabs = ['x2','x3','x4']
doe.groupby(xlabs)[ylabs].mean()
# If we decided to go for a different variable set
xlabs = ['x2','x3','x4','x6']
doe.groupby(xlabs)[ylabs].mean()
"""
Explanation: Note that in these figures, the bumps at extreme value are caused by the fact that the interval containing the responses includes 0 and values close to 0, so the normalization factor is very tiny, leading to large values.
<a name="aggregating"></a>
Aggregating Results
Let's next aggregate experimental results, by taking the mean over various variables to compute the mean effect for regressed variables. For example, we may want to look at the effects of variables 2, 3, and 4, and take the mean over the other three variables.
This is simple to do with Pandas, by grouping the data by each variable, and applying the mean function on all of the results. The code looks like this:
End of explanation
"""
xlabs = ['x1','x2']
doe.groupby(xlabs)[ylabs].var()
"""
Explanation: This functionality can also be used to determine the variance in all of the experimental observations being aggregated. For example, here we aggregate over $x_3 \dots x_6$ and show the variance broken down by $x_1, x_2$ vs $y_1, y_2, y_3$.
End of explanation
"""
doe.groupby(xlabs)[ylabs].count()
"""
Explanation: Or even the number of experimental observations being aggregated!
End of explanation
"""
# Histogram of means of response values, grouped by xlabs
xlabs = ['x1','x2','x3','x4']
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)[ylabs].mean().values) / np.ravel(doe.groupby(xlabs)[ylabs].var().values)
hist(dat, 10, normed=False, color=colors[3]);
xlabel(r'Relative Variance ($\mu$/$\sigma^2$)')
show()
# Histogram of variances of response values, grouped by xlabs
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)['y1'].var().values)
hist(dat, normed=True, color=colors[4])
xlabel(r'Variance in $y_{1}$ Response')
ylabel(r'Frequency')
show()
"""
Explanation: <a name="dist_variance"></a>
Distributions of Variance
We can convert these dataframes of averages, variances, and counts into data for plotting. For example, if we want to make a histogram of every value in the groupby dataframe, we can use the .values method, so that this:
doe.groupby(xlabs)[ylabs].mean()
becomes this:
doe.groupby(xlabs)[ylabs].mean().values
This $M \times N$ array can then be flattened into a vector using the ravel() method from numpy:
np.ravel( doe.groupby(xlabs)[ylabs].mean().values )
The resulting data can be used to generate histograms, as shown below:
End of explanation
"""
# normal plot of residuals
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.plot(y1,r1,'o',color=colors[0])
ax1.set_xlabel('Response value $y_1$')
ax1.set_ylabel('Residual $r_1$')
ax2.plot(y2,r2,'o',color=colors[1])
ax2.set_xlabel('Response value $y_2$')
ax2.set_ylabel('Residual $r_2$')
ax2.set_title('Response vs. Residual Plots')
ax3.plot(y3,r3,'o',color=colors[2])
ax3.set_xlabel('Response value $y_3$')
ax3.set_ylabel('Residual $r_3$')
show()
"""
Explanation: The distribution of variance looks mostly normal, with some outliers. These are the same outliers that showed up in our quantile-quantile plot, and they'll show up in the plots below as well.
<a name="residual"></a>
Residual vs. Response Plots
Another thing we can do, to look for uncaptured effects, is to look at our residuals vs. $\hat{y}$. This is a further effort to look for underlying functional relationships between $\hat{y}$ and the residuals, which would indicate that our system exhibits behavior not captured by our linear model.
End of explanation
"""
|
srcole/qwm | burrito/Burrito_nonlinear.ipynb | mit | %config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import pandasql
import seaborn as sns
sns.set_style("white")
"""
Explanation: San Diego Burrito Analytics: Data characterization
Scott Cole
1 July 2016
This notebook applies nonlinear techniques to analyze the contributions of burrito dimensions to the overall burrito rating.
* Create the 'vitalness' metric. For each dimension, identify the burritos that scored below average (defined as 2 or lower), then calculate the linear model's predicted overall score and compare it to the actual overall score. For what dimensions is this distribution not symmetric around 0?
* If this distribution trends greater than 0 (Overall_predict - Overall_actual), the actual score is lower than the predicted score. That metric is 'vital': it being bad makes the whole burrito bad.
* If vitalness < 0, then the metric being really bad doesn't affect the overall burrito as much as it should.
* In the opposite theme, make the 'saving' metric for all burritos in which the dimension was 4.5 or 5.
* For those that are significantly different from 0, quantify the effect size (e.g., a burrito with a rating of 2 or lower for this metric will have its overall rating disproportionately impacted by XX points).
* How many of the dimensions are nonzero? If all of them are 0, then burritos are perfectly linear, which would be weird. If many of them are nonzero, then burritos are highly nonlinear.
NOTE: A neural network is not recommended because we should have 30x as many examples as weights; for a 3-layer neural network with 4 nodes in the first 2 layers and 1 in the last layer, that would be 16 + 4 = 20 weights, so we would need 600 burritos. One option would be to artificially create data.
Default imports
End of explanation
"""
import util
df = util.load_burritos()
N = df.shape[0]
"""
Explanation: Load data
End of explanation
"""
def vitalness(df, dim, rating_cutoff = 2,
metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',
'Uniformity','Salsa','Wrap']):
# Fit GLM to get predicted values
dffull = df[np.hstack((metrics,'overall'))].dropna()
X = sm.add_constant(dffull[metrics])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
dffull['overallpred'] = res.fittedvalues
# Make exception for Meat:filling in order to avoid pandasql error
if dim == 'Meat:filling':
dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})
dim = 'Meatfilling'
# Compare predicted and actual overall ratings for each metric below the rating cutoff
import pandasql
q = """
SELECT
overall, overallpred
FROM
dffull
WHERE
"""
q = q + dim + ' <= ' + str(rating_cutoff)
df2 = pandasql.sqldf(q.lower(), locals())
return sp.stats.ttest_rel(df2.overall,df2.overallpred)
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
for metric in vital_metrics:
print(metric)
if metric == 'Volume':
rating_cutoff = .7
else:
rating_cutoff = 1
print(vitalness(df,metric,rating_cutoff=rating_cutoff, metrics=vital_metrics))
"""
Explanation: Vitalness metric
End of explanation
"""
def savior(df, dim, rating_cutoff = 2,
metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',
'Uniformity','Salsa','Wrap']):
# Fit GLM to get predicted values
dffull = df[np.hstack((metrics,'overall'))].dropna()
X = sm.add_constant(dffull[metrics])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
dffull['overallpred'] = res.fittedvalues
# Make exception for Meat:filling in order to avoid pandasql error
if dim == 'Meat:filling':
dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})
dim = 'Meatfilling'
# Compare predicted and actual overall ratings for each metric below the rating cutoff
import pandasql
q = """
SELECT
overall, overallpred
FROM
dffull
WHERE
"""
q = q + dim + ' >= ' + str(rating_cutoff)
df2 = pandasql.sqldf(q.lower(), locals())
print(len(df2))
return sp.stats.ttest_rel(df2.overall,df2.overallpred)
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
for metric in vital_metrics:
print(metric)
print(savior(df,metric,rating_cutoff=5, metrics=vital_metrics))
print('Volume')
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap','Volume']
print(savior(df,'Volume',rating_cutoff=.9,metrics=vital_metrics))
"""
Explanation: Savior metric
End of explanation
"""
|
esa-as/2016-ml-contest | JesperDramsch/Facies_classification_NMM_Split-Jesper.ipynb | apache-2.0 | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.set_option('display.width', 1000)
pd.options.mode.chained_assignment = None
def hotcoder(data,sortlist,base):
hotdata = [0]*len(data)
dexin = {x:i for i,x in enumerate(sortlist)}
for i,a in enumerate(data):
for j,b in enumerate(a):
if isinstance(a,list):
if b not in sortlist:
print("Found illegal value of {0} at position {1}".format(b, i))
break
hotdata[i] += base**dexin[b]
else:
if a not in sortlist:
print("Found illegal value of {0} at position {1}".format(a, i))
break
hotdata[i] += base**dexin[a]
break
return hotdata
def distance(latlon1,latlon2):
lat1 = np.deg2rad(latlon1[0])
lon1 = np.deg2rad(latlon1[1])
lat2 = np.deg2rad(latlon2[0])
lon2 = np.deg2rad(latlon2[1])
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat / 2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
return (6371 * c)
facies_labels = ['SS','CSiS','FSiS','SiSh','MS','WS','D','PS','BS']
facies = pd.DataFrame({'ShortName': pd.Series(facies_labels,index=facies_labels),'Facies' : pd.Series(['Nonmarine sandstone', 'Nonmarine coarse siltstone', 'Nonmarine fine siltstone ', 'Marine siltstone and shale', 'Mudstone (limestone)', 'Wackestone (limestone)', 'Dolomite', 'Packstone-grainstone (limestone)','Phylloid-algal bafflestone (limestone)'],index=facies_labels), 'Neighbours' : pd.Series(['CSiS',['SS','FSiS'],'CSiS','MS',['SiSh','WS'],['MS','D'],['WS','PS'],['WS','D','BS'],['D','PS']],index=facies_labels), 'Colors' : pd.Series(['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'],index=facies_labels)})
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
rows = len(training_data.index)
training_data
"""
Explanation: Facies classification using Machine Learning
Original contest notebook by Brendon Hall, Enthought
Let's train some data. I'm Jesper and I have no clue what I'm doing but I'm having fun. Most texts are still from the original notebook from Brandon Hall. As well as the basis for the code.
End of explanation
"""
facies[['Facies']]
"""
Explanation: Verbatim from Brendon Hall:
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
End of explanation
"""
pd.DataFrame({'Neighbours' : [facies.ShortName[x] for x in facies.Neighbours]},index=facies_labels)
"""
Explanation: These facies gradually blend into one another. We can see a marine non-marine neighbourhood already.
End of explanation
"""
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
"""
Explanation: The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
End of explanation
"""
latlong = pd.DataFrame({"SHRIMPLIN": [37.98126,-100.98329], "ALEXANDER D": [37.6607787,-95.3534525], "SHANKLE": [38.0633727,-101.3891894], "LUKE G U": [37.4537739,-101.6073725], "KIMZEY A": [37.12289,-101.39697], "CROSS H CATTLE": [37.9105826,-101.6464517], "NOLAN": [37.7866294,-101.0451641], "NEWBY": [37.5406739,-101.5847635], "CHURCHMAN BIBLE": [37.3497658,-101.1060761], "STUART": [37.4857262,-101.1391063], "CRAWFORD": [37.1893654,-101.1494994], "Recruit F9": [37.4,-101]})
#distance([37.660779,-95.353453],[37.349766,-101.106076])
dist_mat= pd.DataFrame(np.zeros((latlong.shape[1],latlong.shape[1])))
#dist_mat = np.zeros(len(latlong.index),len(latlong.index))
for i,x in enumerate(latlong):
for j,y in enumerate(latlong):
if i > j:
dist_mat[i][j] = (distance(latlong[x].values,latlong[y].values))
dist_mat[j][i] = dist_mat[i][j]
dist_mat
training_data.describe()
"""
Explanation: Shrimplin:
* Longitude: -100.98329
* Latitude: 37.98126
ALEXANDER D
* Longitude: -95.3534525
* Latitude: 37.6607787
SHANKLE
* Longitude: -101.3891894
* Latitude: 38.0633727
LUKE G U
* Longitude: -101.6073725
* Latitude: 37.4537739
KIMZEY A
* Longitude: -101.39697
* Latitude: 37.12289
CROSS H CATTLE
* Longitude: -101.6464517
* Latitude: 37.9105826
NOLAN
* Longitude: -101.0451641
* Latitude: 37.7866294
Recruit F9
NEWBY
* Longitude: -101.5847635
* Latitude: 37.5406739
CHURCHMAN BIBLE
* Longitude: -101.1060761
* Latitude: 37.3497658
STUART
* Longitude: -101.1391063
* Latitude: 37.4857262
CRAWFORD
* Longitude: -101.1494994
* Latitude: 37.1893654
End of explanation
"""
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies.Colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] ]
processed_data = training_data.copy()
processed_data['Facies'] = processed_data['Facies']-1
processed_data= processed_data[processed_data['PE'].notnull().values]
processed_data.loc[:,'FaciesLabels'] = processed_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
"""
Explanation: The new data set contains about 1000 data points or 33% more than the original one. Rejoice!
End of explanation
"""
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies.Colors[0:len(facies.Colors)], 'indexed')
cmap_coarse = colors.ListedColormap(['#c0c0c0','#000000'])
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
clustercoarse=np.repeat(np.expand_dims(logs['NM_M'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=0,vmax=8)
im2 = ax[6].imshow(clustercoarse, interpolation='none', aspect='auto',
cmap=cmap_coarse,vmin=1,vmax=2)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im1, cax=cax)
cbar.set_label((18*' ').join(facies.index))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel('NM_M')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([]); ax[6].set_yticklabels([])
ax[5].set_xticklabels([]); ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
plt.show()
"""
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
"""
make_facies_log_plot(
processed_data[processed_data['Well Name'] == 'SHRIMPLIN'],
facies.Colors)
"""
Explanation: Here's the SHRIMPLIN well for your viewing pleasure.
End of explanation
"""
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = processed_data['Facies'].value_counts().sort_index()
tmp = processed_data.query('NM_M==1')
facies_counts0 = tmp['Facies'].value_counts().sort_index()
tmp = processed_data.query('NM_M==2')
facies_counts1 = tmp['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies.Colors,
title='Distribution of Training Data by Facies')
pd.DataFrame(facies_counts).T
"""
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
End of explanation
"""
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
plotdata=processed_data.drop(['Well Name','Formation','Depth','RELPOS'],axis=1).dropna()
sns.pairplot(plotdata.drop(['Facies','NM_M'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
"""
Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
End of explanation
"""
sns.pairplot(plotdata.query('NM_M==1').drop(['Facies','NM_M'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
facies_counts0.plot(kind='bar',color=facies.Colors,
title='Distribution of Training Data by Facies')
pd.DataFrame(facies_counts0).T
"""
Explanation: At this point I would like to take a look at the separated main classes.
End of explanation
"""
sns.pairplot(plotdata.query('NM_M==2').drop(['Facies','NM_M'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
facies_counts1.plot(kind='bar',color=facies.Colors,
title='Distribution of Training Data by Facies')
pd.DataFrame(facies_counts1).T
"""
Explanation: In this view it becomes obvious that GR, PHIND and PE have some outliers at:
GR > 150
PhiND > 60
PE < 1
End of explanation
"""
processed_data0 = processed_data.query('NM_M==1 and not (PHIND > 60 or PE < 1 or GR > 150) and Facies < 3')
processed_data1 = processed_data.query('NM_M==2 and (PHIND < 45 or PE > 2) and Facies > 2')
processed_data = pd.concat([processed_data0,processed_data1])
print("Dropped {0} rows due to outliers and misclassification.".format(rows-len(processed_data.index),))
sns.set()
plotdata=processed_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1).dropna()
sns.pairplot(plotdata,
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
"""
Explanation: Here we can see apparent outliers in GR, PhiND, PE and DeltaPhi
* GR > 350
* DeltaPhi < -20
* PhiND > 45
* PE < 2
I have decided to keep GR as it is a general trend in the data (and not unheard of in that rock). DeltaPhi also fits a general trend and seems to be a feature. PhiND and PE do not lie in a trend or cluster and will therefore be removed.
End of explanation
"""
correct_facies_labels = training_data['Facies'].values
correct_facies_labels0 = training_data0['Facies'].values
#facies_labels=['Sand','Carbs']
correct_facies_labels1 = training_data1['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name','Depth','Facies','NM_M','FaciesLabels'], axis=1)
feature_vectors0 = training_data0.drop(['Formation', 'Well Name','Depth','Facies','NM_M','FaciesLabels'], axis=1)
feature_vectors1 = training_data1.drop(['Formation', 'Well Name','Depth','Facies','NM_M','FaciesLabels'], axis=1)
feature_vectors.describe()
feature_vectors0.describe()
feature_vectors1.describe()
"""
Explanation: Feature Engineering
End of explanation
"""
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
scaler0 = preprocessing.StandardScaler().fit(feature_vectors0)
scaled_features0 = scaler0.transform(feature_vectors0)
scaler1 = preprocessing.StandardScaler().fit(feature_vectors1)
scaled_features1 = scaler1.transform(feature_vectors1)
"""
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie: Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScalar class can be fit to the training set, and later used to standardize any training data.
End of explanation
"""
from sklearn import svm
clf = svm.SVC()
clf0 = svm.SVC()
clf1 = svm.SVC()
"""
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set.
Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a support vector machine. The SVM is a map of the feature vectors as points in a multi dimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in scikit-learn takes a number of important parameters. First we create a classifier using the default settings.
End of explanation
"""
clf.fit(scaled_features,correct_facies_labels)
clf0.fit(scaled_features0,correct_facies_labels0)
clf1.fit(scaled_features1,correct_facies_labels1)
"""
Explanation: Now we can train the classifier using the training set we created above.
End of explanation
"""
predicted_labels = clf.predict(scaled_features)
predicted_labels0 = clf0.predict(scaled_features0)
predicted_labels1 = clf1.predict(scaled_features1)
"""
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
End of explanation
"""
from sklearn.metrics import confusion_matrix,f1_score, accuracy_score, make_scorer
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(correct_facies_labels, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
conf0 = confusion_matrix(correct_facies_labels0, predicted_labels0)
display_cm(conf0, facies_labels[0:3], hide_zeros=True)
conf1 = confusion_matrix(correct_facies_labels1, predicted_labels1)
display_cm(conf1, facies_labels[3:], hide_zeros=True)
y_test_comb = np.concatenate([correct_facies_labels0,correct_facies_labels1])
predicted_labels_comb = np.concatenate([predicted_labels0,predicted_labels1])
conf_comb = confusion_matrix(y_test_comb, predicted_labels_comb)
display_cm(conf_comb, facies_labels, hide_zeros=True)
"""
Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
End of explanation
"""
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
"""
Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly identified as SS, 21 were classified as CSiS and 2 were classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
End of explanation
"""
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies,offset):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j-offset]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies,0))
print('Facies classification accuracy = %f' % accuracy(conf0))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf0, adjacent_facies[0:3],0))
print('Facies classification accuracy = %f' % accuracy(conf1))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf1, adjacent_facies[3:],3))
print('Facies classification accuracy = %f' % accuracy(conf_comb))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf_comb, adjacent_facies,0))
"""
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
End of explanation
"""
from sklearn.model_selection import RandomizedSearchCV,LeaveOneGroupOut
from scipy.stats import uniform as sp_uniform
Fscorer = make_scorer(f1_score, average = 'micro')
Ascorer = make_scorer(accuracy_score)
clf = svm.SVC(cache_size = 800, random_state=1)
clf0 = svm.SVC(cache_size = 800, random_state=1)
clf1 = svm.SVC(cache_size = 800, random_state=1)
#parm_grid={'kernel': ['linear', 'rbf'],
# 'C': [0.5, 1, 10, 50],
# 'gamma':[0.0001, 0.01, 1, 10]}
param_dist = {"kernel": ['rbf'],
"C": sp_uniform(0.0001,10),
"gamma": sp_uniform(0.0001, 10)}
n_iter_search = 20
random_search0 = RandomizedSearchCV(clf0, param_distributions=param_dist,scoring = Fscorer,n_iter=n_iter_search)
random_search0.fit(scaled_features0,correct_facies_labels0)
print('Best score for NM: {}'.format(random_search0.best_score_))
print('Best parameters for NM: {}'.format(random_search0.best_params_))
clf0 = random_search0.best_estimator_
random_search0.best_estimator_
random_search1 = RandomizedSearchCV(clf1, param_distributions=param_dist,scoring = Fscorer,n_iter=n_iter_search)
random_search1.fit(scaled_features1,correct_facies_labels1)
print('Best score for random: {}'.format(random_search1.best_score_))
print('Best parameters for random: {}'.format(random_search1.best_params_))
clf1 = random_search1.best_estimator_
random_search1.best_estimator_
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,scoring = Fscorer,n_iter=n_iter_search)
random_search.fit(scaled_features,correct_facies_labels)
print('Best score for combo: {}'.format(random_search.best_score_))
print('Best parameters for combo: {}'.format(random_search.best_params_))
clf = random_search.best_estimator_
random_search.best_estimator_
"""
Explanation: Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter C is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if C is too large it may 'overfit' the data and fail to generalize when classifying new data. If C is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function rbf kernel (the default). The gamma parameter describes the size of the radial basis functions, which is how far away two vectors in the feature space need to be to be considered close.
We will train a series of classifiers with different values for C and gamma. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
End of explanation
"""
clf0.fit(scaled_features0, correct_facies_labels0)
clf1.fit(scaled_features1, correct_facies_labels1)
clf.fit(scaled_features, correct_facies_labels)
cv_conf = confusion_matrix(np.concatenate([correct_facies_labels0,correct_facies_labels1]), np.concatenate([clf0.predict(scaled_features0),clf1.predict(scaled_features1)]))
cv_conf_comb = confusion_matrix(correct_facies_labels, clf.predict(scaled_features))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies,0))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf_comb))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf_comb, adjacent_facies,0))
"""
Explanation: Okay, take it up a notch.
End of explanation
"""
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_cm(cv_conf_comb, facies_labels,
display_metrics=True, hide_zeros=True)
"""
Explanation: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the display_cm() function:
End of explanation
"""
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf_comb, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
"""
Explanation: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies MS or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies BS or bafflestone has the best F1 score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
End of explanation
"""
wells = processed_data["Well Name"].values
wells0 = processed_data0["Well Name"].values
wells1 = processed_data1["Well Name"].values
logo = LeaveOneGroupOut()
logo0 = LeaveOneGroupOut()
logo1 = LeaveOneGroupOut()
f1_SVC0 = []
pred0 = {}
for train, test in logo0.split(scaled_features0, correct_facies_labels0, groups=wells0):
well_name = wells0[test[0]]
clf0.fit(scaled_features0[train], correct_facies_labels0[train])
pred0[well_name] = clf0.predict(scaled_features0[test])
sc = f1_score(correct_facies_labels0[test], pred0[well_name], labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_SVC0.append(sc)
print("-Average leave-one-well-out F1 Score: {0}".format(sum(f1_SVC0)/(1.0*(len(f1_SVC0)))))
f1_SVC1 = []
pred1 = {}
for train, test in logo1.split(scaled_features1, correct_facies_labels1, groups=wells1):
well_name = wells1[test[0]]
clf1.fit(scaled_features1[train], correct_facies_labels1[train])
pred1[well_name] = clf1.predict(scaled_features1[test])
sc = f1_score(correct_facies_labels1[test], pred1[well_name], labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_SVC1.append(sc)
print("-Average leave-one-well-out F1 Score: {0}".format(sum(f1_SVC1)/(1.0*(len(f1_SVC1)))))
f1_SVC_comb = []
for train, test in logo.split(np.concatenate([scaled_features0,scaled_features1]), np.concatenate([correct_facies_labels0,correct_facies_labels1]), groups=np.concatenate([wells0,wells1])):
well_name = np.concatenate([wells0,wells1])[test[0]]
pred=np.concatenate([pred0.get(well_name, []),pred1.get(well_name, [])])
sc = f1_score(np.concatenate([correct_facies_labels0,correct_facies_labels1])[test], pred, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_SVC_comb.append(sc)
print("-Average leave-one-well-out F1 Score: {0}".format(sum(f1_SVC_comb)/(1.0*(len(f1_SVC_comb)))))
f1_SVC_compare = []
pred_comp = {}
for train, test in logo.split(scaled_features, correct_facies_labels, groups=wells):
well_name = wells[test[0]]
clf.fit(scaled_features[train], correct_facies_labels[train])
    pred_comp[well_name] = clf.predict(scaled_features[test])
    sc = f1_score(correct_facies_labels[test], pred_comp[well_name], labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_SVC_compare.append(sc)
print("-Average leave-one-well-out F1 Score Comparison: {0}".format(sum(f1_SVC_compare)/(1.0*(len(f1_SVC_compare)))))
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
"""
Explanation: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
LOGO (leave-one-group-out cross-validation)
End of explanation
"""
well_data = pd.read_csv('../validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_data0 = well_data.query('NM_M==1')
well_data1 = well_data.query('NM_M==2')
well_features0 = well_data0.drop(['Formation', 'Well Name','Depth','NM_M'], axis=1)
well_features1 = well_data1.drop(['Formation', 'Well Name','Depth','NM_M'], axis=1)
"""
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
End of explanation
"""
X_unknown0 = scaler0.transform(well_features0)
X_unknown1 = scaler1.transform(well_features1)
"""
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
"""
#predict facies of unclassified data
y_unknown0 = clf0.predict(X_unknown0)
y_unknown1 = clf1.predict(X_unknown1)
well_data0['Facies'] = y_unknown0
well_data1['Facies'] = y_unknown1
well_data=pd.concat([well_data0,well_data1], axis=0)
well_data
well_data['Well Name'].unique()
"""
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
End of explanation
"""
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies.Colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies.Colors)
"""
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation
"""
well_data0['Facies'] = y_unknown0+1
well_data1['Facies'] = y_unknown1+1
well_data=pd.concat([well_data0,well_data1], axis=0)
well_data.to_csv('well_data_with_facies.csv')
"""
Explanation: Finally we can write out a csv file with the well data along with the facies classification results.
End of explanation
"""
|
ioggstream/python-course | ansible-101/notebooks/05_inventories.ipynb | agpl-3.0 | cd /notebooks/exercise-05
!cat inventory
"""
Explanation: Inventories
Inventories are a fundamental documentation entry point for our infrastructures.
They contain a lot of information, including:
- ansible_user
- configuration variables in [group_name:vars]
- host grouping eg. by geographical zones in [group_name:children]
Files:
inventory
End of explanation
"""
!ansible -i inventory --list-host all
"""
Explanation: The ansible executable can process inventory files
End of explanation
"""
# Use this cell for the exercise
# The ping module is very useful.
# Use it whenever you want to check connectivity!
!ansible -m ping -i inventory web_rome
"""
Explanation: Exercise
Use ansible to show:
- all hosts of the web group.
End of explanation
"""
#To create custom inventory scripts just use python ;) and set it in
!grep inventory ansible.cfg # inventory = ./docker-inventory.py
"""
Explanation: Inventory scripts
End of explanation
"""
"""List our containers.
Note: this only works with docker-compose containers.
"""
from __future__ import print_function
#
# Manage different docker libraries
#
try:
from docker import Client
except ImportError:
from docker import APIClient as Client
c = Client(base_url="http://172.17.0.1:2375")
# Define a function to make it clear!
container_fmt = lambda x: (
x['Names'][0][1:],
x['Labels']['com.docker.compose.service'],
x['NetworkSettings']['Networks']['bridge']['IPAddress'],
)
for x in c.containers():
try:
print(*container_fmt(x), sep='\t\t')
except KeyError:
# skip non-docker-compose containers
pass
# Ansible accepts
import json
inventories = {
'web': {
'hosts': ['ws-1', 'ws-2'],
},
'db': {
'hosts': ['db-1', 'db-2'],
}
}
# like this
print(json.dumps(inventories, indent=1))
# You can pass variables to generated inventories too
inventories['web']['host_vars'] = {
'ansible_ssh_common_args': ' -o GSSApiAuthentication=no'
}
print(json.dumps(inventories, indent=1))
"""
Explanation: Exercise
In the official Ansible documentation, find at least 3 ansible_connection=docker parameters
End of explanation
"""
!ansible -m ping -i inventory-docker-solution.py all
"""
Explanation: Exercise:
Reuse the code in inventory-docker.py to print a json inventory that:
connects via docker to "web" hosts
connects via ssh to "ansible" hosts
Test it in the cell below.
NOTE: there's a docker inventory script shipped with ansible
End of explanation
"""
# Test here your inventory
"""
Explanation: Exercise
Modify the inventory-docker.py to skip StrictHostKeyChecking only on web hosts.
End of explanation
"""
# Use this cell to test the exercise
"""
Explanation: Configurations
You may want to split inventory files and separate prod and test environment.
Exercise:
split inventory in two inventory files:
prod for production servers
test for test servers
Then use ansible -i to explicitly use the different ones.
End of explanation
"""
!tree group_vars
"""
Explanation: group_vars and host_vars
You can move variables out of inventories - eg to simplify inventory scripts - and store them in files:
under group_vars for host groups
under host_vars for single hosts
End of explanation
"""
# Test here the new inventory file
"""
Explanation: If you have different inventories, you can store different sets of variables in custom files.
The variables under all will be shared between all inventories
Exercise:
edit group_vars/all and move there all common variables from inventory
End of explanation
"""
!cat group_vars/example
"""
Explanation: Inventory variables can store almost everything and even describe the architecture of your deployment
End of explanation
"""
|
StingraySoftware/notebooks | Simulator/Simulator Tutorial.ipynb | mit | %load_ext autoreload
%autoreload 2
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Contents
This notebook covers the basics of initializing and using the functionalities of simulator class. Various ways of simulating light curves that include 'power law distribution', 'user-defined responses', 'pre'defined responses' and 'impulse responses' are covered. The notebook also illustrates channel creation and ways to store and retrieve simulator objects.
Setup
Import some useful libraries.
End of explanation
"""
from stingray import Lightcurve, Crossspectrum, sampledata, Powerspectrum
from stingray.simulator import simulator, models
from stingray.fourier import poisson_level
"""
Explanation: Import relevant stingray libraries.
End of explanation
"""
sim = simulator.Simulator(N=10000, mean=5, rms=0.4, dt=0.125, red_noise=8, poisson=False)
sim_pois = simulator.Simulator(N=10000, mean=5, rms=0.4, dt=0.125, red_noise=8, poisson=True)
"""
Explanation: Creating a Simulator Object
Stingray has a simulator class which can be used to instantiate a simulator object and subsequently perform simulations. Arguments can be passed to the Simulator class to set the properties of the simulated light curve.
In this case, we instantiate a simulator object specifying the number of data points in the output light curve, the expected mean, the fractional rms, the binning interval, the red-noise factor and whether Poisson noise should be applied.
End of explanation
"""
sample = sampledata.sample_data().counts
"""
Explanation: We also import some sample data for later use.
End of explanation
"""
lc = sim.simulate(1)
plt.errorbar(lc.time, lc.counts, yerr=lc.counts_err)
"""
Explanation: Light Curve Simulation
There are multiple ways to simulate a light curve:
Using power-law spectrum
Using user-defined model
Using pre-defined models (lorenzian etc)
Using impulse response
(i) Using power-law spectrum
By passing a beta value as a function argument, the shape of the power-law spectrum can be defined. Passing beta as 1 gives a flicker-noise distribution.
End of explanation
"""
lc_pois = sim_pois.simulate(1)
plt.plot(lc_pois.time, lc_pois.counts)
plt.plot(lc_pois.time, lc_pois.smooth_counts)
"""
Explanation: When simulating Poisson-distributed light curves, a smooth_counts attribute is added to the light curve, containing the original smooth light curve, for debugging purposes.
End of explanation
"""
lc = sim.simulate(2)
plt.errorbar(lc.time, lc.counts, yerr=lc.counts_err)
lc_pois = sim_pois.simulate(2)
plt.plot(lc_pois.time, lc_pois.counts)
plt.plot(lc_pois.time, lc_pois.smooth_counts)
"""
Explanation: Passing beta as 2 gives a random-walk distribution.
End of explanation
"""
pds = Powerspectrum.from_lightcurve(lc_pois, norm="leahy")
pds = pds.rebin_log(0.005)
poisson = poisson_level(meanrate=lc_pois.meanrate, norm="leahy")
plt.loglog(pds.freq, pds.power)
plt.axhline(poisson)
"""
Explanation: These light curves can be used for standard power spectral analysis with other Stingray classes.
End of explanation
"""
w = np.fft.rfftfreq(sim.N, d=sim.dt)[1:]
spectrum = np.power((1/w),2/2)
plt.plot(spectrum)
lc = sim.simulate(spectrum)
plt.plot(lc.counts)
"""
Explanation: (ii) Using user-defined model
A light curve can also be simulated using a user-defined spectrum.
End of explanation
"""
lc = sim.simulate('generalized_lorentzian', [1.5, .2, 1.2, 1.4])
plt.plot(lc.counts[1:400])
lc = sim.simulate('smoothbknpo', [.6, 0.9, .2, 4])
plt.plot(lc.counts[1:400])
"""
Explanation: (iii) Using pre-defined models
One of the pre-defined spectrum models can also be used to simulate a light curve. In this case, model name and model parameters (as list iterable) need to be passed as function arguments.
To read more about the models and what the different parameters mean, see models notebook.
End of explanation
"""
s_ir = sim.simple_ir(10, 5, 0.1)
plt.plot(s_ir)
"""
Explanation: (iv) Using impulse response
Before simulating a light curve through this approach, an appropriate impulse response needs to be constructed. There
are two helper functions available for that purpose.
simple_ir() lets you define an impulse response of constant height. It takes in starting time, width and intensity as arguments, all of which are set by default.
End of explanation
"""
r_ir = sim.relativistic_ir()
r_ir = sim.relativistic_ir(t1=3, t2=4, t3=10, p1=1, p2=1.4, rise=0.6, decay=0.1)
plt.plot(r_ir)
"""
Explanation: A more realistic impulse response mimicking black hole dynamics can be created using relativistic_ir(). Its arguments are: primary peak time, secondary peak time, end time, primary peak value, secondary peak value, rise slope and decay slope. These parameters are set to appropriate values by default.
End of explanation
"""
lc_new = sim.simulate(sample, r_ir)
"""
Explanation: Now that the impulse response is ready, the simulate() method can be called to produce a light curve.
End of explanation
"""
lc_new = sim.simulate(sample, r_ir, 'full')
"""
Explanation: Since the new light curve is produced by the convolution of the original light curve and the impulse response, its length is truncated by default for ease of analysis. This can be changed, however, by supplying the additional parameter full.
End of explanation
"""
lc_new = sim.simulate(sample, r_ir, 'filtered')
"""
Explanation: Finally, sometimes we do not need to include the lag-delay portion in the output light curve. This can be done by changing the final function parameter to filtered.
End of explanation
"""
sim.simulate_channel('3.5-4.5', 2)
sim.count_channels()
"""
Explanation: To learn more about what the lags look like in practice, head to the lag analysis notebook.
Channel Simulation
Here, we demonstrate the simulator's functionality to simulate light curves independently for each channel. This is useful, for example, when dealing with energy-dependent impulse responses, where you can create a new channel for each energy range and simulate.
In practical situations, different channels may have different impulse responses and hence, would react differently to incoming light curves. To account for this, there is an option to simulate light curves and add them to corresponding energy channels.
End of explanation
"""
lc = sim.get_channel('3.5-4.5')
plt.plot(lc.counts)
"""
Explanation: The above command assigns a light curve with a random-walk distribution to the energy channel of range 3.5-4.5. Notice that simulate_channel() has the same parameters as simulate(), with the exception of the first parameter, which describes the energy range of the channel.
To get a light curve belonging to a specific channel, get_channel() is used.
End of explanation
"""
sim.delete_channel('3.5-4.5')
sim.count_channels()
"""
Explanation: A specific energy channel can also be deleted.
End of explanation
"""
sim.simulate_channel('3.5-4.5', 1)
sim.simulate_channel('4.5-5.5', 'smoothbknpo', [.6, 0.9, .2, 4])
sim.count_channels()
sim.get_channels(['3.5-4.5', '4.5-5.5'])
sim.delete_channels(['3.5-4.5', '4.5-5.5'])
sim.count_channels()
"""
Explanation: Alternatively, if there are multiple channels that need to be added or deleted, this can be done by a single command.
End of explanation
"""
sim.write('data.pickle')
sim.read('data.pickle')
"""
Explanation: Reading/Writing
The simulator object can be saved or retrieved at any time using pickle.
End of explanation
"""
|
jdvelasq/pytimeseries | pytimeseries/PyTimeSeries.ipynb | mit | import pytimeseries
import pandas
import matplotlib
"""
Explanation: PyTimeSeries
Test for pytimeseries package
Import the library
End of explanation
"""
tserie = pandas.read_csv('champagne.csv', index_col='Month')
print(tserie)
"""
Explanation: Data preparation
pytimeseries receives a pandas series to analyze. Below we prepare the data for a series that is initially stored as CSV.
Choose a time series
End of explanation
"""
rng = pandas.to_datetime(tserie.index)
print(rng)
"""
Explanation: Convert the month column into dates
End of explanation
"""
tserie.index = rng  # assign the datetime index (a plain reindex would return a copy that is discarded)
"""
Explanation: Assign the dates to the data
End of explanation
"""
frq = pandas.infer_freq(tserie.index)
print(frq)
"""
Explanation: Automatically infer the frequency of the data
End of explanation
"""
pytimeseries.series_viewer(tserie).time_plot()
matplotlib.pyplot.show()
"""
Explanation: Visualizing the series
Exploratory data analysis
Plot of the series
End of explanation
"""
pytimeseries.series_viewer(tserie).ACF_plot()
matplotlib.pyplot.show()
"""
Explanation: ACF
End of explanation
"""
pytimeseries.series_viewer(tserie).PACF_plot()
matplotlib.pyplot.show()
"""
Explanation: PACF
End of explanation
"""
pytimeseries.series_viewer(tserie).qq_plot()
matplotlib.pyplot.show()
"""
Explanation: QQ Plot
End of explanation
"""
pytimeseries.series_viewer(tserie).histogram()
matplotlib.pyplot.show()
"""
Explanation: Histogram
End of explanation
"""
pytimeseries.series_viewer(tserie).density_plot()
matplotlib.pyplot.show()
"""
Explanation: Density plot
End of explanation
"""
nt = pytimeseries.series_viewer(tserie).normality()
print(nt)
"""
Explanation: Normality test
End of explanation
"""
matplotlib.pyplot.plot(tserie.values)
pytimeseries.base_model(ts = tserie).specify(trans = 'log')
matplotlib.pyplot.show()
"""
Explanation: Specifying the series
The model can receive a time series transformed in the following ways:
trans = direct transformations of the series:
- log
- log10
- sqrt
- cbrt
- boxcox
trend = removing the trend:
- linear
- cuadratic
- cubic
- diff1
- diff2
seasonal = seasonality (according to the frequency):
- poly2
- diff
- (dummy)
Combinations of these can also be used
End of explanation
"""
model = pytimeseries.AR_p(ts = tserie, trans='sqrt', trend = 'linear', seasonal = 'poly2')
result = model.estimate()
matplotlib.pyplot.plot(tserie.values)
matplotlib.pyplot.plot(result.X.original)
matplotlib.pyplot.plot(result.X.residuals)
matplotlib.pyplot.plot(result.X.estimation)
matplotlib.pyplot.show()
"""
Explanation: Estimation (AR model)
Using the model from statsmodels
```python
X = base_model(self.ts).specify(trans = self.trans, trend = self.trend, seasonal = self.seasonal)
model = statsmodels.tsa.ar_model.AR(X.residuals)
model_fit = model.fit()
estimation = model_fit.predict()
X.estimation = estimation
X.restore(trans = self.trans, trend = self.trend, seasonal = self.seasonal)
super().set_residuals(X.residuals)
```
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/examples/contact_spots.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
"""
Explanation: Contact Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary(contact_binary=True)
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(0,0.5,101))
b.run_compute(irrad_method='none', model='no_spot')
"""
Explanation: Model without Spots
End of explanation
"""
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=20, colat=90, long=-45)
b.run_compute(irrad_method='none', model='with_spot')
"""
Explanation: Adding Spots
Let's add a spot to the primary component in our binary. Note that if you attempt to attach to the 'contact_envelope' component, an error will be raised. Spots can only be attached to star components.
The 'colat' parameter defines the latitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
End of explanation
"""
afig, mplfig = b.plot(show=True, legend=True)
"""
Explanation: Comparing Light Curves
End of explanation
"""
b.remove_dataset(kind='lc')
b.remove_model(model=['with_spot', 'no_spot'])
b.add_dataset('mesh', compute_times=b.to_time(0.25), columns='teffs')
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
"""
Explanation: Spots near the "neck"
Since the spots are still defined in the coordinate system of the individual star components, this can result in spots that are distorted and even "cropped" at the neck. Furthermore, spots with long=0 could be completely "hidden" by the neck or result in a ring around the neck.
To see this, let's plot our mesh with teff as the facecolor.
End of explanation
"""
b.set_value('long', value=-30)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
"""
Explanation: Now if we set the long closer to the neck, we'll see it get cropped by the boundary between the two components. If we need a spot that crosses between the two "halves" of the contact, we'd have to add separate spots to each component, with each getting cropped at the boundary.
End of explanation
"""
b.set_value('long', value=0.0)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
"""
Explanation: If we set long to zero, the spot completely disappears (as there is nowhere in the neck that is still on the surface).
End of explanation
"""
b.set_value('radius', value=40)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
"""
Explanation: But if we make the radius large enough, we'll get a ring.
End of explanation
"""
|
zaqwes8811/micro-apps | self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb | mit | from __future__ import division, print_function
%matplotlib inline
#format the book
import book_format
book_format.set_style()
"""
Explanation: Table of Contents
Probabilities, Gaussians, and Bayes' Theorem
End of explanation
"""
import numpy as np
import kf_book.book_plots as book_plots
belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2])
belief = belief / np.sum(belief)
with book_plots.figsize(y=2):
book_plots.bar_plot(belief)
print('sum = ', np.sum(belief))
"""
Explanation: Introduction
The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is unimodal and continuous. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.
We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features.
Mean, Variance, and Standard Deviations
Most of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned.
Random Variables
Each time you roll a die the outcome will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the probability, or odds of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6.
This combination of values and associated probabilities is called a random variable. Here random does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.
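As a quick illustrative sketch (my own addition, not part of the original text), we can check the die-roll claim empirically with NumPy: in a large number of simulated fair-die rolls, the fraction of ones approaches 1/6.

```python
import numpy as np

rng = np.random.default_rng(1)
rolls = rng.integers(1, 7, size=1_000_000)  # one million fair six-sided die rolls

# The empirical frequency of any face should approach 1/6 ≈ 0.1667
freq_of_one = np.mean(rolls == 1)
print(freq_of_one)
```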
While we are defining terms, the range of values is called the sample space. For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. Space is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.
Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.
Random variables such as coin tosses and die rolls are discrete random variables. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called continuous random variables since they can take on any real value between two limits.
Do not confuse the measurement of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable.
In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two.
Probability Distribution
The probability distribution gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:
|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|
We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:
$$P(X{=}4) = p(4) = \frac{1}{6}$$
This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this.
Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as
$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$
Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.
The probabilities for all values of a discrete random value is known as the discrete probability distribution and the probabilities for all values of a continuous random value is known as the continuous probability distribution.
To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as
$$\sum\limits_u P(X{=}u)= 1$$
for discrete distributions, and as
$$\int\limits_u P(X{=}u) \,du= 1$$
for continuous distributions.
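A minimal sketch (my own, using a plain dict) that checks both requirements for the fair-die distribution defined above:

```python
# Probability distribution of a fair six-sided die
p = {x: 1/6 for x in range(1, 7)}

# Requirement 1: no probability is negative
non_negative = all(prob >= 0 for prob in p.values())
# Requirement 2: the probabilities sum to one
sums_to_one = abs(sum(p.values()) - 1.0) < 1e-12
print(non_negative, sums_to_one)
```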
In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example:
End of explanation
"""
x = [1.8, 2.0, 1.7, 1.9, 1.6]
np.mean(x)
"""
Explanation: Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction.
The Mean, Median, and Mode of a Random Variable
Given a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a measure of central tendency. For example we might want to know the average height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the mean. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is
$$X = {1.8, 2.0, 1.7, 1.9, 1.6}$$
we compute the mean as
$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$
It is traditional to use the symbol $\mu$ (mu) to denote the mean.
We can formalize this computation with the equation
$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$
NumPy provides numpy.mean() for computing the mean.
End of explanation
"""
x = np.array([1.8, 2.0, 1.7, 1.9, 1.6])
x.mean()
"""
Explanation: As a convenience NumPy arrays provide the method mean().
End of explanation
"""
np.median(x)
"""
Explanation: The mode of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a unimodal set, and if two or more numbers occur the most with equal frequency then the set is multimodal. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the Discrete Bayes chapter we talked about our belief in the dog's position as a multimodal distribution because we assigned different probabilities to different positions.
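Although we won't compute the mode this way in the book, Python's standard library can find it directly; a quick sketch (statistics.multimode requires Python 3.8 or later):

```python
from statistics import multimode

modes_a = multimode([1, 2, 2, 2, 3, 4, 4, 4])  # multimodal set
modes_b = multimode([5, 7, 7, 13])             # unimodal set
print(modes_a, modes_b)
```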
Finally, the median of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below are in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.
Numpy provides numpy.median() to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true.
End of explanation
"""
total = 0
N = 1000000
for r in np.random.rand(N):
if r <= .80: total += 1
elif r < .95: total += 3
else: total += 5
total / N
"""
Explanation: Expected Value of a Random Variable
The expected value of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we expect $x$ to have, on average?
It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the mean of the sample space.
Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute
$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$
Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.
We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$
A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$
If $x$ is continuous we replace the sum with an integral, like so
$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$
where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.
We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically.
End of explanation
"""
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
"""
Explanation: You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size.
Exercise
What is the expected value of a die role?
Solution
Each side is equally likely, so each has a probability of 1/6. Hence
$$\begin{aligned}
\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\
&= 1/6(1+2+3+4+5+6)\\
&= 3.5\end{aligned}$$
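We can sanity-check this numerically (a trivial sketch of the sum above):

```python
faces = range(1, 7)
expected = sum(face * (1/6) for face in faces)  # equal probabilities of 1/6
print(expected)
```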
Exercise
Given the uniform continuous distribution
$$f(x) = \frac{1}{b - a}$$
compute the expected value for $a=0$ and $b=20$.
Solution
$$\begin{aligned}
\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\
&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\
&= 10 - 0 \\
&= 10
\end{aligned}$$
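We can also verify the integral numerically with a midpoint Riemann sum (an illustrative sketch of my own; the midpoint rule is exact for the linear integrand $x\,f(x)$ here, up to floating-point error):

```python
import numpy as np

a, b = 0.0, 20.0
edges = np.linspace(a, b, 10_001)
mid = (edges[:-1] + edges[1:]) / 2   # midpoint of each sub-interval
dx = edges[1] - edges[0]
f = 1.0 / (b - a)                    # the uniform pdf is constant
expected = np.sum(mid * f) * dx      # approximates ∫ x f(x) dx
print(expected)
```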
Variance of a Random Variable
The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:
End of explanation
"""
print(np.mean(X), np.mean(Y), np.mean(Z))
"""
Explanation: Using NumPy we see that the mean height of each class is the same.
End of explanation
"""
print("{:.2f} meters squared".format(np.var(X)))
"""
Explanation: The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.
The mean tells us something about the data, but not the whole story. We want to be able to specify how much variation there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students.
Statistics has formalized this concept of measuring variation into the notion of standard deviation and variance. The equation for computing the variance is
$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$
Ignoring the square for a moment, you can see that the variance is the expected value for how much the sample space $X$ varies from the mean $\mu$: $(X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get
$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$
Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.
The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute
$$
\begin{aligned}
\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\
&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\
\mathit{VAR}(X) &= 0.02 \, m^2
\end{aligned}$$
NumPy provides the function var() to compute the variance:
End of explanation
"""
print('std {:.4f}'.format(np.std(X)))
print('var {:.4f}'.format(np.std(X)**2))
"""
Explanation: This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the standard deviation, which is defined as the square root of the variance:
$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$
It is typical to use $\sigma$ for the standard deviation and $\sigma^2$ for the variance. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.
For the first class we compute the standard deviation with
$$
\begin{aligned}
\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\
&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\
\sigma_x &= 0.1414
\end{aligned}$$
We can verify this computation with the NumPy method numpy.std() which computes the standard deviation. 'std' is a common abbreviation for standard deviation.
End of explanation
"""
from kf_book.gaussian_internal import plot_height_std
import matplotlib.pyplot as plt
plot_height_std(X)
"""
Explanation: And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.
What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters.
We can view this in a plot:
End of explanation
"""
from numpy.random import randn
data = 1.8 + randn(100)*.1414
mean, std = data.mean(), data.std()
plot_height_std(data, lw=2)
print('mean = {:.3f}'.format(mean))
print('std = {:.3f}'.format(std))
"""
Explanation: For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.
We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on.
End of explanation
"""
np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100.
"""
Explanation: By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code.
End of explanation
"""
print('std of Y is {:.2f} m'.format(np.std(Y)))
"""
Explanation: We'll discuss this in greater depth soon. For now let's compute the standard deviation for
$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$
The mean of $Y$ is $\mu=1.8$ m, so
$$
\begin{aligned}
\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\
&= \sqrt{0.152} = 0.39 \ m
\end{aligned}$$
We will verify that with NumPy with
End of explanation
"""
print(np.std(Z))
"""
Explanation: This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.
Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.
$$
\begin{aligned}
\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\
&= \sqrt{\frac{0+0+0+0+0}{5}} \\
\sigma_z &= 0.0 \ m
\end{aligned}$$
End of explanation
"""
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
plt.plot([i ,i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom=False)
"""
Explanation: Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account.
I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school!
We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.
Why the Square of the Differences
Why are we taking the square of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$
End of explanation
"""
X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('Variance of X with outlier = {:6.2f}'.format(np.var(X)))
print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1])))
"""
Explanation: If we didn't take the square of the differences the signs would cancel everything out:
$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$
This is clearly incorrect, as there is more than 0 variance in the data.
Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$, which is certainly correct: each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the formula with squares we get a standard deviation of 3.5 for $Y$, which reflects its larger variation.
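A small sketch (my own illustration) making the comparison explicit for $X=[3,-3,3,-3]$ and $Y=[6,-2,-3,1]$: the mean absolute deviation cannot tell the two sets apart, while the squared measure can.

```python
import numpy as np

def mean_abs_dev(a):
    """Mean absolute deviation from the mean."""
    a = np.asarray(a, dtype=float)
    return np.mean(np.abs(a - a.mean()))

X = [3, -3, 3, -3]
Y = [6, -2, -3, 1]

print(mean_abs_dev(X), mean_abs_dev(Y))  # both 3.0
print(np.std(X), np.std(Y))              # 3.0 vs 3.5 -- Y's larger spread shows
```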
This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have:
End of explanation
"""
from filterpy.stats import plot_gaussian_pdf
plot_gaussian_pdf(mean=1.8, variance=0.1414**2,
xlabel='Student Height', ylabel='pdf');
"""
Explanation: Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.
I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called Bayesian robustness, or the excellent publications on robust statistics by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.
The point to gather from this is that these summary statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.
Gaussians
We are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter.
We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.
Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
End of explanation
"""
import kf_book.book_plots as book_plots
belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]
book_plots.bar_plot(belief)
"""
Explanation: This curve is a probability density function or pdf for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.
I explain how to plot Gaussians, and much more, in the Notebook Computing_and_Plotting_PDFs in the
Supporting_Notebooks folder. You can read it online here [1].
This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.
This curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.
To further motivate you, recall the shapes of the probability distributions in the Discrete Bayes chapter:
End of explanation
"""
plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)');
"""
Explanation: They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter!
Nomenclature
A bit of nomenclature before we continue - this chart depicts the probability density of a random variable having any value in the range $(-\infty, \infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this:
End of explanation
"""
x = np.arange(-3, 3, .01)
plt.plot(x, np.exp(-x**2));
"""
Explanation: The y-axis depicts the probability density — the relative number of cars going the speed shown on the corresponding point of the x-axis. I will explain this further in the next section.
The Gaussian model is imperfect. Though these charts do not show it, the tails of the distribution extend out to infinity. Tails are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives.
You will hear these distributions called Gaussian distributions or normal distributions. Gaussian and normal both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is common to shorten the name and talk about a Gaussian or normal — both are shortcut names for the Gaussian distribution.
Gaussian Distributions
Let's explore how Gaussians work. A Gaussian is a continuous probability distribution that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:
$$
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
$$
$\exp[x]$ is notation for $e^x$.
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`.
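Though FilterPy provides this for us, the equation is simple enough to sketch directly (a minimal version of my own, with the hypothetical name `gaussian_pdf`, not FilterPy's implementation):

```python
import math

def gaussian_pdf(x, mean, var):
    """Probability density of N(mean, var) at x, straight from the equation."""
    std = math.sqrt(var)
    return math.exp(-(x - mean)**2 / (2 * var)) / (std * math.sqrt(2 * math.pi))

# The density of N(22, 4) peaks at the mean:
print(gaussian_pdf(22, 22, 4))   # ≈ 0.1995
```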
Shorn of the constants, you can see it is a simple exponential:
$$f(x)\propto e^{-x^2}$$
which has the familiar bell curve shape
End of explanation
"""
from filterpy.stats import gaussian
#gaussian??
"""
Explanation: Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.
End of explanation
"""
plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');
"""
Explanation: Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$.
End of explanation
"""
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
    norm_cdf((21.5, 22.5), 22, 4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
    norm_cdf((23.5, 24.5), 22, 4)*100))
"""
Explanation: What does this curve mean? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the central limit theorem states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C.
Recall that a Gaussian distribution is continuous. Think of an infinitely long straight line - what is the probability that a point you pick randomly is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of the reading being exactly 22°C is 0% because there are an infinite number of values the reading can take.
What is this curve? It is something we call the probability density function. The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures.
Here is another way to understand it. What is the density of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.
$$M = \iiint_R p(x,y,z)\, dV$$
We do the same with probability density. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability.
What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero.
Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.
In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.
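As a rough sketch of that idea (using scipy.stats, which we cover later in this chapter, rather than FilterPy), here is the probability of a reading in 22 $\pm$ 0.1°C computed both exactly from the CDF and by numerically integrating the density:

```python
import numpy as np
from scipy.stats import norm

# N(22, 4): mean 22, variance 4, so standard deviation 2
dist = norm(loc=22, scale=2)

# Exact probability of a reading in 22 +/- 0.1, via the CDF
p_exact = dist.cdf(22.1) - dist.cdf(21.9)

# Same probability via trapezoidal integration of the density
xs = np.linspace(21.9, 22.1, 1001)
ys = dist.pdf(xs)
p_approx = np.sum((ys[:-1] + ys[1:]) / 2) * (xs[1] - xs[0])

print(p_exact, p_approx)   # both ≈ 0.0399
```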
We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.
How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian
$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$
This is called the cumulative probability distribution, commonly abbreviated cdf.
I wrote filterpy.stats.norm_cdf which computes the integral for you. For example, we can compute
End of explanation
"""
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
"""
Explanation: The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how probable they are. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.
The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means distributed according to. This means I can express the temperature reading of our thermometer as
$$\text{temp} \sim \mathcal{N}(22,4)$$
This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.
Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.
The Variance and Belief
Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, something happened, and the probability of something happening is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$)
End of explanation
"""
from filterpy.stats import gaussian
print(gaussian(x=3.0, mean=2.0, var=1))
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1))
"""
Explanation: This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of how much the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.
Let's look at that graphically. We will use the aforementioned filterpy.stats.gaussian which can take either a single value or array of values.
End of explanation
"""
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
"""
Explanation: By default gaussian normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this.
End of explanation
"""
xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$')
plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':')
plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--')
plt.legend();
"""
Explanation: If the Gaussian is not normalized it is called a Gaussian function instead of Gaussian distribution.
End of explanation
"""
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
"""
Explanation: What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.
If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.
An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the mean and $\tau$ the precision. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our belief about a measurement, they express the precision of the measurement, and they express how much variance there is in the measurements. These are all different ways of stating the same fact.
I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.
The 68-95-99.7 Rule
It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the 68-95-99.7 rule. If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).
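We can check the rule empirically by sampling (a quick sketch of my own using NumPy; the seed and sample size are arbitrary choices), using the test-score distribution from the paragraph above:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(loc=71, scale=9.4, size=1_000_000)

# Fraction of samples within 1, 2, and 3 standard deviations of the mean
for k in (1, 2, 3):
    inside = np.mean(np.abs(samples - 71) <= k * 9.4)
    print(f'within {k} std: {inside:.4f}')
# within 1 std: ~0.6827, within 2 std: ~0.9545, within 3 std: ~0.9973
```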
Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.
The following graph depicts the relationship between the standard deviation and the normal distribution.
End of explanation
"""
import math
from ipywidgets import interact, FloatSlider
def plt_g(mu,variance):
plt.figure()
xs = np.arange(2, 8, 0.01)
ys = gaussian(xs, mu, variance)
plt.plot(xs, ys)
plt.ylim(0, 0.04)
interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
variance=FloatSlider(value = .03, min=.01, max=1.));
"""
Explanation: Interactive Gaussians
For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
End of explanation
"""
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)
g = g1 * g2 # element-wise multiplication
g = g / sum(g) # normalize
plt.plot(x, g, ls='-.');
"""
Explanation: Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.
<img src='animations/04_gaussian_animate.gif'>
Computational Properties of Gaussians
The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.
A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but it is proportional to a Gaussian, so we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that function in this context means that the property that the values sum to one is not guaranteed).
Before we do the math, let's test this visually.
End of explanation
"""
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
"""
Explanation: Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result looks like a Gaussian distribution.
Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the shape of the product of two sines is very different from that of a single sine.
End of explanation
"""
def normalize(p):
return p / sum(p)
def update(likelihood, prior):
return normalize(likelihood * prior)
prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
"""
Explanation: But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians because they are computationally nice.
The product of two independent Gaussians is given by:
$$\begin{aligned}
\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2} \\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$
The sum of two Gaussians is given by
$$\begin{gathered}
\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$
At the end of the chapter I derive these equations. However, understanding the derivation is not very important.
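These formulas translate directly into code. Here is a minimal sketch (the helper names are my own, not FilterPy's API), applied to the Gaussians $\mathcal N(0.8, 0.1)$ and $\mathcal N(1.3, 0.2)$ multiplied earlier:

```python
def gaussian_multiply(mu1, var1, mu2, var2):
    """Product of two Gaussians, returned as a (mean, variance) pair."""
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

def gaussian_add(mu1, var1, mu2, var2):
    """Sum of two independent Gaussians."""
    return mu1 + mu2, var1 + var2

# Multiplying N(0.8, 0.1) by N(1.3, 0.2), as plotted earlier:
print(gaussian_multiply(0.8, 0.1, 1.3, 0.2))   # mean ≈ 0.9667, var ≈ 0.0667
```

Note that the product's variance is smaller than either input variance; two measurements combined are more certain than either alone.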
Putting it all Together
Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.
In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:
End of explanation
"""
xs = np.arange(0, 10, .01)
def mean_var(p):
x = np.arange(len(p))
mean = np.sum(p * x, dtype=float)
var = np.sum((x - mean)**2 * p)
return mean, var
mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
"""
Explanation: In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.
But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
End of explanation
"""
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
"""
Explanation: This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.
Next, recall that our filter implements the update function with
python
def update(likelihood, prior):
return normalize(likelihood * prior)
If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with
$$\begin{aligned}
\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2} \\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$
which is three multiplications and two divisions.
Bayes Theorem
In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered Bayes' Theorem. Bayes theorem tells us how to compute the probability of an event given prior information.
We implemented the update() function with this probability calculation:
$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$
It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:
$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$
where $\| \cdot\|$ expresses normalizing the term.
We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.
To review, the prior is the probability of something happening before we include the probability of the measurement (the likelihood) and the posterior is the probability we compute after incorporating the information from the measurement.
Bayes theorem is
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$
$P(A \mid B)$ is called a conditional probability. That is, it represents the probability of $A$ happening if $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).
I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a probability distribution. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions
$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$
In the equation above $B$ is the evidence, $p(A)$ is the prior, $p(B \mid A)$ is the likelihood, and $p(A \mid B)$ is the posterior. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at $i$, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$.
So, let's plug that into the equation and solve it.
$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$
That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the prior - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the update() function:
python
def update(likelihood, prior):
posterior = prior * likelihood # p(z|x) * p(x)
return normalize(posterior)
The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the evidence. We compute that by taking the sum of $x$, or sum(belief) in the code. That is how we compute the normalization! So, the update() function is doing nothing more than computing Bayes' theorem.
The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as
$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$
This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent opinion piece for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the Particle Filters chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. When you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.
It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.
But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute
$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$
That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a much easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable.
Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position x. A hard problem becomes easy.
Total Probability Theorem
We now know the formal mathematics behind the update() function; what about the predict() function? predict() implements the total probability theorem. Let's recall what predict() computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is
$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$
That equation is called the total probability theorem. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented predict(), but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation
python
for i in range(N):
for k in range (kN):
index = (i + (width-k) - offset) % N
result[i] += prob_dist[index] * kernel[k]
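The loop above relies on `width` and `offset` defined elsewhere in that chapter; the following self-contained sketch (my own wrapper, with `offset` meaning the number of cells moved) computes the same sum:

```python
import numpy as np

def predict(prob_dist, kernel, offset):
    """Total probability: belief at cell i sums over the ways of arriving at i."""
    N, kN = len(prob_dist), len(kernel)
    width = (kN - 1) // 2
    result = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width - k) - offset) % N
            result[i] += prob_dist[index] * kernel[k]
    return result

belief = np.array([0., 0., 1., 0., 0.])
# Move right one cell, with some uncertainty about the movement
print(predict(belief, kernel=[0.1, 0.8, 0.1], offset=1))
# [0.  0.  0.1 0.8 0.1]
```

The probability mass shifts one cell to the right and spreads out, exactly as the discrete Bayes filter's predict step did.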
Computing Probabilities with scipy.stats
In this chapter I used code from FilterPy to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module scipy.stats. So let's walk through how to use scipy.stats to compute statistics and probabilities.
The scipy.stats module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses scipy.stats.norm to compute a Gaussian, and compare its value to the value returned by the gaussian() function from FilterPy.
End of explanation
"""
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
"""
Explanation: The call norm(2, 3) creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
End of explanation
"""
np.set_printoptions(precision=3, linewidth=50)
print(n23.rvs(size=15))
"""
Explanation: The documentation for scipy.stats.norm [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the rvs() function.
End of explanation
"""
# probability that a random value is less than the mean 2
print(n23.cdf(2))
"""
Explanation: We can get the cumulative distribution function (CDF), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.
End of explanation
"""
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
"""
Explanation: We can get various properties of the distribution:
End of explanation
"""
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='var=30')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
"""
Explanation: Limitations of Using Gaussians to Model the World
Earlier I mentioned the central limit theorem, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.
However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.
This is a broad topic which I will not treat exhaustively.
Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor "grade on a curve" you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to every value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.
But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions.
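A quick check with scipy.stats makes this concrete (a sketch assuming the mean of 90 and standard deviation of 13 from the example above):

```python
from scipy.stats import norm

scores = norm(90, 13)
# Probability mass the Gaussian assigns inside the feasible range [0, 100]
inside = scores.cdf(100) - scores.cdf(0)
print(inside)      # roughly 0.78 - well under 1
print(1 - inside)  # mass assigned to impossible scores above 100 (or below 0)
```

Since about 22% of the probability mass falls outside the achievable range, the curve restricted to [0, 100] cannot integrate to 1.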
End of explanation
"""
from numpy.random import randn
def sense():
return 10 + randn()*2
"""
Explanation: The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.
Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the Student's $t$-distribution.
Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function numpy.random.randn() to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
End of explanation
"""
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
"""
Explanation: Let's plot that signal and see what it looks like.
End of explanation
"""
import random
import math
def rand_student_t(df, mu=0, std=1):
"""return random number distributed by Student's t
distribution with `df` degrees of freedom with the
specified mean and standard deviation.
"""
x = random.gauss(0, std)
y = 2.0*random.gammavariate(0.5*df, 2.0)
return x / (math.sqrt(y / df)) + mu
def sense_t():
return 10 + rand_student_t(7)*2
zs = [sense_t() for i in range(5000)]
plt.plot(zs, lw=1);
"""
Explanation: That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.
Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.
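The fatter tails can also be seen analytically. Assuming the 7 degrees of freedom passed to rand_student_t in the code, scipy.stats gives the probability of landing more than 3 standard-score units from the center for each distribution:

```python
from scipy.stats import norm, t

# P(|Z| > 3) for a standard normal vs. a Student's t with 7 degrees of freedom
normal_tail = 2 * norm.sf(3)
t_tail = 2 * t.sf(3, df=7)
print(normal_tail)  # ~0.0027
print(t_tail)       # ~0.02, several times more outlier mass
```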
End of explanation
"""
import scipy
scipy.stats.describe(zs)
"""
Explanation: We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (outside roughly 4 to 16).
It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests.
This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.
The code for rand_student_t is included in filterpy.stats. You may use it with
python
from filterpy.stats import rand_student_t
While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it differs from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called skew. The tails can be shortened, fatter, thinner, or otherwise shaped differently from a normal distribution's. The measure of this is called kurtosis. The scipy.stats module contains the function describe, which computes these statistics, among others.
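scipy.stats also exposes these measures individually as skew() and kurtosis(); a small sketch on a deliberately lopsided sample:

```python
import numpy as np
from scipy.stats import skew, kurtosis

sample = np.array([1., 1., 1., 2., 2., 3., 10.])  # long right tail
print(skew(sample))      # positive: mass bunched left, tail to the right
print(kurtosis(sample))  # positive excess kurtosis relative to a normal
```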
End of explanation
"""
print(scipy.stats.describe(np.random.randn(10)))
print()
print(scipy.stats.describe(np.random.randn(300000)))
"""
Explanation: Let's examine two normal populations, one small, one large:
End of explanation
"""
|
FiryZeplin/deep-learning | dcgan-svhn/DCGAN.ipynb | mit | %matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y  # scale pixel values only; y holds class labels
"""
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
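As a quick sanity check of the rescaling arithmetic (reimplemented standalone here rather than using the Dataset class), pixel values in [0, 255] should map to [-1, 1]:

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    x = (x - x.min()) / (255 - x.min())  # to (0, 1)
    lo, hi = feature_range
    return x * (hi - lo) + lo            # to (lo, hi)

pixels = np.array([0., 127.5, 255.])
print(scale(pixels))  # [-1.  0.  1.]
```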
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
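The spatial sizes above follow from simple arithmetic: with 'same' padding, a stride-2 transposed convolution doubles the height and width. A quick sketch of the progression (plain Python, no TensorFlow needed; the depths are the ones chosen in this notebook):

```python
def conv_transpose_same_out(size, stride):
    # 'same' padding: output spatial size = input size * stride
    return size * stride

size = 4                      # after reshaping the dense layer to 4x4x512
for depth in (256, 128, 3):   # depths used in the generator above
    size = conv_transpose_same_out(size, stride=2)
    print(size, 'x', size, 'x', depth)
# 8 x 8 x 256, then 16 x 16 x 128, then 32 x 32 x 3
```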
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Again, nothing new here.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(50, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z})
if steps % print_every == 0:
# Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
|
tensorflow/examples | courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import pathlib
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
"""
Explanation: Running TFLite models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Setup
End of explanation
"""
# Create a simple Keras model.
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=200, verbose=1)
"""
Explanation: Create a basic model of the form y = mx + c
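As a sanity check before training, an ordinary least-squares fit (numpy only) shows the toy data lies exactly on y = 2x - 1, so the single dense unit should converge toward m=2, c=-1:

```python
import numpy as np

x = np.array([-1, 0, 1, 2, 3, 4], dtype=float)
y = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

# Fit a degree-1 polynomial y = m*x + c
m, c = np.polyfit(x, y, 1)
print(m, c)  # 2.0 -1.0
```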
End of explanation
"""
export_dir = 'saved_model/1'
tf.saved_model.save(model, export_dir)
"""
Explanation: Generate a SavedModel
End of explanation
"""
# Convert the model.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
tflite_model_file = pathlib.Path('model.tflite')
tflite_model_file.write_bytes(tflite_model)
"""
Explanation: Convert the SavedModel to TFLite
End of explanation
"""
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the TensorFlow Lite model on random input data.
input_shape = input_details[0]['shape']
inputs, outputs = [], []
for _ in range(100):
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
tflite_results = interpreter.get_tensor(output_details[0]['index'])
# Test the TensorFlow model on random input data.
tf_results = model(tf.constant(input_data))
output_data = np.array(tf_results)
inputs.append(input_data[0][0])
outputs.append(output_data[0][0])
"""
Explanation: Initialize the TFLite interpreter to try it out
End of explanation
"""
plt.plot(inputs, outputs, 'r')
plt.show()
"""
Explanation: Visualize the model
End of explanation
"""
try:
from google.colab import files
files.download(tflite_model_file)
except:
pass
"""
Explanation: Download the TFLite model file
End of explanation
"""
|
tkurfurst/deep-learning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first proposed in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='inputs_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
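A minimal numpy sketch of this function (the same tf.maximum trick the exercise code uses, assuming alpha=0.01):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # max(alpha*x, x): identity for positive x, small slope for negative x
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x))  # [-0.02  -0.005  0.     0.5    2.   ]
```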
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
# CHANGE - had smoothing before
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \
labels=tf.ones_like(d_logits_fake))) # had smoothing before
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [g_var for g_var in t_vars if g_var.name.startswith('generator')]
d_vars = [d_var for d_var in t_vars if d_var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
# CHANGE
!mkdir checkpoints
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
newsapps/public-notebooks | Red Light Camera Locations.ipynb | mit | few_crashes_url = 'http://www.arcgis.com/sharing/rest/content/items/5a8841f92e4a42999c73e9a07aca0c23/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..'
removed_url = 'http://www.arcgis.com/sharing/rest/content/items/1e01ac5dc4d54dc186502316feab156e/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..'
"""
Explanation: These are the URLs for the JSON data powering the ESRI/ArcGIS maps.
End of explanation
"""
import requests
def extract_features(url, title=None):
    """Return the feature list for the operationalLayer matching `title`,
    or for the first layer when no title is given."""
    layers = requests.get(url).json()['operationalLayers']
    idx = 0
    if title:
        while idx < len(layers) and layers[idx].get('title') != title:
            idx += 1
    try:
        return layers[idx]['featureCollection']['layers'][0]['featureSet']['features']
    except IndexError:
        return []
few_crashes = extract_features(few_crashes_url)
all_cameras = extract_features(removed_url, 'All Chicago red light cameras')
removed_cameras = extract_features(removed_url, 'red-light-cams')
print 'Found %d data points for few-crash intersections, %d total cameras and %d removed camera locations' % (
len(few_crashes), len(all_cameras), len(removed_cameras))
"""
Explanation: We need a way to easily extract the actual data points from the JSON. The data will actually contain multiple layers (really, one layer per operationalLayer, but multiple operationalLayers) so, if we pass a title, we should return the operationalLayer corresponding to that title; otherwise, just return the first one.
End of explanation
"""
filtered_few_crashes = [
point for point in few_crashes if point['attributes']['LONG_X'] != 0 and point['attributes']['LAT_Y'] != 0]
"""
Explanation: Now we need to filter out the bad points from few_crashes - the ones with 0 given as the lat/lon.
End of explanation
"""
cameras = {}
for point in all_cameras:
label = point['attributes']['LABEL']
if label not in cameras:
cameras[label] = point
cameras[label]['attributes']['Few crashes'] = False
cameras[label]['attributes']['To be removed'] = False
"""
Explanation: Now let's build a dictionary of all the cameras, so we can merge all their info.
End of explanation
"""
for point in filtered_few_crashes:
label = point['attributes']['LABEL']
if label not in cameras:
print 'Missing label %s' % label
else:
cameras[label]['attributes']['Few crashes'] = True
"""
Explanation: Set the 'Few crashes' flag to True for those intersections that show up in filtered_few_crashes.
End of explanation
"""
for point in removed_cameras:
label = point['attributes']['displaylabel'].replace(' and ', '-')
if label not in cameras:
print 'Missing label %s' % label
else:
cameras[label]['attributes']['To be removed'] = True
"""
Explanation: Set the 'To be removed' flag to True for those intersections that show up in removed_cameras.
End of explanation
"""
counter = {
'both': {
'names': [],
'count': 0
},
'crashes only': {
'names': [],
'count': 0
},
'removed only': {
'names': [],
'count': 0
}
}
for camera in cameras:
if cameras[camera]['attributes']['Few crashes']:
if cameras[camera]['attributes']['To be removed']:
counter['both']['count'] += 1
counter['both']['names'].append(camera)
else:
counter['crashes only']['count'] += 1
counter['crashes only']['names'].append(camera)
elif cameras[camera]['attributes']['To be removed']:
counter['removed only']['count'] += 1
counter['removed only']['names'].append(camera)
print '%d locations had few crashes and were slated to be removed: %s\n' % (
counter['both']['count'], '; '.join(counter['both']['names']))
print '%d locations had few crashes but were not slated to be removed: %s\n' % (
counter['crashes only']['count'], '; '.join(counter['crashes only']['names']))
print '%d locations were slated to be removed despite having reasonable numbers of crashes: %s' % (
counter['removed only']['count'], '; '.join(counter['removed only']['names']))
"""
Explanation: How many camera locations have few crashes and were slated to be removed?
End of explanation
"""
from csv import DictReader
from StringIO import StringIO
data_portal_url = 'https://data.cityofchicago.org/api/views/thvf-6diy/rows.csv?accessType=DOWNLOAD'
r = requests.get(data_portal_url)
fh = StringIO(r.text)
reader = DictReader(fh)
def cleaner(s):
    """Normalize intersection names from the data portal CSV."""
    filters = [
        ('Stony?Island', 'Stony Island'),
        ('Van?Buren', 'Van Buren'),
        (' (SOUTH INTERSECTION)', '')
    ]
    for old, new in filters:
        s = s.replace(old, new)
    return s
for line in reader:
line['INTERSECTION'] = cleaner(line['INTERSECTION'])
cameras[line['INTERSECTION']]['attributes']['current'] = line
counter = {
'not current': [],
'current': [],
'not current and slated for removal': [],
'not current and not slated for removal': [],
'current and slated for removal': []
}
for camera in cameras:
if 'current' not in cameras[camera]['attributes']:
counter['not current'].append(camera)
if cameras[camera]['attributes']['To be removed']:
counter['not current and slated for removal'].append(camera)
else:
counter['not current and not slated for removal'].append(camera)
else:
counter['current'].append(camera)
if cameras[camera]['attributes']['To be removed']:
counter['current and slated for removal'].append(camera)
for key in counter:
print key, len(counter[key])
print '; '.join(counter[key]), '\n'
"""
Explanation: How does this list compare to the one currently published on the Chicago Data Portal?
End of explanation
"""
import requests
from csv import DictReader
from datetime import datetime
from StringIO import StringIO
data_portal_url = 'https://data.cityofchicago.org/api/views/spqx-js37/rows.csv?accessType=DOWNLOAD'
r = requests.get(data_portal_url)
fh = StringIO(r.text)
reader = DictReader(fh)
def violation_cleaner(str):
filters = [
(' AND ', '-'),
(' and ', '-'),
('/', '-'),
# These are streets spelled one way in ticket data, another way in location data
('STONEY ISLAND', 'STONY ISLAND'),
('CORNELL DRIVE', 'CORNELL'),
('NORTHWEST HWY', 'NORTHWEST HIGHWAY'),
('CICERO-I55', 'CICERO-STEVENSON NB'),
('31ST ST-MARTIN LUTHER KING DRIVE', 'DR MARTIN LUTHER KING-31ST'),
('4700 WESTERN', 'WESTERN-47TH'),
('LAKE SHORE DR-BELMONT', 'LAKE SHORE-BELMONT'),
# These are 3-street intersections where the ticket data has 2 streets, location data has 2 other streets
('KIMBALL-DIVERSEY', 'MILWAUKEE-DIVERSEY'),
('PULASKI-ARCHER', 'PULASKI-ARCHER-50TH'),
('KOSTNER-NORTH', 'KOSTNER-GRAND-NORTH'),
('79TH-KEDZIE', 'KEDZIE-79TH-COLUMBUS'),
('LINCOLN-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK'),
('KIMBALL-LINCOLN', 'KIMBALL-LINCOLN-MCCORMICK'),
('DIVERSEY-WESTERN', 'WESTERN-DIVERSEY-ELSTON'),
('HALSTED-FULLERTON', 'HALSTED-FULLERTON-LINCOLN'),
('COTTAGE GROVE-71ST', 'COTTAGE GROVE-71ST-SOUTH CHICAGO'),
('DAMEN-FULLERTON', 'DAMEN-FULLERTON-ELSTON'),
('DAMEN-DIVERSEY', 'DAMEN-DIVERSEY-CLYBOURN'),
('ELSTON-FOSTER', 'ELSTON-LAPORTE-FOSTER'),
('STONY ISLAND-79TH', 'STONY ISLAND-79TH-SOUTH CHICAGO'),
# This last one is an artifact of the filter application process
('KIMBALL-LINCOLN-MCCORMICK-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK')
]
for filter in filters:
str = str.replace(filter[0], filter[1])
return str
def intersection_is_reversed(key, intersection):
split_key = key.upper().split('-')
split_intersection = intersection.upper().split('-')
if len(split_key) != len(split_intersection):
return False
for k in split_key:
if k not in split_intersection:
return False
for k in split_intersection:
if k not in split_key:
return False
return True
missing_intersections = set()
for idx, line in enumerate(reader):
line['INTERSECTION'] = violation_cleaner(line['INTERSECTION'])
found = False
for key in cameras:
if key.lower() == line['INTERSECTION'].lower() or intersection_is_reversed(key, line['INTERSECTION']):
found = True
if 'total tickets' not in cameras[key]['attributes']:
cameras[key]['attributes']['total tickets'] = 0
cameras[key]['attributes']['tickets since 12/22/2014'] = 0
cameras[key]['attributes']['tickets since 3/6/2015'] = 0
cameras[key]['attributes']['last ticket date'] = line['VIOLATION DATE']
else:
cameras[key]['attributes']['total tickets'] += int(line['VIOLATIONS'])
dt = datetime.strptime(line['VIOLATION DATE'], '%m/%d/%Y')
if dt >= datetime.strptime('12/22/2014', '%m/%d/%Y'):
cameras[key]['attributes']['tickets since 12/22/2014'] += int(line['VIOLATIONS'])
if dt >= datetime.strptime('3/6/2015', '%m/%d/%Y'):
cameras[key]['attributes']['tickets since 3/6/2015'] += int(line['VIOLATIONS'])
if not found:
missing_intersections.add(line['INTERSECTION'])
print 'Missing %d intersections' % len(missing_intersections), missing_intersections
"""
Explanation: Now we need to compute how much money has been generated at each intersection - assuming $100 fine for each violation. In order to do that, we need to make the violation data line up with the camera location data.
Then, we'll add 3 fields: number of violations overall; number on/after 12/22/2014; number on/after 3/6/2015.
End of explanation
"""
import locale
locale.setlocale( locale.LC_ALL, '' )
total = 0
missing_tickets = []
for camera in cameras:
try:
total += cameras[camera]['attributes']['total tickets']
except KeyError:
missing_tickets.append(camera)
print '%d tickets have been issued since 7/1/2014, raising %s' % (total, locale.currency(total * 100, grouping=True))
print 'The following %d intersections appear to never have issued a ticket in that time: %s' % (
len(missing_tickets), '; '.join(missing_tickets))
"""
Explanation: Now it's time to ask some specific questions. First: how much money has the program raised overall? (Note that this data only goes back to 7/1/2014, several years after the program began.)
End of explanation
"""
total = 0
low_crash_total = 0
for camera in cameras:
try:
total += cameras[camera]['attributes']['tickets since 12/22/2014']
if cameras[camera]['attributes']['Few crashes']:
low_crash_total += cameras[camera]['attributes']['tickets since 12/22/2014']
except KeyError:
continue
print '%d tickets have been issued at low-crash intersections since 12/22/2014, raising %s' % (
low_crash_total, locale.currency(low_crash_total * 100, grouping=True))
print '%d tickets have been issued overall since 12/22/2014, raising %s' % (
total, locale.currency(total * 100, grouping=True))
"""
Explanation: Since 12/22/2014, how much money has been generated by low-crash intersections?
End of explanation
"""
total = 0
low_crash_total = 0
slated_for_closure_total = 0
for camera in cameras:
try:
total += cameras[camera]['attributes']['tickets since 3/6/2015']
if cameras[camera]['attributes']['Few crashes']:
low_crash_total += cameras[camera]['attributes']['tickets since 3/6/2015']
if cameras[camera]['attributes']['To be removed']:
slated_for_closure_total += cameras[camera]['attributes']['tickets since 3/6/2015']
except KeyError:
continue
print '%d tickets have been issued at low-crash intersections since 3/6/2015, raising %s' % (
low_crash_total, locale.currency(low_crash_total * 100, grouping=True))
print '%d tickets have been issued overall since 3/6/2015, raising %s' % (
total, locale.currency(total * 100, grouping=True))
print '%d tickets have been issued at cameras that were supposed to be closed since 3/6/2015, raising %s' % (
slated_for_closure_total, locale.currency(slated_for_closure_total * 100, grouping=True))
"""
Explanation: How about since 3/6/2015?
End of explanation
"""
from csv import DictWriter
output = []
for camera in cameras:
data = {
'intersection': camera,
'last ticket date': cameras[camera]['attributes'].get('last ticket date', ''),
'tickets since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0),
'revenue since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0) * 100,
'tickets since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0),
'revenue since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0) * 100,
'was slated for removal': cameras[camera]['attributes'].get('To be removed', False),
'had few crashes': cameras[camera]['attributes'].get('Few crashes', False),
        'is currently active': 'current' in cameras[camera]['attributes'],
'latitude': cameras[camera]['attributes'].get('LAT', 0),
'longitude': cameras[camera]['attributes'].get('LNG', 0)
}
output.append(data)
with open('/tmp/red_light_intersections.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(output[0].keys()))
writer.writeheader()
writer.writerows(output)
"""
Explanation: Now let's generate a CSV of the cameras data for export.
End of explanation
"""
|
LucaCanali/Miscellaneous | PLSQL_Neural_Network/MNIST_tensorflow_exp_to_oracle.ipynb | apache-2.0 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')
# Load training and test data sets with labels
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
"""
Explanation: TensorFlow training of an artificial neural network to recognize handwritten digits in the MNIST dataset and export it to Oracle RDBMS
This notebook contains the preparation steps for the notebook MNIST_oracle_plsql.ipynb where you can find the steps for deploying a neural network serving engine in Oracle using PL/SQL
Author: Luca.Canali@cern.ch - July 2016
Initialize the environment and load the training set
Credits: the code for defining and training the neural network is adapted (with extensions) from the Google TensorFlow tutorial https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py
End of explanation
"""
# define and initialize the tensors
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W0 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
b0 = tf.Variable(tf.zeros([100]))
W1 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
b1 = tf.Variable(tf.zeros([10]))
# Feed forward neural network with one hidden layer
# y0 is the hidden layer with sigmoid activation
y0 = tf.sigmoid(tf.matmul(x, W0) + b0)
# y1 is the output layer (softmax)
# y1[n] is the predicted probability that the input image depicts number 'n'
y1 = tf.nn.softmax(tf.matmul(y0, W1) + b1)
# The the loss function is defined as cross_entropy
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y1), reduction_indices=[1]))
# train the network using gradient descent
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cross_entropy)
# start a TensorFlow interactive session
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
"""
Explanation: Definition of the neural network:
The following defines a basic feed forward neural network with one hidden layer
Other standard techniques used are the definition of cross entropy as loss function and the use of gradient descent as optimizer
End of explanation
"""
batch_size = 100
train_iterations = 30000
# There are mnist.train.num_examples=55000 images in the train sample
# train in batches of 'batch_size' images at a time
# Repeat for 'train_iterations' number of iterations
# Training batches are randomly calculated as each new epoch starts
for i in range(train_iterations):
    batch = mnist.train.next_batch(batch_size)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
# Test the accuracy of the trained network
correct_prediction = tf.equal(tf.argmax(y1, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy of the trained network over the test images: %s" %
accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
"""
Explanation: Train the network
The training uses 55000 images with labels
It is performed over 30000 iterations using mini batch size of 100 images
End of explanation
"""
# There are 2 matrices and 2 vectors used in this neural network:
W0_matrix=W0.eval()
b0_array=b0.eval()
W1_matrix=W1.eval()
b1_array=b1.eval()
print ("W0 is matrix of size: %s " % (W0_matrix.shape,) )
print ("b0 is array of size: %s " % (b0_array.shape,) )
print ("W1 is matrix of size: %s " % (W1_matrix.shape,) )
print ("b1 is array of size: %s " % (b1_array.shape,) )
"""
Explanation: Learning exercise: extract the tensors and 'manually' run the neural network scoring
In the following you can find an example of how to manually run the neural network scoring in Python using numpy. This is intended to further the understanding of how the scoring engine works and opens the way for the next step: the implementation of the scoring engine for Oracle using PL/SQL (see also the notebook MNIST_oracle_plsql.ipynb)
End of explanation
"""
testlabels=tf.argmax(mnist.test.labels,1).eval()
testimages=mnist.test.images
print ("testimages is matrix of size: %s " % (testimages.shape,) )
print ("testlabels is array of size: %s " % (testlabels.shape,) )
"""
Explanation: Extracting the test images and labels as numpy arrays
End of explanation
"""
import numpy as np
def softmax(x):
"""Compute the softmax function on a numpy array"""
return np.exp(x) / np.sum(np.exp(x), axis=0)
def sigmoid(x):
"""Compute the sigmoid function on a numpy array"""
return (1 / (1 + np.exp(-x)))
testimage=testimages[0]
testlabel=testlabels[0]
hidden_layer = sigmoid(np.dot(testimage, W0_matrix) + b0_array)
predicted = np.argmax(softmax(np.dot(hidden_layer, W1_matrix) + b1_array))
print ("image label %d, predicted value by the neural network: %d" % (testlabel, predicted))
"""
Explanation: Example of how to run the neural network "manually" using the tensor values extracted into numpy arrays
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(testimage.reshape(28,28), cmap='Greys')
"""
Explanation: Visual test that the predicted value is indeed correct
End of explanation
"""
import cx_Oracle
ora_conn = cx_Oracle.connect('mnist/mnist@dbserver:1521/orcl.cern.ch')
cursor = ora_conn.cursor()
"""
Explanation: Transfer of the tensors and test data into Oracle tables
For the following you should have access to a (test) Oracle database. This procedure has been tested with Oracle 11.2.0.4 and 12.1.0.2 on Linux.
To keep the test isolated you can create a dedicated user (suggested name, mnist) for the data transfer, as follows:
<code>
From a DBA account (for example the user system) execute:
SQL> create user mnist identified by mnist default tablespace users quota unlimited on users;
SQL> grant connect, create table, create procedure to mnist;
SQL> grant read, write on directory DATA_PUMP_DIR to mnist;
</code>
These are the tables that will be used in the following code to transfer the tensors and testdata:
<code>
SQL> connect mnist/mnist@ORCL
SQL> create table tensors(name varchar2(20), val_id number, val binary_float, primary key(name, val_id));
SQL> create table testdata(image_id number, label number, val_id number, val binary_float, primary key(image_id, val_id));
</code>
Open the connection to the database using cx_Oracle:
(for tips on how to install and use cx_Oracle see also https://github.com/LucaCanali/Miscellaneous/tree/master/Oracle_Jupyter)
End of explanation
"""
i = 0
sql="insert into tensors values ('W0', :val_id, :val)"
for column in W0_matrix:
array_values = []
for element in column:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
i = 0
sql="insert into tensors values ('W1', :val_id, :val)"
for column in W1_matrix:
array_values = []
for element in column:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
"""
Explanation: Transfer the matrices W0 and W1 into the table tensors (which must be precreated as described above)
End of explanation
"""
i = 0
sql="insert into tensors values ('b0', :val_id, :val)"
array_values = []
for element in b0_array:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
i = 0
sql="insert into tensors values ('b1', :val_id, :val)"
array_values = []
for element in b1_array:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
"""
Explanation: Transfer the vectors b0 and b1 into the table "tensors" (the table is expected to exist on the DB, create it using the SQL described above)
End of explanation
"""
image_id = 0
sql="insert into testdata values (:image_id, :label, :val_id, :val)"
for image in testimages:
val_id = 0
array_values = []
for element in image:
array_values.append((image_id, testlabels[image_id], val_id, float(element)))
val_id += 1
cursor.executemany(sql, array_values)
image_id += 1
ora_conn.commit()
"""
Explanation: Transfer the test data with images and labels into the table "testdata" (the table is expected to exist on the DB, create it using the SQL described above)
End of explanation
"""
|
ssanderson/pstats-view | examples/ExampleView.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
import cProfile
from pstatsviewer import StatsViewer
from qgrid import nbinstall
nbinstall()
# Construct two 5000 x 8 frames with random floats.
df1 = pd.DataFrame(
np.random.randn(5000, 8),
columns=[chr(ord('A') + i) for i in range(8)],
index=range(5000),
)
df2 = pd.DataFrame(
np.random.randn(5000, 8),
columns=[chr(ord('A') + i) for i in range(8)],
index=range(5000, 10000),
)
df1.head(5)
from qgrid import show_grid
"""
Explanation: This notebook shows a simple example of profiling alternative methods of concatenating two pandas DataFrames.
End of explanation
"""
def concat_naive():
for i in range(500):
pd.concat([df1, df2])
cProfile.run(
'concat_naive()',
'naive.stats',
)
"""
Explanation: Generating stats files with cProfile:
End of explanation
"""
slow = StatsViewer("naive.stats")
slow.table()
"""
Explanation: Table/Grid View
Provides interactive support for:
- Scrolling
- Filtering
- Sorting
- Resizing Columns
End of explanation
"""
slow.chart()
"""
Explanation: Chart View
Supports interactive generation of charts parameterized by no. of functions and sort order.
End of explanation
"""
def concat_fast():
"""
Concatenate using numpy primitives instead of pd.concat.
"""
for i in range(500):
pd.DataFrame(
np.vstack([df1.values, df2.values]),
columns=df1.columns,
index=np.hstack([
df1.index.values,
df2.index.values,
])
)
cProfile.run(
'concat_fast()',
'fast.stats',
)
fast = StatsViewer("fast.stats")
"""
Explanation: Comparing Alternative Implementations
End of explanation
"""
slow.compare_table(fast, lsuffix="_slow", rsuffix="_fast")
slow.compare_chart(fast, 'tottime', 25)
"""
Explanation: Comparison View
Both chart and grid support comparison versions.
End of explanation
"""
|
nickdavidhaynes/python-data-science-intro | week_2/dealing_with_data.ipynb | mit | def to_integer(x):
the_sum = 0
for index, val in enumerate(x[::-1]):
the_sum += val * 2 ** index
return the_sum
# [1, 1] == 3
to_integer([1, 0, 0, 0, 1, 1, 0, 1])
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Week-2:-Dealing-with-data" data-toc-modified-id="Week-2:-Dealing-with-data-1"><span class="toc-item-num">1 </span>Week 2: Dealing with data</a></div><div class="lev1 toc-item"><a href="#How-is-data-stored?" data-toc-modified-id="How-is-data-stored?-2"><span class="toc-item-num">2 </span>How is data stored?</a></div><div class="lev2 toc-item"><a href="#Stepping-wayyyy-back---what-is-a-computer?" data-toc-modified-id="Stepping-wayyyy-back---what-is-a-computer?-21"><span class="toc-item-num">2.1 </span>Stepping wayyyy back - what is a computer?</a></div><div class="lev2 toc-item"><a href="#Storing-data-(on-disk)" data-toc-modified-id="Storing-data-(on-disk)-22"><span class="toc-item-num">2.2 </span>Storing data (on disk)</a></div><div class="lev3 toc-item"><a href="#Numbers" data-toc-modified-id="Numbers-221"><span class="toc-item-num">2.2.1 </span>Numbers</a></div><div class="lev4 toc-item"><a href="#Integers" data-toc-modified-id="Integers-2211"><span class="toc-item-num">2.2.1.1 </span>Integers</a></div><div class="lev4 toc-item"><a href="#Decimals" data-toc-modified-id="Decimals-2212"><span class="toc-item-num">2.2.1.2 </span>Decimals</a></div><div class="lev3 toc-item"><a href="#Text-vs-binary-data-on-disk" data-toc-modified-id="Text-vs-binary-data-on-disk-222"><span class="toc-item-num">2.2.2 </span>Text vs binary data on disk</a></div><div class="lev2 toc-item"><a href="#Some-common-file-types" data-toc-modified-id="Some-common-file-types-23"><span class="toc-item-num">2.3 </span>Some common file types</a></div><div class="lev3 toc-item"><a href="#JSON" data-toc-modified-id="JSON-231"><span class="toc-item-num">2.3.1 </span>JSON</a></div><div class="lev3 toc-item"><a href="#CSV" data-toc-modified-id="CSV-232"><span class="toc-item-num">2.3.2 </span>CSV</a></div><div class="lev1 toc-item"><a href="#NumPy:-an-intro-to-making-data-analysis-fast" data-toc-modified-id="NumPy:-an-intro-to-making-data-analysis-fast-3"><span 
class="toc-item-num">3 </span>NumPy: an intro to making data analysis fast</a></div><div class="lev2 toc-item"><a href="#The-NumPy-array" data-toc-modified-id="The-NumPy-array-31"><span class="toc-item-num">3.1 </span>The NumPy array</a></div><div class="lev2 toc-item"><a href="#Indexing-and-slicing-arrays" data-toc-modified-id="Indexing-and-slicing-arrays-32"><span class="toc-item-num">3.2 </span>Indexing and slicing arrays</a></div><div class="lev2 toc-item"><a href="#Mutability-and-Data-types-in-NumPy" data-toc-modified-id="Mutability-and-Data-types-in-NumPy-33"><span class="toc-item-num">3.3 </span>Mutability and Data types in NumPy</a></div><div class="lev2 toc-item"><a href="#Vectorized-operations-with-NumPy" data-toc-modified-id="Vectorized-operations-with-NumPy-34"><span class="toc-item-num">3.4 </span>Vectorized operations with NumPy</a></div><div class="lev1 toc-item"><a href="#Take-home-exercises" data-toc-modified-id="Take-home-exercises-4"><span class="toc-item-num">4 </span>Take-home exercises</a></div>
# Week 2: Dealing with data
Welcome back everyone! Week 2 is all about data: how computers physically store and represent data, some common data abstractions that are used in data science, and using a package called NumPy to work with arrays of numerical data.
The agenda:
- Review homework assignments, answer questions
- How do computers store data?
- Common data storage abstractions you should know
- A lightning intro to a very important library: NumPy
# How is data stored?
Actually, there are lots of different ways to answer this question...
Moving from abstract to concrete:
- A brief intro to computer architecture
- Data storage paradigms
- Different types of files, the advantages and disadvantages of each
The motivation:
- Important to be informed about the trade-offs between storage paradigms
- Writing performant code requires familiarity with *why* some code is faster than other code
## Stepping wayyyy back - what is a computer?
During WWII, mathematical foundations of modern computing were invented out of necessity. In 1945, John von Neumann drafted a design for a *stored program computer*:

- A processing device
- Control unit for storing instruction sets
- Logic unit for executing those instructions
- Memory for storing data
- External mass storage
- Input/output lines for communicating with the world
The advance over previous designs: instead of "hard-wiring" (literally) a program, design a device that takes in a program just like data. The logic unit is able to execute a limited number of operations. By composing those operations together, can prove *mathematically* that we can solve any problem that is solvable (ignoring resource usage...). *This is still fundamentally the way computers work today.*
Fast-forward 70 years, how do modern computers store and process data?

What this means for us:
- For large organizations, real tape backup is a thing! https://aws.amazon.com/glacier/
- Most of us store data long-term on hard drives
- When actively working on a project, our data lives in RAM
- When in the middle of a computation, data is shifting from RAM to CPU cache
- CPU actually does work on bits that are in its registers
The typical computing workflow:
- Want to process some data stored on a hard drive (either physically connected to our local machine or accessible over a network connection)
- Provide the address of that data and some information for how to load it into memory
- Provide a set of instructions for what to do with the data that's in memory
- Write intermediate results to memory
- Store final results on a hard drive
Note - modern computers handle the RAM <-> Cache <-> CPU pipeline for us. But understanding how it works allows us to write faster code (will return to this later today).
## Storing data (on disk)
There are lots of ways that data can be stored on disk, and different formats can have drastic performance differences! Fundamentally, the data contained in a file is represented in a computer as a sequence of bits (0s and 1s), which are grouped into chunks of 8 called bytes.
### Numbers
Because data on computers can only be represented as sequences of 0s and 1s, we need a way of representing numbers (both integers and decimals) in this system.
#### Integers
How do we represent integers on a computer?
Imagine a super simple world, where the only integers we ever wanted to talk about were 0 and 1. We would only need 1 bit:

We just need to remember that a blank bit corresponds to "0" and a filled bit corresponds to "1".
What if, instead, we lived in a world where the only integers we ever wanted to talk about were 0, 1, 2, and 3? In this case, we would have to use 2 bits:

And if we lived in a world where we only cared about integers 0-7, we could use 3 bits:

In other words, we can always use $N$ bits to represent $2^N$ unique integers. Conversely, we can represent $M$ integers with $\log_2 M$ bits.
In low-level languages like C and FORTRAN, integer types are given *fixed-width* representations. For example, a C `short` contains 2 bytes, so can represent $2^{16} = 65536$ unique integers. Depending on whether the integer is *signed* or *unsigned*, the bit patterns correspond to either 0:65535 or -32,768:32,767. Therefore, the integers 720,913 or -56,093 can't be represented with C `short` values - an integer type with more bits is required.
Python, in contrast, only has a single integer data type (`int`, as we introduced last week). Python algorithmically determines the number of bytes required and automatically allocates the necessary memory.
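A small illustration of the difference (the `0xFFFF` mask is our own way of emulating a fixed-width 16-bit C `unsigned short` — Python itself never wraps):

```python
# Python ints are arbitrary precision - the interpreter allocates
# as many bytes as the value needs
n = 720_913
print(n.bit_length())   # 20 bits - too wide for a 16-bit C short

# emulating a fixed-width 16-bit unsigned integer by masking to 2 bytes
wrapped = n & 0xFFFF
print(wrapped)          # 720_913 wraps around to 17
```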
**Your turn**
Write a function that takes a list of 0s and 1s and produces the corresponding integer. The equation for converting a list $L = [l_1, l_2, ..., l_n]$ of 0's and 1's to an integer is $\sum_{i=1}^{n} l_i \cdot 2^{n-i}$ (the last element of the list is the least-significant bit, which is why the function above reverses its input). What is the integer representation of `[1, 0, 0, 0, 1, 1, 0, 1]`?
End of explanation
"""
.1 + .1 + .1 == .3
"""
Explanation: Decimals
Representing integers in binary is relatively straight-forward because both integers and bytes are discrete units - there are a fixed, countable number of elements representable with a sequence of bits of a given length. Decimal numbers are trickier - in principle, decimals can have any length (including, possibly, infinite). How can you possibly represent an infinite number of decimals with a sequence of bits?
Unfortunately, the answer is that we can't represent decimals with arbitrary precision. Much as the number of bits in integer representations above defined how many integers we could represent, the number of bits in a representation of a decimal number defines how precisely we can define the number.
It's not hard to find artifacts of the floating point representation. If you're not careful, it can get you in trouble:
End of explanation
"""
.1 + .1 + .1
"""
Explanation: What??
End of explanation
"""
with open('data/hi.txt', 'r') as file: # open the file data/hi.txt in read mode, refer to it as `file`
text_data = file.read() # read the contents of `file` into a variable called `text_data`
"""
Explanation: This behavior is a result of the finite precision with which Python is representing decimals.
Most software (including Python) uses the concept of floating point numbers (i.e. floats) to represent decimals. Essentially, a float is really two separate numbers: the mantissa and the exponent. This is a similar concept to scientific notation, where, for example, the number $123456.789$ can be written as $1.23456789 \times 10^5$. In this case, $1.23456789$ is the mantissa, and $5$ is the exponent. In binary, we usually represent numbers as a string of bits, plus an exponent of 2 (rather than 10). This is a much more compact way of representing numbers than, for example, an explicit grid.
A more complete discussion of floating point arithmetic is beyond the scope of this course, but the important thing to remember is that a floating point number is really represented with 2 numbers under the hood.
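The standard library actually exposes the two numbers: `math.frexp` splits a float into its mantissa and exponent, and `math.ldexp` puts them back together. A quick sketch:

```python
import math

# frexp splits a float into mantissa and exponent: x == m * 2**e
m, e = math.frexp(123456.789)
print(m, e)   # mantissa in [0.5, 1), integer exponent (17 here)

# reassembling the two pieces recovers the original value exactly
assert math.ldexp(m, e) == 123456.789

# the finite mantissa is also why 0.1 + 0.1 + 0.1 != 0.3 -
# .hex() shows the exact bits actually stored for 0.1
print((0.1).hex())
```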
Text vs binary data on disk
Broadly speaking, there are two categories of files - binary and text.
Binary files:
- Can be any sequence of bytes (though they should adhere to some pattern)
- Designed to be machine readable only (i.e. don't make sense to a human eye)
- Examples: images (.png, .jpg), videos (.mp4, .wav), documents (.doc, .pdf), archive (.zip, .tar), executable (.exe, .dll)
Text files:
- Sequence of bytes correspond to an encoding that can be rendered into text
- Examining in a text editor, the files are human-readable
- Examples: documents (.txt, .md), web data (.html, .json), source code (.py, .java), tabular data (.csv)
In other words, text formatted files have particular structure to their bytes that can be rendered into characters that are displayed on a screen. Binary files don't adhere to the notion that sequences of bytes should correspond to characters, so are free to implement other protocols.
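A quick sketch of the difference — the mode flag passed to `open` selects whether the same bytes come back decoded as text or raw (the `demo.txt` file name is just a throwaway for illustration):

```python
# write two characters, then read the same file back in both modes
with open("demo.txt", "w") as f:
    f.write("hi")

with open("demo.txt", "r") as f:    # text mode decodes bytes -> str
    as_text = f.read()

with open("demo.txt", "rb") as f:   # binary mode returns raw bytes
    as_bytes = f.read()

print(as_text, as_bytes)   # hi b'hi'
```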
Let's use Python to open and read some files:
End of explanation
"""
type(text_data)
len(text_data)
text_data
"""
Explanation: What's in text_data?
End of explanation
"""
import os
os.path.getsize('data/hi.txt')
# `bytes` converts a Python character to a representation of its bytes
# `ord()` converts a Python character into an integer representation
for char in text_data:
print(bytes(char, 'utf-8'), ord(char))
len(bytes(char, 'utf-8'))
"""
Explanation: So text_data is a string with two characters containing the text 'hi'.
Question: What is the physical size of hi.txt on disk?
End of explanation
"""
my_str = 'hi猫😺'
with open('data/hi2.txt', 'w', encoding='UTF-8') as f:
f.write(my_str)
with open('data/hi2.txt', 'r') as file:
text_data = file.read()
type(text_data)
len(text_data)
text_data
"""
Explanation: Python 3 encodes strings with the Unicode standard (UTF-8, specifically) by default. As a sanity check, we can look up the values of 104 and 105 in a Unicode table to double check that they correspond to the characters 'h' and 'i': http://www.ssec.wisc.edu/~tomw/java/unicode.html
In the days before Unicode became the de facto standard of internet communication, it was common to use ASCII to encode characters. In ASCII, each character corresponded to 1 byte in the computer, so there were only 2^8 = 256 characters. To the early computer pioneers in the 60s and 70s, the majority of whom lived in English-speaking countries, 256 characters was plenty - there were 26 upper case characters, 26 lower case characters, 10 digits, some special symbols like "(" and "&", and a few accented characters.
The rise of the internet, however, meant that many non-English speakers wanted to communicate digitally. But with ASCII, there was no way for people to write in Cyrillic or Mandarin characters. This led to a proliferation of character encodings that were eventually unified into UTF-8.
Let's read a different file with Python, this time with some characters outside of the standard 26-character English alphabet:
End of explanation
"""
os.path.getsize('data/hi2.txt')
for char in text_data:
print(bytes(char, 'utf-8'), ord(char))
"""
Explanation: We can see that text_data is a string with 4 characters this time - the same 2 English characters "h" and "i", as well as a Chinese character and a cat emoji.
Question: What is the size of hi2.txt on disk?
End of explanation
"""
%%timeit # an IPython "magic" function for profiling blocks of code
with open('data/sherlock_holmes.txt', 'r') as file: # open the file in read mode
file.read()
"""
Explanation: This gives us a better sense of where the file size comes from. The integer values of "h" and "i" are small enough that they can each be represented by a single byte, but several bytes are necessary to represent each of the other 2 characters. Printing the byte representation of the characters tells us that "猫" requires 3 bytes to store on disk, and "😺" requires 4 bytes, therefore there are a combined 9 bytes in hi.txt. In other words, Unicode characters correspond to a variable number of bytes, as opposed to ASCII, where characters always correspond to a single byte.
Now, let's read some bigger files into memory:
End of explanation
"""
import pickle
with open('data/sherlock_holmes.txt', 'r') as file:
sherlock_text = file.read()
with open('data/sherlock_holmes.pickle', 'wb') as file: # open a file in binary write mode
pickle.dump(sherlock_text, file)
"""
Explanation: An aside: Python provides a way of serializing data into a binary format for storing on disk called pickling. Many different types of Python objects can be pickled, so it's a useful step for checkpointing your work on long-running calculations or freezing the state of your code for later use.
For example, we can dump the text data to a pickle...
End of explanation
"""
%%timeit
with open('data/sherlock_holmes.pickle', 'rb') as file: # note the 'rb' - for "read binary"
pickle.load(file)
with open('data/sherlock_holmes.pickle', 'rb') as file:
sherlock_pickle = pickle.load(file)
sherlock_text == sherlock_pickle
"""
Explanation: ... and when we go to read the file, it loads a bit faster (even though it contains the same data).
End of explanation
"""
with open('data/alice_in_wonderland.txt', 'r', encoding='utf-8') as file:
alice = file.read()
len(alice)
os.path.getsize('data/alice_in_wonderland.txt')
char_list = []
for char in alice:
if len(bytes(char, 'utf-8')) > 1:
char_list.append(char)
set(char_list)
with open('data/alice_partial.pickle', 'wb') as file:
#file.write(alice[:10000])
pickle.dump(alice[:10000], file)
os.path.getsize('data/alice_partial.pickle')
"""
Explanation: What happened here? Recall - there are no primitive data types in Python, everything is an object! So to read data from disk, Python must create an object to store the data in
When reading a file from disk into memory, Python:
Pulls raw bytes from disk into memory
Encodes the raw bytes into their character representations
Builds the Python objects that store those bytes
That second step, the encoding, can actually be fairly slow. If you're dealing with large text files, encoding them once and them pickling (or using another serialization method) can be a much more efficient way to read them in the future. We'll use pickling later in the course to serialize some intermediate results.
IMPORTANT: Pickling is NOT SAFE. Anyone can pickle arbitrary code objects. In other words, it is possible to use pickles to distribute malicious code. Never un-pickle data from someone you don't trust. Pickling really should only be used as a convenience for yourself, not as a way of distributing code.
Your turn
- Read data/alice_in_wonderland.txt into memory. How many characters does it contain? How does this compare to its size on disk?
- Print out the unique non-ASCII characters in Alice in Wonderland (hint: non-ASCII means that the number of bytes used is greater than 1).
- Write the first 10,000 characters of Alice in Wonderland as text and as a pickle. What are the sizes of each file on disk?
End of explanation
"""
import json
with open('data/good_movies.json', 'r') as file:
good_movies = json.loads(file.read())
from pprint import pprint # pprint for pretty-printing nested objects
pprint(good_movies)
good_movies[0]['stars']
"""
Explanation: Some common file types
We've already seen text and binary formats, but let's take a look at a couple others.
JSON
Javascript Object Notation - a nested sequence of lists and dictionaries (or "arrays" and "hashes"). A very common way of transmitting data on the web because it's simple for both humans and computers to parse.
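The stdlib `json` module converts between JSON text and Python objects in both directions (the `movie` record below is a toy example invented for illustration):

```python
import json

# a toy record, made up here for illustration
movie = {"title": "Arrival", "year": 2016, "stars": ["Amy Adams"]}

as_text = json.dumps(movie, indent=2)   # Python objects -> JSON string
print(as_text)

round_tripped = json.loads(as_text)     # JSON string -> lists and dicts
print(round_tripped == movie)           # True
```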
End of explanation
"""
import csv
good_movies = []
with open('data/good_movies.csv', 'r') as file:
reader = csv.DictReader(file)
for row in reader:
good_movies.append(row)
pprint(good_movies)
"""
Explanation: Your turn
Iterating over good_movies, print the name of the movies that Ben Affleck stars in.
Find the total number of Oscar nominations for 2016 movies in the dataset.
CSV
Comma-separated value data is another very common, easy-to-use way of storing data. In particular, CSVs are used when you have tabular data - think, data that fits in a spreadsheet. The most common format is for columns to correspond to categories and rows to correspond to examples.
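The `csv` module can also write this format with `csv.DictWriter`, the mirror image of the `DictReader` used below (the `demo.csv` file name and the two rows are invented for illustration):

```python
import csv

rows = [
    {"title": "Arrival", "year": "2016"},
    {"title": "Moonlight", "year": "2016"},
]

# columns are categories, rows are examples
with open("demo.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "year"])
    writer.writeheader()
    writer.writerows(rows)

# reading the file back recovers the same dictionaries
with open("demo.csv", newline="") as f:
    recovered = [dict(row) for row in csv.DictReader(f)]
print(recovered == rows)   # True
```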
End of explanation
"""
good_movies[0]['title'] # value of cell in first row, column called "title"
"""
Explanation: Look familiar? csv.DictReader is actually parsing the CSV row-by-row into a JSON-like structure!
End of explanation
"""
import numpy as np
list_of_numbers = [1, 2, 3, 4, 5]
array_1d = np.array(list_of_numbers)
array_1d.shape
type(array_1d)
print(array_1d)
another_list_of_numbers = [6, 7, 8, 9, 10]
array_2d = np.array([list_of_numbers, another_list_of_numbers])
array_2d.shape
print(array_2d)
"""
Explanation: For doing simple things like iterating over data structures, these built-in methods and objects are sufficient. But more complicated tasks will require better tooling. We'll see a lot more CSV data in the next couple of weeks.
NumPy: an intro to making data analysis fast
Now that we have an introduction to using data in Python, let's introduce some more ways of manipulating that data.
NumPy is a fundamental library in the Python ecosystem for handling (and doing math on) array-like data. Most importantly, all of the "heavy lifting" is done by algorithms written in "fast" languages like C and FORTRAN. We'll see below the difference between the fast algorithms implemented in NumPy and the same algorithms written in pure Python.
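As a first taste of that difference, here is the same element-wise computation written as a pure-Python loop and as a single vectorized NumPy expression. We only check that the results agree here; the speed gap itself is best measured with `%%timeit`:

```python
import numpy as np

values = list(range(100_000))

# a pure-Python loop, executed step by step by the interpreter
py_result = [v * 2 + 1 for v in values]

# the same computation as one vectorized expression - the loop
# runs inside compiled C code instead of the interpreter
np_result = np.array(values) * 2 + 1

print(np_result.tolist() == py_result)   # True
```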
The NumPy array
The fundamental object that NumPy provides is an array:
End of explanation
"""
# create a 1D array with numbers 0-9
x = np.arange(10)
print(x)
# create an array with 3 evenly-spaced numbers starting at 1 and ending at 13
y = np.linspace(1, 13, 3)
print(y)
# create some other common types of arrays
x = np.zeros((3, 5))
print(x)
x = np.ones((3, 5))
print(x)
x = np.eye(5) # why this name?
print(x)
x = np.random.rand(4)
print(x)
"""
Explanation: In addition to defining arrays by hand, we can produce them programmatically:
End of explanation
"""
x
x[0]
x[-2]
array_2d
array_2d[0][0]
array_2d[1, 4]
array_2d[1, 4] = 12
array_2d[1, 4]
array_2d[1, 5] = 15  # raises an IndexError - unlike lists, arrays can't grow beyond their fixed shape
"""
Explanation: Indexing and slicing arrays
To access the elements of NumPy arrays, we use a notation that's very similar to the one we used for accessing elements of Python lists. Remember - in Python, indexing always starts at 0!
End of explanation
"""
x[1:3]
x[:2]
x[1:]
x[1:3:2]
x
x[::-1]
"""
Explanation: If we want to access more than one value at a time, we can take slices of NumPy arrays, too. The general format is x[start_index:stop_index:step_size].
End of explanation
"""
my_list = [1, 2, 3, 4]
my_sliced_list = my_list[0:2]
my_sliced_list[0] = 10
my_list[0] == my_sliced_list[0]
"""
Explanation: There's one very important performance-related difference between the way that slicing works between Python lists and NumPy arrays. For vanilla lists, a slice returns a copy of the sliced data:
End of explanation
"""
my_array = np.array([1, 2, 3, 4])
my_sliced_array = my_array[:2]
my_sliced_array[0] = 10
my_array[0] == my_sliced_array[0]
my_array
my_sliced_array
"""
Explanation: For NumPy arrays, slices are views of the original array:
End of explanation
"""
x = 1
type(x)
x = 'hello'
type(x)
"""
Explanation: This memory efficiency is extremely useful when dealing with large datasets. But be careful!
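When the view behavior is not what you want, an explicit `.copy()` breaks the link and gives you an independent array:

```python
import numpy as np

my_array = np.array([1, 2, 3, 4])

# an explicit .copy() gives an independent array instead of a view
safe_slice = my_array[:2].copy()
safe_slice[0] = 10

print(my_array)    # unchanged: [1 2 3 4]
print(safe_slice)  # [10  2]
```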
Your turn
Create a NumPy array with 100,000 random integers between 0 and 100. Then, write two functions (in pure Python, not using built-in NumPy functions):
Compute the average
Compute the standard deviation
Create weight vector of 100,000 elements (the sum of the elements is 1). Compute the weighted average of your first vector with these weights.
We'll return to these functions a little later.
Mutability and Data types in NumPy
Python has a dynamic typing system. For example:
End of explanation
"""
my_list = [1, 3.1415, 'hello']
my_list
"""
Explanation: In a statically typed language like C, you can't do this. The following code will fail when you try to compile it:
int x = 1;
x = "hello";
In other words, Python determines the best type to store your data at run time, whereas C requires you to explicitly specify the type of data (and enforces this typing). This design choice makes Python simple to use, but comes with a performance overhead, since Python needs to compute and store extra information about your data.
In addition, Python lists are able to hold heterogenous data:
End of explanation
"""
x = np.array([1, 2, 3, 4])
x.dtype
type(x[0])
x[0] = 1.1
x[0]
x[0] = 'hello'  # raises a ValueError - every element must match the array's dtype
x = x.astype(float)
type(x[0])
x[0] = 1.1
x[0]
x = np.array([1, 2, 3, 4], dtype='float_')
x.dtype
x[0] = 1.1
x[0]
"""
Explanation: NumPy's approach to data types in arrays is slightly different than vanilla Python lists:
End of explanation
"""
my_list = [1, 2, 3, 4]
my_list[0] = 10
my_list
my_list.append(20)
my_list
my_list.remove(2)
print(my_list)
"""
Explanation: So - NumPy gives you the flexibility to use a wide variety of data types in arrays. However, NumPy arrays must be homogeneous in data type.
What about the mutability of NumPy arrays? Recall that vanilla Python lists are totally mutable:
End of explanation
"""
my_array = np.array([1, 2, 3, 4])
print(my_array)
my_array[0] = 10
print(my_array)
my_new_array = np.append(my_array, 20)
my_new_array[0] = 20
print(my_array)
print(my_new_array)
"""
Explanation: We can always change the value of any element in a list, as well as add and delete elements as we wish. But whatever we do to the elements of the list, the same list object is always there. With NumPy arrays, the story is a bit different:
End of explanation
"""
|
CAChemE/curso-python-datos | notebooks_vacios/022-matplotlib-GeoData-cartopy.ipynb | bsd-3-clause | # Inicializamos una figura con el tamaño que necesitemos
# si no la queremos por defecto
# Creamos unos ejes con la proyección que queramos
# por ejemplo, Mercator
# Y lo que queremos representar en el mapa
# Tierra
# Océanos
# Líneas de costa (podemos modificar el color)
# Fronteras
# Ríos y lagos
# Por último, podemos pintar el grid, si nos interesa
"""
Explanation: Representing geographic data with cartopy
Sometimes we need to plot data on a map. In those cases basemap is a good option within the Python ecosystem, but it will soon be replaced by cartopy. Although basemap will keep receiving maintenance until 2020 and cartopy has not yet incorporated all of basemap's features, we will look to the future and build our first examples with the new library. If you are still interested in basemap, you can check this post on Jake Vanderplas's blog or this notebook from his data science book.
First of all, as always, we import the library and the rest of the things we will need:
Using different projections
Mercator
In a first example we will see how to create a map with a projection, and we will add the information we are interested in:
End of explanation
"""
# Initialize a figure with whatever size we need
# if we don't want the default one
# Choose the InterruptedGoodeHomolosine projection
# And what we want to draw on the map
"""
Explanation: InterruptedGoodeHomolosine
We will now look at another example using a different projection, and we will color the map in a different way:
End of explanation
"""
# Import the axis formatters for latitude and longitude
# Choose the PlateCarree projection
# And what we want to draw on the map
# Land
# Oceans
# Coastlines (we can change the color)
# Borders
# Rivers and lakes
# Within the axes, select the grid lines and
# enable the option to show labels
# On the grid lines, set the formatting for x and y
"""
Explanation: We may be interested in labeling the axes. For that we can use the tools inside cartopy.mpl.gridliner
PlateCarree
End of explanation
"""
# Choose the projection
# Set the point and the extent of the map we want to see
# And what we want to draw on the map
"""
Explanation: Setting the extent of our plot
For the occasions when we don't want to show a whole map, but only need to plot a particular location, everything above still applies - we simply have to specify the area to show with the set_extent method
and take a few precautions...
End of explanation
"""
# Importing Natural Earth Feature
# Choose the projection
# Set the point and the extent of the map we want to see
# And what we want to draw on the map
# Until now we were using:
# ax.add_feature(cfeature.COASTLINE,
#                edgecolor=(0.3, 0.3, 0.3),
#                facecolor=cfeature.COLORS['land']
#                )
# But now we will first download the feature that
# we want to plot:
# And then we will add it to the map with whatever properties we find appropriate.
"""
Explanation: As can be seen in the previous figure, the plot we get is too coarse. This is because the default data are downloaded at a scale with little detail.
Cartopy can access our own data stored on our computer, or download data from some well-known databases. In this case we will access NaturalEarthFeature, which is the one we have been using by default so far without knowing it.
See http://www.naturalearthdata.com/
End of explanation
"""
# Read the csv we already have downloaded, using pandas
# Create a map on which to plot the data:
# Choose the PlateCarree projection
# And draw the coastlines
# Now we can add the data on top of that map with a scatter plot
"""
Explanation: From Natural Earth Feature we can download not only physical features, but also demographic datasets
Plotting data on the map
Usually we don't just want to draw a map, we want to plot data on it. That data can come from the previous dataset or from any other source.
In this example we will plot the impact data of meteorites that have fallen on Earth, collected in the dataset www.kaggle.com/nasa/meteorite-landings. All the information can be found at that link.
End of explanation
"""
# preserve
from netCDF4 import Dataset
from netCDF4 import date2index
from datetime import datetime
# preserve
data = Dataset('../data/gistemp250.nc')
"""
Explanation: Almost any of the plots we have previously seen with matplotlib are possible.
Example from the Python Data Science Handbook (Jake Vanderplas)
https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.13-Geographic-Data-With-Basemap.ipynb
Example: Surface Temperature Data
As an example of visualizing some more continuous geographic data, let's consider the "polar vortex" that hit the eastern half of the United States in January of 2014.
A great source for any sort of climatic data is NASA's Goddard Institute for Space Studies.
Here we'll use the GIS 250 temperature data, which we can download using shell commands (these commands may have to be modified on Windows machines).
The data used here was downloaded on 6/12/2016, and the file size is approximately 9MB:
The data comes in NetCDF format, which can be read in Python by the netCDF4 library.
You can install this library as shown here
$ conda install netcdf4
We read the data as follows:
End of explanation
"""
# preserve
timeindex = date2index(datetime(2014, 1, 15),
data.variables['time'])
"""
Explanation: The file contains many global temperature readings on a variety of dates; we need to select the index of the date we're interested in—in this case, January 15, 2014:
End of explanation
"""
# preserve
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lon, lat = np.meshgrid(lon, lat)
temp_anomaly = data.variables['tempanomaly'][timeindex]
"""
Explanation: Now we can load the latitude and longitude data, as well as the temperature anomaly for this index:
End of explanation
"""
# preserve
fig = plt.figure(figsize=(8,4))
# Choose the projection
ax = plt.axes(projection=ccrs.PlateCarree())
# And what we want to draw on the map
coastline = NaturalEarthFeature(category='physical', name='coastline', scale='50m')
# ax.add_feature(land, color=cfeature.COLORS['land'])
ax.add_feature(coastline, facecolor=cfeature.COLORS['land'], edgecolor='k', alpha=0.5)
ax.pcolormesh(lon, lat, temp_anomaly, cmap='RdBu_r')
"""
Explanation: Finally, we'll use the pcolormesh() method to draw a color mesh of the data.
We'll look at North America, and use a shaded relief map in the background.
Note that for this data we specifically chose a divergent colormap, which has a neutral color at zero and two contrasting colors at negative and positive values.
We'll also lightly draw the coastlines over the colors for reference:
End of explanation
"""
|
dxl0632/deeplearning_nd_udacity | embeddings/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
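To make the lookup-as-multiplication claim concrete, here is a small NumPy sketch (with made-up sizes) showing that multiplying a one-hot vector by the weight matrix just selects a row:

```python
import numpy as np

vocab_size, n_hidden = 5, 3
W = np.arange(vocab_size * n_hidden).reshape(vocab_size, n_hidden)  # toy embedding matrix

idx = 2                         # integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[idx] = 1

# multiplying the one-hot vector by W just selects row idx of W
lookup = one_hot @ W
```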
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and back: integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
np.random.uniform?
# http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/
int_words[:10]
## Your code here
from collections import Counter
import random
count = Counter(int_words)
tot = sum(count.values())  # total token count (sum(count) would sum the word ids, not the counts!)
freq = {k: (v / tot) for k, v in count.items()}
np.random.seed(632)
t = 1e-5
p_discard = {word: 1 - np.sqrt(t/freq[word]) for word in count}  # one entry per unique word
train_words = [word for word in int_words if random.random() < (1 - p_discard[word])]
len(train_words)
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
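As a quick illustration of the discard formula (using made-up word frequencies), note how aggressively it prunes frequent words while leaving rare words untouched:

```python
import math

t = 1e-5  # threshold parameter from the formula above
# discard probability for a frequent, a moderately common, and a rare word
p = {f: 1 - math.sqrt(t / f) for f in (1e-1, 1e-3, 1e-5)}
for f in sorted(p, reverse=True):
    print(f, round(p[f], 3))
```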
random.sample?
random.sample(range(1, 6), 1)
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
new_size = random.sample(range(1, window_size + 1), 1)[0]
left_index = idx - new_size
right_index = idx + new_size + 1
if left_index < 0 :
left_index = 0
    return words[left_index: idx] + words[idx + 1: right_index]  # exclude the word at idx itself
# # test
# words = [1, 2, 3, 4, 5, 6, 7, 8]
# idx = 1
# window_size = 4
# get_target(words, idx, window_size)
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
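For reference, here is one possible sketch of such a function (a hypothetical re-implementation for illustration, not the notebook's official solution); note that the returned window excludes the center word itself:

```python
import random

def get_target_demo(words, idx, window_size=5):
    # pick a random reduced window size R in [1, window_size]
    R = random.randint(1, window_size)
    start = max(idx - R, 0)                      # clamp the left edge at 0
    return words[start:idx] + words[idx + 1: idx + R + 1]

random.seed(0)
words = [0, 1, 2, 3, 4, 5, 6, 7]
target = get_target_demo(words, 4, window_size=3)
```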
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
"""
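The one-row-per-input-target-pair idea can be sketched with a deterministic stand-in for get_target (a fixed window of one word on each side, an assumption made for this toy example):

```python
# deterministic stand-in for get_target, for illustration only
def context(words, idx):
    return words[max(idx - 1, 0):idx] + words[idx + 1: idx + 2]

batch = [10, 20, 30]
x, y = [], []
for ii in range(len(batch)):
    targets = context(batch, ii)
    y.extend(targets)
    x.extend([batch[ii]] * len(targets))   # one input row per target word
```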
tf.reset_default_graph()
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 300  # number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], minval=-1, maxval=1))  # embedding weight matrix
    embed = tf.nn.embedding_lookup(embedding, inputs)  # use tf.nn.embedding_lookup to get the hidden layer output
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1))  # softmax weight matrix
    softmax_b = tf.Variable(tf.zeros(n_vocab))  # softmax biases
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
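A rough NumPy sketch of the idea (not TensorFlow's exact implementation): score only the true class plus a handful of sampled negative classes, and take the cross-entropy over that tiny candidate set instead of the full vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, n_embedding, n_sampled = 1000, 16, 5

softmax_w = rng.normal(0.0, 0.1, size=(n_vocab, n_embedding))
softmax_b = np.zeros(n_vocab)
embed = rng.normal(size=n_embedding)    # hidden-layer output for one input word
true_class = 42

# sample a few negative classes (avoiding the true one) and score only those rows
candidates = [c for c in rng.choice(n_vocab, size=n_sampled + 1, replace=False)
              if c != true_class][:n_sampled]
classes = np.array([true_class] + candidates)
logits = softmax_w[classes] @ embed + softmax_b[classes]

# cross-entropy over the small candidate set; the true class sits at index 0
log_probs = logits - np.log(np.sum(np.exp(logits)))
sampled_loss = -log_probs[0]
```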
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
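The cosine-similarity machinery above can be checked on a toy embedding matrix in plain NumPy; here one row is made parallel to another so we know in advance what the nearest neighbor should be:

```python
import numpy as np

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 4))      # toy embedding matrix
emb[3] = 2.0 * emb[0]               # word 3 points in the same direction as word 0

norm = np.sqrt(np.sum(emb ** 2, axis=1, keepdims=True))
normalized = emb / norm
sim = normalized @ normalized.T     # cosine similarity between every pair of words

top2 = set(np.argsort(-sim[0])[:2])  # word 0's two closest words (including itself)
```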
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
computational-class/computational-communication-2016 | code/03.python_intro.ipynb | mit | import random, datetime
import numpy as np
import pylab as plt
import statsmodels.api as sm
from scipy.stats import norm
from scipy.stats.stats import pearsonr
"""
Explanation: Programming Tools for Data Science
A Brief Introduction to Python
王成军
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Life is short; I use Python.
Python (/ˈpaɪθən/) is an object-oriented, interpreted programming language
- invented by Guido van Rossum at the end of 1989
- first publicly released in 1991
- concise, clear syntax
- a powerful standard library and a rich ecosystem of third-party modules
- often nicknamed a "glue language"
- the TIOBE index "Programming Language of the Year" for 2010
Features
Free, powerful, and widely used
Compared with R and MATLAB, Python is an easier-to-learn and more rigorous programming language. Scripts written in Python are easier to understand and maintain.
As in other programming languages, the basics of Python include: types, lists and tuples, dictionaries, conditionals, loops, exception handling, and so on.
For these topics, beginning readers can consult the book "Beginning Python" (Hetland, 2005).
Python ships with a rich collection of libraries.
Many open-source scientific computing packages provide Python bindings, for example the well-known computer vision library OpenCV.
Python's own scientific computing libraries are also well developed, for example NumPy, SciPy, and matplotlib.
For social network analysis in particular, libraries such as igraph, networkx, graph-tool, and Snap.py provide rich network analysis tools.
Python Software and IDEs
The newest Python releases are in the 3.x line; 2.7 is the more stable version.
A good editor is an important tool for writing programs.
Free Python editors/IDEs include Spyder, PyCharm (free Community edition), IPython, Vim, Emacs, and Eclipse (with the PyDev plugin).
Installing Anaconda Python
Use the Anaconda Python
http://continuum.io/downloads.html
Third-party packages can be installed with pip install.
Click Tools → Open command prompt,
then type in the command window:
pip install beautifulsoup4
NumPy /SciPy for scientific computing
pandas to make Python usable for data analysis
matplotlib to make graphics
scikit-learn for machine learning
End of explanation
"""
# str, int, float
str(3)
# int
int('5')
# float
float('7.1')
range(10)
range(1, 10)
"""
Explanation: Variable Type
End of explanation
"""
dir
dir(str)[-5:]
help(str)
x = ' Hello WorlD '
dir(x)[-10:]
# lower
x.lower()
# upper
x.upper()
# rstrip
x.rstrip()
# strip
x.strip()
# replace
x.replace('lo', '')
# split
x.split('lo')
# join
','.join(['a', 'b'])
"""
Explanation: dir & help
Use these when you want detailed information about an object.
End of explanation
"""
x = 'hello world'
type(x)
"""
Explanation: type
Use type when you want to know a variable's type.
End of explanation
"""
l = [1,2,3,3] # list
t = (1, 2, 3, 3) # tuple
s = set([1,2,3,3]) # set
d = {'a':1,'b':2,'c':3} # dict
a = np.array(l) # array
print l, t, s, d, a
l = [1,2,3,3] # list
l.append(4)
l
d = {'a':1,'b':2,'c':3} # dict
d.keys()
d = {'a':1,'b':2,'c':3} # dict
d.values()
d = {'a':1,'b':2,'c':3} # dict
d['b']
d = {'a':1,'b':2,'c':3} # dict
d.items()
"""
Explanation: Data Structure
list, tuple, set, dictionary, array
End of explanation
"""
def devidePlus(m, n): # 结尾是冒号
y = float(m)/n+ 1 # 注意:空格
return y # 注意:return
"""
Explanation: Defining a Function
End of explanation
"""
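A quick sanity check of the function just defined (reproduced here so the cell is self-contained):

```python
def devidePlus(m, n):
    y = float(m) / n + 1
    return y

result = devidePlus(4, 2)   # 4/2 + 1 = 3.0
```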
range(10)
range(1, 10)
for i in range(10):
print i, i*10, i**2
for i in range(10):
print i*10
for i in range(10):
print devidePlus(i, 2)
# 列表内部的for循环
r = [devidePlus(i, 2) for i in range(10)]
r
"""
Explanation: For Loops
End of explanation
"""
map(devidePlus, [4,3,2], [2, 1, 5])
# note: (4, 2) is evaluated as one pair, (3, 1) as another, and so on
map(lambda x, y: x + y, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
map(lambda x, y, z: x + y - z, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [3, 3, 2, 2, 5])
"""
Explanation: map
End of explanation
"""
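map with a lambda is equivalent to a list comprehension; wrapping the result in list() makes it concrete in both Python 2 and Python 3:

```python
numbers = [1, 2, 3, 4]
squares_map = list(map(lambda x: x ** 2, numbers))   # apply the lambda to each element
squares_comp = [x ** 2 for x in numbers]             # the equivalent comprehension
```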
j = 3
# use j % 3 so that all three branches are reachable (j % 2 can never equal 2)
if j%3 == 1:
    print 'the remainder is 1'
elif j%3 == 2:
    print 'the remainder is 2'
else:
    print 'the remainder is neither 1 nor 2'
x = 5
if x < 5:
y = -1
z = 5
elif x > 5:
y = 1
z = 11
else:
y = 0
z = 10
print(x, y, z)
"""
Explanation: if elif else
End of explanation
"""
j = 0
while j <10:
print j
j+=1 # avoid dead loop
j = 0
while j <10:
if j%2 != 0:
print j**2
j+=1 # avoid dead loop
j = 0
while j <50:
if j == 30:
break
if j%2 != 0:
print j**2
j+=1 # avoid dead loop
a = 4
while a:
print a
a -= 1
if a < 0:
a = None # []
"""
Explanation: While Loops
End of explanation
"""
for i in [2, 0, 5]:
try:
print devidePlus(4, i)
except Exception, e:
print e
pass
"""
Explanation: try except
End of explanation
"""
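A variation on the same pattern, catching the specific ZeroDivisionError and recording the failure instead of crashing:

```python
results = []
for i in (2, 0, 5):
    try:
        results.append(4.0 / i + 1)      # the same computation as devidePlus(4, i)
    except ZeroDivisionError as e:
        results.append(None)             # record the failure instead of crashing
```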
data =[[i, i**2, i**3] for i in range(10)]
data
for i in data:
print '\t'.join(map(str, i))
type(data)
len(data)
data[0]
# save the data
data =[[i, i**2, i**3] for i in range(10000)]
f = open("/Users/chengjun/github/cjc2016/data/data_write_to_file.txt", "wb")
for i in data:
f.write('\t'.join(map(str,i)) + '\n')
f.close()
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
data = f.readlines()
data[:5]
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
data = f.readlines(1000)
len(data)
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
print f.readline()
f = [1, 2, 3, 4, 5]
for k, i in enumerate(f):
print k, i
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for i in f:
print i
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for k, i in enumerate(f):
if k%2000 ==0:
print i
data = []
line = '0\t0\t0\n'
line = line.replace('\n', '')
line = line.split('\t')
line = [int(i) for i in line] # convert str to int
data.append(line)
data
# read the data (keeping raw lines)
data = []
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for line in f:
#line = line.replace('\n', '').split('\t')
#line = [int(i) for i in line]
data.append(line)
data
# read the data (parsed into lists of ints)
data = []
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
"""
Explanation: Write and Read data
End of explanation
"""
import json
data_dict = {'a':1, 'b':2, 'c':3}
with open('/Users/chengjun/github/cjc2016/save_dict.json', 'w') as f:
json.dump(data_dict, f)
dd = json.load(open("/Users/chengjun/github/cjc2016/save_dict.json"))
dd
"""
Explanation: Saving an intermediate dictionary to disk
End of explanation
"""
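The same round trip works against an in-memory string with json.dumps / json.loads, which is handy for a quick check without touching the filesystem:

```python
import json

data_dict = {'a': 1, 'b': 2, 'c': 3}
text = json.dumps(data_dict)     # serialize to a string in memory
restored = json.loads(text)      # and parse it back
```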
data_list = range(10)
with open('/Users/chengjun/github/cjc2016/save_list.json', 'w') as f:
json.dump(data_list, f)
dl = json.load(open("/Users/chengjun/github/cjc2016/save_list.json"))
dl
"""
Explanation: Reading the json back in
Saving an intermediate list to disk
End of explanation
"""
import dill # pip install dill
# http://trac.mystic.cacr.caltech.edu/project/pathos/wiki/dill
def myFunction(num):
return num,num
with open('/Users/chengjun/github/cjc2016/data.pkl', 'wb') as f:
dill.dump(myFunction, f)
with open('/Users/chengjun/github/cjc2016/data.pkl', 'r') as f:
    newFunction = dill.load(f)
newFunction('hello')
"""
Explanation: use dill to save data
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
x = range(1, 100)
y = [i**-3 for i in x]
plt.plot(x, y, 'b-s')
plt.ylabel('$p(k)$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('Degree Distribution')
plt.show()
import numpy as np
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--')
plt.plot(t, t**2, 'bs')
plt.plot(t, t**3, 'g^')
plt.show()
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t**2, 'b-s', label = '1')
plt.plot(t, t**2.5, 'r-o', label = '2')
plt.plot(t, t**3, 'g-^', label = '3')
plt.annotate(r'$\alpha = 3$', xy=(3.5, 40), xytext=(2, 80),
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize = 20)
plt.ylabel('$f(t)$', fontsize = 20)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc=2,numpoints=1,fontsize=10)
plt.show()
# plt.savefig('/Users/chengjun/GitHub/cjc2016/figure/save_figure.png',
# dpi = 300, bbox_inches="tight",transparent = True)
plt.figure(1)
plt.subplot(221)
plt.plot(t, t, 'r--')
plt.text(2, 0.8*np.max(t), r'$\alpha = 1$', fontsize = 20)
plt.subplot(222)
plt.plot(t, t**2, 'bs')
plt.text(2, 0.8*np.max(t**2), r'$\alpha = 2$', fontsize = 20)
plt.subplot(223)
plt.plot(t, t**3, 'g^')
plt.text(2, 0.8*np.max(t**3), r'$\alpha = 3$', fontsize = 20)
plt.subplot(224)
plt.plot(t, t**4, 'r-o')
plt.text(2, 0.8*np.max(t**4), r'$\alpha = 4$', fontsize = 20)
plt.show()
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo')
plt.plot(t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
import matplotlib.gridspec as gridspec
t = np.arange(0., 5., 0.2)
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
plt.plot(t, t**2, 'b-s')
ax2 = plt.subplot(gs[1,:-1])
plt.plot(t, t**2, 'g-s')
ax3 = plt.subplot(gs[1:, -1])
plt.plot(t, t**2, 'r-o')
ax4 = plt.subplot(gs[-1,0])
plt.plot(t, t**2, 'g-^')
ax5 = plt.subplot(gs[-1,-2])
plt.plot(t, t**2, 'b-<')
plt.tight_layout()
def OLSRegressPlot(x,y,col,xlab,ylab):
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant, beta = res.params
r2 = res.rsquared
lab = r'$\beta = %.2f, \,R^2 = %.2f$' %(beta,r2)
plt.scatter(x,y,s=60,facecolors='none', edgecolors=col)
plt.plot(x,constant + x*beta,"red",label=lab)
plt.legend(loc = 'upper left',fontsize=16)
plt.xlabel(xlab,fontsize=26)
plt.ylabel(ylab,fontsize=26)
x = np.random.randn(50)
y = np.random.randn(50) + 3*x
pearsonr(x, y)
fig = plt.figure(figsize=(10, 4),facecolor='white')
OLSRegressPlot(x,y,'RoyalBlue',r'$x$',r'$y$')
plt.show()
fig = plt.figure(figsize=(7, 4),facecolor='white')
data = norm.rvs(10.0, 2.5, size=5000)
mu, std = norm.fit(data)
plt.hist(data, bins=25, normed=True, alpha=0.6, color='g')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = r"$\mu = %.2f, \, \sigma = %.2f$" % (mu, std)
plt.title(title,size=16)
plt.show()
from matplotlib.dates import WeekdayLocator, DayLocator, MONDAY, DateFormatter
from matplotlib.finance import quotes_historical_yahoo_ochl, candlestick_ochl
date1 = (2014, 2, 1)
date2 = (2014, 5, 1)
quotes = quotes_historical_yahoo_ochl('INTC', date1, date2)
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(1,1,1)
candlestick_ochl(ax, quotes, width=0.8, colorup='green', colordown='r', alpha=0.8)
mondays = WeekdayLocator(MONDAY) # major ticks on the mondays
alldays = DayLocator() # minor ticks on the days
weekFormatter = DateFormatter('%b %d') # e.g., Jan 12
ax.xaxis.set_major_locator(mondays)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(weekFormatter)
ax.autoscale_view()
plt.setp( plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
plt.title(r'$Intel \,Corporation \,Stock \,Price$',size=16)
fig.subplots_adjust(bottom=0.2)
plt.show()
"""
Explanation: http://stackoverflow.com/questions/35603979/pickling-defaultdict-with-lambda
Plotting with matplotlib
End of explanation
"""
|
brettavedisian/phys202-2015-work | assignments/assignment05/InteractEx04.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 4
Imports
End of explanation
"""
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x=np.linspace(-1.0,1.0,size)
if sigma==0:
y=m*x+b
else:
        y=m*x+b+np.random.normal(0.0,sigma,size) # the scale argument is the standard deviation, not the variance
return x,y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
"""
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
"""
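One subtlety worth remembering: the third positional argument of np.random.normal is the standard deviation (scale), not the variance, which a quick check confirms:

```python
import numpy as np

np.random.seed(0)
sigma = 2.0
noise = np.random.normal(0.0, sigma, size=200000)  # scale argument is the std dev
measured_std = noise.std()                         # should be close to sigma, not sigma**2
```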
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
ran_line1, ran_line2=random_line(m,b,sigma,size)
f=plt.figure(figsize=(10,6))
plt.scatter(ran_line1,ran_line2,color=color)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
plt.grid(True)
plt.title('Line with Gaussian Noise')
plt.xlabel('X'), plt.ylabel('Y')
plt.tick_params(axis='x',direction='inout')
plt.tick_params(axis='y',direction='inout')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
"""
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
"""
interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,0.1),sigma=(0.0,5.0,0.01),size=(10,100,10),color={'red':'r','green':'g','blue':'b'});
# assert True # use this cell to grade the plot_random_line interact
"""
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation
"""
|
enakai00/jupyter_tfbook | Chapter02/MNIST softmax estimation.ipynb | gpl-3.0 | import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
np.random.seed(20160604)
"""
Explanation: [MSE-01] Import the required modules and set the random seed.
End of explanation
"""
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
"""
Explanation: [MSE-02] Prepare the MNIST dataset.
End of explanation
"""
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
w0 = tf.Variable(tf.zeros([10]))
f = tf.matmul(x, w) + w0
p = tf.nn.softmax(f)
"""
Explanation: [MSE-03] Define the formula that computes the probability p with the softmax function.
End of explanation
"""
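The same softmax computation in plain NumPy (subtracting the maximum logit for numerical stability):

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
```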
t = tf.placeholder(tf.float32, [None, 10])
loss = -tf.reduce_sum(t * tf.log(p))
train_step = tf.train.AdamOptimizer().minimize(loss)
"""
Explanation: [MSE-04] Define the loss function loss and the training algorithm train_step.
End of explanation
"""
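For a single one-hot label, the cross-entropy loss above reduces to minus the log probability assigned to the true class; in NumPy:

```python
import numpy as np

t = np.array([0.0, 1.0, 0.0])     # one-hot label: class 1 is correct
p = np.array([0.2, 0.7, 0.1])     # predicted probabilities
loss = -np.sum(t * np.log(p))     # equals -log(p[1])
```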
correct_prediction = tf.equal(tf.argmax(p, 1), tf.argmax(t, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: [MSE-05] Define the accuracy metric accuracy.
End of explanation
"""
sess = tf.Session()
sess.run(tf.initialize_all_variables())
"""
Explanation: [MSE-06] Create a session and initialize the Variables.
End of explanation
"""
i = 0
for _ in range(2000):
i += 1
batch_xs, batch_ts = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, t: batch_ts})
if i % 100 == 0:
loss_val, acc_val = sess.run([loss, accuracy],
feed_dict={x:mnist.test.images, t: mnist.test.labels})
print ('Step: %d, Loss: %f, Accuracy: %f'
% (i, loss_val, acc_val))
"""
Explanation: [MSE-07] Repeat the parameter optimization 2000 times.
Each iteration applies gradient descent using a batch of 100 examples drawn from the training set.
The final accuracy on the test set is roughly 92%.
End of explanation
"""
images, labels = mnist.test.images, mnist.test.labels
p_val = sess.run(p, feed_dict={x:images, t: labels})
fig = plt.figure(figsize=(8,15))
for i in range(10):
c = 1
for (image, label, pred) in zip(images, labels, p_val):
prediction, actual = np.argmax(pred), np.argmax(label)
if prediction != i:
continue
if (c < 4 and i == actual) or (c >= 4 and i != actual):
subplot = fig.add_subplot(10,6,i*6+c)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.set_title('%d / %d' % (prediction, actual))
subplot.imshow(image.reshape((28,28)), vmin=0, vmax=1,
cmap=plt.cm.gray_r, interpolation="nearest")
c += 1
if c > 6:
break
"""
Explanation: [MSE-08] Using the parameters at this point, display the model's predictions on the test set.
For each digit '0' through '9', show three correctly classified and three misclassified examples.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session04/Day1/LSSTC-DSFP4-Juric-FrequentistAndBayes-03-Credibility.ipynb | mit | import numpy as np
N = 5
Nsamp = 10 ** 6
sigma_x = 2
np.random.seed(0)
x = np.random.normal(0, sigma_x, size=(Nsamp, N))
mu_samp = x.mean(1)
sig_samp = sigma_x * N ** -0.5
print("{0:.3f} should equal {1:.3f}".format(np.std(mu_samp), sig_samp))
"""
Explanation: Frequentism and Bayesianism III: Confidence, Credibility and why Frequentism and Science Don't Mix
Mario Juric & Jake VanderPlas, University of Washington
e-mail: mjuric@astro.washington.edu, twitter: @mjuric
This lecture is based on a post on the blog Pythonic Perambulations, by Jake VanderPlas. The content is BSD licensed. See also VanderPlas (2014) "Frequentism and Bayesianism: A Python-driven Primer".
Slides built using the excellent RISE Jupyter extension by Damian Avila.
In Douglas Adams' classic Hitchhiker's Guide to the Galaxy, hyper-intelligent pan-dimensional beings build a computer named Deep Thought in order to calculate "the Answer to the Ultimate Question of Life, the Universe, and Everything".
After seven and a half million years spinning its hyper-dimensional gears, before an excited crowd, Deep Thought finally outputs the answer:
<big><center>42</center></big>
The disappointed technicians, who trained a lifetime for this moment, are stupefied. They probe Deep Though for more information, and after some back-and-forth, the computer responds: "once you do know what the question actually is, you'll know what the answer means."
An answer does you no good if you don't know the question.
This story is an apt metaphor for statistics as sometimes used in the scientific literature.
When trying to estimate the value of an unknown parameter, the frequentist approach generally relies on a confidence interval (CI), while the Bayesian approach relies on a credible region (CR).
While these concepts sound and look very similar, their subtle difference can be extremely important, as they answer essentially different questions.
Like the poor souls hoping for enlightenment in Douglas Adams' universe, scientists often turn the crank of frequentism hoping for useful answers, but in the process overlook the fact that in science, frequentism is generally answering the wrong question.
This is far from simple philosophical navel-gazing: as I'll show, it can have real consequences for the conclusions we draw from observed data.
Confidence vs. Credibility
In the first part of this lecture, we discussed the basic philosophical difference between frequentism and Bayesianism: frequentists consider probability a measure of the frequency of (perhaps hypothetical) repeated events; Bayesians consider probability as a measure of the degree of certainty about values. As a result of this, speaking broadly, frequentists consider model parameters to be fixed and data to be random, while Bayesians consider model parameters to be random and data to be fixed.
These philosophies fundamenally affect the way that each approach seeks bounds on the value of a model parameter. Because the differences here are subtle, let's go right into a simple example to illustrate the difference between a frequentist confidence interval and a Bayesian credible region.
Example 1: The Mean of a Gaussian
Let's start by again examining an extremely simple problem; this is the same problem we saw in part I of this series: finding the mean of a Gaussian distribution. Previously we simply looked at the (frequentist) maximum likelihood and (Bayesian) maximum a posteriori estimates; here we'll extend this and look at confidence intervals and credibile regions.
Here is the problem: imagine you're observing a star that you assume has a constant brightness. Simplistically, we can think of this brightness as the number of photons reaching our telescope in one second. Any given measurement of this number will be subject to measurement errors: the source of those errors is not important right now, but let's assume the observations $x_i$ are drawn from a normal distribution about the true brightness value with a known standard deviation $\sigma_x$.
Given a series of measurements, what are the 95% (i.e. $2\sigma$) limits that we would place on the brightness of the star?
1. The Frequentist Approach
The frequentist approach to this problem is well-known, and is as follows:
For any set of $N$ values $D = {x_i}_{i=1}^N$, an unbiased estimate of the mean $\mu$ of the distribution is given by
$$
\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i
$$
The sampling distribution describes the observed frequency of the estimate of the mean; by the central limit theorem we can show that the sampling distribution is normal; i.e.
$$
f(\bar{x}~|~\mu) \propto \exp\left[\frac{-(\bar{x} - \mu)^2}{2\sigma_\mu^2}\right]
$$
where we've used the standard error of the mean,
$$
\sigma_\mu = \sigma_x / \sqrt{N}
$$
The central limit theorem tells us that this is a reasonable approximation for any generating distribution if $N$ is large; if our generating distribution happens to be Gaussian, it also holds for $N$ as small as 2.
Let's quickly check this empirically, by looking at $10^6$ samples of the mean of 5 numbers:
End of explanation
"""
true_B = 100
sigma_x = 10
np.random.seed(1)
D = np.random.normal(true_B, sigma_x, size=3)
print(D)
"""
Explanation: It checks out: the standard deviation of the observed means is equal to $\sigma_x N^{-1/2}$, as expected.
From this normal sampling distribution, we can quickly write the 95% confidence interval by recalling that two standard deviations is roughly equivalent to 95% of the area under the curve. So our confidence interval is
$$
CI_{\mu} = \left(\bar{x} - 2\sigma_\mu,~\bar{x} + 2\sigma_\mu\right)
$$
Let's try this with a quick example: say we have three observations with an error (i.e. $\sigma_x$) of 10. What is our 95% confidence interval on the mean?
We'll generate our observations assuming a true value of 100:
End of explanation
"""
from scipy.special import erfinv
def freq_CI_mu(D, sigma, frac=0.95):
"""Compute the confidence interval on the mean"""
# we'll compute Nsigma from the desired percentage
Nsigma = np.sqrt(2) * erfinv(frac)
mu = D.mean()
sigma_mu = sigma * D.size ** -0.5
return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu
print("95% Confidence Interval: [{0:.0f}, {1:.0f}]".format(*freq_CI_mu(D, 10)))
"""
Explanation: Next let's create a function which will compute the confidence interval:
End of explanation
"""
def bayes_CR_mu(D, sigma, frac=0.95):
"""Compute the credible region on the mean"""
Nsigma = np.sqrt(2) * erfinv(frac)
mu = D.mean()
sigma_mu = sigma * D.size ** -0.5
return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu
print("95% Credible Region: [{0:.0f}, {1:.0f}]".format(*bayes_CR_mu(D, 10)))
"""
Explanation: Note here that we've assumed $\sigma_x$ is a known quantity; this could also be estimated from the data along with $\mu$, but here we kept things simple for sake of example.
2. The Bayesian Approach
For the Bayesian approach, we start with Bayes' theorem:
$$
P(\mu~|~D) = \frac{P(D~|~\mu)P(\mu)}{P(D)}
$$
We'll use a flat prior on $\mu$ (i.e. $P(\mu) \propto 1$ over the region of interest) and use the likelihood
$$
P(D~|~\mu) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi\sigma_x^2}}\exp\left[\frac{-(\mu - x_i)^2}{2\sigma_x^2}\right]
$$
Computing this product and manipulating the terms, it's straightforward to show that this gives
$$
P(\mu~|~D) \propto \exp\left[\frac{-(\mu - \bar{x})^2}{2\sigma_\mu^2}\right]
$$
which is recognizable as a normal distribution with mean $\bar{x}$ and standard deviation $\sigma_\mu$.
That is, the Bayesian posterior on $\mu$ in this case is exactly equal to the frequentist sampling distribution for $\mu$.
From this posterior, we can compute the Bayesian credible region, which is the shortest interval that contains 95% of the probability. Here, it looks exactly like the frequentist confidence interval:
$$
CR_{\mu} = \left(\bar{x} - 2\sigma_\mu,~\bar{x} + 2\sigma_\mu\right)
$$
For completeness, we'll also create a function to compute the Bayesian credible region:
End of explanation
"""
# first define some quantities that we need
Nsamples = int(2E7)
N = len(D)
sigma_x = 10
# if someone changes N, this could easily cause a memory error
if N * Nsamples > 1E8:
raise ValueError("Are you sure you want this many samples?")
# eps tells us how close to D we need to be to consider
# it a matching sample. The value encodes the tradeoff
# between bias and variance of our simulation
eps = 0.5
# Generate some mean values from the (flat) prior in a reasonable range
np.random.seed(0)
mu = 80 + 40 * np.random.random(Nsamples)
# Generate data for each of these mean values
x = np.random.normal(mu, sigma_x, (N, Nsamples)).T
# find data which matches our "observed" data
x.sort(1)
D.sort()
i = np.all(abs(x - D) < eps, 1)
print("number of suitable samples: {0}".format(i.sum()))
# Now we ask how many of these mu values fall in our credible region
mu_good = mu[i]
CR = bayes_CR_mu(D, 10)
within_CR = (CR[0] < mu_good) & (mu_good < CR[1])
print "Fraction of means in Credible Region: {0:.3f}".format(within_CR.sum() * 1. / within_CR.size)
"""
Explanation: So What's the Difference?
The above derivation is one reason why the frequentist confidence interval and the Bayesian credible region are so often confused. In many simple problems, they correspond exactly. But we must be clear that even though the two are numerically equivalent, their interpretation is very different.
Recall that in Bayesianism, the probability distributions reflect our degree of belief. So when we computed the credible region above, it's equivalent to saying
"Given our observed data, there is a 95% probability that the true value of $\mu$ falls within $CR_\mu$" - Bayesians
In frequentism, on the other hand, $\mu$ is considered a fixed value and the data (and all quantities derived from the data, including the bounds of the confidence interval) are random variables. So the frequentist confidence interval is equivalent to saying
"There is a 95% probability that when I compute $CI_\mu$ from data of this sort, the true mean will fall within $CI_\mu$." - Frequentists
Note the difference: the Bayesian solution is a statement of probability about the parameter value given fixed bounds. The frequentist solution is a probability about the bounds given a fixed parameter value. This follows directly from the philosophical definitions of probability that the two approaches are based on.
The difference is subtle, but, as I'll discuss below, it has drastic consequences. First, let's further clarify these notions by running some simulations to confirm the interpretation.
Confirming the Bayesian Credible Region
To confirm what the Bayesian credible region is claiming, we must do the following:
sample random $\mu$ values from the prior
sample random sets of points given each $\mu$
select the sets of points which match our observed data
ask what fraction of these $\mu$ values are within the credible region we've constructed.
In code, that looks like this:
End of explanation
"""
# define some quantities we need
N = len(D)
Nsamples = int(1E4)
mu = 100
sigma_x = 10
# Draw datasets from the true distribution
np.random.seed(0)
x = np.random.normal(mu, sigma_x, (Nsamples, N))
# Compute a confidence interval from each dataset
CIs = np.array([freq_CI_mu(Di, sigma_x) for Di in x])
# find which confidence intervals contain the mean
contains_mu = (CIs[:, 0] < mu) & (mu < CIs[:, 1])
print "Fraction of Confidence Intervals containing the mean: {0:.3f}".format(contains_mu.sum() * 1. / contains_mu.size)
"""
Explanation: We see that, as predicted, roughly 95% of $\mu$ values with data matching ours lie in the Credible Region.
The important thing to note here is which of the variables is random, and which are fixed. In the Bayesian approach, we compute a single credible region from our observed data, and we consider it in terms of multiple random draws of $\mu$.
Confirming the frequentist Confidence Interval
Confirmation of the interpretation of the frequentist confidence interval is a bit less involved. We do the following:
draw sets of values from the distribution defined by the single true value of $\mu$.
for each set of values, compute a new confidence interval.
determine what fraction of these confidence intervals contain $\mu$.
In code, it looks like this:
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def p(x, theta):
return (x > theta) * np.exp(theta - x)
x = np.linspace(5, 18, 1000)
plt.fill(x, p(x, 10), alpha=0.3)
plt.ylim(0, 1.2)
plt.xlabel('x')
plt.ylabel('p(x)');
"""
Explanation: We see that, as predicted, 95% of the confidence intervals contain the true value of $\mu$.
Again, the important thing to note here is which of the variables is random. We use a single value of $\mu$, and consider it in relation to multiple confidence intervals constructed from multiple random data samples.
Discussion
We should remind ourselves again of the difference between the two types of constraints:
The Bayesian approach fixes the credible region, and guarantees 95% of possible values of $\mu$ will fall within it.
The frequentist approach fixes the parameter, and guarantees that 95% of possible confidence intervals will contain it.
Comparing the frequentist confirmation and the Bayesian confirmation above, we see that the distinctions which stem from the very definition of probability mentioned above:
Bayesianism treats parameters (e.g. $\mu$) as random variables, while frequentism treats parameters as fixed.
Bayesianism treats observed data (e.g. $D$) as fixed, while frequentism treats data as random variables.
Bayesianism treats its parameter constraints (e.g. $CR_\mu$) as fixed, while frequentism treats its constraints (e.g. $CI_\mu$) as random variables.
In the above example, as in many simple problems, the confidence interval and the credibility region overlap exactly, so the distinction is not especially important. But scientific analysis is rarely this simple; next we'll consider an example in which the choice of approach makes a big difference.
Example 2: Jaynes' Truncated Exponential
For an example of a situation in which the frequentist confidence interval and the Bayesian credibility region do not overlap, I'm going to turn to an example given by E.T. Jaynes, a 20th century physicist who wrote extensively on statistical inference in Physics. In the fifth example of his Confidence Intervals vs. Bayesian Intervals (pdf), he considers a truncated exponential model. Here is the problem, in his words:
A device will operate without failure for a time $\theta$ because of a protective chemical inhibitor injected into it; but at time $\theta$ the supply of the chemical is exhausted, and failures then commence, following the exponential failure law. It is not feasible to observe the depletion of this inhibitor directly; one can observe only the resulting failures. From data on actual failure times, estimate the time $\theta$ of guaranteed safe operation...
Essentially, we have data $D$ drawn from the following model:
$$
p(x~|~\theta) = \left\{
\begin{array}{lll}
\exp(\theta - x) &,& x > \theta\\
0 &,& x < \theta
\end{array}
\right\}
$$
where $p(x~|~\theta)$ gives the probability of failure at time $x$, given an inhibitor which lasts for a time $\theta$.
Given some observed data $D = {x_i}$, we want to estimate $\theta$.
Let's start by plotting this model for a particular value of $\theta$, so we can see what we're working with:
End of explanation
"""
from scipy.special import erfinv
def approx_CI(D, sig=0.95):
"""Approximate truncated exponential confidence interval"""
# use erfinv to convert percentage to number of sigma
Nsigma = np.sqrt(2) * erfinv(sig)
D = np.asarray(D)
N = D.size
theta_hat = np.mean(D) - 1
return [theta_hat - Nsigma / np.sqrt(N),
theta_hat + Nsigma / np.sqrt(N)]
D = [10, 12, 15]
print("approximate CI: ({0:.1f}, {1:.1f})".format(*approx_CI(D)))
"""
Explanation: Imagine now that we've observed some data, $D = {10, 12, 15}$, and we want to infer the value of $\theta$ from this data. We'll explore four approaches to this below.
1. Common Sense Approach
One general tip that I'd always recommend: in any problem, before computing anything, think about what you're computing and guess what a reasonable solution might be. We'll start with that here. Thinking about the problem, the hard cutoff in the probability distribution leads to one simple observation: $\theta$ must be smaller than the smallest observed value.
This is immediately obvious on examination: the probability of seeing a value less than $\theta$ is zero. Thus, a model with $\theta$ greater than any observed value is impossible, assuming our model specification is correct. Our fundamental assumption in both Bayesianism and frequentism is that the model is correct, so in this case, we can immediately write our common sense condition:
$$
\theta < \min(D)
$$
or, in the particular case of $D = {10, 12, 15}$,
$$
\theta < 10
$$
Any reasonable constraint on $\theta$ given this data should meet this criterion. With this in mind, let's go on to some quantitative approaches based on Frequentism and Bayesianism.
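As a quick sanity check, this common-sense bound is trivial to compute (a one-line sketch, not part of the original lecture):

```python
# The hard cutoff in p(x|theta) means theta cannot exceed any observation,
# so the tightest common-sense upper bound is simply the smallest data point.
D = [10, 12, 15]
theta_upper = min(D)
print("theta must be less than", theta_upper)  # theta < 10
```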
2. Frequentist approach #1: Sampling Distribution via the Normal Approximation
In the frequentist paradigm, we'd like to compute a confidence interval on the value of $\theta$. We can start by observing that the population mean is given by
$$
\begin{array}{ll}
E(x) &= \int_0^\infty xp(x)dx\\
&= \theta + 1
\end{array}
$$
So, using the sample mean as the point estimate of $E(x)$, we have an unbiased estimator for $\theta$ given by
$$
\hat{\theta} = \frac{1}{N} \sum_{i=1}^N x_i - 1
$$
The exponential distribution has a standard deviation of 1, so in the limit of large $N$, we can use the standard error of the mean (as above) to show that the sampling distribution of $\hat{\theta}$ will approach normal with variance $\sigma^2 = 1 / N$. Given this, we can write our 95% (i.e. 2$\sigma$) confidence interval as
$$
CI_{\rm large~N} = \left(\hat{\theta} - 2 N^{-1/2},~\hat{\theta} + 2 N^{-1/2}\right)
$$
Let's write a function which will compute this, and evaluate it for our data:
End of explanation
"""
from scipy.special import gammaincc
from scipy import optimize
def exact_CI(D, frac=0.95):
"""Exact truncated exponential confidence interval"""
D = np.asarray(D)
N = D.size
theta_hat = np.mean(D) - 1
def f(theta, D):
z = theta_hat + 1 - theta
return (z > 0) * z ** (N - 1) * np.exp(-N * z)
def F(theta, D):
return gammaincc(N, np.maximum(0, N * (theta_hat + 1 - theta))) - gammaincc(N, N * (theta_hat + 1))
def eqns(CI, D):
"""Equations which should be equal to zero"""
theta1, theta2 = CI
return (F(theta2, D) - F(theta1, D) - frac,
f(theta2, D) - f(theta1, D))
guess = approx_CI(D, 0.68) # use 1-sigma interval as a guess
result = optimize.root(eqns, guess, args=(D,))
if not result.success:
print "warning: CI result did not converge!"
return result.x
"""
Explanation: We immediately see an issue. By our simple common sense argument, we've determined that it is impossible for $\theta$ to be greater than 10, yet the entirety of the 95% confidence interval is above this range! Perhaps this issue is due to the small sample size: the above computation is based on a large-$N$ approximation, and we have a relatively paltry $N = 3$.
Maybe this will be improved if we do the more computationally intensive exact approach?
The answer is no. If we compute the confidence interval without relying on the large-$N$ Gaussian approximation, the result is $(10.2, 12.2)$.
Note: you can verify yourself by evaluating the code in the sub-slides.
3. Frequentist approach #2: Exact Sampling Distribution
Computing the confidence interval from the exact sampling distribution takes a bit more work.
For small $N$, the normal approximation will not apply, and we must instead compute the confidence integral from the actual sampling distribution, which is the distribution of the mean of $N$ variables each distributed according to $p(\theta)$. The sum of random variables is distributed according to the convolution of the distributions for individual variables, so we can exploit the convolution theorem and use the method of characteristic functions to find the following sampling distribution for the sum of $N$ variables distributed according to our particular $p(x~|~\theta)$:
$$
f(\theta~|~D) \propto
\left\{
\begin{array}{lll}
z^{N - 1}\exp(-z) &,& z > 0\\
0 &,& z < 0
\end{array}
\right\}
;~ z = N(\hat{\theta} + 1 - \theta)
$$
To compute the 95% confidence interval, we can start by computing the cumulative distribution: we integrate $f(\theta~|~D)$ from $0$ to $\theta$ (note that we are not actually integrating over the parameter $\theta$, but over the estimate of $\theta$. Frequentists cannot integrate over parameters).
This integral is relatively painless if we make use of the expression for the incomplete gamma function:
$$
\Gamma(a, x) = \int_x^\infty t^{a - 1}e^{-t} dt
$$
which looks strikingly similar to our $f(\theta)$.
Using this to perform the integral, we find that the cumulative distribution is given by
$$
F(\theta~|~D) = \frac{1}{\Gamma(N)}\left[ \Gamma\left(N, \max[0, N(\hat{\theta} + 1 - \theta)]\right) - \Gamma\left(N,~N(\hat{\theta} + 1)\right)\right]
$$
A contiguous 95% confidence interval $(\theta_1, \theta_2)$ satisfies the following equation:
$$
F(\theta_2~|~D) - F(\theta_1~|~D) = 0.95
$$
There are in fact an infinite set of solutions to this; what we want is the shortest of these. We'll add the constraint that the probability density is equal at either side of the interval:
$$
f(\theta_2~|~D) = f(\theta_1~|~D)
$$
(Jaynes claims that this criterion ensures the shortest possible interval, but I'm not sure how to prove that).
Solving this system of two nonlinear equations will give us the desired confidence interval. Let's compute this numerically:
End of explanation
"""
np.random.seed(0)
Dlarge = 10 + np.random.random(500)
print "approx: ({0:.3f}, {1:.3f})".format(*approx_CI(Dlarge))
print "exact: ({0:.3f}, {1:.3f})".format(*exact_CI(Dlarge))
"""
Explanation: As a sanity check, let's make sure that the exact and approximate confidence intervals match for a large number of points:
End of explanation
"""
print("approximate CI: ({0:.1f}, {1:.1f})".format(*approx_CI(D)))
print("exact CI: ({0:.1f}, {1:.1f})".format(*exact_CI(D)))
"""
Explanation: As expected, the approximate solution is very close to the exact solution for large $N$, which gives us confidence that we're computing the right thing.
Let's return to our 3-point dataset and see the results:
End of explanation
"""
def bayes_CR(D, frac=0.95):
"""Bayesian Credibility Region"""
D = np.asarray(D)
N = float(D.size)
theta2 = D.min()
theta1 = theta2 + np.log(1. - frac) / N
return theta1, theta2
"""
Explanation: The exact confidence interval is slightly different than the approximate one, but still reflects the same problem: we know from common-sense reasoning that $\theta$ can't be greater than 10, yet the 95% confidence interval is entirely in this forbidden region! The confidence interval seems to be giving us unreliable results.
We'll discuss this in more depth further below, but first let's see if Bayes can do better.
4. Bayesian Credibility Interval
For the Bayesian solution, we start by writing Bayes' rule:
$$
p(\theta~|~D) = \frac{p(D~|~\theta)p(\theta)}{P(D)}
$$
Using a constant prior $p(\theta)$, and with the likelihood
$$
p(D~|~\theta) = \prod_{i=1}^N p(x~|~\theta)
$$
we find
$$
p(\theta~|~D) \propto \left\{
\begin{array}{lll}
N\exp\left[N(\theta - \min(D))\right] &,& \theta < \min(D)\\
0 &,& \theta > \min(D)
\end{array}
\right\}
$$
where $\min(D)$ is the smallest value in the data $D$, which enters because of the truncation of $p(x~|~\theta)$.
Because $p(\theta~|~D)$ increases exponentially up to the cutoff, the shortest 95% credibility interval $(\theta_1, \theta_2)$ will be given by
$$
\theta_2 = \min(D)
$$
and $\theta_1$ given by the solution to the equation
$$
\int_{\theta_1}^{\theta_2} N\exp[N(\theta - \theta_2)]d\theta = f
$$
this can be solved analytically by evaluating the integral, which gives
$$
\theta_1 = \theta_2 + \frac{\log(1 - f)}{N}
$$
Let's write a function which computes this:
End of explanation
"""
print("common sense: theta < {0:.1f}".format(np.min(D)))
print("frequentism (approx): 95% CI = ({0:.1f}, {1:.1f})".format(*approx_CI(D)))
print("frequentism (exact): 95% CI = ({0:.1f}, {1:.1f})".format(*exact_CI(D)))
print("Bayesian: 95% CR = ({0:.1f}, {1:.1f})".format(*bayes_CR(D)))
"""
Explanation: Now that we have this Bayesian method, we can compare the results of the four methods:
End of explanation
"""
from scipy.stats import expon
Nsamples = 1000
N = 3
theta = 10
np.random.seed(42)
data = expon(theta).rvs((Nsamples, N))
CIs = np.array([exact_CI(Di) for Di in data])
# find which confidence intervals contain the mean
contains_theta = (CIs[:, 0] < theta) & (theta < CIs[:, 1])
print "Fraction of Confidence Intervals containing theta: {0:.3f}".format(contains_theta.sum() * 1. / contains_theta.size)
"""
Explanation: What we find is that the Bayesian result agrees with our common sense, while the frequentist approach does not. The problem is that frequentism is answering the wrong question.
Numerical Confirmation
To try to quell any doubts about the math here, I want to repeat the exercise we did above and show that the confidence interval derived above is, in fact, correct. We'll use the same approach as before, assuming a "true" value for $\theta$ and sampling data from the associated distribution:
End of explanation
"""
np.random.seed(42)
N = int(1E7)
eps = 0.1
theta = 9 + 2 * np.random.random(N)
data = (theta + expon().rvs((3, N))).T
data.sort(1)
D.sort()
i_good = np.all(abs(data - D) < eps, 1)
print("Number of good samples: {0}".format(i_good.sum()))
theta_good = theta[i_good]
theta1, theta2 = bayes_CR(D)
within_CR = (theta1 < theta_good) & (theta_good < theta2)
print("Fraction of thetas in Credible Region: {0:.3f}".format(within_CR.sum() * 1. / within_CR.size))
"""
Explanation: As is promised by frequentism, 95% of the computed confidence intervals contain the true value. The procedure we used to compute the confidence intervals is, in fact, correct: our data just happened to be among the 5% where the method breaks down. But here's the thing: we know from the data themselves that we are in the 5% where the CI fails. The fact that the standard frequentist confidence interval ignores this common-sense information should give you pause about blind reliance on the confidence interval for any nontrivial problem.
For good measure, let's check that the Bayesian credible region also passes its test:
End of explanation
"""
|
karenlmasters/ComputationalPhysicsUnit | IntroductiontoPython/UserDefinedFunction.ipynb | apache-2.0 | import numpy as np
import scipy.constants as constants
print('Pi = ', constants.pi)
h = float(input("Enter the height of the tower (in metres): "))
t = float(input("Enter the time interval (in seconds): "))
s = constants.g*t**2/2
print("The height of the ball is",h-s,"meters")
"""
Explanation: User Defined Functions
User defined functions make for neater and more efficient programming.
We have already made use of several library functions in the math, scipy and numpy libraries.
End of explanation
"""
x = 4**0.5
print(x)
x = np.sqrt(4)
print(x)
"""
Explanation: Link to What's in Scipy.constants: https://docs.scipy.org/doc/scipy/reference/constants.html
Library Functions in Maths
(and numpy)
End of explanation
"""
def factorial(n):
f = 1.0
for k in range(1,n+1):
f *= k
return f
print("This programme calculates n!")
n = int(input("Enter n:"))
a = factorial(n)
print("n! = ", a)
"""
Explanation: User Defined Functions
Here we'll practice writing our own functions.
Functions start with
python
def name(input):
and typically end with a statement that returns the calculated value
python
return x
To run a function your code would look like this:
```python
import numpy as np

def name(y):
    # FUNCTION CODE HERE
    return D

y = int(input("Enter y:"))
D = name(y)
print(D)
```
First - write a function to calculate n factorial. Reminder:
$n! = \prod_{k=1}^{n} k$
End of explanation
"""
from math import sqrt, cos, sin
def distance(r,theta,z):
x = r*cos(theta)
y = r*sin(theta)
d = sqrt(x**2+y**2+z**2)
return d
D = distance(2.0,0.1,1.5)
print(D)
"""
Explanation: Finding distance to the origin in cylindrical co-ordinates:
End of explanation
"""
def factors(n):
factorlist=[]
k = 2
while k<=n:
while n%k==0:
factorlist.append(k)
n //= k
k += 1
return factorlist
list=factors(12)
print(list)
print(factors(17556))
print(factors(23))
"""
Explanation: Another Example: Prime Factors and Prime Numbers
Reminder: prime factors are the numbers which divide another number exactly.
Factors of the integer n can be found by dividing by all integers from 2 up to n and checking to see which remainders are zero.
Remainder in python calculated using
python
n % k
End of explanation
"""
for n in range(2,100):
if len(factors(n))==1:
print(n)
"""
Explanation: The reason these are useful are for things like the below, where you want to make the same calculation many times. This finds all the prime numbers (only divided by 1 and themselves) from 2 to 100.
End of explanation
"""
|
ilyankou/passport-index-dataset | Update.ipynb | mit | import requests
import pandas as pd
import json
codes = pd.read_csv(
'https://gist.githubusercontent.com/ilyankou/b2580c632bdea4af2309dcaa69860013/raw/420fb417bcd17d833156efdf64ce8a1c3ceb2691/country-codes',
dtype=str
).fillna('NA').set_index('ISO2')
def fix_iso2(x):
o = {
'UK': 'GB',
'RK': 'XK'
}
return o[x] if x in o else x
"""
Explanation: Generate Passport Index datasets
Data by Passport Index 2022: https://www.passportindex.org/
In both tidy and matrix formats
Using ISO-2, ISO-3, and full country names
For questions, get in touch with Ilya @ ilyankou@gmail.com
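To illustrate the two layouts with a toy example (made-up values, not the real dataset): the matrix format has one row per passport and one column per destination, while the tidy format has one (passport, destination) pair per row. pandas' `stack` converts from the former to the latter, exactly as done at the end of this notebook.

```python
import pandas as pd

# Toy 2x2 matrix: rows are passports, columns are destinations (made-up values)
matrix = pd.DataFrame(
    {'FR': ['visa free', 'visa required'],
     'US': ['e-visa', 'visa free']},
    index=['FR', 'US']
)

tidy = matrix.stack()  # MultiIndex of (Passport, Destination) -> requirement
print(tidy.loc[('FR', 'US')])  # e-visa
```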
End of explanation
"""
# URL of the compare passport page
url = 'https://www.passportindex.org/comparebyPassport.php?p1=ro&p2=gt&p3=qa'
# Make a request to the .php page taht outputs data
result_raw = requests.post('https://www.passportindex.org/incl/compare2.php', headers={
'Host': 'www.passportindex.org',
'User-Agent': 'Mozilla/5.0',
'Accept': '*/*',
'Accept-Language': 'en-US,en;q=0.5',
'Accept-Encoding': 'gzip, deflate, br',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'X-Requested-With': 'XMLHttpRequest',
'Content-Length': '9',
'Origin': 'https://www.passportindex.org',
'DNT': '1',
'Connection': 'keep-alive',
'Pragma': 'no-cache',
'Cache-Control': 'no-cache',
'TE': 'Trailers',
}, data={
'compare': '1'
})
"""
Explanation: Get data from PassportIndex
End of explanation
"""
result = json.loads( result_raw.text )
obj = {}
for passport in result:
# Fix ISO-2 codes
passport = fix_iso2(passport)
# Add passport to the object
if passport not in obj:
obj[passport] = {}
# Add destinations for the given passport
for dest in result[passport]['destination']:
text = dest['text']
res = ''
# ** Visa required, incl Cuba's tourist card **
if text == 'visa required' or text == 'tourist card':
res = 'visa required'
# ** Visa on arrival **
elif 'visa on arrival' in text:
res = 'visa on arrival'
# ** Covid-19 ban **
elif text == 'COVID-19 ban':
res = 'covid ban'
# ** Visa-free, incl. Seychelles' tourist registration **
elif 'visa-free' in text or 'tourist registration' in text:
res = dest['dur'] if dest['dur'] != '' else 'visa free'
# ** eVisas, incl eVisitors (Australia), eTourist cards (Suriname),
# eTA (US), and pre-enrollment (Ivory Coast), or EVW (UK) **
elif 'eVis' in text or 'eTourist' in text or text == 'eTA' or text == 'pre-enrollment' or text == 'EVW':
res = 'e-visa'
# ** No admission, including Trump ban **
elif text == 'trump ban' or text == 'not admitted':
res = 'no admission'
# Update the result!
obj[passport][ fix_iso2(dest['code']) ] = res if res != '' else dest['text']
"""
Explanation: Clean up the data
End of explanation
"""
# ISO-2: Matrix
matrix = pd.DataFrame(obj).T.fillna(-1)
matrix.to_csv('passport-index-matrix-iso2.csv', index_label='Passport')
# ISO-2: Tidy
matrix.stack().to_csv(
'passport-index-tidy-iso2.csv',
index_label=['Passport', 'Destination'],
header=['Requirement'])
# ISO-3: Matrix
iso2to3 = { x:y['ISO3'] for x,y in codes.iterrows() }
matrix.rename(columns=iso2to3, index=iso2to3).to_csv('passport-index-matrix-iso3.csv', index_label='Passport')
# ISO-3: Tidy
matrix.rename(columns=iso2to3, index=iso2to3).stack().to_csv(
'passport-index-tidy-iso3.csv',
index_label=['Passport', 'Destination'],
header=['Requirement'])
# Country names: Matrix
iso2name = { x:y['Country'] for x,y in codes.iterrows() }
matrix.rename(columns=iso2name, index=iso2name).to_csv('passport-index-matrix.csv', index_label='Passport')
# Country names: Tidy
matrix.rename(columns=iso2name, index=iso2name).stack().to_csv(
'passport-index-tidy.csv',
index_label=['Passport', 'Destination'],
header=['Requirement'])
# Print all values
tidy = matrix.rename(columns=iso2to3, index=iso2to3).stack()
tidy.value_counts()
tidy[ tidy == 'no admission' ]
"""
Explanation: Save
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-1/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
awjuliani/DeepRL-Agents | Policy-Network.ipynb | mit | from __future__ import division
import numpy as np
try:
import cPickle as pickle
except:
import pickle
import tensorflow as tf
%matplotlib inline
import matplotlib.pyplot as plt
import math
try:
xrange = xrange
except:
xrange = range
"""
Explanation: Simple Reinforcement Learning in Tensorflow Part 2: Policy Gradient Method
This tutorial contains a simple example of how to build a policy-gradient based agent that can solve the CartPole problem. For more information, see this Medium post.
For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, DeepRL-Agents.
Parts of this tutorial are based on code by Andrej Karpathy and korymath.
End of explanation
"""
import gym
env = gym.make('CartPole-v0')
"""
Explanation: Loading the CartPole Environment
If you don't already have the OpenAI gym installed, use pip install gym to grab it.
End of explanation
"""
env.reset()
random_episodes = 0
reward_sum = 0
while random_episodes < 10:
env.render()
observation, reward, done, _ = env.step(np.random.randint(0,2))
reward_sum += reward
if done:
random_episodes += 1
print("Reward for this episode was:",reward_sum)
reward_sum = 0
env.reset()
"""
Explanation: What happens if we try running the environment with random actions? How well do we do? (Hint: not so well.)
End of explanation
"""
# hyperparameters
H = 10 # number of hidden layer neurons
batch_size = 5 # every how many episodes to do a param update?
learning_rate = 1e-2 # feel free to play with this to train faster or more stably.
gamma = 0.99 # discount factor for reward
D = 4 # input dimensionality
tf.reset_default_graph()
#This defines the network as it goes from taking an observation of the environment to
#giving a probability of choosing the action of moving left or right.
observations = tf.placeholder(tf.float32, [None,D] , name="input_x")
W1 = tf.get_variable("W1", shape=[D, H],
initializer=tf.contrib.layers.xavier_initializer())
layer1 = tf.nn.relu(tf.matmul(observations,W1))
W2 = tf.get_variable("W2", shape=[H, 1],
initializer=tf.contrib.layers.xavier_initializer())
score = tf.matmul(layer1,W2)
probability = tf.nn.sigmoid(score)
#From here we define the parts of the network needed for learning a good policy.
tvars = tf.trainable_variables()
input_y = tf.placeholder(tf.float32,[None,1], name="input_y")
advantages = tf.placeholder(tf.float32,name="reward_signal")
# The loss function. This sends the weights in the direction of making actions
# that gave good advantage (reward over time) more likely, and actions that didn't less likely.
# For the chosen action this reduces to log(probability of that action):
# when input_y == 1 it equals log(1 - probability); when input_y == 0 it equals log(probability).
loglik = tf.log(input_y*(input_y - probability) + (1 - input_y)*(input_y + probability))
loss = -tf.reduce_mean(loglik * advantages)
newGrads = tf.gradients(loss,tvars)
# Once we have collected a series of gradients from multiple episodes, we apply them.
# We don't just apply gradients after every episode in order to account for noise in the reward signal.
adam = tf.train.AdamOptimizer(learning_rate=learning_rate) # Our optimizer
W1Grad = tf.placeholder(tf.float32,name="batch_grad1") # Placeholders to send the final gradients through when we update.
W2Grad = tf.placeholder(tf.float32,name="batch_grad2")
batchGrad = [W1Grad,W2Grad]
updateGrads = adam.apply_gradients(zip(batchGrad,tvars))
"""
Explanation: The goal of the task is to achieve a reward of 200 per episode. For every step the agent keeps the pole in the air, the agent receives a +1 reward. By randomly choosing actions, our reward for each episode is only a couple dozen. Let's make that better with RL!
Setting up our Neural Network agent
This time we will be using a Policy neural network that takes observations, passes them through a single hidden layer, and then produces a probability of choosing a left/right movement. To learn more about this network, see Andrej Karpathy's blog on Policy Gradient networks.
End of explanation
"""
def discount_rewards(r):
""" take 1D float array of rewards and compute discounted reward """
discounted_r = np.zeros_like(r)
running_add = 0
for t in reversed(xrange(0, r.size)):
running_add = running_add * gamma + r[t]
discounted_r[t] = running_add
return discounted_r
"""
Explanation: Advantage function
This function allows us to weigh the rewards our agent receives. In the context of the Cart-Pole task, we want actions that kept the pole in the air a long time to have a large reward, and actions that contributed to the pole falling to have a decreased or negative reward. We do this by weighing the rewards from the end of the episode, with actions at the end being seen as negative, since they likely contributed to the pole falling, and the episode ending. Likewise, early actions are seen as more positive, since they weren't responsible for the pole falling.
End of explanation
"""
xs,hs,dlogps,drs,ys,tfps = [],[],[],[],[],[]
running_reward = None
reward_sum = 0
episode_number = 1
total_episodes = 10000
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
rendering = False
sess.run(init)
observation = env.reset() # Obtain an initial observation of the environment
# Reset the gradient placeholder. We will collect gradients in
# gradBuffer until we are ready to update our policy network.
gradBuffer = sess.run(tvars)
for ix,grad in enumerate(gradBuffer):
gradBuffer[ix] = grad * 0
while episode_number <= total_episodes:
# Rendering the environment slows things down,
# so let's only look at it once our agent is doing a good job.
if reward_sum/batch_size > 100 or rendering == True :
env.render()
rendering = True
# Make sure the observation is in a shape the network can handle.
x = np.reshape(observation,[1,D])
# Run the policy network and get an action to take.
tfprob = sess.run(probability,feed_dict={observations: x})
action = 1 if np.random.uniform() < tfprob else 0
xs.append(x) # observation
y = 1 if action == 0 else 0 # a "fake label"
ys.append(y)
# step the environment and get new measurements
observation, reward, done, info = env.step(action)
reward_sum += reward
drs.append(reward) # record reward (has to be done after we call step() to get reward for previous action)
if done:
episode_number += 1
# stack together all inputs, hidden states, action gradients, and rewards for this episode
epx = np.vstack(xs)
epy = np.vstack(ys)
epr = np.vstack(drs)
tfp = tfps
xs,hs,dlogps,drs,ys,tfps = [],[],[],[],[],[] # reset array memory
# compute the discounted reward backwards through time
discounted_epr = discount_rewards(epr)
# size the rewards to be unit normal (helps control the gradient estimator variance)
discounted_epr -= np.mean(discounted_epr)
discounted_epr /= np.std(discounted_epr)
# Get the gradient for this episode, and save it in the gradBuffer
tGrad = sess.run(newGrads,feed_dict={observations: epx, input_y: epy, advantages: discounted_epr})
for ix,grad in enumerate(tGrad):
gradBuffer[ix] += grad
# If we have completed enough episodes, then update the policy network with our gradients.
if episode_number % batch_size == 0:
sess.run(updateGrads,feed_dict={W1Grad: gradBuffer[0],W2Grad:gradBuffer[1]})
for ix,grad in enumerate(gradBuffer):
gradBuffer[ix] = grad * 0
# Give a summary of how well our network is doing for each batch of episodes.
running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
print('Average reward for episode %f. Total average reward %f.' % (reward_sum/batch_size, running_reward/batch_size))
if reward_sum/batch_size > 200:
print("Task solved in",episode_number,'episodes!')
break
reward_sum = 0
observation = env.reset()
print(episode_number,'Episodes completed.')
"""
Explanation: Running the Agent and Environment
Here we run the neural network agent, and have it act in the CartPole environment.
End of explanation
"""
|
CLEpy/CLEpy-MotM | Scrapy_nb/Quotes base case.ipynb | mit | # Settings for notebook
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# Show Python version
import platform
platform.python_version()
try:
import scrapy
except:
!pip install scrapy
import scrapy
from scrapy.crawler import CrawlerProcess
"""
Explanation: Scrapy in a jupyter notebook
Why Scrapy?
Requests can run concurrently and in a fault-tolerant way.
This means that Scrapy doesn't need to wait for a request to be finished and processed; it can send another request or do other things in the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it.
Crawl Politeness settings.
You can do things like setting a download delay between each request, limiting the number of concurrent requests per domain or per IP, and even using an auto-throttling extension that tries to figure these out automatically.
Extensible
Tens of thousands of urls.
Source: https://www.jitsejan.com/using-scrapy-in-jupyter-notebook.html
End of explanation
"""
import json
import logging
import re
from datetime import datetime
"""
Explanation: imports
End of explanation
"""
class JsonWriterPipeline(object):
def open_spider(self, spider):
self.file = open('quoteresult.jl', 'w')
def close_spider(self, spider):
self.file.close()
def process_item(self, item, spider):
line = json.dumps(dict(item)) + "\n"
self.file.write(line)
return item
"""
Explanation: set up pipeline
This class creates a simple pipeline that writes all found items to a JSON file, where each line contains one JSON element.
End of explanation
"""
class QuotesSpider(scrapy.Spider):
name = "quotes"
start_urls = [
'http://quotes.toscrape.com/page/1/',
'http://quotes.toscrape.com/page/2/',
]
custom_settings = {
'LOG_LEVEL': logging.WARNING,
'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1}, # Used for pipeline 1
'FEED_FORMAT':'json', # Used for pipeline 2
'FEED_URI': 'quoteresult.json' # Used for pipeline 2
}
def parse(self, response):
#A Response object represents an HTTP response, which is usually downloaded (by the Downloader)
# and fed to the Spiders for processing.
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').extract_first(),
'author': quote.css('span small::text').extract_first(),
'tags': quote.css('div.tags a.tag::text').extract(),
}
"""
Explanation: Define Spider
The QuotesSpider class defines from which URLs to start crawling and which values to retrieve. I set the logging level of the crawler to warning, otherwise the notebook is overloaded with DEBUG messages about the retrieved data.
End of explanation
"""
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(QuotesSpider)
process.start()
"""
Explanation: Start the crawler
End of explanation
"""
|
scraperwiki/databaker | databaker/tutorial/Introduction.ipynb | agpl-3.0 | from databaker.framework import *
tab = loadxlstabs("example1.xls", "beatles", verbose=False)[0]
savepreviewhtml(tab, verbose=False)
"""
Explanation: Introduction
Databaker is an Open Source Python library for converting semi-structured spreadsheets into computer-friendly datatables. The resulting data can be stored into Pandas data tables or the ONS-specific WDA format.
The system is embedded into the interactive programming environment called Jupyter for fast prototyping and development, and depends for its spreadsheet processing on messytables and xypath.
Install it with the command:
pip3 install databaker
Your main interaction with databaker is through the Jupyter notebook interface. There are many tutorials elsewhere on-line that show you how to master this system.
Once you have a working program that converts a particular spreadsheet style into the output you want, there are ways to rerun the notebook on other spreadsheets externally or from the command line.
Example
Although Databaker can handle spreadsheets of any size, here is a tiny example from the tutorials to illustrate what it does.
End of explanation
"""
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
observations = tab.excel_ref('B4').expand(DOWN).expand(RIGHT).is_not_blank().is_not_whitespace()
c1 = ConversionSegment(observations, dimensions)
savepreviewhtml(c1)
"""
Explanation: Conversion segments
Databaker gives you tools to help you write the code to navigate around the spreadsheet and select the cells and their correspondences.
When you are done your code will look like the following.
You can click on the OBS (observation) cells to see how they connect to the headings.
End of explanation
"""
c1.topandas()
"""
Explanation: Output in pandas
Pandas data tables provide enormous scope for further processing and cleaning of the data.
To make full use of its power you should become familiar with its Time series functionality, which will allow you to plot, resample and align multiple data sources at once.
End of explanation
"""
print(writetechnicalCSV(None, c1))
"""
Explanation: Output in WDA Observation File
The WDA system in the ONS has been the primary use for this library. If you need output into WDA the result would look like the following:
End of explanation
"""
|
rishuatgithub/MLPy | torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/06-CNN-Exercises-Solutions.ipynb | apache-2.0 | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import make_grid
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
transform = transforms.ToTensor()
train_data = datasets.FashionMNIST(root='../Data', train=True, download=True, transform=transform)
test_data = datasets.FashionMNIST(root='../Data', train=False, download=True, transform=transform)
class_names = ['T-shirt','Trouser','Sweater','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Boot']
"""
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CNN Exercises - Solutions
For these exercises we'll work with the <a href='https://www.kaggle.com/zalando-research/fashionmnist'>Fashion-MNIST</a> dataset, also available through <a href='https://pytorch.org/docs/stable/torchvision/index.html'><tt><strong>torchvision</strong></tt></a>. Like MNIST, this dataset consists of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes:
0. T-shirt/top
1. Trouser
2. Pullover
3. Dress
4. Coat
5. Sandal
6. Shirt
7. Sneaker
8. Bag
9. Ankle boot
<div class="alert alert-danger" style="margin: 10px"><strong>IMPORTANT NOTE!</strong> Make sure you don't run the cells directly above the example output shown, <br>otherwise you will end up writing over the example output!</div>
Perform standard imports, load the Fashion-MNIST dataset
Run the cell below to load the libraries needed for this exercise and the Fashion-MNIST dataset.<br>
PyTorch makes the Fashion-MNIST dataset available through <a href='https://pytorch.org/docs/stable/torchvision/datasets.html#fashion-mnist'><tt><strong>torchvision</strong></tt></a>. The first time it's called, the dataset will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=False)
"""
Explanation: 1. Create data loaders
Use DataLoader to create a <tt>train_loader</tt> and a <tt>test_loader</tt>. Batch sizes should be 10 for both.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
# IMAGES ONLY
for images,labels in train_loader:
break
im = make_grid(images, nrow=10)
plt.figure(figsize=(12,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
# DON'T WRITE HERE
# IMAGES AND LABELS
for images,labels in train_loader:
break
print('Label: ', labels.numpy())
print('Class: ', *np.array([class_names[i] for i in labels]))
im = make_grid(images, nrow=10)
plt.figure(figsize=(12,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
"""
Explanation: 2. Examine a batch of images
Use DataLoader, <tt>make_grid</tt> and matplotlib to display the first batch of 10 images.<br>
OPTIONAL: display the labels as well
End of explanation
"""
# Run the code below to check your answer:
conv = nn.Conv2d(1, 1, 5, 1)
for x,labels in train_loader:
print('Orig size:',x.shape)
break
x = conv(x)
print('Down size:',x.shape)
"""
Explanation: Downsampling
<h3>3. If a 28x28 image is passed through a Convolutional layer using a 5x5 filter, a step size of 1, and no padding, what is the resulting matrix size?</h3>
<div style='border:1px black solid; padding:5px'>A 5x5 filter leaves a two-pixel border on each side, so the overall dimension is reduced by 4.<br>
The result is a 24x24 matrix.</div>
End of explanation
"""
# Run the code below to check your answer:
x = F.max_pool2d(x, 2, 2)
print('Down size:',x.shape)
"""
Explanation: 4. If the sample from question 3 is then passed through a 2x2 MaxPooling layer, what is the resulting matrix size?
<div style='border:1px black solid; padding:5px'>
If a 2x2 pooling layer is applied to a 24x24 matrix, each side is divided by two, and rounded down if necessary.<br>
The result is a 12x12 matrix.
</div>
End of explanation
"""
# DON'T WRITE HERE
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 6, 3, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
self.fc1 = nn.Linear(5*5*16, 100)
self.fc2 = nn.Linear(100, 10)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 5*5*16)
X = F.relu(self.fc1(X))
X = self.fc2(X)
return F.log_softmax(X, dim=1)
torch.manual_seed(101)
model = ConvolutionalNetwork()
"""
Explanation: CNN definition
5. Define a convolutional neural network
Define a CNN model that can be trained on the Fashion-MNIST dataset. The model should contain two convolutional layers, two pooling layers, and two fully connected layers. You can use any number of neurons per layer so long as the model takes in a 28x28 image and returns an output of 10. Portions of the definition have been filled in for convenience.
End of explanation
"""
# Run the code below to check your answer:
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>6}')
print(f'______\n{sum(params):>6}')
count_parameters(model)
"""
Explanation: Trainable parameters
6. What is the total number of trainable parameters (weights & biases) in the model above?
Answers will vary depending on your model definition.
<div style='border:1px black solid; padding:5px'>
$\quad\begin{split}(1\times6\times3\times3)+6+(6\times16\times3\times3)+16+(400\times100)+100+(100\times10)+10 &=\\
54+6+864+16+40000+100+1000+10 &= 42,050\end{split}$<br>
</div>
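The same total can be computed programmatically (a sketch mirroring the layer sizes used in the model above):

```python
# parameter counts (weights + biases) for each layer of the example CNN
conv1 = 1 * 6 * 3 * 3 + 6          # 60
conv2 = 6 * 16 * 3 * 3 + 16        # 880
fc1   = 5 * 5 * 16 * 100 + 100     # 40100
fc2   = 100 * 10 + 10              # 1010
total = conv1 + conv2 + fc1 + fc2
print(total)  # 42050
```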
End of explanation
"""
# DON'T WRITE HERE
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
"""
Explanation: 7. Define loss function & optimizer
Define a loss function called "criterion" and an optimizer called "optimizer".<br>
You can use any functions you want, although we used Cross Entropy Loss and Adam (learning rate of 0.001) respectively.
End of explanation
"""
# DON'T WRITE HERE
epochs = 5
for i in range(epochs):
for X_train, y_train in train_loader:
# Apply the model
y_pred = model(X_train)
loss = criterion(y_pred, y_train)
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# OPTIONAL print statement
print(f'{i+1} of {epochs} epochs completed')
"""
Explanation: 8. Train the model
Don't worry about tracking loss values, displaying results, or validating the test set. Just train the model through 5 epochs. We'll evaluate the trained model in the next step.<br>
OPTIONAL: print something after each epoch to indicate training progress.
End of explanation
"""
# DON'T WRITE HERE
model.eval()
with torch.no_grad():
correct = 0
for X_test, y_test in test_loader:
y_val = model(X_test)
predicted = torch.max(y_val,1)[1]
correct += (predicted == y_test).sum()
print(f'Test accuracy: {correct.item()}/{len(test_data)} = {correct.item()*100/(len(test_data)):7.3f}%')
"""
Explanation: 9. Evaluate the model
Set <tt>model.eval()</tt> and determine the percentage correct out of 10,000 total test images.
End of explanation
"""
|
AllenDowney/ThinkStats2 | code/chap04ex.ipynb | gpl-3.0 | import numpy as np
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import thinkstats2
import thinkplot
"""
Explanation: Chapter 4
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
import first
live, firsts, others = first.MakeFrames()
"""
Explanation: Examples
One more time, I'll load the data from the NSFG.
End of explanation
"""
first_wgt = firsts.totalwgt_lb
first_wgt_dropna = first_wgt.dropna()
print('Firsts', len(first_wgt), len(first_wgt_dropna))
other_wgt = others.totalwgt_lb
other_wgt_dropna = other_wgt.dropna()
print('Others', len(other_wgt), len(other_wgt_dropna))
first_pmf = thinkstats2.Pmf(first_wgt_dropna, label='first')
other_pmf = thinkstats2.Pmf(other_wgt_dropna, label='other')
"""
Explanation: And compute the distribution of birth weight for first babies and others.
End of explanation
"""
width = 0.4 / 16
# plot PMFs of birth weights for first babies and others
thinkplot.PrePlot(2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Weight (pounds)', ylabel='PMF')
"""
Explanation: We can plot the PMFs on the same scale, but it is hard to see if there is a difference.
End of explanation
"""
def PercentileRank(scores, your_score):
count = 0
for score in scores:
if score <= your_score:
count += 1
percentile_rank = 100.0 * count / len(scores)
return percentile_rank
"""
Explanation: PercentileRank computes the fraction of scores less than or equal to your_score.
End of explanation
"""
t = [55, 66, 77, 88, 99]
"""
Explanation: If this is the list of scores.
End of explanation
"""
PercentileRank(t, 88)
"""
Explanation: If you got the 88, your percentile rank is 80.
End of explanation
"""
def Percentile(scores, percentile_rank):
scores.sort()
for score in scores:
if PercentileRank(scores, score) >= percentile_rank:
return score
"""
Explanation: Percentile takes a percentile rank and computes the corresponding percentile.
End of explanation
"""
Percentile(t, 50)
"""
Explanation: The median is the 50th percentile, which is 77.
End of explanation
"""
def Percentile2(scores, percentile_rank):
scores.sort()
index = percentile_rank * (len(scores)-1) // 100
return scores[int(index)]
"""
Explanation: Here's a more efficient way to compute percentiles.
End of explanation
"""
Percentile2(t, 50)
"""
Explanation: Let's hope we get the same answer.
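As a further sanity check, NumPy's built-in percentile function gives the same median for this list (a sketch; note that `np.percentile` interpolates when the rank falls between values):

```python
import numpy as np

t = [55, 66, 77, 88, 99]
print(np.percentile(t, 50))  # 77.0
```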
End of explanation
"""
def EvalCdf(sample, x):
count = 0.0
for value in sample:
if value <= x:
count += 1
prob = count / len(sample)
return prob
"""
Explanation: The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.
End of explanation
"""
t = [1, 2, 2, 3, 5]
"""
Explanation: In this list
End of explanation
"""
EvalCdf(t, 0), EvalCdf(t, 1), EvalCdf(t, 2), EvalCdf(t, 3), EvalCdf(t, 4), EvalCdf(t, 5)
"""
Explanation: We can evaluate the CDF for various values:
End of explanation
"""
cdf = thinkstats2.Cdf(live.prglngth, label='prglngth')
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='CDF', loc='upper left')
"""
Explanation: Here's an example using real data, the distribution of pregnancy length for live births.
End of explanation
"""
cdf.Prob(41)
"""
Explanation: Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.
End of explanation
"""
cdf.Value(0.5)
"""
Explanation: Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.
End of explanation
"""
first_cdf = thinkstats2.Cdf(firsts.totalwgt_lb, label='first')
other_cdf = thinkstats2.Cdf(others.totalwgt_lb, label='other')
thinkplot.PrePlot(2)
thinkplot.Cdfs([first_cdf, other_cdf])
thinkplot.Config(xlabel='Weight (pounds)', ylabel='CDF')
"""
Explanation: In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.
End of explanation
"""
weights = live.totalwgt_lb
live_cdf = thinkstats2.Cdf(weights, label='live')
"""
Explanation: In this example, we can see that first babies are slightly, but consistently, lighter than others.
We can use the CDF of birth weight to compute percentile-based statistics.
End of explanation
"""
median = live_cdf.Percentile(50)
median
"""
Explanation: Again, the median is the 50th percentile.
End of explanation
"""
iqr = (live_cdf.Percentile(25), live_cdf.Percentile(75))
iqr
"""
Explanation: The interquartile range is the interval from the 25th to 75th percentile.
End of explanation
"""
live_cdf.PercentileRank(10.2)
"""
Explanation: We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.
End of explanation
"""
sample = np.random.choice(weights, 100, replace=True)
ranks = [live_cdf.PercentileRank(x) for x in sample]
"""
Explanation: If we draw a random sample from the observed weights and map each weight to its percentile rank.
End of explanation
"""
rank_cdf = thinkstats2.Cdf(ranks)
thinkplot.Cdf(rank_cdf)
thinkplot.Config(xlabel='Percentile rank', ylabel='CDF')
"""
Explanation: The resulting list of ranks should be approximately uniform from 0-1.
End of explanation
"""
resample = live_cdf.Sample(1000)
thinkplot.Cdf(live_cdf)
thinkplot.Cdf(thinkstats2.Cdf(resample, label='resample'))
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='CDF')
"""
Explanation: That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.
End of explanation
"""
|
computational-class/computational-communication-2016 | code/09.machine_learning_with_sklearn.ipynb | mit | %matplotlib inline
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale
import sklearn
print sklearn.__version__
# boston data
boston = datasets.load_boston()
y = boston.target
X = boston.data
' '.join(dir(boston))
boston['feature_names']
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('boston.target ~ boston.data', data=boston).fit()
print results.summary()
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
lm.intercept_, lm.coef_, lm.score(boston.data, y)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
"""
Explanation: Computational Communication and Machine Learning
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
1. Supervised learning
How it works:
- The algorithm involves a target or outcome variable (the dependent variable).
- This variable is predicted from a known set of predictor variables (independent variables).
- Using these variables, we generate a function that maps the input values to the desired outputs.
- Training continues until the model achieves the desired accuracy on the training data.
- Examples of supervised learning: regression, decision trees, random forests, k-nearest neighbors, logistic regression.
2. Unsupervised learning
How it works:
- There is no target or outcome variable to predict or estimate.
- The algorithm is used to cluster a population into different groups.
- It is widely used to segment customers into groups that receive different interventions.
- Examples of unsupervised learning: association rules and the k-means algorithm.
3. Reinforcement learning
How it works:
- The algorithm trains a machine to make decisions.
- It works like this: the machine is placed in an environment where it trains itself through trial and error.
- The machine learns from past experience and tries to use the best available knowledge to make accurate decisions.
- Examples of reinforcement learning: Markov decision processes; AlphaGo.
Chess. Here, the agent decides upon a series of moves depending on the state of the board (the environment), and the
reward can be defined as win or lose at the end of the game:
<img src = './img/mlprocess.png' width = 800>
Linear regression
Logistic regression
Decision trees
SVM
Naive Bayes
K-nearest neighbors
K-means
Random forests
Dimensionality reduction
Gradient Boosting and AdaBoost
Linear Regression with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Linear regression
- Commonly used to estimate real-valued continuous quantities (house prices, number of calls, total sales, etc.).
- It establishes the relationship between the independent variables X and the dependent variable Y by fitting a best-fit line.
- This best-fit line is called the regression line and is represented by the linear equation $Y = \beta X + C$.
- The coefficients $\beta$ and $C$ can be obtained by the method of least squares.
End of explanation
"""
boston.data
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.2,
random_state=42)
regr = linear_model.LinearRegression()
lm = regr.fit(Xs_train, y_train)
lm.intercept_, lm.coef_, lm.score(Xs_train, y_train)
predicted = regr.predict(Xs_test)
fig, ax = plt.subplots()
ax.scatter(y_test, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
"""
Explanation: Training and test sets
End of explanation
"""
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, boston.data , boston.target, cv = 3)
scores.mean()
data_X_scale = scale(boston.data)
scores = [cross_val_score(regr, data_X_scale, boston.target, cv = int(i)).mean() for i in range(3, 50)]
plt.plot(range(3, 50), scores,'r-o')
plt.show()
scores = cross_val_score(regr, boston.data, boston.target, cv = 7)
scores.mean()
"""
Explanation: Cross-validation
End of explanation
"""
import pandas as pd
df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
# The purpose of this function is to show the reader that
# drawing different samples leads to completely different results.
def randomSplit(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append([dataX[k]])
dataY_test.append(dataY[k])
else:
dataX_train.append([dataX[k]])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
import numpy as np
# Use only one feature
data_X = df.reply
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1),
np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % regr.score(data_X_test, data_y_test)
y_true, y_pred = data_y_test, regr.predict(data_X_test)
plt.scatter(y_pred, y_true, color='black')
plt.show()
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
print 'Coefficients: \n', regr.coef_
# The mean square error
print "Residual sum of squares: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
df.click_log = [[df.click[i]] for i in range(len(df))]
df.reply_log = [[df.reply[i]] for i in range(len(df))]
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(df.click_log, df.reply_log,test_size=0.2, random_state=0)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % regr.score(Xs_test, y_test)
# Plot outputs
plt.scatter(Xs_test, y_test, color='black')
plt.plot(Xs_test, regr.predict(Xs_test), color='blue', linewidth=3)
plt.show()
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log, df.reply_log, cv = 3)
scores.mean()
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log, df.reply_log, cv = 4)
scores.mean()
"""
Explanation: Using the Tianya BBS data
End of explanation
"""
repost = []
for i in df.title:
if u'转载' in i.decode('utf8'):
repost.append(1)
else:
repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X,df.repost)
model.score(data_X,df.repost)
def randomSplitLogistic(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append(dataX[k])
dataY_test.append(dataY[k])
else:
dataX_train.append(dataX[k])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % log_regr.score(data_X_test, data_y_test)
y_true, y_pred = data_y_test, log_regr.predict(data_X_test)
y_true, y_pred
print(classification_report(y_true, y_pred))
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(data_X, df.repost, test_size=0.2, random_state=42)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % log_regr.score(Xs_test, y_test)
print('Logistic score for test set: %f' % log_regr.score(Xs_test, y_test))
print('Logistic score for training set: %f' % log_regr.score(Xs_train, y_train))
y_true, y_pred = y_test, log_regr.predict(Xs_test)
print(classification_report(y_true, y_pred))
logre = LogisticRegression()
scores = cross_val_score(logre, data_X, df.repost, cv = 3)
scores.mean()
logre = LogisticRegression()
data_X_scale = scale(data_X)
# The importance of preprocessing in data science and the machine learning pipeline I:
scores = cross_val_score(logre, data_X_scale, df.repost, cv = 3)
scores.mean()
"""
Explanation: Logistic Regression with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Logistic regression is a classification algorithm, not a regression algorithm.
- It estimates discrete values (e.g., binary values 0 or 1, yes or no, true or false) from a known set of independent variables.
- Put simply, it estimates the probability of an event occurring by fitting the data to a logistic function.
- Since it predicts a probability, its output always lies between 0 and 1 (as expected). That is also why it is called logistic regression.
$$\text{odds} = \frac{p}{1-p} = \frac{\text{probability of event occurrence}}{\text{probability of no event occurrence}}$$
$$\ln(\text{odds}) = \ln\left(\frac{p}{1-p}\right)$$
$$\text{logit}(p) = \ln\left(\frac{p}{1-p}\right) = b_0+b_1X_1+b_2X_2+b_3X_3+\dots+b_kX_k$$
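The logistic (sigmoid) function inverts the logit, mapping any real-valued score back to a probability; a minimal sketch:

```python
import numpy as np

def logistic(z):
    """Inverse of the logit: maps a real score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(logistic(0))        # 0.5
print(np.log(0.8 / 0.2))  # logit of p = 0.8, i.e. log-odds of about 1.386
```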
End of explanation
"""
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
"""
Explanation: Naive Bayes Prediction with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Naive Bayes algorithm
It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Why is it known as 'naive'? For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties are assumed to independently contribute to the probability that this fruit is an apple.
Bayes' theorem provides a way to compute the posterior probability $P(c|x)$ from $p(c)$, $p(x)$, and $p(x|c)$:
$$
p(c|x) = \frac{p(x|c) p(c)}{p(x)}
$$
P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
P(c) is the prior probability of class.
P(x|c) is the likelihood which is the probability of predictor given class.
P(x) is the prior probability of predictor.
Step 1: Convert the data set into a frequency table
Step 2: Create Likelihood table by finding the probabilities like:
- p(Overcast) = 0.29, p(rainy) = 0.36, p(sunny) = 0.36
- p(playing) = 0.64, p(rest) = 0.36
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using above discussed method of posterior probability.
$P(Yes|Sunny) = \frac{P(Sunny|Yes) \cdot P(Yes)}{P(Sunny)}$
Here we have P(Sunny|Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 = 0.64.
Now, $P(Yes|Sunny) = \frac{0.33 \times 0.64}{0.36} = 0.60$, which is the higher probability.
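The arithmetic of the worked example can be verified directly (a sketch using the exact counts rather than the rounded values):

```python
# posterior probability of playing given sunny weather, via Bayes' theorem
p_sunny_given_yes = 3 / 9.0   # likelihood
p_yes = 9 / 14.0              # prior of class
p_sunny = 5 / 14.0            # prior of predictor

p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(p_yes_given_sunny)  # 0.6
```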
End of explanation
"""
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
print predicted
model.score(x[8:], Y[8:])
"""
Explanation: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, ...]) Naive Bayes classifier for multinomial models
naive_bayes.BernoulliNB([alpha, binarize, ...]) Naive Bayes classifier for multivariate Bernoulli models.
End of explanation
"""
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
print predicted
model.score(data_X_test, data_y_test)
from sklearn.cross_validation import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click], df.reply, cv = 5)
scores.mean()
"""
Explanation: cross-validation
In k-fold CV, the training set is split into k smaller sets. The following procedure is followed for each of the k “folds”:
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
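The two steps above can be sketched as a plain-Python fold generator (a hypothetical helper for illustration, not part of sklearn):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    indices = list(range(n))
    # distribute any remainder across the first n % k folds
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print([test for _, test in folds])  # each index lands in exactly one test fold
```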
End of explanation
"""
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
"""
Explanation: Decision Trees with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Decision trees
- This supervised learning algorithm is usually used for classification problems.
- It works for both categorical and continuous dependent variables.
- The algorithm splits the population into two or more homogeneous groups.
- The split is made on the most significant attribute or independent variable so that the groups are as distinct as possible.
- In the figure above, the population is split into four groups based on several attributes, to judge "will they play or not".
- Several techniques are used to split the population, such as Gini, information gain, chi-square, and entropy.
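Among those splitting criteria, Gini impurity is the one selected by `DecisionTreeClassifier(criterion='gini')` in the code above; a minimal sketch:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = float(len(labels))
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini([1, 1, 1, 1]))  # 0.0  (pure node)
print(gini([0, 0, 1, 1]))  # 0.5  (maximally mixed, two classes)
```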
End of explanation
"""
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
score = cross_val_score(model, data_X, df.repost, cv = i)
scores.append(score.mean() ) # Try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
"""
Explanation: Support Vector Machines (SVM) with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
- Plot each data point in N-dimensional space (where N is the total number of features), with the value of each feature being the value of a coordinate.
- For example, if we only had height and hair length as features, we would plot these two variables in two-dimensional space, where each point has two coordinates (these coordinates are known as support vectors).
- Now, we find a line that separates the two groups of data.
- The line is chosen so that the distances from the closest point in each group to the line are jointly optimized.
- The black line in the example above splits the data into two groups.
- The closest points in the two groups (points A and B in the figure) are at the optimal distance from the black line.
- This line is our separating line. Whichever side of the line a test point falls on determines the class we assign it to.
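The distance being optimized — from a point to the separating hyperplane $w \cdot x + b = 0$ — can be sketched as:

```python
import numpy as np

def distance_to_hyperplane(w, b, x):
    """Perpendicular distance from point x to the hyperplane w.x + b = 0."""
    w = np.asarray(w, dtype=float)
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# point (3, 2) relative to the horizontal line y = 0
print(distance_to_hyperplane([0.0, 1.0], 0.0, [3.0, 2.0]))  # 2.0
```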
End of explanation
"""
#Import the Numpy library
import numpy as np
#Import 'tree' from scikit-learn library
from sklearn import tree
import pandas as pd
train = pd.read_csv('/Users/chengjun/github/cjc2016/data/tatanic_train.csv', sep = ",")
train.head()
train["Age"] = train["Age"].fillna(train["Age"].median())
#Convert the male and female groups to integer form
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
#Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
#Create the target and features numpy arrays: target, features_one
target = train['Survived'].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
#Fit your first decision tree: my_tree_one
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
#Look at the importance of the included features and print the score
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
test = pd.read_csv('/Users/chengjun/github/cjc2016/data/tatanic_test.csv', sep = ",")
# Impute the missing value with the median
test.Fare[152] = test.Fare.median()
test["Age"] = test["Age"].fillna(test["Age"].median())
#Convert the male and female groups to integer form
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
#Impute the Embarked variable
test["Embarked"] = test["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
# Extract the features from the test set: Pclass, Sex, Age, and Fare.
test_features = test[["Pclass","Sex", "Age", "Fare"]].values
# Make your prediction using the test set
my_prediction = my_tree_one.predict(test_features)
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
PassengerId =np.array(test['PassengerId']).astype(int)
my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"])
print my_solution[:3]
# Check that your data frame has 418 entries
print my_solution.shape
# Write your solution to a csv file with the name my_solution.csv
my_solution.to_csv("/Users/chengjun/github/cjc2016/data/tatanic_solution_one.csv", index_label = ["PassengerId"])
# Create a new array with the added features: features_two
features_two = train[["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]].values
#Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth, min_samples_split = min_samples_split, random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
#Print the score of the new decison tree
print(my_tree_two.score(features_two, target))
# create a new train set with the new variable
train_two = train
train_two['family_size'] = train.SibSp + train.Parch + 1
# Create a new decision tree my_tree_three
features_three = train[["Pclass", "Sex", "Age", "Fare", "SibSp", "Parch", "family_size"]].values
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
#Import the `RandomForestClassifier`
from sklearn.ensemble import RandomForestClassifier
#We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
#Building the Forest: my_forest
n_estimators = 100
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2, n_estimators = n_estimators, random_state = 1)
my_forest = forest.fit(features_forest, target)
#Print the score of the random forest
print(my_forest.score(features_forest, target))
#Compute predictions and print the length of the prediction vector:test_features, pred_forest
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(test_features))
print(pred_forest[:3])
#Request and print the `.feature_importances_` attribute
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
#Compute and print the mean accuracy score for both models
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_two, target))
"""
Explanation: Titanic Data Analysis
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
End of explanation
"""
|
saturn77/CythonBootstrap | .ipynb_checkpoints/CythonBootstrap-checkpoint.ipynb | gpl-2.0 | %%file ./src/helloCython.pyx
import cython
import sys
def message():
print(" Hello World ....\n")
print(" Hello Central Ohio Python User Group ...\n")
print(" The 614 > 650::True")
print(" Another line ")
print(" The Python version is %s" % sys.version)
print(" The Cython version is %s" % cython.__version__)
print(" I hope that you learn something useful . . . .")
def main():
message()
%%file ./src/cyMath.pyx
import cython
def cy_fib(int n):
"""Print the Fibonacci series up to n."""
cdef int a = 0
cdef int b = 1
cdef int c = 0
cdef int index = 0
while b < n:
print ("%d, %d, \n" % (index, b) )
a, b = b, a + b
index += 1
%%file ./src/printString.pyx
import cython
def display(char *bytestring):
""" Print out a bytestring byte by byte. """
cdef char byte
for byte in bytestring:
print(byte)
%%file ./src/bits.pyx
import cython
def cy_reflect(int reg, int bits):
""" Reverse all the bits in a register.
reg = input register
r = output register
"""
cdef int x
cdef int y
cdef int r
x = 1 << (bits-1)
y = 1
r = 0
while x:
if reg & x:
r |= y
x = x >> 1
y = y << 1
return r
def reflect(s, bits=8):
""" Take a binary number (byte) and reflect the bits. """
x = 1<<(bits-1)
y = 1
r = 0
while x:
if s & x:
r |= y
x = x >> 1
y = y << 1
return r
%%file ./src/setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
#=========================================
# Setup the extensions
#=========================================
sources = [ "./src/cyMath.pyx", "./src/helloCython.pyx",
            "./src/bits.pyx", "./src/printString.pyx"]
#for fileName in sources:
# setup(ext_modules=cythonize(str(fileName)))
map(lambda fileName : setup(ext_modules=cythonize(str(fileName))), sources)
!python ./src/setup.py build_ext --inplace
from src import helloCython
helloCython.message()
from src import cyMath
cyMath.cy_fib(100)
from src import bits
from src.bits import cy_reflect
hexlist = [int(0x01),int(0x02),int(0x04),int(0x08)]
[hex(cy_reflect(item,8)) for item in hexlist]
from src import printString
printString.display('123')
# A little list comprehension here ...
# A comparative method to the Cython printString function
numberList = [1,2,3]
[ord(str(value)) for value in numberList]
"""
Explanation: Cython -- A Transcompiler Language
Transform Your Python !!
By James Bonanno, Central Ohio Python Presentation, March 2015
There are many cases where you simply want to speed up an existing Python design: in particular, you code in Python to get things working, then optimize (yes, premature optimization is the root of all evil, but it's even more sinister to run out of ways to optimize your code).
What is it good for?
for making Python faster,
for making Python faster in an easy way
for wrapping external C and C++
making Python accessible to C and C++ (going the other way)
This presentation seeks primarily to discuss ways to transform your Python code and use it in a Python project.
References
The new book by Kurt Smith is well written, clear in explanations, and the best overall treatment of Cython out there. An excellent book !! The book by Gorelick and Ozsvald is a good treatment, and it compares different methods of optimizing python including Shedskin, Theano, Numba, etc.
1] Kurt W. Smith Cython, A Guide for Python Programmers, O'Reilly, January 2015
2] Mich Gorelick & Ian Ozsvald High Performance Python -- Practical Performant Programming for Humans O'Reilly September 2014
3] David Beazley and Brian K Jones, Python Cookbook, 3rd Edition, Printed May 2013, O'Reilly -- Chapter 15, page 632
Why CYTHON?
It's more versatile than all the competition and has a manageable syntax. I highly recommend Kurt Smith's book on Cython: it's thorough, and if you read chapter 3, you will take in the essence of working with Cython functions.
Make sure to check out the new, improved documentation for Cython at:
http://docs.cython.org/index.html
This presentation will focus on using Cython to speed up Python functions, with some attention also given to arrays and numpy. There are more sophisticated treatments of using dynamically allocated memory, such as typically done with C and C++.
A good link on memory allocation, where the heap is used with malloc():
http://docs.cython.org/src/tutorial/memory_allocation.html?highlight=numpy
Getting Started:: Cython function types...
You must use "cdef" when declaring a typed variable inside a function. For example,
python
def quad(int k):
cdef double alpha = 1.5
return alpha*(k**2)
People often get confused when using def, cdef, and cpdef.
The key factors are
def is importable into python
cdef is importable into C, but not python
cpdef is importable into both
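As a quick illustration, a single `.pyx` module might mix all three kinds. This is a sketch with made-up names, not code from the presentation:

```python
# sketch.pyx -- illustrative only
def py_visible(int n):      # def: callable from Python
    return 2 * n

cdef int c_only(int n):     # cdef: callable only from C/Cython code
    return 2 * n

cpdef int both(int n):      # cpdef: callable from both sides
    return 2 * n
```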
Getting Started:: Cythonizing a Python function
Now, if you were going to put pure cython code into action within your editor, say Wing IDE
or PyCharm, you would want to define something like this in a file, say for example cy_math.pyx
Now, let's start with the familiar Fibonacci series ...
```python
import cython
def cy_fib(int n):
"""Print the Fibonacci series up to n."""
cdef int a = 0
cdef int b = 1
cdef int index = 0
while b < n:
print ("%d, %d, \n" % (index, b) )
a, b = b, a + b
index += 1
```
Getting Started:: A Distutils setup.py ...
```python
from distutils.core import setup, Extension
from Cython.Build import cythonize
#=========================================
# Setup the extensions
#=========================================
sources = [ "cyMath.pyx", "helloCython.pyx","cy_math.pyx", "bits.pyx", "printString.pyx"]
for fileName in sources:
setup(ext_modules=cythonize(str(fileName)))
# or...
map(lambda fileName : setup(ext_modules=cythonize(str(fileName))), sources)
```
End of explanation
"""
%%file ./src/cyFib.pyx
def cyfib(int n):
cdef int a = 0
cdef int b = 1
cdef int index = 0
while b < n:
a, b = b, a+b
index += 1
return b
"""
Explanation: Now let's see the time difference between a cyfib and pyfib ...
End of explanation
"""
!makecython ./src/cyFib.pyx
def pyfib(n):
a = 0
b = 1
index = 0
while b < n:
a, b = b, a+b
index += 1
return b
%timeit pyfib(1000)
import cyFib
%timeit cyFib.cyfib(1000)
"""
Explanation: Introducing runcython !!
Is located on Github
Easy installation == pip install runcython
Russell91 on Github
https://github.com/Russell91/runcython
There are runcython and makecython command-line tools . . . . .
End of explanation
"""
import dis
dis.dis(pyfib)
import cProfile
cProfile.run('pyfib(1000)')
"""
Explanation: NOW THAT IS A CONSIDERABLE SPEEDUP ...
The Fibonacci function shows a speedup factor of over 1500%
Let's take a look at the disassembly for some reasons for this ....
End of explanation
"""
%%file ./src/cyPoly.pyx
def cypoly(int n, int k):
return map(lambda x:(0.1*x**2 + 0.5*x + 0.25*x), range(k))
!makecython ./src/cyPoly.pyx
def pypoly(n,k):
return map(lambda x:.1*x**2 + .5*x + 0.25*x, range(k))
"""
Explanation: Another Example, with a polynomial this time ...
For now, let's begin with a polynomial function, and compare how to do this in Python and Cython! ....
Now consider a function such as
$f(x) = a_0x^n + a_1x^{(n-1)} + a_2x^{(n-2)} ..... a_nx^0$
where in the case below n is selected as 2, and
- $a_0 = 0.1$,
- $a_1=0.5$
- $a_2=0.25$.
The Cython function to do this is called "cypoly", while the Python version is called "pypoly". Each function is defined with a functional programming technique of lambda and map, as shown below.
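As a side check on the general polynomial form above, here is a small pure-Python evaluation using Horner's method (an illustration only, not part of the presentation code):

```python
# Evaluate f(x) = a0*x**n + a1*x**(n-1) + ... + an with Horner's method.
def horner(coeffs, x):
    """coeffs lists a0..an, highest power first."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# n = 2 with a0 = 0.1, a1 = 0.5, a2 = 0.25, as in the text:
print(round(horner([0.1, 0.5, 0.25], 2.0), 2))  # 1.65
```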
End of explanation
"""
from src import cyPoly
cyPoly.cypoly(4,50)
pypoly(4,50)
"""
Explanation: Now to compare the two ....
End of explanation
"""
%%file ./src/sineWave.pyx
import cython
from libc.math cimport sin
def sinewave(double x):
""" Calculate a sinewave for specified number of cycles, Ncycles, at a given frequency."""
return sin(x)
!makecython ./src/sineWave.pyx
from src import sineWave
import math
angle90 = math.pi/2
sineWave.sinewave(angle90)
"""
Explanation: Now's lets do something graphically, like plot a trig function. Let's also use a float/double type.
End of explanation
"""
%matplotlib inline
import numpy as np
from matplotlib.pyplot import plot, xlim, title, grid  # plotting helpers used below
x = np.linspace(0,2*np.pi,2000)
%timeit plot(x,np.sin(x),'r')
## %timeit plot(x,sineWave.sinewave(x),'r') <== Why is this a problem ??
xlim(0,6.28)
title('Sinewave for Array Data')
grid(True)
%%file ./src/myFunc.pyx
import cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
def myfunc(np.ndarray[double, ndim=1] A):
return np.sin(A)
!makecython ./src/myFunc.pyx
%matplotlib inline
from src import myFunc
import cython
import numpy as np
from matplotlib.pyplot import plot, xlim, title, grid  # plotting helpers used below
x = np.linspace(0,2*np.pi,2000)
y = myFunc.myfunc(x)
%timeit plot(x,y,'r')
xlim(0,6.28)
title('Sinewave for Array Data with Cython')
grid(True)
"""
Explanation: Now let's looking a data that involves arrays, and look at both python and numpy versions as well.
End of explanation
"""
!python-config --cflags
!python-config --ldflags
!ls -a ./src
%%file ./src/quad.pyx
"""
module:: This is a Cython file that uses decorators for arguments.
"""
import cython
cython.declare(a = cython.double, x = cython.double, y = cython.double)
def exp(a, x):
""" function that uses cython.declare """
cdef int y
y = a**x
return y
!makecython ./src/quad.pyx
%%file ./src/setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
#=========================================
# Setup the extensions
#=========================================
sources = [ "./src/cyMath.pyx", "./src/helloCython.pyx",
"./src/cy_math.pyx", "./src/bits.pyx",
"./src/printString.pyx", "./src/quad.pyx"]
#for fileName in sources:
# setup(ext_modules=cythonize(str(fileName)))
map(lambda fileName : setup(ext_modules=cythonize(str(fileName))), sources)
!python ./src/setup.py build_ext --inplace
from src import quad
quad.exp(2,3)
def quadPy(a,x):
return a*(x**2)
%timeit quadPy(2.0, 5.0)
"""
Explanation: Summary & Conclusions
This talk has presented the basics of getting started with Cython and IPython/Jupyter Notebook. There were examples presented on how to compile Cython programs with a setup.py and distutils, as well as a nice application, runcython. Basic programs and some programs with arrays were demonstrated.
Cython is flexible, and its flexibility is matched by its performance.
It's relatively easy to use, but it does have some details to watch out for when working with arrays, references, etc.
Overall
Cython enables Python code to be transformed easily
The transformed Python code is significantly faster
Wide support and documentation exists for Cython
Language has evolved and grown over the past few years with widespread support
Usage in IPython Notebook / Jupyter is now well supported
Can be used on a wide variety of programs, ranging from math to logic.
Transform your Python with Cython !!
End of explanation
"""
|
brinkar/real-world-machine-learning | Chapter 3 - Modeling and prediction.ipynb | mit | %pylab inline
"""
Explanation: Chapter 3 - Modeling and prediction
End of explanation
"""
import pandas
data = pandas.read_csv("data/titanic.csv")
data[:5]
# We make an 80/20 train/test split of the data
data_train = data[:int(0.8*len(data))]
data_test = data[int(0.8*len(data)):]
"""
Explanation: The Titanic dataset
We use the Pandas library to import the Titanic survival dataset.
End of explanation
"""
# The categorical-to-numerical function from chapter 2
# Changed to automatically add column names
def cat_to_num(data):
categories = unique(data)
features = {}
for cat in categories:
binary = (data == cat)
features["%s=%s" % (data.name, cat)] = binary.astype("int")
return pandas.DataFrame(features)
def prepare_data(data):
"""Takes a dataframe of raw data and returns ML model features
"""
# Initially, we build a model only on the available numerical values
features = data.drop(["PassengerId", "Survived", "Fare", "Name", "Sex", "Ticket", "Cabin", "Embarked"], axis=1)
# Setting missing age values to -1
features["Age"] = data["Age"].fillna(-1)
# Adding the sqrt of the fare feature
features["sqrt_Fare"] = sqrt(data["Fare"])
# Adding gender categorical value
features = features.join( cat_to_num(data['Sex']) )
# Adding Embarked categorical value
features = features.join( cat_to_num(data['Embarked']) )
return features
"""
Explanation: Preparing the data
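The `cat_to_num` helper above turns a categorical column into one binary column per category. A minimal sketch of the same idea on a plain list (illustration only, not the pandas version):

```python
# One binary indicator list per category, like cat_to_num but without pandas.
def one_hot_columns(values):
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

print(one_hot_columns(['male', 'female', 'male']))
# {'female': [0, 1, 0], 'male': [1, 0, 1]}
```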
End of explanation
"""
#cat_to_num(data['Sex'])
features = prepare_data(data_train)
features[:5]
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(features, data_train["Survived"])
# Make predictions
model.predict(prepare_data(data_test))
# The accuracy of the model on the test data
# (this will be introduced in more details in chapter 4)
model.score(prepare_data(data_test), data_test["Survived"])
"""
Explanation: Building a logistic regression classifier with Scikit-Learn
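As a refresher on what the classifier above computes per passenger, the logistic (sigmoid) function maps a linear score to a survival probability. This is a sketch of the definition, not scikit-learn internals:

```python
import math

def sigmoid(z):
    # squashes any real-valued score into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))  # 0.5
```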
End of explanation
"""
from sklearn.svm import SVC
model = SVC()
model.fit(features, data_train["Survived"])
model.score(prepare_data(data_test), data_test["Survived"])
"""
Explanation: Non-linear model with Support Vector Machines
End of explanation
"""
mnist = pandas.read_csv("data/mnist_small.csv")
mnist_train = mnist[:int(0.8*len(mnist))]
mnist_test = mnist[int(0.8*len(mnist)):]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(mnist_train.drop("label", axis=1), mnist_train['label'])
preds = knn.predict_proba(mnist_test.drop("label", axis=1))
pandas.DataFrame(preds[:5], index=["Digit %d"%(i+1) for i in range(5)])
knn.score(mnist_test.drop("label", axis=1), mnist_test['label'])
"""
Explanation: Classification with multiple classes: hand-written digits
We use the popular non-linear multi-class K-nearest neighbor algorithm to predict hand-written digits from the MNIST dataset.
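For intuition, the core of k-NN fits in a few lines: find the k closest training points and take a majority vote. A minimal sketch on plain lists (not the scikit-learn implementation):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # sort training indices by squared distance to x, vote among the k nearest
    dists = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict([[0, 0], [0, 1], [5, 5], [6, 5]], ['a', 'a', 'b', 'b'], [5, 6]))  # b
```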
End of explanation
"""
auto = pandas.read_csv("data/auto-mpg.csv")
# Convert origin to categorical variable
auto = auto.join(cat_to_num(auto['origin']))
auto = auto.drop('origin', axis=1)
# Split in train/test set
auto_train = auto[:int(0.8*len(auto))]
auto_test = auto[int(0.8*len(auto)):]
auto[:5]
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(auto_train.drop('mpg', axis=1), auto_train["mpg"])
pred_mpg = reg.predict(auto_test.drop('mpg',axis=1))
plot(auto_test.mpg, pred_mpg, 'o')
x = linspace(10,40,5)
plot(x, x, '-');
"""
Explanation: Predicting numerical values with a regression model
We use the Linear Regression algorithm to predict miles-per-gallon of various automobiles.
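For the single-feature case, the least-squares fit behind LinearRegression has a simple closed form. A hedged pure-Python sketch (not scikit-learn code):

```python
# Closed-form least squares for y = slope*x + intercept (one feature).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data is y = 2x + 1 exactly
print(slope, intercept)  # 2.0 1.0
```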
End of explanation
"""
|
tritemio/PyBroMo | notebooks/PyBroMo - 2. Generate smFRET data, including mixtures.ipynb | gpl-2.0 | %matplotlib inline
from pathlib import Path
import numpy as np
import tables
import matplotlib.pyplot as plt
import seaborn as sns
import pybromo as pbm
print('Numpy version:', np.__version__)
print('PyTables version:', tables.__version__)
print('PyBroMo version:', pbm.__version__)
"""
Explanation: PyBroMo - 2. Generate smFRET data, including mixtures
<small><i>
This notebook is part of <a href="http://tritemio.github.io/PyBroMo" target="_blank">PyBroMo</a> a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments.
</i></small>
Overview
In this notebook we show how to generate smFRET data files from the diffusion trajectories.
Loading the software
Import all the relevant libraries:
End of explanation
"""
S = pbm.ParticlesSimulation.from_datafile('0168', mode='w')
S.particles.diffusion_coeff_counts
#S = pbm.ParticlesSimulation.from_datafile('0168')
"""
Explanation: Create smFRET data-files
Create a file for a single FRET efficiency
In this section we show how to save a single smFRET data file. In the next section we will perform the same steps in a loop to generate a sequence of smFRET data files.
Here we load a diffusion simulation, opening a file to save
timestamps in write mode. Use 'a' (i.e. append) to keep
previously simulated timestamps for the given diffusion.
End of explanation
"""
params = dict(
em_rates = (200e3,), # Peak emission rates (cps) for each population (D+A)
E_values = (0.75,), # FRET efficiency for each population
num_particles = (20,), # Number of particles in each population
bg_rate_d = 1500, # Poisson background rate (cps) Donor channel
bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel
)
"""
Explanation: Simulate timestamps of smFRET
Example1: single FRET population
Define the simulation parameters with the following syntax:
End of explanation
"""
mix_sim = pbm.TimestapSimulation(S, **params)
mix_sim.summarize()
"""
Explanation: Create the object that will run the simulation and print a summary:
End of explanation
"""
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
"""
Explanation: Run the simulation:
End of explanation
"""
mix_sim.save_photon_hdf5(identity=dict(author='John Doe',
author_affiliation='Planet Mars'))
"""
Explanation: Save simulation to a smFRET Photon-HDF5 file:
End of explanation
"""
params = dict(
em_rates = (200e3, 180e3), # Peak emission rates (cps) for each population (D+A)
E_values = (0.75, 0.35), # FRET efficiency for each population
num_particles = (20, 15), # Number of particles in each population
bg_rate_d = 1500, # Poisson background rate (cps) Donor channel
bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel
)
mix_sim = pbm.TimestapSimulation(S, **params)
mix_sim.summarize()
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
mix_sim.save_photon_hdf5()
"""
Explanation: Example 2: 2 FRET populations
To simulate 2 populations we just define the parameters with
one value per population, except for the Poisson background
rates, which are a single value for each channel.
End of explanation
"""
import fretbursts as fb
filepath = list(Path('./').glob('smFRET_*'))
filepath
d = fb.loader.photon_hdf5(str(filepath[1]))
d
d.A_em
fb.dplot(d, fb.timetrace);
d.calc_bg(fun=fb.bg.exp_fit, tail_min_us='auto', F_bg=1.7)
d.bg_dd, d.bg_ad
d.burst_search(F=7)
d.num_bursts
ds = d.select_bursts(fb.select_bursts.size, th1=20)
ds.num_bursts
fb.dplot(d, fb.timetrace, bursts=True);
fb.dplot(ds, fb.hist_fret, pdf=False)
plt.axvline(0.75);
"""
Explanation: Burst analysis
The generated Photon-HDF5 files can be analyzed by any smFRET burst
analysis program. Here we show an example using the open-source
FRETBursts program:
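For context on the E histogram produced by this analysis: the burst-wise FRET efficiency estimate is just the acceptor photon fraction. A sketch of the definition only, not FRETBursts code:

```python
def fret_efficiency(n_donor, n_acceptor):
    # proximity ratio: acceptor counts over total counts in a burst
    return n_acceptor / (n_donor + n_acceptor)

print(fret_efficiency(25, 75))  # 0.75
```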
End of explanation
"""
fb.bext.burst_data(ds)
"""
Explanation: NOTE: Unless you simulated a diffusion of 30s or more, the previous histogram will be very poor.
End of explanation
"""
|
GoogleCloudPlatform/tensorflow-without-a-phd | tensorflow-mnist-tutorial/keras_01_mnist.ipynb | apache-2.0 | BATCH_SIZE = 128
EPOCHS = 10
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
"""
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/tensorflow-without-a-phd/blob/master/tensorflow-mnist-tutorial/keras_01_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Parameters
End of explanation
"""
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import IPython.display as display
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.ioff()
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=1)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0', figsize=(16,9))
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
batch_train_ds = training_dataset.unbatch().batch(N)
# eager execution: loop through datasets normally
if tf.executing_eagerly():
for validation_digits, validation_labels in validation_dataset:
validation_digits = validation_digits.numpy()
validation_labels = validation_labels.numpy()
break
for training_digits, training_labels in batch_train_ds:
training_digits = training_digits.numpy()
training_labels = training_labels.numpy()
break
else:
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
fig = plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
plt.grid(b=None)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
display.display(fig)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
def plot_learning_rate(lr_func, epochs):
xx = np.arange(epochs+1, dtype=np.float)
y = [lr_func(x) for x in xx]
fig, ax = plt.subplots(figsize=(9, 6))
ax.set_xlabel('epochs')
ax.set_title('Learning rate\ndecays from {:0.3g} to {:0.3g}'.format(y[0], y[-2]))
ax.minorticks_on()
ax.grid(True, which='major', axis='both', linestyle='-', linewidth=1)
ax.grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
ax.step(xx,y, linewidth=3, where='post')
display.display(fig)
class PlotTraining(tf.keras.callbacks.Callback):
def __init__(self, sample_rate=1, zoom=1):
self.sample_rate = sample_rate
self.step = 0
self.zoom = zoom
self.steps_per_epoch = 60000//BATCH_SIZE
def on_train_begin(self, logs={}):
self.batch_history = {}
self.batch_step = []
self.epoch_history = {}
self.epoch_step = []
self.fig, self.axes = plt.subplots(1, 2, figsize=(16, 7))
plt.ioff()
def on_batch_end(self, batch, logs={}):
if (batch % self.sample_rate) == 0:
self.batch_step.append(self.step)
for k,v in logs.items():
# do not log "batch" and "size" metrics that do not change
# do not log training accuracy "acc"
if k=='batch' or k=='size':# or k=='acc':
continue
self.batch_history.setdefault(k, []).append(v)
self.step += 1
def on_epoch_end(self, epoch, logs={}):
plt.close(self.fig)
self.axes[0].cla()
self.axes[1].cla()
self.axes[0].set_ylim(0, 1.2/self.zoom)
self.axes[1].set_ylim(1-1/self.zoom/2, 1+0.1/self.zoom/2)
self.epoch_step.append(self.step)
for k,v in logs.items():
# only log validation metrics
if not k.startswith('val_'):
continue
self.epoch_history.setdefault(k, []).append(v)
display.clear_output(wait=True)
for k,v in self.batch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.batch_step) / self.steps_per_epoch, v, label=k)
for k,v in self.epoch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.epoch_step) / self.steps_per_epoch, v, label=k, linewidth=3)
self.axes[0].legend()
self.axes[1].legend()
self.axes[0].set_xlabel('epochs')
self.axes[1].set_xlabel('epochs')
self.axes[0].minorticks_on()
self.axes[0].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[0].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
self.axes[1].minorticks_on()
self.axes[1].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[1].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
display.display(self.fig)
"""
Explanation: Imports
End of explanation
"""
AUTO = tf.data.experimental.AUTOTUNE
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(AUTO) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
"""
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
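A note on the header_bytes=16 / header_bytes=8 arguments used above: MNIST's IDX files start with a fixed big-endian header (magic number, item count, and for images the two dimensions) before the raw bytes. A quick sketch of the image-file header size:

```python
import struct

# IDX image header: magic 2051, number of images, rows, cols -- four
# big-endian 32-bit ints (label files use only magic 2049 + count = 8 bytes).
header = struct.pack('>iiii', 2051, 60000, 28, 28)
print(len(header))  # 16 -> the bytes skipped via header_bytes=16
```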
End of explanation
"""
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
"""
Explanation: Let's have a look at the data
End of explanation
"""
model = tf.keras.Sequential(
[
tf.keras.layers.Input(shape=(28*28,)),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# utility callback that displays training curves
plot_training = PlotTraining(sample_rate=10, zoom=1)
"""
Explanation: Keras model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
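As a one-line refresher on the softmax activation used in the model above (a sketch, not the Keras implementation):

```python
import math

def softmax(scores):
    # exponentiate, then normalize so the outputs sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(round(sum(probs), 6))  # 1.0
```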
End of explanation
"""
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
print("Steps per epoch: ", steps_per_epoch)
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[plot_training])
"""
Explanation: Train and validate the model
End of explanation
"""
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
"""
Explanation: Visualize predictions
End of explanation
"""
|
BrentDorsey/pipeline | gpu.ml/notebooks/09_Deploy_Optimized_Model.ipynb | apache-2.0 | from tensorflow.python.tools import freeze_graph
optimize_me_parent_path = '/root/models/optimize_me/linear/cpu'
fully_optimized_model_graph_path = '%s/fully_optimized_cpu.pb' % optimize_me_parent_path
fully_optimized_frozen_model_graph_path = '%s/fully_optimized_frozen_cpu.pb' % optimize_me_parent_path
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
freeze_graph.freeze_graph(input_graph=fully_optimized_model_graph_path,
input_saver="",
input_binary=True,
input_checkpoint='/root/models/optimize_me/linear/cpu/model.ckpt',
output_node_names="add",
restore_op_name="save/restore_all",
filename_tensor_name="save/Const:0",
output_graph=fully_optimized_frozen_model_graph_path,
clear_devices=True,
initializer_nodes="")
print(fully_optimized_frozen_model_graph_path)
"""
Explanation: Deploy Fully Optimized Model to TensorFlow Serving
IMPORTANT: You Must STOP All Kernels and Terminal Session
The GPU is wedged at this point. We need to set it free!!
Freeze Fully Optimized Graph
End of explanation
"""
%%bash
ls -l /root/models/optimize_me/linear/cpu/
"""
Explanation: File Size
End of explanation
"""
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb'
output_dot='/root/notebooks/fully_optimized_frozen_cpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/fully_optimized_frozen_cpu.dot \
-o /root/notebooks/fully_optimized_frozen_cpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/fully_optimized_frozen_cpu.png')
"""
Explanation: Graph
End of explanation
"""
%%bash
benchmark_model --graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb \
--input_layer=weights,bias,x_observed \
--input_layer_type=float,float,float \
--input_layer_shape=:: \
--output_layer=add
"""
Explanation: Run Standalone Benchmarks
Note: These benchmarks are running against the standalone models on disk. We will benchmark the models running within TensorFlow Serving soon.
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
"""
Explanation: Save Model for Deployment and Inference
Reset Default Graph
End of explanation
"""
sess = tf.Session()
"""
Explanation: Create New Session
End of explanation
"""
from datetime import datetime
version = int(datetime.now().strftime("%s"))
"""
Explanation: Generate Version Number
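Side note: strftime("%s") is a platform-specific extension (it works with glibc on Linux but is not guaranteed elsewhere); a portable way to get the same epoch-seconds version number would be:

```python
import time

version = int(time.time())  # seconds since the epoch, portable
print(version > 0)  # True
```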
End of explanation
"""
%%bash
inspect_checkpoint --file_name=/root/models/optimize_me/linear/cpu/model.ckpt
saver = tf.train.import_meta_graph('/root/models/optimize_me/linear/cpu/model.ckpt.meta')
saver.restore(sess, '/root/models/optimize_me/linear/cpu/model.ckpt')
optimize_me_parent_path = '/root/models/optimize_me/linear/cpu'
fully_optimized_frozen_model_graph_path = '%s/fully_optimized_frozen_cpu.pb' % optimize_me_parent_path
print(fully_optimized_frozen_model_graph_path)
with tf.gfile.GFile(fully_optimized_frozen_model_graph_path, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name="",
op_dict=None,
producer_op_list=None
)
print("weights = ", sess.run("weights:0"))
print("bias = ", sess.run("bias:0"))
"""
Explanation: Load Optimized, Frozen Graph
End of explanation
"""
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')
inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}
predict_signature = signature_def_utils.predict_signature_def(
inputs = inputs_map,
outputs = outputs_map)
print(predict_signature)
"""
Explanation: Create SignatureDef Asset for TensorFlow Serving
End of explanation
"""
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
fully_optimized_saved_model_path = '/root/models/linear_fully_optimized/cpu/%s' % version
print(fully_optimized_saved_model_path)
builder = saved_model_builder.SavedModelBuilder(fully_optimized_saved_model_path)
builder.add_meta_graph_and_variables(sess,
[tag_constants.SERVING],
signature_def_map={'predict':predict_signature,
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:predict_signature},
clear_devices=True,
)
builder.save(as_text=False)
import os
print(fully_optimized_saved_model_path)
os.listdir(fully_optimized_saved_model_path)
os.listdir('%s/variables' % fully_optimized_saved_model_path)
sess.close()
"""
Explanation: Save Model with Assets
End of explanation
"""
import subprocess
output = subprocess.run(["saved_model_cli", "show", \
"--dir", fully_optimized_saved_model_path, "--all"], \
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
print(output.stdout.decode('utf-8'))
"""
Explanation: Inspect with Saved Model CLI
Note: This takes a minute or two for some reason. Please be patient.
End of explanation
"""
|
ant0nisk/pybrl | docs/Samples/pdf_translation/Notebook.ipynb | gpl-3.0 | # Load our dependencies
import pybrl as brl
filename = "lorem_ipsum.pdf" # of course :P
pdf_password = None
language = 'english'
# Let's translate the PDF file.
translated = brl.translatePDF(filename, password = pdf_password, language = language) # Easy, right?
# Let's explore what this object looks like:
print(len(translated)) # = 2 (One for each page)
print(len(translated[0])) # = 1 group of text in the page.
# There might be more if, for example, a box of text is in a corner.
print(translated[0][0].keys()) # type, text, layout
print(translated[0][0]['type']) # 'text'
print(translated[0][0]['layout']) # The bounding box of this group
print(translated[0][0]['text'][0]) # The first word: ['000001', '111000', '101010', '111010', '100010', '101100']
"""
Explanation: Translating PDF files into Braille using pybrl and LaTeX
In this Notebook, I will show how to use pybrl in order to parse a PDF file, translate it into Braille and then generate a LaTeX file.
I will use texlive to compile the generated LaTeX file into a PDF file.
Installing texlive
Installing texlive and texlive-xetex on Linux distros should be pretty straightforward; just use your package manager.
For example, on Ubuntu just do:
bash
apt-get install texlive texlive-xetex
On MacOS, there is a texlive port for Macports. So, all you need to do is:
bash
port install texlive texlive-xetex
LaTeX is preferred over building the PDF layout programmatically, because layout is exactly what LaTeX is for: you get the content right, and LaTeX makes it beautiful.
PDFs in pybrl
pybrl has already basic PDF parsing and translation capabilities using pdfminer. To be more specific, there is a pdf_utils submodule in the utils directory, which can parse a PDF file and provide some layout information.
Now that we know what tools are going to be used, we can dive into the code:
End of explanation
"""
tex = "" # Template contents and what will be edited.
output = "output.tex" # Output path to the tex file
TEMPLATE_PATH = "template.tex" # Path to the Template tex file
# Load the Template
with open(TEMPLATE_PATH, "r") as f:
tex = f.read()
# Concatenate all the text.
content = ""
for page in translated:
for group in page:
grouptxt = group['text']
# Convert to Unicode characters:
unicode_brl = brl.toUnicodeSymbols(grouptxt, flatten=True)
content += "\n\n" + unicode_brl
# Create the new TeX
output_tex = tex.replace("%%% Content will go here %%%", content)
# Save it
with open(output, "w") as f:
f.write(output_tex)
"""
Explanation: The translatePDF method does the following:
1. Parses the PDF
2. Extracts the Layout information
3. For each page, translate the text.
As of the time of writing, the layout is pretty basic and all the text of each page is concatenated (e.g. different groups of text in the page).
Since we are using LaTeX to create the PDF file, we actually don't really care about the layout. LaTeX will take care of it.
LaTeX generation
I will use the following template to generate my document:
```latex
\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[parfill]{parskip} % Begin paragraphs with an empty line (and not an indent)
\usepackage{fontspec}
\begin{document}
\setmainfont{LouisLouis.ttf}
%%% Content will go here %%%
\end{document}
```
End of explanation
"""
|
colour-science/colour-hdri | colour_hdri/examples/examples_variance_minimization_light_probe_sampling.ipynb | bsd-3-clause | import os
from pprint import pprint
import colour
from colour_hdri import (
EXAMPLES_RESOURCES_DIRECTORY,
light_probe_sampling_variance_minimization_Viriyothai2009,
)
from colour_hdri.sampling.variance_minimization import (
find_regions_variance_minimization_Viriyothai2009,
highlight_regions_variance_minimization,
)
RESOURCES_DIRECTORY = os.path.join(EXAMPLES_RESOURCES_DIRECTORY, "radiance")
colour.plotting.colour_style()
colour.utilities.describe_environment();
"""
Explanation: Colour - HDRI - Examples: Variance Minimization Light Probe Sampling
Through this example, lights will be extracted from radiance images using Viriyothai (2009) variance minimization light probe sampling algorithm.
<div class="alert alert-warning">
The current implementation is not entirely vectorised nor optimised, and is thus slow.
</div>
End of explanation
"""
HDRI_IMAGE1 = colour.read_image(os.path.join(RESOURCES_DIRECTORY, "Dots.exr"))
HDRI_IMAGE2 = colour.read_image(
os.path.join(RESOURCES_DIRECTORY, "Grace_Cathedral.hdr")
)
Y1 = colour.RGB_luminance(
HDRI_IMAGE1,
colour.models.RGB_COLOURSPACE_sRGB.primaries,
colour.models.RGB_COLOURSPACE_sRGB.whitepoint,
)
regions1 = find_regions_variance_minimization_Viriyothai2009(Y1)
Y2 = colour.RGB_luminance(
HDRI_IMAGE2,
colour.models.RGB_COLOURSPACE_sRGB.primaries,
colour.models.RGB_COLOURSPACE_sRGB.whitepoint,
)
regions2 = find_regions_variance_minimization_Viriyothai2009(Y2)
colour.plotting.plot_image(
colour.cctf_encoding(
highlight_regions_variance_minimization(HDRI_IMAGE1, regions1)
),
text_kwargs={"text": "Dots"},
)
colour.plotting.plot_image(
colour.cctf_encoding(
highlight_regions_variance_minimization(HDRI_IMAGE2, regions2)
),
text_kwargs={"test": "Grace Cathedral"},
);
"""
Explanation: Regions
End of explanation
"""
print("Dots - 16 Lights")
pprint(
light_probe_sampling_variance_minimization_Viriyothai2009(HDRI_IMAGE1, 16)
)
print("\n")
print("Grace Cathedral - 32 Lights")
pprint(
light_probe_sampling_variance_minimization_Viriyothai2009(HDRI_IMAGE2, 32)
)
"""
Explanation: Lights
End of explanation
"""
|
AlJohri/DAT-DC-12 | notebooks/10_linear_regression_ml.ipynb | mit | # read the data and set the datetime as the index
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
import pandas as pd
urls = ['../data/KDCA-201601.csv', '../data/KDCA-201602.csv', '../data/KDCA-201603.csv']
frames = [pd.read_csv(url) for url in urls]
weather = pd.concat(frames)
cols = 'WBAN Date Time StationType SkyCondition Visibility WeatherType DryBulbFarenheit DryBulbCelsius WetBulbFarenheit WetBulbCelsius DewPointFarenheit DewPointCelsius RelativeHumidity WindSpeed WindDirection ValueForWindCharacter StationPressure PressureTendency PressureChange SeaLevelPressure RecordType HourlyPrecip Altimeter'
cols = cols.split()
weather = weather[cols]
weather.rename(columns={'DryBulbFarenheit':'temp',
'RelativeHumidity': 'humidity'}, inplace=True)
# weather['humidity'] = pd.to_numeric(weather.humidity, errors='coerce')
weather['datetime'] = pd.to_datetime(weather.Date.astype(str) + weather.Time.apply('{0:0>4}'.format))
weather['datetime_hour'] = weather.datetime.dt.floor(freq='h')
weather['month'] = weather.datetime.dt.month
bikes = pd.read_csv('../data/2016-Q1-Trips-History-Data.csv')
bikes['start'] = pd.to_datetime(bikes['Start date'], infer_datetime_format=True)
bikes['end'] = pd.to_datetime(bikes['End date'], infer_datetime_format=True)
bikes['datetime_hour'] = bikes.start.dt.floor(freq='h')
weather[['datetime', 'temp']].hist(bins=30)
print(weather.columns)
weather.head()
bikes.merge(weather[['temp', 'datetime_hour', 'datetime']], on='datetime_hour')
hours = bikes.groupby('datetime_hour').agg('count')
hours['datetime_hour'] = hours.index
hours.head()
hours['total'] = hours.start
hours = hours[['total', 'datetime_hour']]
hours.total.plot()
hours_weather = hours.merge(weather, on='datetime_hour')
hours_weather.plot(kind='scatter', x='temp', y='total')
sns.lmplot(x='temp', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.8})
weekday = hours_weather[(hours_weather.datetime.dt.hour==11) & (hours_weather.datetime.dt.dayofweek<5) ]
weekday.plot(kind='scatter', x='temp', y='total')
# import seaborn as sns
sns.lmplot(x='temp', y='total', data=weekday, aspect=1.5, scatter_kws={'alpha':0.8})
"""
Explanation: Linear Regression
Agenda
Introducing the bikeshare dataset
Reading in the data
Visualizing the data
Linear regression basics
Form of linear regression
Building a linear regression model
Using the model for prediction
Does the scale of the features matter?
Working with multiple features
Visualizing the data (part 2)
Adding more features to the model
Choosing between models
Feature selection
Evaluation metrics for regression problems
Comparing models with train/test split and RMSE
Comparing testing RMSE with null RMSE
Creating features
Handling categorical features
Feature engineering
Comparing linear regression with other models
Reading in the data
We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition (data dictionary).
End of explanation
"""
# create X and y
feature_cols = ['temp']
X = hours_weather[feature_cols]
y = hours_weather.total
# import, instantiate, fit
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_)
print(linreg.coef_)
"""
Explanation: Questions:
What does each observation represent?
What is the response variable (as defined by Kaggle)?
How many features are there?
Form of linear regression
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
$y$ is the response
$\beta_0$ is the intercept
$\beta_1$ is the coefficient for $x_1$ (the first feature)
$\beta_n$ is the coefficient for $x_n$ (the nth feature)
The $\beta$ values are called the model coefficients:
These values are estimated (or "learned") during the model fitting process using the least squares criterion.
Specifically, we are find the line (mathematically) which minimizes the sum of squared residuals (or "sum of squared errors").
And once we've learned these coefficients, we can use the model to predict the response.
In the diagram above:
The black dots are the observed values of x and y.
The blue line is our least squares line.
The red lines are the residuals, which are the vertical distances between the observed values and the least squares line.
Building a linear regression model
End of explanation
"""
# manually calculate the prediction
linreg.intercept_ + linreg.coef_ * 77
# use the predict method
linreg.predict([[77]])  # scikit-learn expects a 2D array of samples
"""
Explanation: Interpreting the intercept ($\beta_0$):
It is the value of $y$ when $x$=0.
Thus, it is the estimated number of rentals when the temperature is 0 degrees Fahrenheit (the temp column here is in Fahrenheit).
Note: It does not always make sense to interpret the intercept. (Why?)
Interpreting the "temp" coefficient ($\beta_1$):
It is the change in $y$ divided by change in $x$, or the "slope".
Thus, a temperature increase of 1 degree F is associated with a rental increase of 9.17 bikes.
This is not a statement of causation.
$\beta_1$ would be negative if an increase in temperature was associated with a decrease in rentals.
Using the model for prediction
How many bike rentals would we predict if the temperature was 77 degrees F?
End of explanation
"""
# create a new column for Celsius temperature
hours_weather['temp_C'] = (hours_weather.temp - 32) * 5/9
hours_weather.head()
# Seaborn scatter plot with regression line
sns.lmplot(x='temp_C', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
sns.lmplot(x='temp', y='total', data=hours_weather, aspect=1.5, scatter_kws={'alpha':0.2})
# create X and y
feature_cols = ['temp_C']
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
# convert 77 degrees Fahrenheit to Celsius
(77 - 32)* 5/9
# predict rentals for 25 degrees Celsius
linreg.predict([[25], [30]])
"""
Explanation: Does the scale of the features matter?
Explanation: Does the scale of the features matter?
Let's say that temperature was measured in Celsius, rather than Fahrenheit. How would that affect the model?
End of explanation
"""
# remove the temp_C column
# bikes.drop('temp_C', axis=1, inplace=True)
"""
Explanation: Conclusion: The scale of the features is irrelevant for linear regression models. When changing the scale, we simply change our interpretation of the coefficients.
End of explanation
"""
# explore more features
feature_cols = ['temp', 'month', 'humidity']
# multiple scatter plots in Seaborn
# print(hours_weather.humidity != 'M')
hours_weather.humidity = hours_weather.humidity.apply(lambda x: -1 if isinstance(x, str) else x)
# hours_weather.loc[hours_weather.humidity.dtype != int].humidity = 100
sns.pairplot(hours_weather, x_vars=feature_cols, y_vars='total', kind='reg')
# multiple scatter plots in Pandas
fig, axs = plt.subplots(1, len(feature_cols), sharey=True)
for index, feature in enumerate(feature_cols):
hours_weather.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))
"""
Explanation: Visualizing the data (part 2)
End of explanation
"""
# cross-tabulation of month and day of week
pd.crosstab(hours_weather.month, hours_weather.datetime.dt.dayofweek)
# box plot of rentals, grouped by month
hours_weather.boxplot(column='total', by='month')
# line plot of rentals
hours_weather.total.plot()
"""
Explanation: Are you seeing anything that you did not expect?
End of explanation
"""
# correlation matrix (ranges from 1 to -1)
hours_weather.corr()
# visualize correlation matrix in Seaborn using a heatmap
sns.heatmap(hours_weather.corr())
"""
Explanation: What does this tell us?
There are more rentals in the winter than the spring, but only because the system is experiencing overall growth and the winter months happen to come after the spring months.
End of explanation
"""
# create a list of features
feature_cols = ['temp', 'month', 'humidity']
# create X and y
X = hours_weather[feature_cols]
y = hours_weather.total
# instantiate and fit
linreg = LinearRegression()
linreg.fit(X, y)
# print the coefficients
print(linreg.intercept_, linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
"""
Explanation: What relationships do you notice?
Adding more features to the model
End of explanation
"""
# example true and predicted response values
true = [10, 7, 5, 5]
pred = [8, 6, 5, 10]
# calculate these metrics by hand!
from sklearn import metrics
import numpy as np
print('MAE:', metrics.mean_absolute_error(true, pred))
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
"""
Explanation: Interpreting the coefficients:
Holding all other features fixed, a 1 unit increase in temperature is associated with a rental increase of 9.3 bikes.
Holding all other features fixed, a 1 unit increase in month is associated with a rental increase of 30.6 bikes.
Holding all other features fixed, a 1 unit increase in humidity is associated with a rental decrease of .60 bikes.
Does anything look incorrect?
Feature selection
How do we choose which features to include in the model? We're going to use train/test split (and eventually cross-validation).
Why not use p-values or R-squared for feature selection?
Linear models rely upon a lot of assumptions (such as the features being independent), and if those assumptions are violated, p-values and R-squared are less reliable. Train/test split relies on fewer assumptions.
Features that are unrelated to the response can still have significant p-values.
Adding features to your model that are unrelated to the response will always increase the R-squared value, and adjusted R-squared does not sufficiently account for this.
p-values and R-squared are proxies for our goal of generalization, whereas train/test split and cross-validation attempt to directly estimate how well the model will generalize to out-of-sample data.
More generally:
There are different methodologies that can be used for solving any given data science problem, and this course follows a machine learning methodology.
This course focuses on general purpose approaches that can be applied to any model, rather than model-specific approaches.
Evaluation metrics for regression problems
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. We need evaluation metrics designed for comparing continuous values.
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$
Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
End of explanation
"""
# same true values as above
true = [10, 7, 5, 5]
# new set of predicted values
pred = [10, 7, 5, 13]
# MAE is the same as before
print('MAE:', metrics.mean_absolute_error(true, pred))
# MSE and RMSE are larger than before
print('MSE:', metrics.mean_squared_error(true, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(true, pred)))
rmse = np.sqrt(metrics.mean_squared_error(true, pred))
rmse/pred
"""
Explanation: Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
All of these are loss functions, because we want to minimize them.
Here's an additional example, to demonstrate how MSE/RMSE punish larger errors:
End of explanation
"""
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
import numpy as np
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feature_cols, data):
X = data[feature_cols]
y = data.total
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
linreg = LinearRegression()
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
return np.sqrt(metrics.mean_squared_error(y_test, y_pred))
# compare different sets of features
print(train_test_rmse(['temp', 'month', 'humidity'], hours_weather))
print(train_test_rmse(['temp', 'month'], hours_weather))
print(train_test_rmse(['temp', 'humidity'], hours_weather))
print(train_test_rmse(['temp'], hours_weather))
print(train_test_rmse(['temp'], weekday))
"""
Explanation: Comparing models with train/test split and RMSE
End of explanation
"""
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(weekday[['temp']], weekday.total, random_state=123)
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
# compute null RMSE
np.sqrt(metrics.mean_squared_error(y_test, y_null))
"""
Explanation: Comparing testing RMSE with null RMSE
Null RMSE is the RMSE that could be achieved by always predicting the mean response value. It is a benchmark against which you may want to measure your regression model.
End of explanation
"""
# create dummy variables
season_dummies = pd.get_dummies(hours_weather.month, prefix='month')
# print 5 random rows
season_dummies.sample(n=5, random_state=1)
"""
Explanation: Handling categorical features
scikit-learn expects all features to be numeric. So how do we include a categorical feature in our model?
Ordered categories: transform them to sensible numeric values (example: small=1, medium=2, large=3)
Unordered categories: use dummy encoding (0/1)
What are the categorical features in our dataset?
Ordered categories: none here that need transforming (temp and humidity are already numeric)
Unordered categories: month (needs dummy encoding)
For month, we can't simply leave the encoding as 1 = January, 2 = February, and 3 = March, because that would imply an ordered, evenly spaced relationship with the response. Instead, we create multiple dummy variables:
End of explanation
"""
# concatenate the original DataFrame and the dummy DataFrame (axis=0 means rows, axis=1 means columns)
hw_dum = pd.concat([hours_weather, season_dummies], axis=1)
# print 5 random rows
hw_dum.sample(n=5, random_state=1)
# include dummy variables for month in the model
feature_cols = ['temp','month_1', 'month_2', 'month_3', 'humidity']
X = hw_dum[feature_cols]
y = hw_dum.total
linreg = LinearRegression()
linreg.fit(X, y)
list(zip(feature_cols, linreg.coef_))
# compare original month variable with dummy variables
print(train_test_rmse(['temp', 'month', 'humidity'], hw_dum))
print(train_test_rmse(['temp', 'month_2', 'month', 'humidity'], hw_dum))
print(train_test_rmse(['temp', 'month_2', 'month_1', 'humidity'], hw_dum))
"""
Explanation: In general, if you have a categorical feature with k possible values, you create k-1 dummy variables.
If that's confusing, think about why we only need one dummy variable for holiday, not two dummy variables (holiday_yes and holiday_no).
End of explanation
"""
# hour as a numeric feature
hw_dum['hour'] = hw_dum.datetime.dt.hour
# hour as a categorical feature
hour_dummies = pd.get_dummies(hw_dum.hour, prefix='hour')
# hour_dummies.drop(hour_dummies.columns[0], axis=1, inplace=True)
hw_dum = pd.concat([hw_dum, hour_dummies], axis=1)
# daytime as a categorical feature
hw_dum['daytime'] = ((hw_dum.hour > 6) & (hw_dum.hour < 21)).astype(int)
print(train_test_rmse(['hour'], hw_dum),
train_test_rmse(hw_dum.columns[hw_dum.columns.str.startswith('hour_')], hw_dum)
,train_test_rmse(['daytime'], hw_dum))
"""
Explanation: Feature engineering
See if you can create the following features:
hour: as a single numeric feature (0 through 23)
hour: as a categorical feature (use 23 dummy variables)
daytime: as a single categorical feature (daytime=1 from 7am to 8pm, and daytime=0 otherwise)
Then, try using each of the three features (on its own) with train_test_rmse to see which one performs the best!
End of explanation
"""
|
drcgw/bass | Single Wave- Basic.ipynb | gpl-3.0 | from bass import *
"""
Explanation: Welcome to BASS!
Single Wave Analysis Notebook. This is the basic version.
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
Initialize
Run the following code block to initialize the program.
End of explanation
"""
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= r"/Users/abigaildobyns/Desktop"
Settings['Label'] = r'rat34_ECG.txt'
Settings['Output Folder'] = r"/Users/abigaildobyns/Desktop/demo"
#transformation Settings
Settings['Absolute Value'] = True #Must be True if Savitzky-Golay is being used
Settings['Bandpass Highcut'] = r'none' #in Hz
Settings['Bandpass Lowcut'] = r'none' #in Hz
Settings['Bandpass Polynomial'] = r'none' #integer
Settings['Linear Fit'] = False #between 0 and 1 on the whole time series
Settings['Linear Fit-Rolling R'] = 0.75 #between 0 and 1
Settings['Linear Fit-Rolling Window'] = 1000 #window for rolling mean for fit, unit is index not time
Settings['Relative Baseline'] = 0 #default 0, unless data is normalized, then 1.0. Can be any float
Settings['Savitzky-Golay Polynomial'] = 4 #integer
Settings['Savitzky-Golay Window Size'] = 251 #must be odd. units are index not time
#Baseline Settings
Settings['Baseline Type'] = r'static' #'linear', 'rolling', or 'static'
#For Linear
Settings['Baseline Start'] = 0.0 #start time in seconds
Settings['Baseline Stop'] = 1.0 #end time in seconds
#For Rolling
Settings['Rolling Baseline Window'] = r'none' #leave as 'none' if linear or static
#Peaks
Settings['Delta'] = 0.25
Settings['Peak Minimum'] = -1 #amplitude value
Settings['Peak Maximum'] = 1 #amplitude value
#Bursts
Settings['Burst Area'] = False #calculate burst area
Settings['Exclude Edges'] = True #False to keep edges, True to discard them
Settings['Inter-event interval minimum (seconds)'] = 0.0100 #only for bursts, not for peaks
Settings['Maximum Burst Duration (s)'] = 10
Settings['Minimum Burst Duration (s)'] = 0
Settings['Minimum Peak Number'] = 1 #minimum number of peaks/burst, integer
Settings['Threshold']= 0.15 #linear: proportion of baseline.
#static: literal value.
                                #rolling: linear amount greater than the rolling baseline at each time point.
#Outputs
Settings['Generate Graphs'] = False #create and save the fancy graph outputs
#Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
Settings['Graph LCpro events'] = False
Settings['File Type'] = r'Plain' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = False
############################################################################################
"""
Explanation: Begin User Input
For help, check out the wiki: Protocol
Or the video tutorial: Coming Soon!
Load Data File
Use the following block to change your settings. You must use this block.
Here is some helpful information about the loading settings:
Settings['folder']= Full File Path for data input:
Designate the path to your file to load. It can also be the relative path to the folder where this notebook is stored. This does not include the file itself.
Mac OSX Example: '/Users/MYNAME/Documents/bass'
Microsoft Example: r'C:\\Users\MYNAME\Documents\bass'
Settings['Label']= File name:
This is the name of your data file. It should include the file type. This file should NOT have a header and the first column must be time in seconds. Note: This file name will also appear as part of the output files names.
'rat34_ECG.txt'
Settings['Output Folder'] = Full File Path for data output: Designate the location of the folder where you would like the folder containing your results to go. If the folder does not exist, then it will be created. A plots folder, called 'plots' will be created inside this folder for you if it does not already exist.
Mac OSX Example: '/Users/MYNAME/Documents/output'
Microsoft Example: r'C:\\Users\MYNAME\Documents\output'
Loading a file
WARNING: All text input should be raw strings, especially on Windows.
r'string!'
r"string!"
Other Settings
For more information about other settings, go to:
Transforming Data
Baseline Settings
Peak Detection Settings
Burst Detection Settings
End of explanation
"""
#Load in a Settings File
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= r"/Users/abigaildobyns/Desktop"
Settings['Label'] = r'rat34_ECG.txt'
Settings['Output Folder'] = r"/Users/abigaildobyns/Desktop/demo"
#Load a Settings file
Settings['Settings File'] = r'/Users/abigaildobyns/Desktop/rat34_Settings.csv'
##Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
Settings['File Type'] = r'Plain' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = False
Settings = load_settings(Settings)
Data, Settings, Results = analyze(Data, Settings, Results)
"""
Explanation: Load Settings from previous analysis
Must be a settings file previously output by BASS, although the name can be changed. Expected format is '.csv'. Enter the full file path and name.
Mac OSX Example: '/Users/MYNAME/Documents/bass_settings.csv'
Microsoft Example: 'C:\\Users\MYNAME\Documents\bass_settings.csv'
See above instructions for how to load your data file.
End of explanation
"""
display_settings(Settings)
"""
Explanation: Display Event Detection Tables
Display Settings used for analysis
End of explanation
"""
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
"""
Explanation: Display Summary Results for Peaks
End of explanation
"""
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
"""
Explanation: Display Summary Results for Bursts
End of explanation
"""
#Interactive, single time series by Key
key = 'Mean1'
graph_ts(Data, Settings, Results, key)
"""
Explanation: Interactive Graphs
Line Graphs
One panel, detected events
Plot one time series by calling its name
End of explanation
"""
key = 'Mean1'
start =100 #start time in seconds
end= 101#end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
"""
Explanation: Two panel
Create line plots of the raw data as well as the data analysis.
Plots are saved by clicking the save button in the pop-up window with your graph.
key = 'Mean1'
start =100
end= 101
Results Line Plot
End of explanation
"""
#autocorrelation
key = 'Mean1'
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end.
autocorrelation_plot(Data['trans'][key][start:end])
plt.show()
"""
Explanation: Autocorrelation
Display the Autocorrelation plot of your transformed data.
Choose the start and end times in seconds. To capture the whole time series, use end = -1. This may be slow.
key = 'Mean1'
start = 0
end = 10
Autocorrelation Plot
End of explanation
"""
#raster
raster(Data, Results)
"""
Explanation: Raster Plot
Shows the temporal relationship of peaks in each column. Auto-scales. Display only. Intended for more than one column of data.
End of explanation
"""
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1' #'Mean1' default for single wave
frequency_plot(event_type, meas, key, Data, Settings, Results)
"""
Explanation: Frequency Plot
Use this block to plot changes of any measurement over time. Does not support 'all'. Example:
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
Frequency Plot
End of explanation
"""
#Get average plots, display only
event_type = 'peaks'
meas = 'Peaks Amplitude'
average_measurement_plot(event_type, meas,Results)
"""
Explanation: Analyze Events by Measurement
Generates a line plot with error bars for a given event measurement. X axis is the names of each time series. Display Only. Intended for more than one column of data. This is not a box and whiskers plot.
event_type = 'peaks'
meas = 'Peaks Amplitude'
Analyze Events by Measurement
End of explanation
"""
#Batch
event_type = 'Peaks'
meas = 'all'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
"""
Explanation: Poincare Plots
Create a Poincare Plot of your favorite variable. Choose an event type (Peaks or Bursts) and a measurement type. Calling meas = 'All' is supported.
Plots and tables are saved automatically
Example:
event_type = 'Bursts'
meas = 'Burst Duration'
More on Poincare Plots
Batch Poincare
Batch Poincare
End of explanation
"""
#quick
event_type = 'Bursts'
meas = 'Burst Duration'
key = 'Mean1'
poincare_plot(Results[event_type][key][meas])
"""
Explanation: Quick Poincare Plot
Quickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported.
Quick Poincare
End of explanation
"""
Settings['PSD-Event'] = Series(index = ['hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['Hz'] = 4.0 #frequency that the interpolation and PSD are performed with ('Hz' matches the index label declared above).
Settings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
"""
Explanation: Power Spectral Density
The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended:
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Because event data like this is not evenly sampled in time, we must interpolate it onto a uniform grid in order to perform an FFT on it. Does not support 'all'.
Power Spectral Density: Events
Events
Use the code block below to specify your settings for event measurement PSD.
End of explanation
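A minimal sketch of that interpolation step (hypothetical event times; BASS's internal resampling may differ in detail):

```python
import numpy as np

rng = np.random.RandomState(0)
event_times = np.cumsum(rng.uniform(0.5, 1.5, size=50))  # hypothetical peak times in seconds
intervals = np.diff(event_times)                         # unevenly sampled interval series

hz = 4.0                                                 # uniform resampling rate
t_uniform = np.arange(event_times[1], event_times[-1], 1.0 / hz)
intervals_uniform = np.interp(t_uniform, event_times[1:], intervals)
# intervals_uniform is now evenly sampled and ready for an FFT/PSD estimate
```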
"""
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency
Settings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve.
"""
Explanation: Time Series
Use the settings code block to set your frequency bands to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz.
Power Spectral Density: Signal
End of explanation
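The "power in band" values described above amount to integrating the raw PSD over each frequency range. A minimal numpy sketch (the test spectrum and names are made up for illustration, and BASS's own integration may differ in detail):

```python
import numpy as np

freqs = np.linspace(0.0, 400.0, 801)   # hypothetical PSD frequency axis in Hz
psd = np.exp(-freqs / 100.0)           # hypothetical raw power values

def band_power(freqs, psd, f_lo, f_hi):
    """Integrate the raw PSD over [f_lo, f_hi] with the trapezoid rule."""
    m = (freqs >= f_lo) & (freqs <= f_hi)
    f, p = freqs[m], psd[m]
    return float(np.sum((p[1:] + p[:-1]) * np.diff(f)) / 2.0)

ulf = band_power(freqs, psd, 0.0, 25.0)   # e.g. the ULF band range from the settings above
```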
"""
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
"""
Explanation: Use the block below to generate the PSD graph and power in bands results (if selected). scale toggles which units to use for the graph:
raw = s^2/Hz
db = dB/Hz = 10*log10(s^2/Hz)
Graph and table are automatically saved in the PSD-Signal subfolder.
End of explanation
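The raw-to-dB conversion above is a one-liner; the array name here is hypothetical:

```python
import numpy as np

psd_raw = np.array([1.0, 0.5, 0.01])  # hypothetical raw PSD values in s^2/Hz
psd_db = 10 * np.log10(psd_raw)       # the same values on the dB/Hz scale
```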
"""
version = 'original'
key = 'Mean1'
spectogram(version, key, Data, Settings, Results)
"""
Explanation: Spectrogram
Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run.
version = 'original'
key = 'Mean1'
After transformation is run, you can call version = 'trans'. This graph is not automatically saved.
Spectrogram
End of explanation
"""
#Moving Stats
event_type = 'Peaks'
meas = 'all'
window = 30 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
"""
Explanation: Descriptive Statistics
Moving/Sliding Averages, Standard Deviation, and Count
Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table).
Saves out the dataframes of these three results automatically with the window size in the name as a .csv.
If meas == 'All', then the function will loop and produce these tables for all measurements.
event_type = 'Peaks'
meas = 'all'
window = 30
Moving Stats
End of explanation
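Under the hood these are sliding-window computations. A numpy-only sketch (window given in samples rather than seconds here; requires numpy >= 1.20 for sliding_window_view):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def moving_stats(x, window):
    """Moving mean, sample standard deviation, and count over a sliding window."""
    wins = sliding_window_view(np.asarray(x, dtype=float), window)
    return wins.mean(axis=1), wins.std(axis=1, ddof=1), np.full(len(wins), window)

m, s, c = moving_stats(np.arange(10), 3)
```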
"""
#Histogram Entropy
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
"""
Explanation: Entropy
Histogram Entropy
Calculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the event_type chosen will be calculated iteratively.
If all of the samples fall in one bin, regardless of the bin size, we have the most predictable situation and the entropy is 0. For a uniform distribution the entropy reaches its maximum of 1.
event_type = 'Bursts'
meas = 'all'
Histogram Entropy
End of explanation
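The normalization described above (0 when a single bin holds everything, 1 for a uniform histogram) can be sketched as follows; the function and bin count are illustrative, not BASS's actual implementation:

```python
import numpy as np

def hist_entropy(x, bins=10):
    """Shannon entropy of the histogram of x, normalized to [0, 1]."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # treat 0 * log(0) as 0
    h = -np.sum(p * np.log(p))
    return h / np.log(bins)           # 1 for a uniform histogram, 0 for one occupied bin

low = hist_entropy(np.zeros(100))                  # everything in one bin
high = hist_entropy(np.repeat(np.arange(10), 5))   # evenly spread over 10 bins
```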
"""
#Approximate Entropy
event_type = 'Peaks'
meas = 'all'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
"""
Explanation: Approximate entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the approximate entropy of any measurement or raw signal. Returns the entropy of the entire results array (no windowing). I am using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code; alternatively, you can call ap_entropy directly. Supports 'all'.
Interpretation: A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Approximate Entropy in BASS
Approximate Entropy Source
Events
End of explanation
"""
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: Time Series
End of explanation
"""
#Sample Entropy
event_type = 'Bursts'
meas = 'all'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']
"""
Explanation: Sample Entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the sample entropy of any measurement. Returns the entropy of the entire results array (no windowing). I am using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code; alternatively, you can call samp_entropy directly.
Supports 'all'
Sample Entropy in BASS
Sample Entropy Source
Events
End of explanation
"""
#on raw signal
#takes a VERY long time
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: Time Series
End of explanation
"""
help(moving_statistics)
moving_statistics??
"""
Explanation: Helpful Stuff
While not completely up to date with some of the new changes, the Wiki can be useful if you have questions about some of the settings: https://github.com/drcgw/SWAN/wiki/Tutorial
More Help?
Stuck on a particular step or function?
Try typing the function name followed by two ??. This will pop up the docstring and source code.
You can also call help() to have the notebook print the doc string.
Example:
analyze??
help(analyze)
End of explanation
"""
KMFleischer/PyEarthScience | Data_Analysis/convert_ascii_to_netcdf.ipynb | mit
import numpy as np
from cdo import *
import csv
"""
Explanation: Convert CSV data to netCDF
Read the CSV file, generate the grid description file from the CSV lon and lat data,
and write the data to file. Then use CDO to write the data to a netCDF file.
read the ASCII file
generate the gridfile
write the netCDF file
Input data data/1901_1.csv:
    data is on a grid where the rows are longitudes and columns are latitudes
Output file: 1901_1.nc
End of explanation
"""
cdo = Cdo()
"""
Explanation: For the sake of simplicity, create a single Cdo instance that is reused for all CDO calls below.
End of explanation
"""
ascii_data = np.genfromtxt('data/1901_1.csv', dtype=None, delimiter=',')
"""
Explanation: Read Ascii data
End of explanation
"""
nlines = ascii_data.shape[0]
ncols = ascii_data.shape[1]
"""
Explanation: Get number of lines and columns
End of explanation
"""
print('--> data shape = (%d,%d) ' % ascii_data.shape)
print('--> number of lines = %d ' % nlines)
print('--> number of columns = %d ' % ncols)
"""
Explanation: Print some information
End of explanation
"""
data = ascii_data.T
nlat = data.shape[0]
nlon = data.shape[1]
"""
Explanation: The data is in the wrong shape (columns x lines).
The rows and columns must be swapped (lines x columns).
End of explanation
"""
print('\nCorrect shape of data!\n')
print('--> data shape = (%d,%d) ' % data.shape)
print('--> number of lat = %d ' % nlat)
print('--> number of lon = %d ' % nlon)
"""
Explanation: Print the information about the transposed data
End of explanation
"""
varname = 't'
"""
Explanation: Set variable name
End of explanation
"""
missing = 1e20
"""
Explanation: Set missing value
End of explanation
"""
reftime = '1900-01-01,00:00:00,1day'
time = '1901-01-01,12:00:00,1day'
"""
Explanation: Set time and reference time
End of explanation
"""
data = np.nan_to_num(data, nan=missing)
"""
Explanation: Set NaN to missing value
End of explanation
"""
np.savetxt('data/var.txt', data, delimiter=', ', fmt='%1.2e')
"""
Explanation: Write data array to file data/var.txt
End of explanation
"""
f = open('data/gridfile_ascii.txt', 'w')
f.write('gridtype = lonlat'+'\n')
f.write('gridsize = '+str(nlines*ncols)+'\n')
f.write('xsize = ' + str(nlon)+'\n')
f.write('ysize = ' + str(nlat)+'\n')
f.write('xname = lon'+'\n')
f.write('xlongname = longitude'+'\n')
f.write('xunits = degrees_east'+'\n')
f.write('xfirst = -179.75'+'\n')
f.write('xinc = 0.5'+'\n')
f.write('yname = lat'+'\n')
f.write('ylongname = latitude'+'\n')
f.write('yunits = degrees_north'+'\n')
f.write('yfirst = -89.75'+'\n')
f.write('yinc = 0.5'+'\n')
f.close()
"""
Explanation: Write grid description file.
End of explanation
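As a quick sanity check on the grid description written above (a sketch, with the 0.5-degree global grid hard-coded; in the notebook nlon and nlat come from the data): the first coordinate plus the increment times the axis length should land exactly on the last grid-cell centre.

```python
import numpy as np

nlon, nlat = 720, 360                 # assumed 0.5-degree global grid
xfirst, xinc = -179.75, 0.5
yfirst, yinc = -89.75, 0.5
lons = xfirst + xinc * np.arange(nlon)
lats = yfirst + yinc * np.arange(nlat)
```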
"""
cdo.settaxis(
time, input='-setreftime,1900-01-01,00:00:00,1day '+ \
'-setcalendar,standard '+ \
'-setmissval,'+str(missing)+ \
' -setname,'+varname+ \
' -input,data/gridfile_ascii.txt < data/var.txt',
output='tmp.nc',
options = '-f nc')
"""
Explanation: CDO command:
- read the ASCII data
- set variable name
- set the calendar, time and reference time
- set the missing value
- convert to netCDF file format
End of explanation
"""
cdo.setattribute(
varname+'@long_name="monthly mean temperature",'+\
varname+'@units="deg C",'+ \
'source="CRU"',
input='tmp.nc',
output='1901_1.nc')
"""
Explanation: CDO command:
- add variable attributes long_name and units
- add global attribute source
End of explanation
"""
OpenAstronomy/workshop_sunpy_astropy | 03-python2-functions-instructors.ipynb | mit
# Let's get our import statements out of the way first
from __future__ import division, print_function
import numpy as np
import glob
import matplotlib.pyplot as plt
%matplotlib inline
def kelvin_to_celsius(temp):
return temp - 273.15
"""
Explanation: Introduction to Python 2
Creating Functions
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h3><span class="fa fa-certificate"></span> Learning Objectives: </h3>
</div>
- Define a function that takes parameters.
- Return a value from a function.
- Test and debug a function.
- Set default values for function parameters.
- Explain why we should divide programs into small, single-purpose functions.
At this point, we’ve written code to draw some interesting features in our inflammation data, loop over all our data files to quickly draw these plots for each of them, and have Python make decisions based on what it sees in our data. But, our code is getting pretty long and complicated; what if we had thousands of datasets, and didn’t want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We’d like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called ‘functions’ - a shorthand way of re-executing longer pieces of code.
Let’s start by defining a function `kelvin_to_celsius` that converts temperatures from Kelvin to Celsius:
End of explanation
"""
print('absolute zero in Celsius:', kelvin_to_celsius(0.0))
"""
Explanation: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces.
When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it.
Let’s try running our function. Calling our own function is no different from calling any other function:
End of explanation
"""
print(5/9)
"""
Explanation: We’ve successfully called the function that we defined, and we have access to the value that we returned.
Integer division
We are using Python 3 division, which always returns a floating point number:
End of explanation
"""
!python2 -c "print 5/9"
"""
Explanation: Unfortunately, this wasn’t the case in Python 2:
End of explanation
"""
float(5) / 9
5 / float(9)
5.0 / 9
5 / 9.0
"""
Explanation: If you are using Python 2 and want to keep the fractional part of division you need to convert one or the other number to floating point:
End of explanation
"""
4 // 2
3 // 2
"""
Explanation: And if you want an integer result from division in Python 3, use a double-slash:
End of explanation
"""
def celsius_to_fahr(temp):
return temp * (9/5) + 32
print('freezing point of water:', celsius_to_fahr(0))
print('boiling point of water:', celsius_to_fahr(100))
"""
Explanation: Composing Functions
Now that we’ve seen how to turn Kelvin into Celsius, let's try converting Celsius to Fahrenheit:
End of explanation
"""
def kelvin_to_fahr(temp):
temp_c = kelvin_to_celsius(temp)
result = celsius_to_fahr(temp_c)
return result
print('freezing point of water in Fahrenheit:', kelvin_to_fahr(273.15))
print('absolute zero in Fahrenheit:', kelvin_to_fahr(0))
"""
Explanation: What about converting Kelvin to Fahrenheit? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created:
End of explanation
"""
def analyse(filename):
data = np.loadtxt(fname=filename, delimiter=',')
fig = plt.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(data.mean(axis=0))
axes2.set_ylabel('max')
axes2.plot(data.max(axis=0))
axes3.set_ylabel('min')
axes3.plot(data.min(axis=0))
fig.tight_layout()
plt.show(fig)
"""
Explanation: This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-larger chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on.
Tidying up
Now that we know how to wrap bits of code up in functions, we can make our inflammation analysis easier to read and easier to reuse. First, let’s make an analyse function that generates our plots:
End of explanation
"""
def detect_problems(filename):
data = np.loadtxt(fname=filename, delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
"""
Explanation: and another function called detect_problems that checks for those systematics we noticed:
End of explanation
"""
# First redefine our list of filenames from the last lesson
filenames = sorted(glob.glob('data/inflammation*.csv'))
for f in filenames[:3]:
print(f)
analyse(f)
detect_problems(f)
"""
Explanation: Notice that rather than jumbling this code together in one giant for loop, we can now read and reuse both ideas separately. We can reproduce the previous analysis with a much simpler for loop:
End of explanation
"""
def centre(data, desired):
return (data - data.mean()) + desired
"""
Explanation: By giving our functions human-readable names, we can more easily read and understand what is happening in the for loop. Even better, if at some later date we want to use either of those pieces of code again, we can do so in a single line.
Testing and Documenting
Once we start putting things in functions so that we can re-use them, we need to start testing that those functions are working correctly. To see how to do this, let’s write a function to center a dataset around a particular value:
End of explanation
"""
z = np.zeros((2,2))
print(centre(z, 3))
"""
Explanation: We could test this on our actual data, but since we don’t know what the values ought to be, it will be hard to tell if the result was correct. Instead, let’s use NumPy to create a matrix of 0’s and then center that around 3:
End of explanation
"""
data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(centre(data, 0))
"""
Explanation: That looks right, so let’s try center on our real data:
End of explanation
"""
print('original min, mean, and max are:', data.min(), data.mean(), data.max())
centered = centre(data, 0)
print('min, mean, and max of centered data are:', centered.min(), centered.mean(), centered.max())
"""
Explanation: It’s hard to tell from the default output whether the result is correct, but there are a few simple tests that will reassure us:
End of explanation
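A tiny illustration of why the centred mean is close to, but usually not exactly, zero: the mean itself is rounded to the nearest representable floating-point number, so subtracting it leaves a residual on the order of machine precision.

```python
import numpy as np

a = np.array([0.1, 0.2, 0.3])
residual = (a - a.mean()).mean()   # typically a tiny round-off residue rather than exactly 0
```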
"""
print('std dev before and after:', data.std(), centered.std())
"""
Explanation: That seems almost right: the original mean was about 6.1, so the lower bound from zero is how about -6.1. The mean of the centered data isn’t quite zero — we’ll explore why not in the challenges — but it’s pretty close. We can even go further and check that the standard deviation hasn’t changed:
End of explanation
"""
print('difference in standard deviations before and after:', data.std() - centered.std())
"""
Explanation: Those values look the same, but we probably wouldn’t notice if they were different in the sixth decimal place. Let’s do this instead:
End of explanation
"""
# centre(data, desired): return a new array containing the original data centered around the desired value.
def centre(data, desired):
return (data - data.mean()) + desired
"""
Explanation: Again, the difference is very small. It’s still possible that our function is wrong, but it seems unlikely enough that we should probably get back to doing our analysis. We have one more task first, though: we should write some documentation for our function to remind ourselves later what it’s for and how to use it.
The usual way to put documentation in software is to add comments like this:
End of explanation
"""
def centre(data, desired):
'''Return a new array containing the original data centered around the desired value.'''
return (data - data.mean()) + desired
"""
Explanation: There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation:
End of explanation
"""
help(centre)
"""
Explanation: This is better because we can now ask Python’s built-in help system to show us the documentation for the function:
End of explanation
"""
def centre(data, desired):
'''Return a new array containing the original data centered around the desired value.
Example: center([1, 2, 3], 0) => [-1, 0, 1]'''
return (data - data.mean()) + desired
help(centre)
"""
Explanation: A string like this is called a docstring. We don’t need to use triple quotes when we write one, but if we do, we can break the string across multiple lines:
End of explanation
"""
np.loadtxt('data/inflammation-01.csv', delimiter=',')
"""
Explanation: Defining Defaults
We have passed parameters to functions in two ways: directly, as in type(data), and by name, as in numpy.loadtxt(fname='something.csv', delimiter=','). In fact, we can pass the filename to loadtxt without the fname=:
End of explanation
"""
np.loadtxt('data/inflammation-01.csv', ',')
"""
Explanation: but we still need to say delimiter=:
End of explanation
"""
def centre(data, desired=0.0):
'''Return a new array containing the original data centered around the desired value (0 by default).
Example: center([1, 2, 3], 0) => [-1, 0, 1]'''
return (data - data.mean()) + desired
"""
Explanation: To understand what’s going on, and make our own functions easier to use, let’s re-define our center function like this:
End of explanation
"""
test_data = np.zeros((2, 2))
print(centre(test_data, 3))
"""
Explanation: The key change is that the second parameter is now written desired=0.0 instead of just desired. If we call the function with two arguments, it works as it did before:
End of explanation
"""
more_data = 5 + np.zeros((2, 2))
print('data before centering:')
print(more_data)
print('centered data:')
print(centre(more_data))
"""
Explanation: But we can also now call it with just one parameter, in which case desired is automatically assigned the default value of 0.0:
End of explanation
"""
def display(a=1, b=2, c=3):
print('a:', a, 'b:', b, 'c:', c)
print('no parameters:')
display()
print('one parameter:')
display(55)
print('two parameters:')
display(55, 66)
"""
Explanation: This is handy: if we usually want a function to work one way, but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier. The example below shows how Python matches values to parameters:
End of explanation
"""
print('only setting the value of c')
display(c=77)
"""
Explanation: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in:
End of explanation
"""
help(np.loadtxt)
"""
Explanation: With that in hand, let’s look at the help for numpy.loadtxt:
End of explanation
"""
np.loadtxt('data/inflammation-01.csv', ',')
"""
Explanation: There’s a lot of information here, but the most important part is the first couple of lines:
<pre>loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None,
unpack=False, ndmin=0)</pre>
This tells us that loadtxt has one parameter called fname that doesn’t have a default value, and eight others that do. If we call the function like this:
End of explanation
"""
def fence(original, wrapper='#'):
"""Return a new string which consists of the original string with the wrapper character before and after"""
return wrapper + original + wrapper
print(fence('name', '*'))
"""
Explanation: then the filename is assigned to fname (which is what we want), but the delimiter string ',' is assigned to dtype rather than delimiter, because dtype is the second parameter in the list. However ',' isn’t a known dtype so our code produced an error message when we tried to run it. When we call loadtxt we don’t have to provide fname= for the filename because it’s the first item in the list, but if we want the ',' to be assigned to the variable delimiter, we do have to provide delimiter= for the second parameter since delimiter is not the second parameter in the list.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="combining-strings"><span class="fa fa-pencil"></span>Combining strings</h2>
</div>
<div class="panel-body">
<p>“Adding” two strings produces their concatenation: <code>'a' + 'b'</code> is <code>'ab'</code>. Write a function called <code>fence</code> that takes two parameters called <code>original</code> and <code>wrapper</code> and returns a new string that has the wrapper character at the beginning and end of the original. A call to your function should look like this:</p>
<pre><code>print(fence('name', '*'))</code></pre>
<pre class="output"><code>*name*</code></pre>
</div>
</section>
End of explanation
"""
cod3licious/simec | 00_matrix_factorization.ipynb | mit
from __future__ import unicode_literals, division, print_function, absolute_import
import numpy as np
np.random.seed(28)
import matplotlib.pyplot as plt
import tensorflow as tf
tf.set_random_seed(28)
import keras
from simec import SimilarityEncoder
%matplotlib inline
%load_ext autoreload
%autoreload 2
def msqe(A, B):
# compute the mean squared error between two matrices A and B
# obviously, A and B should be the same shape...
return np.mean((A - B) ** 2)
# generate a random matrix
n_input = 500
n_output = 700
A = np.random.rand(n_input, n_output)
# compute its SVD
U, s, Vh = np.linalg.svd(A, full_matrices=True)
U.shape, s.shape, Vh.shape
# make the eigenvalues of A a bit more extreme
S = np.zeros((n_input, n_output))
s[0] = s[1]+1
s[:10] *= 50.
s[10:20] *= 20.
s[20:100] *= 10.
S[:n_input, :n_input] = np.diag(s)
# recompute A and scale it to be in a somewhat reasonable range
A = np.dot(U, np.dot(S, Vh))
A = A/np.max(np.abs(A))
# recompute SVD again
U, s, Vh = np.linalg.svd(A, full_matrices=True)
S = np.zeros((n_input, n_output))
S[:n_input, :n_input] = np.diag(s)
# check that eigenvectors are orthogonal
np.dot(Vh[:100,:], Vh[:100,:].T)
# inspect eigenvalues
plt.plot(s)
"""
Explanation: Matrix Factorization with SimEc
This notebook contains some examples showing that SimEc can be used to compute the SVD or eigendecomposition of a matrix.
Performing an SVD and eigendecomposition of a matrix with neural networks was first described by A. Cichocki (et al.) in 1992 in this paper (SVD) and this paper (eigendecomposition).
End of explanation
"""
# mean squared error of approximation decreases with more embedding dim
for e_dim in [2, 10, 25, 50, 75, 100, 250, 400, 500]:
print("mse with %3i e_dim: %.8f" % (e_dim, msqe(A, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim], Vh[:e_dim,:])))))
# factorize the matrix with a simec
X = np.eye(n_input)
mses = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = {400: 0.004, 500: 0.0034, 750: 0.0032, 1000: 0.003}
for e_dim in e_dims:
model = SimilarityEncoder(n_input, e_dim, n_output, orth_reg=0.001 if e_dim > 500 else 0., opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X, A, epochs=50)
mse = msqe(A, model.predict(X))
mses.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %.8f" % (e_dim, mses[i]))
# factorize the matrix with a simec - transpose works just as well
# this time we use the embedding we learned before as the weights of the last layer
# to get the mapping function for the other side of the equation (which is of course kind of useless
# here since we don't actually map feature vectors but only identiy vectors)
X1 = np.eye(n_input)
X2 = np.eye(n_output)
mses1 = []
mses2 = []
mses3 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
for e_dim in e_dims:
model = SimilarityEncoder(n_input, e_dim, n_output, opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X1, A, epochs=50)
mse = msqe(A, model.predict(X1))
mses1.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
Y1 = model.transform(X1)
model = SimilarityEncoder(n_output, e_dim, n_input, W_ll=Y1.T, wll_frozen=True, opt=keras.optimizers.Adamax(lr=0.005 if e_dim <= 250 else l_rates[e_dim]))
model.fit(X2, A.T, epochs=50)
mse = msqe(A.T, model.predict(X2))
mses2.append(mse)
print("mse with %4i e_dim: %.8f" % (e_dim, mse))
Y2 = model.transform(X2)
# the dot product of both embeddings should also approximate A
mse = msqe(A, np.dot(Y1, Y2.T))
mses3.append(mse)
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %.8f / %.8f / %.8f" % (e_dim, mses1[i], mses2[i], mses3[i]))
"""
Explanation: SVD of a random matrix
End of explanation
"""
X = np.eye(n_input)
missing_targets = [0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
e_dims = [2, 10, 50, 100, 250, 500]
for e_dim in e_dims:
np.random.seed(15)
mses = []
mses_svd = []
mse_svd = msqe(A, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim], Vh[:e_dim,:])))
for m in missing_targets:
print(m)
A_noisy = A.copy()
A_noisy[np.random.rand(*A_noisy.shape)<=m] = -100
model = SimilarityEncoder(n_input, e_dim, n_output, mask_value=-100, l2_reg_emb=0.00001,
l2_reg_out=0. if m < 0.7 else 0.00001, opt=keras.optimizers.Adamax(lr=0.025 if e_dim < 50 else 0.01))
model.fit(X, A_noisy, epochs=60)
mse = msqe(A, model.predict(X))
mses.append(mse)
A_noisy[A_noisy == -100] = np.mean(A)
U_n, s_n, Vh_n = np.linalg.svd(A_noisy, full_matrices=True)
S_n = np.zeros((n_input, n_output))
S_n[:n_input, :n_input] = np.diag(s_n)
mses_svd.append(msqe(A, np.dot(U_n[:,:e_dim], np.dot(S_n[:e_dim,:e_dim], Vh_n[:e_dim,:]))))
print(mses)
plt.figure();
plt.plot([0, missing_targets[-1]], [mse_svd, mse_svd], '--', linewidth=0.5, label='SVD noise free');
plt.plot(missing_targets, mses_svd, '-o', markersize=3, label='SVD');
plt.plot(missing_targets, mses, '-o', markersize=3, label='SimEc SVD');
plt.legend(loc=0);
plt.title('Matrix factorization of A (%i embedding dim)' % e_dim);
plt.xticks(missing_targets, missing_targets);
plt.xlabel('Fraction of Missing Entries');
plt.ylabel('Mean Squared Error');
"""
Explanation: Dealing with missing values
A regular SVD cannot be computed for a matrix with missing values, so these entries first need to be filled with some value, e.g. the matrix average. SimEc, on the other hand, can compute the backpropagation error considering only the available entries of the matrix, which means it can easily handle about 50% missing values without major performance decreases and consistently outperforms the regular SVD (results would probably be even better with more careful hyperparameter tuning, e.g. using some regularization).
End of explanation
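The mean-filling baseline used in the comparison can be sketched in a few lines (toy matrix sizes; the notebook itself uses different shapes and a fill value of the noisy matrix mean):

```python
import numpy as np

rng = np.random.RandomState(1)
A = rng.rand(50, 40)
A_noisy = A.copy()
A_noisy[rng.rand(*A.shape) < 0.3] = np.nan      # knock out roughly 30% of the entries

A_filled = np.where(np.isnan(A_noisy), np.nanmean(A_noisy), A_noisy)
U, s, Vh = np.linalg.svd(A_filled, full_matrices=False)
k = 10
A_hat = (U[:, :k] * s[:k]) @ Vh[:k]             # rank-k reconstruction of the filled matrix
mse = np.mean((A - A_hat) ** 2)                 # error against the original, complete matrix
```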
"""
# get 2 square symmetric matrices as AA^T and A^TA
S1 = np.dot(A, A.T)
S2 = np.dot(A.T, A)
# make sure their range of values is still sort of reasonable (for >100 we should probably rescale)
np.max(S1), np.min(S1), np.max(S2), np.min(S2)
# the corresponding SVD eigenvalues and -vectors should work here as well
for e_dim in [2, 10, 25, 50, 75, 100, 250, 400, 500]:
print("mse with %3i e_dim: %11.8f" % (e_dim, msqe(S1, np.dot(U[:,:e_dim], np.dot(S[:e_dim,:e_dim]**2, U.T[:e_dim,:])))))
S_out = np.zeros((n_output, n_output))
S_out[:n_input, :n_input] = S[:n_input, :n_input]
for e_dim in [2, 10, 25, 50, 75, 100, 250, 500, 700]:
print("mse with %3i e_dim: %11.8f" % (e_dim, msqe(S2, np.dot(Vh.T[:,:e_dim], np.dot(S_out[:e_dim,:e_dim]**2, Vh[:e_dim,:])))))
# factorize the similarity matrix S1 with a simec
X = np.eye(n_input)
mses1 = []
mses2 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = [0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01]
for i, e_dim in enumerate(e_dims):
model = SimilarityEncoder(n_input, e_dim, S1.shape[1], s_ll_reg=1., S_ll=S1, opt=keras.optimizers.Adamax(lr=l_rates[i]))
model.fit(X, S1, epochs=100)
mse = msqe(S1, model.predict(X))
mses1.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
# scalar product of embedding should also approximate S1
Y = model.transform(X)
mse = msqe(S1, np.dot(Y, Y.T))
mses2.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %11.8f (%11.8f)" % (e_dim, mses1[i], mses2[i]))
# factorize the similarity matrix S2 with a simec
X = np.eye(n_output)
mses1 = []
mses2 = []
e_dims = [2, 10, 25, 50, 75, 100, 250, 400, 500, 750, 1000]
l_rates = [0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01]
for i, e_dim in enumerate(e_dims):
model = SimilarityEncoder(n_output, e_dim, S2.shape[1], s_ll_reg=1., S_ll=S2, opt=keras.optimizers.Adamax(lr=l_rates[i]))
model.fit(X, S2, epochs=60)
mse = msqe(S2, model.predict(X))
mses1.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
# scalar product of embedding should also approximate S2
Y = model.transform(X)
mse = msqe(S2, np.dot(Y, Y.T))
mses2.append(mse)
print("mse with %4i e_dim: %11.8f" % (e_dim, mse))
for i, e_dim in enumerate(e_dims):
print("mse with %4i e_dim: %11.8f (%11.8f)" % (e_dim, mses1[i], mses2[i]))
"""
Explanation: Eigendecomposition of a square symmetric matrix
End of explanation
"""
|
piklprado/ode_examples | Qualitative analysis and Bifurcation diagram Tutorial.ipynb | mit | %matplotlib inline
from numpy import *
from scipy.integrate import odeint
from matplotlib.pyplot import *
ion()
def RM(y, t, r, K, a, h, e, d):
return array([ y[0] * ( r*(1-y[0]/K) - a*y[1]/(1+a*h*y[0]) ),
y[1] * (e*a*y[0]/(1+a*h*y[0]) - d) ])
t = arange(0, 1000, .1)
y0 = [1, 1.]
pars = (1., 10., 1., 0.1, 0.1, 0.1)
y = odeint(RM, y0, t, pars)
plot(t, y)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
"""
Explanation: Qualitative analysis and Bifurcation diagram Tutorial
This tutorial assumes you have read the tutorial on numerical integration.
Exploring the parameter space: bifurcation diagrams
Bifurcation diagrams represent the (long-term) solutions of a model as a function of some key variable. The idea is that, as this parameter changes, the solutions change in a "well-behaved" way, and that helps us understand better the general behavior of the model.
In this tutorial, we are going to study a simple predator-prey model (the Rosenzweig-MacArthur), and see how the amount of resources for prey ($K$) changes the dynamics.
The Rosenzweig-MacArthur consumer-resource model
This model is expressed as:
$$ \begin{aligned}
\frac{dR}{dt} &= rR \left( 1 - \frac{R}{K} \right) - \frac{a R C}{1+ahR} \\
\frac{dC}{dt} &= \frac{e a R C}{1+ahR} - d C
\end{aligned} $$
Rosenzweig–MacArthur model solutions
We use the same method as before to integrate this model numerically:
End of explanation
"""
# plot the solution in the phase space
plot(y[:,0], y[:,1])
# defines a grid of points
R, C = meshgrid(arange(0.95, 1.25, .05), arange(0.95, 1.04, 0.01))
# calculates the value of the derivative at the point in the grid
dy = RM(array([R, C]), 0, *pars)
# plots arrows on the points of the grid, with the direction
# and length determined by the derivative dy
# This is a picture of the flow of the solution in the phase space
quiver(R, C, dy[0,:], dy[1,:], scale_units='xy', angles='xy')
xlabel('Resource')
ylabel('Consumer')
"""
Explanation: For the parameters chosen above, the long-term (asymptotic) solution is a fixed point. Let's see this in the phase space, that is, the space of Consumers ($C$) vs. Resources ($R$). We note that the arrows are "circulating", but always point inwards, and so the trajectory moves toward the middle, to the fixed point.
End of explanation
"""
# now K = 15
t = arange(0, 1000, .1)
pars = (1., 15., 1., 0.1, 0.1, 0.1)
y_osc = odeint(RM, y0, t, pars)
plot(t, y_osc)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
"""
Explanation: Messing a little with the parameters...
Increasing the carrying capacity $K$ from $10$ to $15$, we now see oscillations...
End of explanation
"""
plot(y_osc[:,0], y_osc[:,1])
R, C = meshgrid(arange(0, 6., .4), arange(0, 2.1, 0.2))
dy = RM(array([R, C]), 0, *pars)
quiver(R, C, dy[0,:], dy[1,:], scale_units='xy', angles='xy')
xlabel('R')
ylabel('C')
"""
Explanation: And, looking again at the phase-space plot, we now see that the flow (the arrows) inside spirals outwards, towards a limit cycle, while the arrows outside point inwards. The limit cycle corresponds to the periodic solution we just saw.
End of explanation
"""
plot(10., y[-500:,0].min(), 'og')
plot(10., y[-500:,0].max(), 'og')
plot(10., y[-500:,1].min(), 'ob')
plot(10., y[-500:,1].max(), 'ob')
plot(15., y_osc[-500:,0].min(), 'og')
plot(15., y_osc[-500:,0].max(), 'og')
plot(15., y_osc[-500:,1].min(), 'ob')
plot(15., y_osc[-500:,1].max(), 'ob')
xlim((0, 20))
yscale('log')
xlabel('K')
ylabel('min / max population')
"""
Explanation: The bifurcation diagram
We have seen the solutions for two values of $K$, $10$ and $15$, so we want to plot those as a function of $K$. In the second case, there are oscillations, so instead of taking all of the solution, we just pick the minimum and maximum of the solution (after a long time). When the solution is a fixed point, the minimum and maximum should coincide.
End of explanation
"""
## this block calculates solutions for many K's, it should take some time
# empty lists to append the values later
ymin = []
ymax = []
KK = arange(.5, 25, .5)
t = arange(0, 6000, 1.)
# loop over the values of K (KK)
for K in KK:
# redefine the parameters using the new K
pars = (1., K, 1., 0.1, 0.1, 0.1)
# integrate again the equation, with new parameters
y = odeint(RM, y0, t, pars)
# calculate the minimum and maximum of the populations, but
# only for the last 1000 steps (the long-term solution),
# appending the result to the list
# question: is 1000 enough? When wouldn't it be?
ymin.append(y[-1000:,:].min(axis=0))
ymax.append(y[-1000:,:].max(axis=0))
# convert the lists into arrays
ymin = array(ymin)
ymax = array(ymax)
# and now, we plot the bifurcation diagram
plot(KK, ymin[:,0], 'g', label='resource')
plot(KK, ymax[:,0], 'g')
plot(KK, ymin[:,1], 'b', label='consumer')
plot(KK, ymax[:,1], 'b')
xlabel('$K$')
ylabel('min/max populations')
legend(loc='best')
# use a log scale in the y-axis
yscale('log')
"""
Explanation: This is a very poor bifurcation diagram: it has only two points in $K$! Let's try with many values of $K$.
What happens when we change the carrying capacity $K$ from very small values up to very large values? For very small values, the resource is not going to sustain the consumer population, but for larger values of $K$, both species should benefit... right?
End of explanation
"""
def RM_season(y, t, r, alpha, T, K, a, h, e, d):
# in this function, `t` appears explicitly
return array([ y[0] * ( r * (1+alpha*sin(2*pi*t/T)) *
(1-y[0]/K) - a*y[1]/(1+a*h*y[0]) ),
y[1] * (e*a*y[0]/(1+a*h*y[0]) - d) ])
t = arange(0, 2000, 1.)
y0 = [1., 1.]
pars = (1., 0.1, 80., 10., 1., 0.1, 0.1, 0.1)
y = odeint(RM_season, y0, t, pars)
plot(t, y)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
"""
Explanation: Well, the first prediction was OK (notice that the plot above uses a log scale), but for high $K$, the minima of the oscillation go to very low values, so that the populations have a high risk of extinction. This phenomenon is the so-called paradox of enrichment.
Consumer-resource dynamics in a seasonal environment
A special type of bifurcation diagram can be used when we have parameters that oscillate with time, and we want to see how this interacts with the system. Let's consider the Rosenzweig-MacArthur equations again, but now we make $r$, the growth rate of the prey, oscillate sinusoidally in time:
$$ \begin{aligned}
\frac{dR}{dt} &= r(t) R \left( 1 - \frac{R}{K} \right) - \frac{a R C}{1+ahR} \\
\frac{dC}{dt} &= \frac{e a R C}{1+ahR} - d C \\
r(t) &= r_0 (1+\alpha \sin(2\pi t/T))
\end{aligned} $$
We integrate this in the usual manner:
End of explanation
"""
ymin = []
ymax = []
t = arange(0, 6000, 1.) # times
TT = arange(1, 80, 2) # periods
for T in TT:
pars = (1., 0.1, T, 10., 1., 0.1, 0.1, 0.1)
y = odeint(RM_season, y0, t, pars)
ymin.append(y[-1000:,:].min(axis=0))
ymax.append(y[-1000:,:].max(axis=0))
ymin = array(ymin)
ymax = array(ymax)
plot(TT, ymin[:,0], 'g', label='resource')
plot(TT, ymax[:,0], 'g')
plot(TT, ymin[:,1], 'b', label='consumer')
plot(TT, ymax[:,1], 'b')
xlabel('$T$')
ylabel('min/max populations')
legend(loc='best')
yscale('log')
"""
Explanation: Notice that, even with small $K$, the solutions oscillate due to the oscillation of $r(t)$.
Now we use a tool that is an all-time favorite of physicists: the resonance diagram. It works exactly like a bifurcation diagram, but the parameter that is changed is the period (or frequency) of the external oscillation.
End of explanation
"""
|
kongjy/hyperAFM | Tutorials/Image Registration Tutorial.ipynb | mit | #for igor files:
!curl -o util.py https://raw.githubusercontent.com/kongjy/hyperAFM/master/hyperAFM/util.py
#for image alignment:
!curl -o imagealignment.py https://raw.githubusercontent.com/kongjy/hyperAFM/master/hyperAFM/imagealignment.py
#the above will download the files at the specified URL and save them as the filenames specified after '-o'
#curl stands for "client URL" (often read aloud as "see URL"). To learn more about curl see this page:
#https://tecadmin.net/5-curl-commands-to-download-files/#
"""
Explanation: A. Software required: Python with Conda
Reference: https://uwdirect.github.io/software.html
1. Get Python with Conda
Conda is a system for installing and managing packages and their dependencies. You can get Anaconda here: https://docs.anaconda.com/anaconda/install/ and follow the installation instructions.
Go to the Anaconda prompt (just hit the windows button and search for Anaconda Prompt) and update conda's packages for your system by typing "conda update conda" into the terminal. Update suggested packages.
Install Jupyter notebook and its requirements by typing "conda install jupyter" in the same terminal. IPython notebooks is nice for analyzing data and sharing analysis steps since there is access to Markdown and LaTex. If you've lots of code you want to edit, Spyder will be more ideal.
Type "jupyter notebook" in the terminal. A "Home" page in your browser should open. You can close both by closing the tabs. Shut down the kernel by holding "Ctrl +c" within the Anaconda Prompt.
2. Get additional packages required: igor, dipy
Anaconda comes with many packages, but we need a couple more: igor so that we can read Igor Pro files from Python and dipy which is what we will use to register images.
1. In the same Anaconda Prompt, type "pip install igor" so that you can read Igor Pro files from python.
2. Then install Dipy by typing "conda install dipy -c conda-forge"
3. Make sure you have igor and dipy. Type "conda list" to see all the packages you have installed.
B. Navigate to data folder and open a Jupyter Notebook
Typing "dir" in the Anaconda Prompt lists all files in the current directory. "cd" prints the current working directory. To change directories, type "cd" followed by the name of the folder. "cd .." navigates one directory up. For a list of possible commands, type "help."
Make a folder and give it a name by typing mkdir followed by the name like "mkdir ImageAlignment" in the Anaconda Prompt.
Change to the new folder by typing "cd ImageAlignment"
Now open a Jupyter Notebook by typing "jupyter notebook" and create a new notebook by clicking "New" and selecting Python 2 (or 3 if you installed Python 3) You can change the name of the Notebook by double clicking "Untitled."
C. Download, import, and load scripts and data.
The files you'll need to import are util and imagealignment. util contains functions for reading in files and imagealignment contains functions for flattening and aligning images. To do this type:
1. Get code for loading in igor files and doing image alignment.
End of explanation
"""
#SKPM file:
!curl -o SKPM.ibw https://raw.githubusercontent.com/kongjy/hyperAFM/master/Data/PolymerBlends/Image%20Alignment%20Tutorial/Film15SKPM_0000.ibw
#cAFM file:
!curl -o cAFM.ibw https://raw.githubusercontent.com/kongjy/hyperAFM/master/Data/PolymerBlends/Image%20Alignment%20Tutorial/Film15cAFM_1V_0001.ibw
"""
Explanation: 2. Get data (cAFM and SKPM images of a P3HT/PMMA blend) for this tutorial
End of explanation
"""
#packages to load in data and for image alignment
from util import * #* means to import everything.
from imagealignment import *
#to plot
import matplotlib.pyplot as plt
#display graphs/plots in notebook
%matplotlib inline
#import data with load_ibw function in util
SKPMfile=load_ibw('SKPM.ibw')
cAFMfile=load_ibw('cAFM.ibw')
"""
Explanation: 3. Import relevant packages and data into the notebook.
Curl only downloads the files. To make use of the functions in those files, we need to import them into a notebook.
End of explanation
"""
fig=plt.imshow(SKPMfile[:,:,0])
plt.colorbar()
"""
Explanation: The data is stored in n-dimensional arrays where n = # data channels. The first layer (i.e, layer 0) is topography, the second, third, etc are the same as when the files are opened in Igor.
For the cAFM file, the layers are as follows:
0: topography
1: deflection
2: z sensor
3: current
For the SKPM file, the layers are as follows:
0: topography
1: amplitude
2: phase
3: potential retrace
Basic Image Visualization
We can use SKPMfile[:,:,0] to access the topography for the SKPM file. The first colon means that we will display all rows, the second means that we will display all columns.
End of explanation
"""
SKPMtopo_flattend=flatten(SKPMfile[:,:,0])
plt.imshow(SKPMtopo_flattend)
plt.colorbar()
"""
Explanation: Note that the image is unflattened. We can use the flatten function in the imagealignment file (which you can look at via spyder, textedit, notepad, or whatever you fancy.)
End of explanation
"""
SKPM_bottomquarter=flatten(SKPMfile[128:,:,3])
plt.imshow(SKPM_bottomquarter)
"""
Explanation: Here's how we can display the bottom half of the 256x256 SKPM image
End of explanation
"""
mutualinformation=setup_mutualinformation(nbins=32, sampling_prop=None)
"""
Explanation: Image Registration with Affine Transformations
This tutorial follows the ideas in the example on the dipy website fairly closely. Take a look at the work by the pros here: http://nipy.org/dipy/examples_built/affine_registration_3d.html. Note that the functions I wrote are interfaces to those in dipy and are meant to make them a little more accessible. So, the functions they call explicitly will be different from those in this tutorial.
For more information about mutual information see this: https://link.springer.com/article/10.1023/A:1007958904918
1. Set up the mutual information metric with setup_mutualinformation, which will be used to evaluate how well we've aligned the images.
End of explanation
"""
affreg=setup_affine(metric=mutualinformation, level_iters=None , sigmas=None, \
factors=None, method='L-BFGS-B')
"""
Explanation: See doc string for details on the parameters. Docstrings are embedded between three quotes in the raw
imagealignment file. From here on out, please defer to the docstrings for each function to learn more about the parameters (i.e., arguments) and their default settings.
2. Set up the affine transformation with setup_affine, which will be used to define the Gaussian Pyramid, smoothing, factors, and method used to optimize the transformation. (See docstring for info about the arguments in the function).
End of explanation
"""
cAFMtopo=flatten(cAFMfile[:,:,0])
SKPMtopo=flatten(SKPMfile[:,:,0])
translationtrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=TranslationTransform2D(), params0=None, \
starting_affine=None)
"""
Explanation: 3. Now that the affine registration is set up, we can start registering images with respect to translation in the x-, y-plane, rotation, scaling, etc. with find_affine. Like the example on the dipy page, we'll first optimize a transformation with the fewest degrees of freedom (like RotationTransform2D() or TranslationTransform2D()) and then refine it.
End of explanation
"""
SKPMtopo_translated = apply_affine(moving=SKPMtopo, transformation=translationtrans)
plt.imshow(SKPMtopo_translated)
"""
Explanation: You can apply the optimized translation transformation to the moving image, SKPMtopo, with apply_affine.
End of explanation
"""
rigidtrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=RigidTransform2D(), params0=None, \
starting_affine=translationtrans)
#a rigid transform is one that includes rotations and translations
"""
Explanation: We can optimize the transformation with respect to translation and rotation by supplying the previously optimized translation transformation.
End of explanation
"""
SKPMtopo_rigid=apply_affine(moving=SKPMtopo, transformation=rigidtrans)
plt.imshow(SKPMtopo_rigid)
"""
Explanation: ...and apply it to the original SKPM topo.
End of explanation
"""
affinetrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=AffineTransform2D(), params0=None, \
starting_affine=rigidtrans)
SKPMtopo_affine=apply_affine(moving=SKPMtopo, transformation=affinetrans)
plt.imshow(SKPMtopo_affine)
"""
Explanation: Do the same with the full affine transformation.
End of explanation
"""
SKPM_transformed=apply_affine(moving=flatten(SKPMfile[:,:,3]), transformation=affinetrans)
"""
Explanation: To register the cAFM with the SKPM image, you can apply it to the SKPM layer:
End of explanation
"""
|
ALEXKIRNAS/DataScience | Python_for_data_analysis/Chapter_11/Chapter_11.ipynb | mit | from pandas import Series, DataFrame
import pandas as pd
from numpy.random import randn
import numpy as np
pd.options.display.max_rows = 12
np.set_printoptions(precision=4, suppress=True)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(12, 6))
%matplotlib inline
"""
Explanation: Financial and Economic Data Applications
End of explanation
"""
close_px = pd.read_csv('stock_px.csv', parse_dates=True, index_col=0)
volume = pd.read_csv('volume.csv', parse_dates=True, index_col=0)
prices = close_px.ix['2011-09-05':'2011-09-14', ['AAPL', 'JNJ', 'SPX', 'XOM']]
volume = volume.ix['2011-09-05':'2011-09-12', ['AAPL', 'JNJ', 'XOM']]
prices
volume
prices * volume
vwap = (prices * volume).sum() / volume.sum()
vwap
vwap.dropna()
prices.align(volume, join='inner')
s1 = Series(range(3), index=['a', 'b', 'c'])
s2 = Series(range(4), index=['d', 'b', 'c', 'e'])
s3 = Series(range(3), index=['f', 'a', 'c'])
DataFrame({'one': s1, 'two': s2, 'three': s3})
DataFrame({'one': s1, 'two': s2, 'three': s3}, index=list('face'))
"""
Explanation: Data munging topics
Time series and cross-section alignment
End of explanation
"""
|
fazzolini/fast_ai | deeplearning1/nbs/lesson4.ipynb | apache-2.0 | ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
"""
Explanation: Set up data
We're working with the movielens data, which contains one rating per row, like this:
End of explanation
"""
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict  # note: stores the bound method, so it is called as movie_names() below
users = ratings.userId.unique()
movies = ratings.movieId.unique()
# userId and movieId become dictionary elements with values ranging from 0 to max len
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
"""
Explanation: Just for display purposes, let's read in the movie names too.
End of explanation
"""
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
"""
Explanation: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
End of explanation
"""
n_factors = 50
np.random.seed(42)  # call the function; assigning (= 42) would overwrite it without seeding
"""
Explanation: This is the number of latent factors in each embedding.
End of explanation
"""
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
"""
Explanation: Randomly split into training and validation.
End of explanation
"""
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
"""
Explanation: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
End of explanation
"""
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(input_dim=n_users, output_dim=n_factors, input_length=1, embeddings_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(input_dim=n_movies, output_dim=n_factors, input_length=1, embeddings_regularizer=l2(1e-4))(movie_in)
x = dot([u, m], axes=2)
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=3,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=6,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: Dot product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
End of explanation
"""
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(input_dim=n_in, output_dim=n_out, input_length=1, embeddings_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(input_dim=n_in, output_dim=1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = dot([u, m], axes=2)
x = Flatten()(x)
x = add([x, ub])
x = add([x, mb])
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=6,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=10,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=5,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
End of explanation
"""
model.save_weights(model_path+'bias.h5')
model.load_weights(model_path+'bias.h5')
"""
Explanation: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!
End of explanation
"""
model.predict([np.array([3]), np.array([6])])
"""
Explanation: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
End of explanation
"""
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
"""
Explanation: Analyze results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
End of explanation
"""
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names()[movies[i]]) for i,b in zip(topMovies,movie_bias)]
"""
Explanation: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one more more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
End of explanation
"""
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
"""
Explanation: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
End of explanation
"""
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
"""
Explanation: We can now do the same thing for the embeddings.
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
fac0 = movie_pca[0]
movie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac0, topMovies)]
"""
Explanation: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac1, topMovies)]
"""
Explanation: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac2, topMovies)]
"""
Explanation: The 2nd is 'hollywood blockbuster'.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
"""
Explanation: The 3rd is 'violent vs happy'.
End of explanation
"""
# The following would be for Python 2 only
# reload(sys)
# sys.setdefaultencoding('utf8')
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names()[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
"""
Explanation: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
End of explanation
"""
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = concatenate([u, m], axis=2)
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.compile(Adam(0.001), loss='mse')
nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, epochs=8,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: Neural net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
End of explanation
"""
|
usantamaria/iwi131 | ipynb/01-Intro1/Introduccion.ipynb | cc0-1.0 | a, b = 2, 3
while b < 300:
print b,
a, b = b, a+b
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Computer Programming
Sebastián Flores
What content will we learn?
1- Course rules
2- Characteristics of Python
3- Tips for learning
Why will we learn this content?
Knowing the rules -> optimize resources and anticipate difficulties.
Characteristics of Python -> why Python?
Tips for learning -> optimize resources.
1- Course Rules
Evaluations
Final grade
Course website
Methodology conflict
Evaluations
Mandatory
3 individual exams.
3 team assignments.
5 activities.
Optional
1 make-up exam: replaces the worst exam
Attendance at tutorial sessions: replaces the worst team assignment
Final grade:
Compute:
$$ PP = 60\% PC + 20\% PT + 20\% PAE $$
If $PC \geq 55$ and $PP \geq 55$:
$$ NF = PP$$
Otherwise:
$$ NF = \min(PC,PP) $$
Course website
Official course information:
http://progra.usm.cl (course material, exercises, resources, assignment submission, etc.)
Other channels:
http://twitter.com/progra_usm and http://facebook.com/ (announcements, questions, etc.)
Additional material for this section:
https://github.com/sebastiandres/iwi131
Methodology conflict
The exams ($60\%$ of the final grade) are individual and on paper.
Exams require the following skills:
reading
analysis
modeling
programming
About me
Mathematical Engineering degree - UTFSM, Chile (2000).
Engineering degree and Master's in Mechanics - Ecole Polytechnique, France (2005).
Master's in Computation and Applied Mathematics - Stanford, USA (2010).
Esval, Peugeot-Citroen, Lexity, CMM-UCh, Thinkful.
Projects in solid and fluid mechanics, mining, chemistry, and seismology.
Currently
In-person teaching: IWI131 and MAT281
Online teaching: Data Science @ Thinkful
Software for tsunami propagation
My view of education
Everyone can learn, given effort.
Listening < Watching < Reproducing < Modifying < Creating < Innovating.
My view of programming
Python is easy, useful, and fun.
Programming is like riding a bicycle or playing the piano.
Engineers who cannot program will be at a disadvantage when they graduate.
2- About Python
What is Python?
Why Python?
<img src="images/python.jpg" alt="" align="right"/>
What is Python?
A high-level language: you can program without knowing the hardware.
The Swiss army knife of programming languages.
2 versions:
2.7: used in this course
3.5: the "consistent" version, still being adopted.
Why Python?
Widely used in engineering
Easy to read, maintain, and write
A large number of libraries
High level of abstraction
Direct execution
Free
About this pilot section
Responsibility: 50% instructor, 50% students.
Mutable: feedback is essential.
Practical: Python for life, not for exams.
Interactive: class participation is NOT optional.
Example 1
What does the following code do?
End of explanation
"""
anexos = {'Cesar':4001,
'Sebastian': 4002}
anexos['Claudio'] = 4003
print anexos
del anexos['Claudio']
anexos['Patricio'] = 4004
print anexos
if "Sebastian" in anexos:
print anexos["Sebastian"]
if "sebastian" in anexos:
print anexos["sebastian"]
print anexos["Luis"]
"""
Explanation: Example 2
What does the following code do?
End of explanation
"""
import urllib2
def download_file(download_url):
response = urllib2.urlopen(download_url)
file = open("document.pdf", 'wb')
file.write(response.read())
file.close()
print("Completed")
download_file("http://progra.usm.cl/Archivos/certamenes/Libro_prograRB.pdf")
"""
Explanation: Example 3
What does the following script do?
End of explanation
"""
|
bxin/cwfs | examples/AuxTel.ipynb | gpl-3.0 | from lsst.cwfs.instrument import Instrument
from lsst.cwfs.algorithm import Algorithm
from lsst.cwfs.image import Image, readFile, aperture2image, showProjection
import lsst.cwfs.plots as plots
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Patrick provided a pair of images from AuxTel.
Let's look at how those images work with our cwfs code.
load the modules
End of explanation
"""
fieldXY = [0,0]
I1 = Image(readFile('../tests/testImages/AuxTel/I1_intra_20190912_HD21161_z05.fits'), fieldXY, Image.INTRA)
I2 = Image(readFile('../tests/testImages/AuxTel/I2_extra_20190912_HD21161_z05.fits'), fieldXY, Image.EXTRA)
plots.plotImage(I1.image,'intra')
plots.plotImage(I2.image,'extra')
"""
Explanation: Define the image objects. Input arguments: file name, field coordinates in deg, image type
The colorbar() below may produce a warning message if your matplotlib version is older than 1.5.0
( https://github.com/matplotlib/matplotlib/issues/5209 )
End of explanation
"""
inst=Instrument('AuxTel',I1.sizeinPix)
"""
Explanation: Define the instrument. Input arguments: instrument name, size of image stamps
End of explanation
"""
algo=Algorithm('exp',inst,0)
"""
Explanation: Define the algorithm being used. Input arguments: baseline algorithm, instrument, debug level
End of explanation
"""
algo.runIt(inst,I1,I2,'paraxial')
"""
Explanation: Run it
End of explanation
"""
print(algo.zer4UpNm)
"""
Explanation: Print the Zernikes Zn (n>=4)
End of explanation
"""
plots.plotZer(algo.zer4UpNm,'nm')
"""
Explanation: plot the Zernikes Zn (n>=4)
End of explanation
"""
print("Expected image diameter in pixels = %.0f"%(inst.offset/inst.fno/inst.pixelSize))
plots.plotImage(I1.image0,'original intra', mask=algo.pMask)
plots.plotImage(I2.image0,'original extra', mask=algo.pMask)
"""
Explanation: We check that the optical parameters provided are consistent with the image diameter. Otherwise the numerical solutions themselves do not make much sense.
End of explanation
"""
nanMask = np.ones(I1.image.shape)
nanMask[I1.pMask==0] = np.nan
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(algo.Wconverge*nanMask, origin='lower')
ax[0].set_title('Final WF = estimated + residual')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(algo.West*nanMask, origin='lower')
ax[1].set_title('residual wavefront')
fig.colorbar(img, ax=ax[1])
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(I1.image, origin='lower')
ax[0].set_title('Intra residual image')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(I2.image, origin='lower')
ax[1].set_title('Extra residual image')
fig.colorbar(img, ax=ax[1])
"""
Explanation: Patrick asked the question: can we show the results of the fit in intensity space, and also the residual?
Great question. The short answer is no.
The long answer: the current approach implemented is the so-called inversion approach, i.e., to inversely solve the Transport of Intensity Equation with boundary conditions. It is not a forward fit. If you think of the unperturbed image as I0, and the real image as I, we iteratively map I back toward I0 using the estimated wavefront. Upon convergence, our "residual images" should have intensity distributions that are nearly uniform. We always have an estimated wavefront, and a residual wavefront. The residual wavefront is obtained from the two residual images.
However, using tools available in the cwfs package, we can easily make a forward prediction of the images using the wavefront solution. This is basically to take the slope of the wavefront at any pupil position, and raytrace to the image plane. We will demonstrate this below.
End of explanation
"""
oversample = 10
projSamples = I1.image0.shape[0]*oversample
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
"""
Explanation: Now we do the forward raytrace using our wavefront solutions
The code is simply borrowed from existing cwfs code.
We first set up the pupil grid. Oversample controls how many rays to trace from each grid point on the pupil.
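The grid construction above can be sketched in isolation. This is an illustrative NumPy sketch, not the notebook's exact call: `samples`, `oversample`, and `sensor_factor` are toy values standing in for `I1.image0.shape[0]`, the oversample factor, and `inst.sensorFactor`.

```python
import numpy as np

# Build an oversampled, centered grid in normalized pupil coordinates
# (toy sizes; the real code uses the image stamp size and inst.sensorFactor).
samples, oversample = 4, 2
proj_samples = samples * oversample
sensor_factor = 1.0

luty, lutx = np.mgrid[-(proj_samples / 2 - 0.5):(proj_samples / 2 + 0.5),
                      -(proj_samples / 2 - 0.5):(proj_samples / 2 + 0.5)]
lutx /= (proj_samples / 2 / sensor_factor)
luty /= (proj_samples / 2 / sensor_factor)

assert lutx.shape == (proj_samples, proj_samples)
assert np.isclose(lutx.max(), -lutx.min())  # grid is symmetric about zero
```

The half-pixel offsets in the `mgrid` bounds center each sample on its pixel, so the normalized grid is symmetric about the pupil center.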
End of explanation
"""
lutxp, lutyp, J = aperture2image(I1, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I1fit = Image(show_lutxyp, fieldXY, Image.INTRA)
I1fit.downResolution(oversample, I1.image0.shape[0], I1.image0.shape[1])
"""
Explanation: We now trace the rays to the image plane. Lutxp and Lutyp are image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to down sample the image back to original resolution, we want to use the function downResolution() which is defined for the image class.
End of explanation
"""
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
lutxp, lutyp, J = aperture2image(I2, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I2fit = Image(show_lutxyp, fieldXY, Image.EXTRA)
I2fit.downResolution(oversample, I2.image0.shape[0], I2.image0.shape[1])
#The atmosphere used here is just a random Gaussian smearing. We do not care much about the size at this point
from scipy.ndimage import gaussian_filter
atmSigma = .6/3600/180*3.14159*21.6/1.44e-5
I1fit.image[np.isnan(I1fit.image)]=0
a = gaussian_filter(I1fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I1fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Intra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(a, origin='lower')
ax[1].set_title('Forward prediction (w atm) Intra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I1.image0, origin='lower')
ax[2].set_title('Real Image, Intra')
fig.colorbar(img, ax=ax[2])
I2fit.image[np.isnan(I2fit.image)]=0
b = gaussian_filter(I2fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I2fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Extra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(b, origin='lower')
ax[1].set_title('Forward prediction (w atm) Extra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I2.image0, origin='lower')
ax[2].set_title('Real Image, Extra')
fig.colorbar(img, ax=ax[2])
"""
Explanation: Now do the same thing for extra focal image
End of explanation
"""
|
luctrudeau/DaalaNotebooks | CFL/DCT-Domain Subsampling.ipynb | mpl-2.0 | %matplotlib inline
import sys
import y4m
import matplotlib.pyplot as plt
import numpy as np
def decode_y4m_buffer(frame):
W, H = frame.headers['W'], frame.headers['H']
Wdiv2, Hdiv2 = W // 2, H // 2
C, buf = frame.headers['C'], frame.buffer
A, Adiv2, div2 = W * H, Hdiv2 * Wdiv2, (Hdiv2, Wdiv2)
dtype, scale = 'uint8', 1.
if C.endswith('p10'):
dtype, scale, A = 'uint16', 4., A * 2
Y = (np.ndarray((H, W), dtype, buf))
Cb = (np.ndarray(div2, dtype, buf, A))
Cr = (np.ndarray(div2, dtype, buf, A + Adiv2))
return Y, Cb, Cr
def process():
pass
def y4mread(file):
parser = y4m.Reader(process(), verbose=True)
frame = None
with open(file, 'rb') as f:
while True:
data = f.read(2048)
if not data:
break
parser._data += data
if parser._stream_headers is None:
parser._decode_stream_headers()
if frame is None:
frame = parser._decode_frame()
else :
break
Y, Cb, Cr = decode_y4m_buffer(frame)
return Y, Cb, Cr
Y, Cb, Cr = y4mread("images/owl.y4m")
plt.figure(figsize=(15,10))
y_height, y_width = Y.shape
cb_height, cb_width = Cb.shape
cr_height, cr_width = Cr.shape
plt.subplot(1,3,1)
plt.title("Luma (%dx%d)" % (y_width, y_height))
plt.imshow(Y, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
plt.subplot(1,3,2)
plt.title("Cb (%dx%d)" % (cb_width, cb_height))
plt.imshow(Cb, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
plt.subplot(1,3,3)
plt.title("Cr (%dx%d)" % (cr_width, cr_height))
plt.imshow(Cr, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
"""
Explanation: DCT-Domain Subsampling
In this notebook, we examine how to subsample an image while in the DCT domain. DCT-domain subsampling is crucial to DCT-domain Chroma from Luma (CfL) because of Chroma subsampling.
When 4:2:0 chroma subsampling is used, the chroma prediction needs to be subsampled to match the chroma planes.
Loading y4m
For this experiment, we will use a 4:2:0 Y4M file. You can create a 4:2:0 y4m file using ffmpeg with the following command:
ffmpeg -i Owl.jpg -pix_fmt yuv420p owl.y4m
Next, we load the image using the y4m package. It might not be apparent from the figure, but the luma plane is twice the width and twice the height of the chroma planes, i.e. four times the samples.
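The plane-size arithmetic for 4:2:0 can be sketched as:

```python
# 4:2:0 plane sizes: each chroma plane is half the width and half the height
# of the luma plane, i.e. a quarter of the samples.
W, H = 1920, 1080
luma = W * H
chroma = (W // 2) * (H // 2)

assert luma == 4 * chroma
assert luma + 2 * chroma == W * H * 3 // 2  # total samples per 8-bit frame
```

This is why the byte offsets in `decode_y4m_buffer` above are `A = W * H` for the start of Cb and `A + Adiv2` for the start of Cr.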
End of explanation
"""
from scipy.fftpack import dct
block_size = 8
Y_dct = np.zeros((y_height, y_width))
for y in range(0,y_height - (block_size-1), block_size):
yRange = np.arange(y,y+block_size)
for x in range(0, y_width - (block_size-1), block_size):
xRange = np.arange(x,x+block_size)
Y_dct[np.ix_(yRange,xRange)] = dct(dct(Y[np.ix_(yRange,xRange)].T, norm='ortho').T, norm='ortho')
plt.imshow(Y_dct);
"""
Explanation: The DCT Domain
Let's convert the luma plane of our image to the DCT domain. To do so, we will use 8x8 blocks and the normalized DCT-II:
DCT-II
$$X_k = \sum_{n=0}^{N-1} x_n \cos \left( \frac{\pi}{N} \left( n + \frac{1}{2} \right)k \right) \quad k = 0,\ldots,N-1$$
Normalization
$$X_0 = X_0 \times \frac{1}{\sqrt{2}}$$
$$X_k = X_k \times \sqrt{\frac{2}{N}} \quad k=0,\ldots,N-1$$
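As a quick sanity check, the normalized definition above can be evaluated directly and compared against scipy's `norm='ortho'` DCT (illustrative, N=8):

```python
import numpy as np
from scipy.fftpack import dct

# Evaluate the normalized DCT-II definition directly and compare it with
# scipy's norm='ortho' variant on a random length-8 signal.
N = 8
x = np.random.rand(N)
n = np.arange(N)

X = np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])
X *= np.sqrt(2.0 / N)  # scale every coefficient by sqrt(2/N)
X[0] /= np.sqrt(2.0)   # extra 1/sqrt(2) on the DC coefficient

assert np.allclose(X, dct(x, norm='ortho'))
```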
End of explanation
"""
from scipy.fftpack import idct
block_size = 8
Y_idct = np.zeros((y_height, y_width))
for y in range(0,y_height - (block_size-1), block_size):
yRange = np.arange(y,y+block_size)
for x in range(0, y_width - (block_size-1), block_size):
xRange = np.arange(x,x+block_size)
Y_idct[np.ix_(yRange,xRange)] = idct(idct(Y_dct[np.ix_(yRange,xRange)].T, norm='ortho').T, norm='ortho')
plt.imshow(Y_idct, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
"""
Explanation: Back to the Pixel Domain
To make sure we don't have any errors, let's perform the inverse dct on each 8x8 block.
End of explanation
"""
## DCT Subsampling
from scipy.fftpack import idct
sub_block_size = 4
y_sub_height = y_height // 2
y_sub_width = y_width // 2
Y_sub = np.zeros((y_sub_height, y_sub_width))
yy = 0
for y in range(0,y_sub_height - (sub_block_size-1), sub_block_size):
y_sub_range = range(y,y+sub_block_size)
y_range = range(yy,yy+sub_block_size)
xx = 0
for x in range(0, y_sub_width - (sub_block_size-1), sub_block_size):
x_sub_range = range(x,x+sub_block_size)
x_range = range(xx, xx+sub_block_size)
Y_sub[np.ix_(y_sub_range, x_sub_range)] = idct(idct(Y_dct[np.ix_(y_range, x_range)].T, norm='ortho').T, norm='ortho')
xx = xx + block_size
yy = yy + block_size
Y_sub_scaled = Y_sub // 2;
plt.figure(figsize=(15,10))
plt.subplot(1,2,1)
plt.title('Inverse DCT of the top left 4x4 blocks in each of the 8x8 blocks')
plt.imshow(Y_sub, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
plt.subplot(1,2,2)
plt.title('Same image with pixel values divided by 2')
plt.imshow(Y_sub_scaled, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
"""
Explanation: Subsampling Time
To subsample our image in the frequency domain, one simple trick is to keep only the low-frequency coefficients, starting from the top left. In this case, since we want a quarter of the image size, we take the top-left 4x4 block.
Notice that when we perform the inverse transform on the 4x4 block, the transform size N (in the previous equations) is now 4 instead of 8 in each dimension (16 samples instead of 64). As can be seen in the following images, the coefficients are still scaled for an 8x8 DCT, not a 4x4 one. To fix this, we divide the pixel values by 2.
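The factor of 2 can be verified on a single block: with the orthonormal transform, the DC term of an 8x8 DCT is 8x the block mean, while a 4x4 inverse divides it by only 4, so halving the output restores the mean intensity.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Keep the top-left 4x4 coefficients of an 8x8 DCT, inverse-transform at the
# smaller size, and show that halving restores the original mean intensity.
block = np.random.rand(8, 8) * 255
coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')
small = idct(idct(coeffs[:4, :4].T, norm='ortho').T, norm='ortho') / 2

assert np.isclose(small.mean(), block.mean())
```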
End of explanation
"""
plt.figure(figsize=(15,10))
plt.subplot(1,2,1)
plt.title('Pixel Domain Subsampling (no filtering)')
plt.imshow(Y[::2, ::2], cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
plt.subplot(1,2,2)
plt.title('DCT Domain Subsampling')
plt.imshow(Y_sub_scaled, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255, aspect='equal', interpolation='nearest');
"""
Explanation: Comparing with spatial domain subsampling
We can compare the results with pixel-domain subsampling. Looking at the owl, we notice that the DCT-domain subsampling is blurrier. This is not necessarily a bad thing: looking back at the Cb and Cr planes in the first figure, those planes don't have the sharp details of the luma plane, so a blurred prediction is ideal.
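For reference, the pixel-domain path used above is plain decimation with no anti-alias filtering beforehand, which is why it keeps more sharp detail:

```python
import numpy as np

# Y[::2, ::2] keeps every other row and column -- nearest-neighbour
# decimation with no low-pass filtering.
Y = np.arange(16).reshape(4, 4)
assert (Y[::2, ::2] == [[0, 2], [8, 10]]).all()
```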
End of explanation
"""
|
thaophung/Udacity_deep_learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
#print(text)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = dict(enumerate(vocab, 1))
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- A dictionary to go from a word to an id, which we'll call vocab_to_int
- A dictionary to go from an id to a word, which we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
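A self-contained sketch of the round-trip property these tables must satisfy, on a toy word list and mirroring the implementation above (the helper name `make_tables` is illustrative, not part of the project):

```python
from collections import Counter

# Mirror of the lookup-table construction above, on a toy word list.
def make_tables(words):
    counts = Counter(words)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
    int_to_vocab = dict(enumerate(vocab, 1))
    return vocab_to_int, int_to_vocab

v2i, i2v = make_tables(['moe', 'homer', 'moe', 'bart'])
assert v2i['moe'] == 1                             # most frequent word, lowest id
assert all(i2v[i] == w for w, i in v2i.items())    # the two tables are inverses
```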
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokens_dict = dict([('.', '||Period||'), (',', '||Comma||'), ('"', '||Quotation_Mark||'),
(';', '||Semicolon||'), ('!', '||Exclamation_Mark||'), ('?', '||Question_Mark||'),
('(', '||Left_Parentheses||'), (')', '||Right_Parentheses||'),
('--', '||Dash||'), ('\n', '||Return||')])
return tokens_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word; instead of using the token "dash", try using something like "||dash||".
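A quick sketch of how such a dictionary gets applied (a toy two-entry dictionary for illustration; the real preprocessing lives in helper.py):

```python
# Apply a token dictionary: replace each symbol with a space-delimited token.
token_dict = {'.': '||Period||', '!': '||Exclamation_Mark||'}

text = 'bye! bye.'
for key, token in token_dict.items():
    text = text.replace(key, ' {} '.format(token))

assert text.split() == ['bye', '||Exclamation_Mark||', 'bye', '||Period||']
```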
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32,[None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
rnn_layer = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Note: newer TF 1.x releases require a distinct cell object per layer, e.g.
# MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(rnn_layer)])
cell = tf.contrib.rnn.MultiRNNCell([lstm] * rnn_layer)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return (cell, initial_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply an embedding to input_data using TensorFlow. Return the embedded sequence.
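Conceptually, an embedding lookup is just row indexing into the embedding matrix. A small NumPy sketch with illustrative shapes (not the project's TF code):

```python
import numpy as np

# Embedding lookup as row indexing (NumPy sketch, toy sizes).
vocab_size, embed_dim = 5, 3
embedding = np.random.uniform(-1, 1, (vocab_size, embed_dim))
input_data = np.array([[0, 2], [4, 2]])  # batch of word-id sequences

embedded = embedding[input_data]         # shape: (batch, seq_len, embed_dim)
assert embedded.shape == (2, 2, 3)
assert np.array_equal(embedded[0, 1], embedding[2])
```

`tf.nn.embedding_lookup` performs the same indexing, but on a trainable variable inside the graph.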
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(inputs=outputs,num_outputs=vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits,final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = int_text[1: n_batches * batch_size * seq_length]
ydata.append(int_text[0])
ydata = np.array(ydata)
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
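The reshape/split pattern can be checked against the 20-element example above. This sketch uses `np.roll` for the targets, which is equivalent here to appending the first value because the data is truncated to full batches first:

```python
import numpy as np

# Check the reshape/split batching pattern against the worked example above
# (batch_size=3, seq_length=2; elements 19 and 20 are dropped).
int_text = list(range(1, 21))
batch_size, seq_length = 3, 2
n_batches = len(int_text) // (batch_size * seq_length)

xdata = np.array(int_text[:n_batches * batch_size * seq_length])
ydata = np.roll(xdata, -1)  # targets: inputs shifted left, wrapping to 1

x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
batches = np.array(list(zip(x_batches, y_batches)))

assert batches.shape == (3, 2, 3, 2)
assert (batches[0][0] == [[1, 2], [7, 8], [13, 14]]).all()
assert batches[2][1][2][1] == 1  # last target wraps to the first input
```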
End of explanation
"""
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 100
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return (inputs, initial_state, final_state, probs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
prob = list(probabilities)
word_id= prob.index(max(prob))
return int_to_vocab[word_id]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
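As an alternative sketch (not a project requirement), sampling from the distribution instead of taking the argmax tends to produce less repetitive scripts. The toy 0-indexed vocabulary below is purely illustrative:

```python
import numpy as np

# Hypothetical alternative: sample the next word id from the probability
# distribution instead of picking the single most likely word.
def sample_word(probabilities, int_to_vocab):
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

int_to_vocab = {0: 'moe', 1: 'homer'}
assert sample_word(np.array([0.0, 1.0]), int_to_vocab) == 'homer'
```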
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
VectorBlox/PYNQ | docs/source/9_base_overlay_video.ipynb | bsd-3-clause | from pynq import Overlay
from pynq.drivers.video import HDMI
# Download bitstream
Overlay("base.bit").download()
# Initialize HDMI as an input device
hdmi_in = HDMI('in')
"""
Explanation: Video using the Base Overlay
The PYNQ-Z1 board contains an HDMI input port and an HDMI output port connected to the FPGA fabric of the Zynq® chip. This means that to use the HDMI ports, HDMI controllers must be included in a hardware library or overlay.
The base overlay contains an HDMI input controller and an HDMI output controller, both connected to their corresponding HDMI ports. A frame can be captured from the HDMI input and streamed into DDR memory. The frames in DDR memory can be accessed from Python.
A framebuffer can be shared between HDMI in and HDMI out to enable streaming.
Video IO
The overlay contains two video controllers, HDMI in and out. Both interfaces can be controlled independently, or used in combination to capture an image from the HDMI, process it, and display it on the HDMI out.
There is also a USB controller connected to the Zynq PS. A webcam can also be used to capture images, or video input, that can be processed and displayed on the HDMI out.
The HDMI video capture controller
To use the HDMI in controller, connect the on-board HDMI In port to a valid video source, e.g. a laptop with HDMI out. Any HDMI video source up to 1080p can be used.
To use the HDMI in, ensure you have connected a valid HDMI source and execute the next cell. If a valid HDMI source is not detected, the HDMI in controller will time out with an error.
End of explanation
"""
hdmi_in.start()
hdmi_in.stop()
"""
Explanation: The HDMI() argument ‘in’ indicates that the object is in capture mode.
When a valid video input source is connected, the controller should recognize it and start automatically. If an HDMI source is not connected, the code will time out with an error.
Starting and stopping the controller
You can manually start/stop the controller
End of explanation
"""
state = hdmi_in.state()
print(state)
"""
Explanation: Readback from the controller
To check the state of the controller:
End of explanation
"""
hdmi_in.start()
width = hdmi_in.frame_width()
height = hdmi_in.frame_height()
print('HDMI is capturing a video source of resolution {}x{}'\
.format(width,height))
"""
Explanation: The state is returned as an integer value, with one of three possible values:
0 if disconnected
1 if streaming
2 if paused
You can also check the width and height of the input source (assuming a source is connected):
End of explanation
"""
hdmi_in.frame_index()
"""
Explanation: HDMI Frame list
The HDMI object holds a frame list, that can contain up to 3 frames, and is where the controller stores the captured frames. At the object instantiation, the current frame is the one at index 0. You can check at any time which frame index is active:
End of explanation
"""
index = hdmi_in.frame_index()
hdmi_in.frame_index(index + 1)
"""
Explanation: The frame_index() method can also be used to set a new index, if you specify an argument with the method call. For instance:
End of explanation
"""
hdmi_in.frame_index_next()
"""
Explanation: This will set the current frame index to the next in the sequence. Note that, if index is 2 (the last frame in the list), (index+1) will cause an exception.
If you want to set the next frame in the sequence, use:
End of explanation
"""
from IPython.display import Image
frame = hdmi_in.frame()
orig_img_path = '/home/xilinx/jupyter_notebooks/Getting_Started/images/hdmi_in_frame0.jpg'
frame.save_as_jpeg(orig_img_path)
Image(filename=orig_img_path)
"""
Explanation: This will loop through the frame list and it will also return the new index as an integer.
Access the current frame
There are two ways to access pixel data: hdmi.frame() and hdmi.frame_raw().
End of explanation
"""
for x in range(int(width/2)):
for y in range(int(height/2)):
(red,green,blue) = frame[x,y]
green = green*2
if(green>255):
green = 255
frame[x,y] = (red, green, blue)
new_img_path = '/home/xilinx/jupyter_notebooks/Getting_Started/images/hdmi_in_frame1.jpg'
frame.save_as_jpeg(new_img_path)
Image(filename=new_img_path)
"""
Explanation: This will dump the frame as a list _frame[height, width][rgb]. Where rgb is a tuple (r,g,b). If you want to modify the green component of a pixel, you can do it as shown below. In the example, the top left quarter of the image will have the green component increased.
End of explanation
"""
# dumping frame at index 2
frame = hdmi_in.frame(2)
"""
Explanation: This frame() method is a simple way to capture pixel data, but processing it in Python will be slow. If you want to dump a frame at a specific index, just pass the index as an argument of the frame() method:
End of explanation
"""
# dumping frame at current index
frame_raw = hdmi_in.frame_raw()
# dumping frame at index 2
frame_raw = hdmi_in.frame_raw(2)
"""
Explanation: If higher performance is required, the frame_raw() method can be used:
End of explanation
"""
# printing the green component of pixel (0,0)
print(frame_raw[1])
# printing the blue component of pixel (row 1, col 399): (1920*row + col)*3 + channel
print(frame_raw[(1920*1 + 399)*3 + 0])
# printing the red component of the last pixel (row 599, col 799)
print(frame_raw[(1920*599 + 799)*3 + 2])
"""
Explanation: This method will return a fast memory dump of the internal frame list, as a mono-dimensional list of dimension frame[1920*1080*3] (This array is of fixed size regardless of the input source resolution). 1920x1080 is the maximum supported frame dimension and 3 separate values for each pixel (Blue, Green, Red).
When the resolution is less than 1920x1080, the user must manually extract the correct pixel data.
For example, if the resolution of the video input source is 800x600, meaningful values will only be in the range frame_raw[(1920*i)*3] to frame_raw[(1920*i + 799)*3 + 2] for each row i from 0 to 599. Any position outside of this range will contain invalid data.
End of explanation
"""
from pynq.drivers.video import HDMI
hdmi_out = HDMI('out')
"""
Explanation: Frame Lists
To draw or display smooth animations/video, note the following:
Draw a new frame to a frame location not currently in use (an index different from the current hdmi.frame_index()). Once finished writing the new frame, change the current frame index to the new frame index.
The HDMI out controller
Using the HDMI output is similar to using the HDMI input. Connect the HDMI OUT port to a monitor, or other display device.
To instantiate the HDMI controller:
End of explanation
"""
hdmi_out.start()
hdmi_out.stop()
"""
Explanation: For the HDMI controller, you have to start/stop the device explicitly:
End of explanation
"""
state = hdmi_out.state()
print(state)
"""
Explanation: To check the state of the controller:
End of explanation
"""
print(hdmi_out.mode())
"""
Explanation: The state is returned as an integer value, with 2 possible values:
0 if stopped
1 if running
After initialization, the display resolution is set at the lowest level: 640x480 at 60Hz.
To check the current resolution:
End of explanation
"""
hdmi_out.mode(4)
"""
Explanation: This will print the current mode as a string. To change the mode, insert a valid index as an argument when calling mode():
End of explanation
"""
from pynq.drivers.video import HDMI
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(4)
"""
Explanation: Valid resolutions are:
0 : 640x480, 60Hz
1 : 800x600, 60Hz
2 : 1280x720, 60Hz
3 : 1280x1024, 60Hz
4 : 1920x1080, 60Hz
Streaming from HDMI Input to Output
To use the HDMI input and output to capture and display an image, make both the HDMI input and output share the same frame list. The frame list of each object can be accessed. You can make the two objects share the same frame list by passing one object's frame list as an argument to the second object's constructor.
End of explanation
"""
hdmi_out.start()
hdmi_in.start()
"""
Explanation: To start the controllers:
End of explanation
"""
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
"""
Explanation: The last step is always to stop the controllers and delete HDMI objects.
End of explanation
"""
|
jdhp-docs/python-notebooks | python_super_fr.ipynb | mit | help(super)
"""
Explanation: Python's super()
TODO
* https://docs.python.org/3/library/functions.html#super
* https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
* https://stackoverflow.com/questions/904036/chain-calling-parent-constructors-in-python
* https://stackoverflow.com/questions/2399307/how-to-invoke-the-super-constructor
super(type [, object_or_type])
returns a proxy object that delegates method calls to a parent class of type.
If the second argument is omitted, the super object returned is unbound.
If the second argument is an object, then isinstance(object, type) must be true.
If the second argument is a type, then issubclass(type2, type) must be true.
This is useful for accessing inherited methods that have been overridden in a class.
There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages.
The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that this method have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).
For both use cases, a typical superclass call looks like this:
class C(B):
def method(self, arg):
super().method(arg) # This does the same thing as:
# super(C, self).method(arg)
Also note that, aside from the zero argument form, super() is not limited to use inside methods. The two argument form specifies the arguments exactly and makes the appropriate references.
The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods.
End of explanation
"""
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
A.bonjour(self)
b = B()
b.bonjour()
"""
Explanation: Without super()
Before the super() function existed, we would have hardwired the call with A.bonjour(self).
...
End of explanation
"""
class A:
def bonjour(self, arg):
print("Bonjour de la part de A. J'ai été appelée avec l'argument arg:", arg)
class B(A):
def bonjour(self, arg):
print("Bonjour de la part de B. J'ai été appelée avec l'argument arg:", arg)
A.bonjour(self, arg)
b = B()
b.bonjour('hey')
"""
Explanation: The same example, with an argument:
End of explanation
"""
class A:
def __init__(self):
self.nom = "Alice"
def bonjour(self):
print("Bonjour de la part de A. Je m'appelle:", self.nom)
class B(A):
def __init__(self):
self.nom = "Bob"
def bonjour(self):
A.bonjour(self)
b = B()
b.bonjour()
"""
Explanation: An example showing that A.bonjour() is indeed called on the object b:
End of explanation
"""
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
super().bonjour() # au lieu de "A.bonjour(self)"
b = B()
b.bonjour()
"""
Explanation: With super(): first example
In the previous example, the line A.bonjour(self) (in B.bonjour()) explicitly names the class containing the function to call (here A) as well as the object (self) on which the function is called.
One of the two main benefits of super() is to make the calling class A implicit (as well as the object self on which the function is called).
Thus, the call A.bonjour(self) becomes super().bonjour().
As a result, if we decide to rename class A, or decide that B should inherit from C rather than A, we do not need to update the body of B.bonjour(). The changes stay isolated.
End of explanation
"""
class A:
def bonjour(self, arg):
print("Bonjour de la part de A. J'ai été appelée avec l'argument arg:", arg)
class B(A):
def bonjour(self, arg):
print("Bonjour de la part de B. J'ai été appelée avec l'argument arg:", arg)
super().bonjour(arg)
b = B()
b.bonjour('hey')
"""
Explanation: The same example, with an argument:
End of explanation
"""
class A:
def __init__(self):
self.nom = "Alice"
def bonjour(self):
print("Bonjour de la part de A. Je m'appelle:", self.nom)
class B(A):
def __init__(self):
self.nom = "Bob"
def bonjour(self):
super().bonjour()
b = B()
b.bonjour()
"""
Explanation: An example showing that super().bonjour() is indeed called on the object b:
End of explanation
"""
class A:
pass
class B(A):
def bonjour(self):
print(super())
b = B()
b.bonjour()
"""
Explanation: In fact, super() returns a proxy object bound to the implicit base class A:
End of explanation
"""
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
class C(B):
def bonjour(self):
print("Bonjour de la part de C.")
super(C, self).bonjour()
c = C()
c.bonjour()
"""
Explanation: Adding constraints: super(type, obj_or_type) [TODO]
Another syntax can be used to make the base class used for the call a little more explicit:
End of explanation
"""
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
class C(B):
def bonjour(self):
print("Bonjour de la part de C.")
super().bonjour()
c = C()
c.bonjour()
# **TODO**
class A():
def bonjour(self):
print("Bonjour de la part de A.")
class B:
def bonjour(self):
print("Bonjour de la part de B.")
class C(A, B):
def bonjour(self):
print("Bonjour de la part de C.")
print(super(B, self))
super(B, self).bonjour()
c = C()
c.bonjour()
"""
Explanation: Which is equivalent to:
End of explanation
"""
class A:
pass
class B(A):
pass
class C(A):
pass
class D(B, C):
pass
print(D.__mro__)
"""
Explanation: The Method Resolution Order (MRO)
End of explanation
"""
class A:
pass
class B(A):
pass
class C(A):
pass
class D(B, C):
pass
print(D.__bases__)
"""
Explanation: The bases of a class
End of explanation
"""
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super(B, self).hello(arg)
print("Hello", arg, "from B.")
a = A()
b = B()
#a.hello('john')
b.hello('john')
#This works for class methods too:
class C(B):
@classmethod
def cmeth(cls, arg):
super().cmeth(arg)
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super(B, self).hello(arg)
print("Hello", arg, "from B.")
class C(B):
def hello(self, arg):
super(C, self).hello(arg)
print("Hello", arg, "from C.")
a = A()
b = B()
c = C()
c.hello('john')
# comment appeler B.hello() sur c ?
# comment appeler A.hello() sur c ?
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from B.")
class C(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from C.")
class D(B, C):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from D.")
a = A()
b = B()
c = C()
d = D()
a.hello('john')
print()
b.hello('john')
print()
c.hello('john')
print()
d.hello('john')
class A(object):
def __init__(self, name):
self.name = name
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from B.")
a = A("foo")
b = B()
a.hello('john')
print()
b.hello('john')
"""
Explanation: First use case: ...
"In addition to isolating changes, there is another major benefit to computed indirection, one that may not be familiar to people coming from static languages. Since the indirection is computed at runtime, we have the freedom to influence the calculation so that the indirection will point to some other class."
End of explanation
"""
class A:
def __init__(self):
self.name = "A"
def hello(self, arg):
print("Hello from A with arg:", arg, "self beeing", self.name)
class B(A):
def __init__(self):
self.name = "B"
def hello(self, arg):
super().hello(arg)
print("Hello from B with arg:", arg, "self beeing", self.name)
a = A()
a.hello('foo')
b = B()
b.hello('foo')
class A:
def __init__(self, name):
self.name = name
def hello(self, arg):
print("Hello from A with arg:", arg, "self beeing", self.name)
class B(A):
#def __init__(self):
# self.name = "B"
def hello(self, arg):
super().hello(arg)
print("Hello from B with arg:", arg, "self beeing", self.name)
a = A("A")
a.hello('foo')
b = B()
b.hello('foo')
class A:
def __init__(self, arg):
print(arg)
class B(A):
pass
b = B()
"""
Explanation: First use case: ...
End of explanation
"""
|
FishingOnATree/deep-learning | seq2seq/sequence_to_sequence_implementation.ipynb | mit | import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
"""
Explanation: Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.
<img src="images/sequence-to-sequence.jpg"/>
Dataset
The dataset lives in the /data/ folder. At the moment, it is made up of the following files:
* letters_source.txt: The list of input letter sequences. Each sequence is its own line.
* letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.
End of explanation
"""
source_sentences[:50].split('\n')
"""
Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
End of explanation
"""
target_sentences[:50].split('\n')
"""
Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line with the same number in source_sentences, and contains that line's characters in sorted order.
End of explanation
"""
def extract_character_vocab(data):
special_words = ['<pad>', '<unk>', '<s>', '<\s>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
"""
Explanation: Preprocess
To do anything useful with it, we'll need to turn the characters into a list of integers:
End of explanation
"""
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in source_ids]
new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in target_ids]
return new_source_ids, new_target_ids
# Use the longest sequence as sequence length
sequence_length = max(
[len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])
# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
target_letter_ids, target_letter_to_int, sequence_length)
print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
"""
Explanation: The last step in the preprocessing stage is to determine the the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
End of explanation
"""
from distutils.version import LooseVersion
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
"""
Explanation: This is the final shape we need them to be in. We can now proceed to building the model.
Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
End of explanation
"""
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
"""
Explanation: Hyperparameters
End of explanation
"""
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
"""
Explanation: Input
End of explanation
"""
source_vocab_size = len(source_letter_to_int)
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)
"""
Explanation: Sequence to Sequence
The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
Then, we'll need to hookup a fully connected layer to the output of decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.
Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot to the wild (even though it comes second in the actual code).
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.
Notice that the inference decoder feeds the output of each time step as an input to the next.
As for the training decoder, we can think of it as looking like this:
<img src="images/sequence-to-sequence-training-decoder.png"/>
The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).
Encoding
Embed the input data using tf.contrib.layers.embed_sequence
Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.
End of explanation
"""
import numpy as np
# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)
demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))
sess = tf.InteractiveSession()
print("Targets")
print(demonstration_outputs[:2])
print("\n")
print("Processed Decoding Input")
print(sess.run(dec_input, {targets: demonstration_outputs})[:2])
"""
Explanation: Process Decoding Input
End of explanation
"""
target_vocab_size = len(target_letter_to_int)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)
"""
Explanation: Decoding
Embed the decoding input
Build the decoding RNNs
Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
End of explanation
"""
with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
"""
Explanation: Decoder During Training
Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.
Apply the output layer to the output of the training decoder
End of explanation
"""
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\s>'],
sequence_length - 1, target_vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
"""
Explanation: Decoder During Inference
Reuse the weights the biases from the training decoder using tf.variable_scope("decoding", reuse=True)
Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.
The output function is applied to the output in this step
End of explanation
"""
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([batch_size, sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Optimization
Our loss function is tf.contrib.seq2seq.sequence_loss provided by the tensor flow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
End of explanation
"""
import numpy as np
train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch, targets: target_batch, lr: learning_rate})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source})
train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))
valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))
"""
Explanation: Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
End of explanation
"""
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))
batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in input_sentence]))
print(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))
print(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))
"""
Explanation: Prediction
End of explanation
"""
|
oscarmore2/deep-learning-study | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/training-giga-fren/giga-fren.release2.fixed.en'
target_path = 'data/training-giga-fren/giga-fren.release2.fixed.fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
print('Roughly the number of unique words: {}'.format(len({word: None for word in target_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
mat_in = []
mat_tar = []
# TODO: Implement Function
for sentence in source_text.split("\n"):
arr = []
for word in sentence.split():
arr.append(source_vocab_to_int[word])
mat_in.append(arr)
for sentence in target_text.split("\n"):
arr = []
for word in sentence.split():
arr.append(target_vocab_to_int[word])
arr.append(target_vocab_to_int['<EOS>'])
mat_tar.append(arr)
return mat_in, mat_tar
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None,None), name='input')
targets = tf.placeholder(tf.int32, shape=(None, None), name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, shape=[None], name='target_sequence_length')
max_target_len = tf.reduce_max(target_seq_len, name='max_target_len')
source_seq_len = tf.placeholder(tf.int32, shape=[None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_prob, target_seq_len, max_target_len, source_seq_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
go = tf.constant(target_vocab_to_int['<GO>'], shape=(batch_size,1), dtype=tf.int32)
new_target = tf.concat([go,target_data[:,:-1]],1)
return new_target
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
"""
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
cell_stack = []
for _ in range(num_layers):
lstm = tf.contrib.rnn.LSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell_stack.append(drop)
lstms = tf.contrib.rnn.MultiRNNCell(cell_stack)
encoder = tf.contrib.layers.embed_sequence(rnn_inputs, vocab_size = source_vocab_size, embed_dim = encoding_embedding_size)
output, state = tf.nn.dynamic_rnn(lstms, encoder, source_sequence_length, dtype=tf.float32)
return output, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
#print("seq length")
#print(target_sequence_length)
train_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
basic_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, train_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(basic_decoder,maximum_iterations=max_summary_length)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
ids = tf.tile([start_of_sequence_id], [batch_size])
# Create the embedding helper.
embedding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings, ids, end_of_sequence_id)
basic_decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, embedding_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(
basic_decoder,maximum_iterations=max_target_sequence_length)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
cells = []
for _ in range(num_layers):
lstm = tf.contrib.rnn.LSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cells.append(drop)
lstms = tf.contrib.rnn.MultiRNNCell(cells)
dec_embeds = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_in = tf.nn.embedding_lookup(dec_embeds, dec_input)
dense_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode") as scope:
train_decoder_outputs = decoding_layer_train(encoder_state, lstms, dec_embed_in,
target_sequence_length, max_target_sequence_length, dense_layer, keep_prob)
scope.reuse_variables()
infer_decoder_outputs = decoding_layer_infer(encoder_state, lstms, dec_embeds,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,
dense_layer, batch_size, keep_prob)
return train_decoder_outputs , infer_decoder_outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
outputs, state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
processed_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    train_decoder_output, inference_decoder_output = decoding_layer(processed_input, state,
                                        target_sequence_length, max_target_sentence_length,
                                        rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
                                        batch_size, keep_prob, dec_embedding_size)
    return train_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.0009
# Dropout Keep Probability
keep_probability = 0.5
display_step = True
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
    ids = []
    for word in sentence.lower().split():
        if word in vocab_to_int:
            ids.append(vocab_to_int[word])
        else:
            ids.append(vocab_to_int['<UNK>'])
    return ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
ktaneishi/deepchem | examples/tutorials/Uncertainty.ipynb | mit | import deepchem as dc
import numpy as np
import matplotlib.pyplot as plot
tasks, datasets, transformers = dc.molnet.load_sampl()
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)
model.fit(train_dataset, nb_epoch=200)
y_pred, y_std = model.predict_uncertainty(test_dataset)
"""
Explanation: Tutorial Part 4: Uncertainty in Deep Learning
A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't give enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.
DeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, and predict the output on the test set.
End of explanation
"""
# Generate some fake data and plot a regression line.
x = np.linspace(0, 5, 10)
y = 0.15*x + np.random.random(10)
plot.scatter(x, y)
fit = np.polyfit(x, y, 1)
line_x = np.linspace(-1, 6, 2)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Of course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)
To begin with, what does "uncertainty" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.
Aleatoric Uncertainty
Consider the following graph. It shows the best fit linear regression to a set of ten data points.
End of explanation
"""
plot.figure(figsize=(12, 3))
line_x = np.linspace(0, 5, 50)
for i in range(3):
plot.subplot(1, 3, i+1)
plot.scatter(x, y)
fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.
Epistemic Uncertainty
Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.
End of explanation
"""
abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())
plot.scatter(y_std.flatten(), abs_error)
plot.xlabel('Standard Deviation')
plot.ylabel('Absolute Error')
plot.show()
"""
Explanation: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.
The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.
Recall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.
Uncertain Uncertainty?
Now we can combine the two types of uncertainty to compute an overall estimate of the error in each output:
$$\sigma_\text{total} = \sqrt{\sigma_\text{aleatoric}^2 + \sigma_\text{epistemic}^2}$$
This is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.
Let's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.
End of explanation
"""
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
"""
Explanation: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors.
Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations.
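Those coverage numbers are easy to sanity-check by simulating normally distributed errors:

```python
import numpy as np

rng = np.random.default_rng(42)
errors = rng.normal(size=100_000)             # standardized errors, sigma = 1

within_2sd = np.mean(np.abs(errors) <= 2.0)   # expect ~0.95
within_3sd = np.mean(np.abs(errors) <= 3.0)   # expect ~0.997
```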
End of explanation
"""
|
lenovor/notes-on-dirichlet-processes | 2015-08-03-nonparametric-latent-dirichlet-allocation.ipynb | mit | %matplotlib inline
%precision 2
"""
Explanation: I wrote this in an IPython Notebook. You may prefer to view it on nbviewer.
End of explanation
"""
vocabulary = ['see', 'spot', 'run']
num_terms = len(vocabulary)
num_topics = 2 # K
num_documents = 5 # M
mean_document_length = 5 # xi
term_dirichlet_parameter = 1 # beta
topic_dirichlet_parameter = 1 # alpha
"""
Explanation: Latent Dirichlet Allocation is a generative model for topic modeling. Given a collection of documents, an LDA inference algorithm attempts to determine (in an unsupervised manner) the topics discussed in the documents. It makes the assumption that each document is generated by a probability model, and, when doing inference, we try to find the parameters that best fit the model (as well as unseen/latent variables generated by the model). If you are unfamiliar with LDA, Edwin Chen has a friendly introduction you should read.
Because LDA is a generative model, we can simulate the construction of documents by forward-sampling from the model. The generative algorithm is as follows (following Heinrich):
for each topic $k\in [1,K]$ do
sample term distribution for topic $\overrightarrow \phi_k \sim \text{Dir}(\overrightarrow \beta)$
for each document $m\in [1, M]$ do
sample topic distribution for document $\overrightarrow\theta_m\sim \text{Dir}(\overrightarrow\alpha)$
sample document length $N_m\sim\text{Pois}(\xi)$
for all words $n\in [1, N_m]$ in document $m$ do
sample topic index $z_{m,n}\sim\text{Mult}(\overrightarrow\theta_m)$
sample term for word $w_{m,n}\sim\text{Mult}(\overrightarrow\phi_{z_{m,n}})$
You can implement this with a little bit of code and start to simulate documents.
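For instance, a self-contained forward sampler for the finite algorithm above might look like this (using numpy's Generator API rather than the scipy calls used later in this notebook; variable names mirror the pseudocode):

```python
import numpy as np

rng = np.random.default_rng(0)

vocabulary = ['see', 'spot', 'run']
K, M = 2, 5                      # num_topics, num_documents
beta, alpha, xi = 1.0, 1.0, 5    # term prior, topic prior, mean doc length

# Sample a term distribution phi_k for each topic
phi = rng.dirichlet(np.full(len(vocabulary), beta), size=K)

documents = []
for _ in range(M):
    theta = rng.dirichlet(np.full(K, alpha))   # topic distribution for this doc
    words = []
    for _ in range(rng.poisson(xi)):
        z = rng.choice(K, p=theta)             # sample topic index z_{m,n}
        words.append(rng.choice(vocabulary, p=phi[z]))
    documents.append(words)
```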
In LDA, we assume each word in the document is generated by a two-step process:
Sample a topic from the topic distribution for the document.
Sample a word from the term distribution from the topic.
When we fit the LDA model to a given text corpus with an inference algorithm, our primary objective is to find the set of topic distributions $\underline \Theta$, term distributions $\underline \Phi$ that generated the documents, and latent topic indices $z_{m,n}$ for each word.
To run the generative model, we need to specify each of these parameters:
End of explanation
"""
from scipy.stats import dirichlet, poisson
from numpy import round
from collections import defaultdict
from random import choice as stl_choice
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
term_distributions = dirichlet(term_dirichlet_vector, 2).rvs(size=num_topics)
print term_distributions
"""
Explanation: The term distribution vector $\underline\Phi$ is a collection of samples from a Dirichlet distribution. This describes how our 3 terms are distributed across each of the two topics.
End of explanation
"""
base_distribution = lambda: stl_choice(term_distributions)
# A sample from base_distribution is a distribution over terms
# Each of our two topics has equal probability
from collections import Counter
for topic, count in Counter([tuple(base_distribution()) for _ in range(10000)]).most_common():
print "count:", count, "topic:", [round(prob, 2) for prob in topic]
"""
Explanation: Each document corresponds to a categorical distribution across this distribution of topics (in this case, a 2-dimensional categorical distribution). This categorical distribution is a distribution of distributions; we could look at it as a Dirichlet process!
The base distribution of our Dirichlet process is a uniform distribution over topics (remember, topics are term distributions).
End of explanation
"""
from scipy.stats import beta
from numpy.random import choice
class DirichletProcessSample():
def __init__(self, base_measure, alpha):
self.base_measure = base_measure
self.alpha = alpha
self.cache = []
self.weights = []
self.total_stick_used = 0.
def __call__(self):
remaining = 1.0 - self.total_stick_used
i = DirichletProcessSample.roll_die(self.weights + [remaining])
if i is not None and i < len(self.weights) :
return self.cache[i]
else:
stick_piece = beta(1, self.alpha).rvs() * remaining
self.total_stick_used += stick_piece
self.weights.append(stick_piece)
new_value = self.base_measure()
self.cache.append(new_value)
return new_value
@staticmethod
def roll_die(weights):
if weights:
return choice(range(len(weights)), p=weights)
else:
return None
"""
Explanation: Recall that a sample from a Dirichlet process is a distribution that approximates (but varies from) the base distribution. In this case, a sample from the Dirichlet process will be a distribution over topics that varies from the uniform distribution we provided as a base. If we use the stick-breaking metaphor, we are effectively breaking a stick one time and the size of each portion corresponds to the proportion of a topic in the document.
To construct a sample from the DP, we need to again define our DP class:
End of explanation
"""
topic_distribution = DirichletProcessSample(base_measure=base_distribution,
alpha=topic_dirichlet_parameter)
"""
Explanation: For each document, we will draw a topic distribution from the Dirichlet process:
End of explanation
"""
for topic, count in Counter([tuple(topic_distribution()) for _ in range(10000)]).most_common():
print "count:", count, "topic:", [round(prob, 2) for prob in topic]
"""
Explanation: A sample from this topic distribution is a distribution over terms. However, unlike our base distribution which returns each term distribution with equal probability, the topics will be unevenly weighted.
End of explanation
"""
topic_index = defaultdict(list)
documents = defaultdict(list)
for doc in range(num_documents):
topic_distribution_rvs = DirichletProcessSample(base_measure=base_distribution,
alpha=topic_dirichlet_parameter)
document_length = poisson(mean_document_length).rvs()
for word in range(document_length):
topic_distribution = topic_distribution_rvs()
topic_index[doc].append(tuple(topic_distribution))
documents[doc].append(choice(vocabulary, p=topic_distribution))
"""
Explanation: To generate each word in the document, we draw a sample topic from the topic distribution, and then a term from the term distribution (topic).
End of explanation
"""
for doc in documents.values():
print doc
"""
Explanation: Here are the documents we generated:
End of explanation
"""
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
print "Doc:", i
for topic, count in doc:
print 5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic]
"""
Explanation: We can see how each topic (term-distribution) is distributed across the documents:
End of explanation
"""
term_dirichlet_vector = num_terms * [term_dirichlet_parameter]
base_distribution = lambda: dirichlet(term_dirichlet_vector).rvs(size=1)[0]
base_dp_parameter = 10
base_dp = DirichletProcessSample(base_distribution, alpha=base_dp_parameter)
"""
Explanation: To recap: for each document we draw a sample from a Dirichlet Process. The base distribution for the Dirichlet process is a categorical distribution over term distributions; we can think of the base distribution as an $n$-sided die where $n$ is the number of topics and each side of the die is a distribution over terms for that topic. By sampling from the Dirichlet process, we are effectively reweighting the sides of the die (changing the distribution of the topics).
For each word in the document, we draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word.
Given this formulation, we might ask if we can roll an infinite-sided die to draw from an unbounded number of topics (term distributions). We can do exactly this with a hierarchical Dirichlet process. Instead of the base distribution of our Dirichlet process being a finite distribution over topics (term distributions), we will instead make it an infinite distribution over topics (term distributions) by using yet another Dirichlet process! This base Dirichlet process will have as its base distribution a Dirichlet distribution over terms.
We will again draw a sample from a Dirichlet process for each document. The base distribution for the Dirichlet process is itself a Dirichlet process whose base distribution is a Dirichlet distribution over terms. (Try saying that five times fast.) We can think of this as a countably infinite die where each side is a distribution over terms for that topic. The sample we draw is a topic (distribution over terms).
For each word in the document, we will draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word.
These last few paragraphs are confusing! Let's illustrate with code.
End of explanation
"""
nested_dp_parameter = 10
topic_index = defaultdict(list)
documents = defaultdict(list)
for doc in range(num_documents):
topic_distribution_rvs = DirichletProcessSample(base_measure=base_dp,
alpha=nested_dp_parameter)
document_length = poisson(mean_document_length).rvs()
for word in range(document_length):
topic_distribution = topic_distribution_rvs()
topic_index[doc].append(tuple(topic_distribution))
documents[doc].append(choice(vocabulary, p=topic_distribution))
"""
Explanation: This sample from the base Dirichlet process is our infinite sided die. It is a probability distribution over a countable infinite number of topics.
The fact that our die is countably infinite is important. The sampler base_distribution draws topics (term-distributions) from an uncountable set. If we used this as the base distribution of the Dirichlet process below each document would be constructed from a completely unique set of topics. By feeding base_distribution into a Dirichlet Process (stochastic memoizer), we allow the topics to be shared across documents.
In other words, base_distribution will never return the same topic twice; however, every topic sampled from base_dp would be sampled an infinite number of times (if we sampled from base_dp forever). At the same time, base_dp will also return an infinite number of topics. In our formulation of the LDA sampler above, our base distribution only ever returned a finite number of topics (num_topics); there is no num_topics parameter here.
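The atom-reuse property is easy to demonstrate with a self-contained stick-breaking sampler (a condensed version of the DirichletProcessSample class above): draws made directly from a continuous base measure are all distinct, while draws routed through the DP repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_sampler(base_draw, alpha):
    """Lazy stick-breaking: return a function that samples from a DP
    with base measure base_draw and concentration alpha."""
    atoms, weights = [], []
    used = 0.0
    def draw():
        nonlocal used
        rest = max(0.0, 1.0 - used)
        i = rng.choice(len(weights) + 1, p=weights + [rest])
        if i < len(weights):
            return atoms[i]                      # reuse a memoized atom
        piece = rng.beta(1, alpha) * rest        # break off a new stick piece
        used += piece
        weights.append(piece)
        atoms.append(base_draw())                # draw a brand-new atom
        return atoms[-1]
    return draw

base = lambda: rng.random()      # continuous base: repeats with probability 0
dp = dp_sampler(base, alpha=2.0)

direct_draws = [base() for _ in range(1000)]   # essentially all distinct
dp_draws = [dp() for _ in range(1000)]         # only a handful of atoms
```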
Given this setup, we can generate documents from the hierarchical Dirichlet process with an algorithm that is essentially identical to that of the original latent Dirichlet allocation generative sampler:
End of explanation
"""
for doc in documents.values():
print doc
"""
Explanation: Here are the documents we generated:
End of explanation
"""
for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):
print "Doc:", i
for topic, count in doc:
print 5*" ", "count:", count, "topic:", [round(prob, 2) for prob in topic]
"""
Explanation: And here are the latent topics used:
End of explanation
"""
|
Startupsci/data-science-notebooks | .ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb | mit | # data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
"""
Explanation: Titanic Data Science Solutions
This notebook is companion to the book Data Science Solutions. The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.
There are several excellent notebooks to study data science competition entries. However many will skip some of the explanation on how the solution is developed as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.
Workflow stages
The competition solution workflow goes through seven stages described in the Data Science Solutions book's sample chapter online here.
Question or problem definition.
Acquire training and testing data.
Wrangle, prepare, cleanse the data.
Analyze, identify patterns, and explore the data.
Model, predict and solve the problem.
Visualize, report, and present the problem solving steps and final solution.
Supply or submit the results.
The workflow indicates general sequence of how each stage may follow the other. However there are use cases with exceptions.
We may combine multiple workflow stages. We may analyze by visualizing data.
Perform a stage earlier than indicated. We may analyze data before and after wrangling.
Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.
Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition.
Question and problem definition
Competition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is described here at Kaggle.
Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine, based on a given test dataset not containing the survival information, whether these passengers in the test dataset survived or not?
We may also want to develop some early understanding about the domain of our problem. This is described on the Kaggle competition description page here. Here are the highlights to note.
On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. That translates to a 32% survival rate.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.
Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
Workflow goals
The data science solutions workflow solves for seven major goals.
Classifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
Correlating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking is there a correlation among a feature and solution goal? As the feature values change does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
Converting. For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.
Completing. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
Correcting. We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.
Creating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
Charting. How to select the right visualization plots and charts depending on nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.
Refactor Release 2017-Jan-29
We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting notebook from Jupyter kernel (2.7) to Kaggle kernel (3.5), and (c) review of few more best practice kernels.
User comments
Combine training and test data for certain operations like converting titles across dataset to numerical values. (thanks @Sharan Naribole)
Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)
Correctly interpreting logistic regresssion coefficients. (thanks @Reinhard)
Porting issues
Specify plot dimensions, bring legend into plot.
Best practices
Performing feature correlation analysis early in the project.
Using multiple plots instead of overlays for readability.
End of explanation
"""
train_df = pd.read_csv('data/titanic-kaggle/train.csv')
test_df = pd.read_csv('data/titanic-kaggle/test.csv')
combine = [train_df, test_df]
"""
Explanation: Acquire data
The Python Pandas packages helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
End of explanation
"""
print(train_df.columns.values)
"""
Explanation: Analyze by describing data
Pandas also helps describe the datasets answering following questions early in our project.
Which features are available in the dataset?
Noting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle data page here.
End of explanation
"""
# preview the data
train_df.head()
"""
Explanation: Which features are categorical?
These values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.
Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.
Which features are numerical?
Which features are numerical? These values change from sample to sample. Within numerical features are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.
Continuous: Age, Fare. Discrete: SibSp, Parch.
End of explanation
"""
train_df.tail()
"""
Explanation: Which features are mixed data types?
Numerical, alphanumeric data within the same feature. These are candidates for the correcting goal.
Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.
Which features may contain errors or typos?
This is harder to review for a large dataset, however reviewing a few samples from a smaller dataset may just tell us outright, which features may require correcting.
Name feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.
End of explanation
"""
train_df.info()
print('_'*40)
test_df.info()
"""
Explanation: Which features contain blank, null or empty values?
These will require correcting.
Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.
Cabin > Age are incomplete in case of test dataset.
What are the data types for various features?
Helping us during converting goal.
Seven features are integer or floats. Six in case of test dataset.
Five features are strings (object).
End of explanation
"""
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
"""
Explanation: What is the distribution of numerical feature values across the samples?
This helps us determine, among other early insights, how representative is the training dataset of the actual problem domain.
Total samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).
Survived is a categorical feature with 0 or 1 values.
Around 38% samples survived representative of the actual survival rate at 32%.
Most passengers (> 75%) did not travel with parents or children.
Nearly 30% of the passengers had siblings and/or spouse aboard.
Fares varied significantly with few passengers (<1%) paying as high as $512.
Few elderly passengers (<1%) within age range 65-80.
End of explanation
"""
train_df.describe(include=['O'])
"""
Explanation: What is the distribution of categorical features?
Names are unique across the dataset (count=unique=891)
Sex variable has two possible values with 65% male (top=male, freq=577/count=891).
Cabin values have several duplicates across samples. Alternatively several passengers shared a cabin.
Embarked takes three possible values. S port used by most passengers (top=S)
Ticket feature has high ratio (22%) of duplicate values (unique=681).
End of explanation
"""
pivot = train_df[['Pclass', 'Survived']]
pivot = pivot.groupby(['Pclass'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
pivot = train_df[["Sex", "Survived"]]
pivot = pivot.groupby(['Sex'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
pivot = train_df[["SibSp", "Survived"]]
pivot = pivot.groupby(['SibSp'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
pivot = train_df[["Parch", "Survived"]]
pivot = pivot.groupby(['Parch'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
"""
Explanation: Assumptions based on data analysis
We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.
Correlating.
We want to know how well does each feature correlate with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
Completing.
We may want to complete Age feature as it is definitely correlated to survival.
We may want to complete the Embarked feature as it may also correlate with survival or another important feature.
Correcting.
Ticket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.
Cabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.
PassengerId may be dropped from training dataset as it does not contribute to survival.
The Name feature is relatively non-standard and may not contribute directly to survival, so it may be dropped.
Creating.
We may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.
We may want to engineer the Name feature to extract Title as a new feature.
We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature.
We may also want to create a Fare range feature if it helps our analysis.
Classifying.
We may also add to our assumptions based on the problem description noted earlier.
Women (Sex=female) were more likely to have survived.
Children (Age<?) were more likely to have survived.
The upper-class passengers (Pclass=1) were more likely to have survived.
Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.
Pclass We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying #3). We decide to include this feature in our model.
Sex We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying #1).
SibSp and Parch These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating #1).
End of explanation
"""
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
"""
Explanation: Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (Survived).
A histogram chart is useful for analyzing continuous numerical variables like Age where banding or ranges will help identify useful patterns. The histogram can indicate distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have better survival rate?)
Note that the y-axis in these histogram visualizations represents the count of samples or passengers, while the x-axis shows the Age values.
Observations.
Infants (Age <=4) had high survival rate.
Oldest passengers (Age = 80) survived.
Large number of 15-25 year olds did not survive.
Most passengers are in 15-35 age range.
Decisions.
This simple analysis confirms our assumptions as decisions for subsequent workflow stages.
We should consider Age (our assumption classifying #2) in our model training.
Complete the Age feature for null values (completing #1).
We should band age groups (creating #3).
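Banding a continuous feature is typically done with pd.cut; here is a small sketch on hypothetical ages (the band edges the notebook eventually uses may differ):

```python
import pandas as pd

ages = pd.Series([2, 17, 24, 33, 48, 66, 80], name='Age')

# Split the continuous Age range into 5 equal-width bands
age_bands = pd.cut(ages, 5)
```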
End of explanation
"""
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived',
row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
"""
Explanation: Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
Observations.
Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption #2.
Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption #2.
Most passengers in Pclass=1 survived. Confirms our classifying assumption #3.
Pclass varies in terms of Age distribution of passengers.
Decisions.
Consider Pclass for model training.
End of explanation
"""
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
"""
Explanation: Correlating categorical features
Now we can correlate categorical features with our solution goal.
Observations.
Female passengers had much better survival rate than males. Confirms classifying (#1).
Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.
Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2).
Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1).
Decisions.
Add Sex feature to model training.
Complete and add Embarked feature to model training.
End of explanation
"""
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked',
col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
"""
Explanation: Correlating categorical and numerical features
We may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).
Observations.
Higher fare paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges.
Port of embarkation correlates with survival rates. Confirms correlating (#1) and completing (#2).
Decisions.
Consider banding Fare feature.
End of explanation
"""
print("Before", train_df.shape, test_df.shape,
combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
"""
Explanation: Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features.
Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
End of explanation
"""
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
"""
Explanation: Creating new feature extracting from existing
We want to analyze if Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.
In the following code we extract the Title feature using regular expressions. The RegEx pattern ` ([A-Za-z]+)\.` matches the first word which ends with a dot character within the Name feature. The expand=False flag returns a Series rather than a DataFrame.
Observations.
When we plot Title, Age, and Survived, we note the following observations.
Most titles band Age groups accurately. For example: Master title has Age mean of 5 years.
Survival among Title Age bands varies slightly.
Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).
Decision.
We decide to retain the new Title feature for model training.
End of explanation
"""
for dataset in combine:
dataset['Title'] = dataset['Title'].replace([
'Lady', 'Countess','Capt', 'Col',
'Don', 'Dr', 'Major', 'Rev', 'Sir',
'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
pivot = train_df[['Title', 'Survived']]
pivot = pivot.groupby(['Title'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
"""
Explanation: We can replace many titles with a more common name or classify them as Rare.
End of explanation
"""
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
"""
Explanation: We can convert the categorical titles to ordinal.
End of explanation
"""
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
"""
Explanation: Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
End of explanation
"""
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
"""
Explanation: Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting Sex feature to a new feature called Gender where female=1 and male=0.
End of explanation
"""
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass',
col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
"""
Explanation: Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Age feature.
We can consider three methods to complete a numerical continuous feature.
A simple way is to generate random numbers between (mean - standard deviation) and (mean + standard deviation).
A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Sex, and Pclass. Guess Age values using median values for Age across sets of Pclass and Sex feature combinations. So, median Age for Pclass=1 and Sex=0, Pclass=1 and Sex=1, and so on...
Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between (mean - standard deviation) and (mean + standard deviation), based on sets of Pclass and Sex combinations.
Methods 1 and 3 will introduce random noise into our models, so the results of multiple executions might vary. We will prefer method 2.
End of explanation
"""
guess_ages = np.zeros((2,3))
guess_ages
"""
Explanation: Let us start by preparing an empty array to contain guessed Age values based on Pclass x Sex combinations.
End of explanation
"""
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
"""
Explanation: Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
End of explanation
"""
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
pivot = train_df[['AgeBand', 'Survived']]
pivot = pivot.groupby(['AgeBand'], as_index=False).mean()
pivot.sort_values(by='AgeBand', ascending=True)
"""
Explanation: Let us create Age bands and determine correlations with Survived.
End of explanation
"""
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
"""
Explanation: Let us replace Age with ordinals based on these bands.
End of explanation
"""
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
"""
Explanation: We can now remove the AgeBand feature.
End of explanation
"""
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
pivot = train_df[['FamilySize', 'Survived']]
pivot = pivot.groupby(['FamilySize'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
"""
Explanation: Create new feature combining existing features
We can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
End of explanation
"""
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
"""
Explanation: We can create another feature called IsAlone based on FamilySize feature we just created.
End of explanation
"""
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
"""
Explanation: Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
End of explanation
"""
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
"""
Explanation: We can also create an artificial feature combining Pclass and Age.
End of explanation
"""
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
pivot = train_df[['Embarked', 'Survived']]
pivot = pivot.groupby(['Embarked'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
"""
Explanation: Completing a categorical feature
Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
End of explanation
"""
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map(
{'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
"""
Explanation: Converting categorical feature to numeric
We can now convert the Embarked feature to a new numeric feature.
End of explanation
"""
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
"""
Explanation: Quick completing and converting a numeric feature
We can now complete the Fare feature for the single missing value in the test dataset using the median of the feature. We do this in a single line of code.
Note that we are not creating an intermediate new feature or doing any further correlation analysis to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for model algorithms to operate on non-null values.
We may also want to round off the fare to two decimals as it represents currency.
End of explanation
"""
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
pivot = train_df[['FareBand', 'Survived']]
pivot = pivot.groupby(['FareBand'], as_index=False).mean()
pivot.sort_values(by='FareBand', ascending=True)
"""
Explanation: We can now create a FareBand temporary or reference feature.
End of explanation
"""
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) &
(dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) &
(dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
"""
Explanation: Convert the Fare feature to ordinal values based on the FareBand.
End of explanation
"""
test_df.head(10)
"""
Explanation: And the test dataset.
End of explanation
"""
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
"""
Explanation: Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and the other variables or features (Gender, Age, Port...). We are also performing a category of machine learning called supervised learning, as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression - we can narrow down our choice of models to a few. These include:
Logistic Regression
KNN or k-Nearest Neighbors
Support Vector Machines
Naive Bayes classifier
Decision Tree
Random Forest
Perceptron
Artificial neural network
RVM or Relevance Vector Machine
End of explanation
"""
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
"""
Explanation: Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.
Note the confidence score generated by the model based on our training dataset.
End of explanation
"""
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
"""
Explanation: We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
Sex has the highest positive coefficient, implying that as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.
Inversely as Pclass increases, probability of Survived=1 decreases the most.
Likewise, Age*Class is a good artificial feature to model, as it has the second highest negative correlation with Survived.
So is Title, which has the second highest positive correlation.
End of explanation
"""
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
"""
Explanation: Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.
Note that the model generates a confidence score which is higher than the Logistic Regression model.
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
"""
Explanation: In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.
KNN confidence score is better than Logistic Regression but worse than SVM.
End of explanation
"""
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
"""
Explanation: In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.
The model generated confidence score is the lowest among the models evaluated so far.
End of explanation
"""
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
"""
Explanation: The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
End of explanation
"""
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
"""
Explanation: This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far.
End of explanation
"""
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
"""
Explanation: The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
End of explanation
"""
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('data/titanic-kaggle/submission.csv', index=False)
"""
Explanation: Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
End of explanation
"""
|
martinggww/lucasenlights | MachineLearning/DataScience-Python3/CovarianceCorrelation.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
from pylab import *
def de_mean(x):
xmean = mean(x)
return [xi - xmean for xi in x]
def covariance(x, y):
n = len(x)
return dot(de_mean(x), de_mean(y)) / (n-1)
pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = np.random.normal(50.0, 10.0, 1000)
scatter(pageSpeeds, purchaseAmount)
covariance(pageSpeeds, purchaseAmount)
"""
Explanation: Covariance and Correlation
Covariance measures how two variables vary in tandem from their means.
For example, let's say we work for an e-commerce company, and they are interested in finding a correlation between page speed (how fast each web page renders for a customer) and how much a customer spends.
numpy offers covariance methods, but we'll do it the "hard way" to show what happens under the hood. Basically we treat each variable as a vector of deviations from the mean, and compute the "dot product" of both vectors. Geometrically this can be thought of as the angle between the two vectors in a high-dimensional space, but you can just think of it as a measure of similarity between the two variables.
First, let's just make page speed and purchase amount totally random and independent of each other; a very small covariance will result as there is no real correlation:
End of explanation
"""
purchaseAmount = np.random.normal(50.0, 10.0, 1000) / pageSpeeds
scatter(pageSpeeds, purchaseAmount)
covariance(pageSpeeds, purchaseAmount)
"""
Explanation: Now we'll make our fabricated purchase amounts an actual function of page speed, making a very real correlation. The negative value indicates an inverse relationship; pages that render in less time result in more money spent:
End of explanation
"""
def correlation(x, y):
    # Use ddof=1 so the standard deviations match the (n-1) divisor in
    # covariance(); otherwise the result is off by a factor of n/(n-1).
    stddevx = x.std(ddof=1)
    stddevy = y.std(ddof=1)
    return covariance(x,y) / stddevx / stddevy #In real life you'd check for divide by zero here
correlation(pageSpeeds, purchaseAmount)
"""
Explanation: But, what does this value mean? Covariance is sensitive to the units used in the variables, which makes it difficult to interpret. Correlation normalizes everything by their standard deviations, giving you an easier to understand value that ranges from -1 (for a perfect inverse correlation) to 1 (for a perfect positive correlation):
End of explanation
"""
np.corrcoef(pageSpeeds, purchaseAmount)
"""
Explanation: numpy can do all this for you with numpy.corrcoef. It returns a matrix of the correlation coefficients between every combination of the arrays passed in:
End of explanation
"""
purchaseAmount = 100 - pageSpeeds * 3
scatter(pageSpeeds, purchaseAmount)
correlation (pageSpeeds, purchaseAmount)
"""
Explanation: (It doesn't match exactly just due to the math precision available on a computer.)
We can force a perfect correlation by fabricating a totally linear relationship (again, it's not exactly -1 just due to precision errors, but it's close enough to tell us there's a really good correlation here):
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/uhh/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from cvxpylayers.tensorflow.cvxpylayer import CvxpyLayer
"""
Explanation: Data poisoning attack
In this notebook, we use a convex optimization layer to perform a data poisoning attack; i.e., we show how to perturb the data used to train a logistic regression classifier so as to maximally increase the test loss. This example is also presented in section 6.1 of the paper Differentiable convex optimization layers.
End of explanation
"""
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
tf.random.set_seed(0)
np.random.seed(0)
n = 2
N = 60
X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
tf.constant, [Xtrain, Xtest, ytrain, ytest])
m = Xtrain.shape[0]
lambda1_tf = tf.constant([[0.1]], dtype=tf.float64)
lambda2_tf = tf.constant([[0.1]], dtype=tf.float64)
a = cp.Variable((n, 1))
b = cp.Variable((1, 1))
lambda1 = cp.Parameter((1, 1), nonneg=True)
lambda2 = cp.Parameter((1, 1), nonneg=True)
X = cp.Parameter((m, n))
Y = ytrain.numpy()[:, np.newaxis]
log_likelihood = (1. / m) * cp.sum(
cp.multiply(Y, X @ a + b) -
cp.log_sum_exp(cp.hstack([np.zeros((m, 1)), X @ a + b]).T, axis=0,
keepdims=True).T
)
regularization = - lambda1 * cp.norm(a, 1) - lambda2 * cp.sum_squares(a)
prob = cp.Problem(cp.Maximize(log_likelihood + regularization))
fit_logreg = CvxpyLayer(prob, [X, lambda1, lambda2], [a, b])
"""
Explanation: We are given training data $(x_i, y_i)_{i=1}^{N}$,
where $x_i\in\mathbf{R}^n$ are feature vectors and $y_i\in\{0,1\}$ are the labels.
Suppose we fit a model for this classification problem by solving
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N \ell(\theta; x_i, y_i) + r(\theta),
\end{array}
\label{eq:trainlinear}
\end{equation}
where the loss function $\ell(\theta; x_i, y_i)$ is convex in $\theta \in \mathbf{R}^n$ and $r(\theta)$ is a convex
regularizer. We hope that the test loss $\mathcal{L}^{\mathrm{test}}(\theta) =
\frac{1}{M}\sum_{i=1}^M \ell(\theta; \tilde x_i, \tilde y_i)$ is small, where
$(\tilde x_i, \tilde y_i)_{i=1}^{M}$ is our test set. In this example, we use the logistic loss
\begin{equation}
\ell(\theta; x_i, y_i) = \log(1 + \exp(\beta^Tx_i + b)) - y_i(\beta^Tx_i + b)
\end{equation}
with elastic net regularization
\begin{equation}
r(\theta) = 0.1\|\beta\|_1 + 0.1\|\beta\|_2^2.
\end{equation}
End of explanation
"""
from sklearn.linear_model import LogisticRegression
loss = tf.keras.losses.BinaryCrossentropy()
with tf.GradientTape() as tape:
tape.watch(Xtrain)
# Apply the layer
slope, intercept = fit_logreg(Xtrain, lambda1_tf, lambda2_tf)
# 30 is scale factor so visualization is pretty
test_loss = 30 * loss(ytest, Xtest @ slope + intercept)
# Compute the gradient of the test loss with respect to the training data
Xtrain_grad = tape.gradient(test_loss, Xtrain)
"""
Explanation: Assume that our training data is subject to a data poisoning attack,
before it is supplied to us. The adversary has full knowledge of our modeling
choice, meaning that they know the form of the optimization problem above, and seeks
to perturb the data to maximally increase our loss on the test
set, to which they also have access. The adversary is permitted to apply an
additive perturbation $\delta_i \in \mathbf{R}^n$ to each of the training points $x_i$,
with the perturbations satisfying $\|\delta_i\|_\infty \leq 0.01$.
Let $\theta^\star$ be optimal.
The gradient of
the test loss with respect to a training data point, $\nabla_{x_i}
\mathcal{L}^{\mathrm{test}}(\theta^\star)$, gives the direction
in which the point should be moved to achieve the greatest
increase in test loss. Hence, one reasonable adversarial policy is to set $x_i
:= x_i +
.01\mathrm{sign}(\nabla_{x_i}\mathcal{L}^{\mathrm{test}}(\theta^\star))$. The
quantity $0.01\sum_{i=1}^N \|\nabla_{x_i}
\mathcal{L}^{\mathrm{test}}(\theta^\star)\|_1$ is the predicted increase in
our test loss due to the poisoning.
End of explanation
"""
lr = LogisticRegression(solver='lbfgs')
lr.fit(Xtest.numpy(), ytest.numpy())
beta_train = slope.numpy().flatten()
beta_test = lr.coef_.flatten()
b_train = intercept[0, 0].numpy()
b_test = lr.intercept_[0]
hyperplane = lambda x, beta, b: - (b + beta[0] * x) / beta[1]
Xtrain_np = Xtrain.numpy()
Xtrain_grad_np = Xtrain_grad.numpy()
ytrain_np = ytrain.numpy().astype(bool)
plt.figure()
plt.scatter(Xtrain_np[ytrain_np, 0], Xtrain_np[ytrain_np, 1], s=25)
plt.scatter(Xtrain_np[~ytrain_np, 0], Xtrain_np[~ytrain_np, 1], s=25)
for i in range(m):
plt.arrow(Xtrain_np[i, 0], Xtrain_np[i, 1],
Xtrain_grad_np[i, 0], Xtrain_grad_np[i, 1], color='black')
plt.xlim(-8, 8)
plt.ylim(-8, 8)
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_train, b_train)
for x in np.linspace(-8, 8, 100)], color='red', label='train')
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_test, b_test)
for x in np.linspace(-8, 8, 100)], color='blue', label='test')
plt.legend()
plt.show()
"""
Explanation: Below, we plot the gradient of the test loss with respect to the training data points. The blue and orange points are training data, belonging to different classes. The red line is the hyperplane learned by fitting the model, while the blue line is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set.
End of explanation
"""
import typing
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
import ott
from ott.tools.gaussian_mixture import gaussian_mixture
from ott.tools.gaussian_mixture import gaussian_mixture_pair
from ott.tools.gaussian_mixture import probabilities
from ott.tools.gaussian_mixture import fit_gmm
from ott.tools.gaussian_mixture import fit_gmm_pair
def get_cov_ellipse(mean, cov, n_sds=2, **kwargs):
"""Get a matplotlib Ellipse patch for a given mean and covariance.
Adapted from
https://scipython.com/book/chapter-7-matplotlib/examples/bmi-data-with-confidence-ellipses/
"""
# Find and sort eigenvalues and eigenvectors into descending order
eigvals, eigvecs = jnp.linalg.eigh(cov)
order = eigvals.argsort()[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
# The anti-clockwise angle to rotate our ellipse by
vx, vy = eigvecs[:,0][0], eigvecs[:,0][1]
theta = np.arctan2(vy, vx)
# Width and height of ellipse to draw
width, height = 2 * n_sds * np.sqrt(eigvals)
return matplotlib.patches.Ellipse(xy=mean, width=width, height=height,
angle=np.degrees(theta), **kwargs)
key = jax.random.PRNGKey(0)
"""
Explanation: Fitting pairs of coupled GMMs
Several papers have recently proposed a Wasserstein-like distance measure between Gaussian mixture models ([1], [2], [3], [4], [5]). The idea is that
(1) there is an analytic solution for the Wasserstein distance between two Gaussians, and
(2) if one limits the set of allowed couplings between GMMs to the space of Gaussian mixtures, one can define a Wasserstein-like distance between a pair of GMMs in terms of the Wasserstein distance between their components.
[1] Y. Chen, T. T. Georgiou, and A. Tannenbaum, Optimal transport for Gaussian mixture models, arXiv, (2017).
[2] Y. Chen, T. T. Georgiou, and A. Tannenbaum, Optimal Transport for Gaussian Mixture Models, IEEE Access, 7 (2019), pp. 6269–6278, https://doi.org/10.1109/ACCESS.2018.2889838.
[3] Y. Chen, J. Ye, and J. Li, A distance for HMMS based on aggregated Wasserstein metric and state registration, arXiv, (2016).
[4] Y. Chen, J. Ye, and J. Li, Aggregated Wasserstein Distance and State Registration for Hidden Markov Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, (2019).
[5] J. Delon, A. Desolneux. A Wasserstein-type distance in the space of Gaussian Mixture Models, SIAM Journal on Imaging Sciences, Society of Industrial and Applied Mathematics, 2020, 13 (2), pp. 936-970. hal-02178204v4
In [5], the distance $MW_2$ between two GMMs, $\mu_0$ and $\mu_1$, is defined as follows:
$$MW_2^2(\mu_0, \mu_1) = \inf_{\gamma\in \Pi(\mu_0, \mu_1) \cap GMM_{2d}(\infty)} \int_{\mathbb{R}^d\times \mathbb{R}^d} \|y_0-y_1\|^2 d\gamma(y_0, y_1)$$
where $\Pi(\mu_0, \mu_1)$ is the set of probability measures on $(\mathbb{R}^d)^2$ having $\mu_0$ and $\mu_1$ as marginals, and $GMM_d(K)$ is the set of Gaussian mixtures in $\mathbb{R}^d$ with fewer than $K$ components (see (4.1)).
One appealing thing about this distance is that it can be obtained by minimizing the sum,
$$MW_2^2(\mu_0, \mu_1) = \min_{w \in \Pi(\pi_0, \pi_1)} \sum_{k,l} w_{kl} W_2^2(\mu_0^k, \mu_1^l)$$
where here $\Pi(\pi_0, \pi_1)$ is the subset of the simplex $\Gamma_{K_0, K_1}$ with marginals $\pi_0$ and $\pi_1$ and $W^2_2(\mu_0^k, \mu_1^l)$ is the Wasserstein distance between component $k$ of $\mu_0$ and component $l$ of $\mu_1$ (see (4.4)).
We can obtain a regularized solution to this minimization problem by applying the Sinkhorn algorithm with the Bures cost function.
[5] suggests an application of $MW_2$: we can approximate an optimal transport map between two point clouds by simultaneously fitting a GMM to each point cloud and minimizing the $MW_2$ distance between the fitted GMMs (see section 6). The approach scales well to large point clouds since the Sinkhorn algorithm is applied only to the mixture components rather than to individual points. The resulting couplings are easy to interpret since they involve relatively small numbers of components, and the transport maps are mixtures of piecewise linear maps.
Here we demonstrate the approach on some synthetic data.
End of explanation
"""
mean_generator0 = jnp.array([[2., -1.],
[-2., 0.],
[4., 3.]])
cov_generator0 = 3.*jnp.array([[[0.2, 0.], [0., 0.1]],
[[0.6, 0.], [0., 0.3]],
[[0.5, -0.4], [-0.4, 0.5]]])
weights_generator0 = jnp.array([0.2, 0.2, 0.6])
gmm_generator0 = gaussian_mixture.GaussianMixture.from_mean_cov_component_weights(
mean=mean_generator0,
cov=cov_generator0,
component_weights=weights_generator0,
)
def rot(m, theta):
# left multiply m by a theta degree rotation matrix
theta_rad = theta * 2. * np.pi / 360.
m_rot = jnp.array([[jnp.cos(theta_rad), -jnp.sin(theta_rad)],
[jnp.sin(theta_rad), jnp.cos(theta_rad)]])
return jnp.matmul(m_rot, m)
# shift the means to the right by varying amounts
mean_generator1 = mean_generator0 + jnp.array([[1., -0.5],
[-1., -1.],
[-1., 0.]])
# rotate the covariances a bit
cov_generator1 = jnp.stack([rot(cov_generator0[0, :], 5),
rot(cov_generator0[1, :], -5),
rot(cov_generator0[2, :], -10)], axis=0)
weights_generator1 = jnp.array([0.4, 0.4, 0.2])
gmm_generator1 = gaussian_mixture.GaussianMixture.from_mean_cov_component_weights(
mean=mean_generator1,
cov=cov_generator1,
component_weights=weights_generator1,
)
N = 10000
key, subkey0, subkey1 = jax.random.split(key, num=3)
samples_gmm0 = gmm_generator0.sample(key=subkey0, size=N)
samples_gmm1 = gmm_generator1.sample(key=subkey1, size=N)
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12, 6))
axes[0].scatter(samples_gmm0[:, 0], samples_gmm0[:, 1], marker='.', alpha=0.25)
axes[0].set_title('Samples from generating GMM 0')
axes[1].scatter(samples_gmm1[:, 0], samples_gmm1[:, 1], marker='.', alpha=0.25)
axes[1].set_title('Samples from generating GMM 1')
plt.show()
"""
Explanation: Generate synthetic data
Construct 2 GMMs that we'll use to generate some samples.
The two GMMs have small differences in their means, covariances, and in their weights.
End of explanation
"""
# As a starting point for our optimization, we pool the two sets of samples
# and fit a single GMM to the combined samples
samples = jnp.concatenate([samples_gmm0, samples_gmm1])
key, subkey = jax.random.split(key)
gmm_init = fit_gmm.initialize(key=subkey, points=samples, point_weights=None, n_components=3, verbose=True)
pooled_gmm = fit_gmm.fit_model_em(gmm=gmm_init, points=samples, point_weights=None, steps=20)
# Now we use EM to fit a GMM to each set of samples while penalizing the
# distance between the pair of GMMs
%%time
EPSILON = 1.e-2 # regularization weight for the Sinkhorn algorithm
WEIGHT_TRANSPORT = 0.01 # weight for the MW2 distance penalty between the GMMs
pair_init = gaussian_mixture_pair.GaussianMixturePair(
gmm0=pooled_gmm, gmm1=pooled_gmm, epsilon=EPSILON, tau=1.)
fit_model_em_fn = fit_gmm_pair.get_fit_model_em_fn(
weight_transport=WEIGHT_TRANSPORT,
jit=True)
pair, loss = fit_model_em_fn(pair=pair_init,
points0=samples_gmm0,
points1=samples_gmm1,
point_weights0=None,
point_weights1=None,
em_steps=30,
m_steps=20,
verbose=True)
colors = ['red', 'green', 'blue']
fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharex=True, sharey=True)
for i, (gmm, samples) in enumerate([(pair.gmm0, samples_gmm0), (pair.gmm1, samples_gmm1)]):
assignment_prob = gmm.get_log_component_posterior(samples)
assignment = jnp.argmax(assignment_prob, axis=-1)
for j, component in enumerate(gmm.components()):
subset = assignment == j
axes[i].scatter(samples[subset, 0], samples[subset, 1], marker='.', alpha=0.01, color=colors[j], label=j)
ellipse = get_cov_ellipse(component.loc, component.covariance(), n_sds=2, ec=colors[j], fill=False, lw=2)
axes[i].add_artist(ellipse)
legend = axes[i].legend()
for lh in legend.legendHandles:
lh.set_alpha(1)
axes[i].set_title(f'Fitted GMM {i} and samples')
plt.show()
print('Fitted GMM 0 masses', pair.gmm0.component_weights)
print('Fitted GMM 1 masses', pair.gmm1.component_weights)
print('Mass transfer, rows=source, columns=destination')
cost_matrix = pair.get_cost_matrix()
sinkhorn_output = pair.get_sinkhorn(cost_matrix=cost_matrix)
print(pair.get_normalized_sinkhorn_coupling(sinkhorn_output=sinkhorn_output))
"""
Explanation: Fit a pair of coupled GMMs
End of explanation
"""
#@title x log x - x + 1 { display-mode: "form" }
x = np.arange(0, 4, 0.1)
y = x * jnp.log(x) - x + 1
y = y.at[0].set(1.)
plt.plot(x, y)
plt.title('y = x log x - x + 1')
plt.show()
"""
Explanation: Reweighting components
In the approach above, we can only change the weights of components by transferring mass between them. In some settings, allowing reweightings of components can lead to couplings that are easier to interpret. For example, in a biological application in which points correspond to a population of featurized representations of organisms, mixture components might capture subpopulations and a component reweighting might correspond to a prevalence change for the subpopulation.
We can generalize the approach above to allow component reweightings by using an unbalanced variant of MW2 as our measure of distance between GMMs.
Recall that
$$MW_2^2(\mu_0, \mu_1) = \min_{w \in \Pi(\pi_0, \pi_1)} \sum_{k,l} w_{kl} W_2^2(\mu_0^k, \mu_1^l)$$
We use the Sinkhorn algorithm to obtain a solution to a regularized version of the above minimization:
$$MW_2^2(\mu_0, \mu_1) \approx \min_{w \in \Pi(\pi_0, \pi_1)} \sum_{k,l} w_{kl} W_2^2(\mu_0^k, \mu_1^l) + \epsilon KL(w, a^T b)$$
An unbalanced Wasserstein divergence for GMMs
We define $UW_2^2$, an unbalanced version of $MW_2^2$, as follows:
$$UW_2^2(\mu_0, \mu_1) = \min_{w_{k,l} \geq 0} \sum_{k,l} w_{kl} W_2^2(\mu_0^k, \mu_1^l) + \rho KL(w_{k \cdot}||\pi_0^k) + \rho KL(w_{\cdot l}||\pi_1^l)$$
where $KL(f||g)$ is the generalized KL divergence,
$$KL(f||g) = \sum_i f_i \log \frac{f_i}{g_i} - f_i + g_i$$
which does not assume that either $\sum f_i = 1$ or $\sum g_i = 1$.
As above, we add a regularization term to make the problem convex and solve with the unbalanced Sinkhorn algorithm.
Interpreting the results
The coupling matrix $W$ we obtain from the unbalanced Sinkhorn algorithm has marginals that do not necessarily match the component weights of our GMMs, and it's worth looking in detail at an example to see how we might interpret this mismatch.
Marginal mismatch
Suppose we have a pair of 2-component GMMs:
$\mu_0$ with component weights 0.2 and 0.8, and
$\mu_1$ with component weights 0.4 and 0.6.
Suppose the unbalanced Sinkhorn algorithm yields the coupling matrix
$$W = \begin{pmatrix}0.3 & 0.1\\0.2 & 0.4 \end{pmatrix}$$
The first row of the coupling matrix $W$ indicates that 0.4 units of mass flow out of the first component of $\mu_0$, 0.3 units to the first component of $\mu_1$ and 0.1 to the second component of $\mu_1$. However, the first component of $\mu_0$ only has 0.2 units of mass!
Similarly, the first column of $W$ indicates that 0.5 units of mass flow into the first component of $\mu_1$, 0.3 from the first component of $\mu_0$ and 0.2 from the second component of $\mu_0$. Again, while 0.5 units of mass flow in, the first component of $\mu_1$ only has 0.4 units of mass.
Reweighting points
Our interpretation is this: points from $\mu_0$ undergo two reweightings during transport, the first as they leave a component in $\mu_0$ and the second as they enter a component in $\mu_1$. Each of these reweightings has a cost that is reflected in the KL divergence between the marginals of the coupling matrix and the weights of the corresponding GMM components.
Suppose we transport a point with weight 1 from the first component of $\mu_0$ to the first component of $\mu_1$.
We see from the coupling matrix that the first component of $\mu_0$ has mass 0.2 but has an outflow of 0.4. To achieve the indicated outflow, we double the weight of our point as it leaves the first component of $\mu_0$, so now our point has a weight of 2.
We see that the first component of $\mu_1$ has a mass of 0.4 but an inflow of 0.5. To achieve the indicated inflow, we need to decrease the weight of incoming points by a factor of 0.8.
The net effect is that the weight of our point increases by a factor of $2 \times 0.8 = 1.6$
Unnormalized couplings
One point that is worth emphasizing: in the unbalanced case, the coupling matrix we obtain from the Sinkhorn algorithm need not have a total mass of 1!
Let's look at the objective function in more detail to see why this might happen.
Recall that $UW_2^2$ penalizes mismatches between the marginals of the coupling matrix and the GMM component weights via the generalized KL divergence,
$$KL(f||g) = \sum_i f_i \log \frac{f_i}{g_i} - f_i + g_i$$
In the divergence above, $f$ is a marginal of the coupling, which may not sum to 1, and $g$ is the set of weights for a GMM and does sum to 1. Let $p_i = \frac{f_i}{\sum_i f_i} = \frac{f_i}{F}$ be the normalized marginal of the coupling. We have
$$KL(f||g) = \sum_i F p_i \log \frac{F p_i}{g_i} - F p_i + g_i \\
= F \sum_i \left(p_i \log \frac{p_i}{g_i} + p_i \log F \right) - F + 1 \\
= F \sum_i p_i \log \frac{p_i}{g_i} + F \log F - F + 1 \\
= F KL(p||g) + (F \log F - F + 1)$$
Thus, having an unnormalized coupling scales each KL divergence penalty by the total mass of the coupling, $F$, and adds a penalty of the form $F \log F - F + 1$.
In addition, the transport cost for the unnormalized coupling is simply the transport cost for the normalized coupling scaled by the same factor $F$.
The result is that the cost for an unnormalized coupling $W$ that sums to $F$ is $F$ times the cost for the normalized coupling $W/F$ plus $(\epsilon + 2\rho)(F \log F - F + 1)$.
For $F \geq 0$, the function $F \log F - F + 1$ is strictly convex, has a minimum of 0 at 1 and is 1 at 0 and $e$.
End of explanation
"""
%%time
# here we use a larger transport weight because the transport cost is smaller
# (see discussion above)
WEIGHT_TRANSPORT = 0.1
RHO = 1.
TAU = RHO / (RHO + EPSILON)
# Again for our initial model, we will use a GMM fit on the pooled points
pair_init2 = gaussian_mixture_pair.GaussianMixturePair(
gmm0=pooled_gmm, gmm1=pooled_gmm,
epsilon=EPSILON, tau=TAU)
fit_model_em_fn2 = fit_gmm_pair.get_fit_model_em_fn(
weight_transport=WEIGHT_TRANSPORT,
jit=True)
pair2, loss = fit_model_em_fn2(pair=pair_init2,
points0=samples_gmm0,
points1=samples_gmm1,
point_weights0=None,
point_weights1=None,
em_steps=30,
m_steps=20,
verbose=True)
print('Fitted GMM 0 masses', pair2.gmm0.component_weights)
print('Fitted GMM 1 masses', pair2.gmm1.component_weights)
cost_matrix = pair2.get_cost_matrix()
sinkhorn_output = pair2.get_sinkhorn(cost_matrix=cost_matrix)
print('Normalized coupling')
print(pair2.get_normalized_sinkhorn_coupling(sinkhorn_output=sinkhorn_output))
"""
Explanation: We should never get an $F$ larger than 1, since such an $F$ will both increase the cost of the normalized coupling as well as introduce a positive penalty term. If we use the balanced Sinkhorn algorithm, we will always have $F = 1$.
The case of $F \in (0, 1)$ can be interpreted to mean that all points are down-weighted for transport to reduce the overall cost. We can shift the transport and reweighting costs into the normalization penalty, $(\epsilon + 2 \rho)(F \log F - F + 1)$.
The net effect of this flexibility in allocating costs to the normalization penalty term is to bound the total regularized cost to be less than or equal to $(\epsilon + 2 \rho)(F \log F - F + 1) <= (\epsilon + 2 \rho)$, something to consider in setting the various weights used in the overall optimization.
End of explanation
"""
import os, datetime, shutil
import pandas as pd
"""
Explanation: Tutorial: Quick Translation of GTFS to GTFS-PLUS
End of explanation
"""
GTFS_LINK = r"http://admin.gotransitnc.org/sites/default/files/developergtfs/GoRaleigh_GTFS_0.zip"
BASE_DIR = os.getcwd()
NEW_FOLDER = "GoRaleigh_GTFS"
GTFS_LOC = os.path.join(BASE_DIR,NEW_FOLDER)
# Download the file from the URL and unzip
from urllib import urlopen
from zipfile import ZipFile
try:
os.stat(os.path.join(BASE_DIR,NEW_FOLDER))
except:
os.mkdir(os.path.join(BASE_DIR,NEW_FOLDER))
tempzip_filename = os.path.join(BASE_DIR,NEW_FOLDER,"tempgtfs.zip")
zipresp = urlopen(GTFS_LINK)
tempzip = open(tempzip_filename, "wb")
tempzip.write(zipresp.read())
tempzip.close()
zf = ZipFile(tempzip_filename)
zf.extractall(path = os.path.join(BASE_DIR,NEW_FOLDER))
zf.close()
os.remove(tempzip_filename)
"""
Explanation: Download GTFS
End of explanation
"""
import transitfeed
loader = transitfeed.Loader(GTFS_LOC, memory_db=True)
schedule = loader.Load()
schedule.Validate()
print "Routes Loaded:"
rts = [r.route_long_name for r in schedule.routes.itervalues()]
for r in rts:
print " - ",r
"""
Explanation: Validate GTFS Feed
Make sure you are starting with a valid network.
This can take a while for a large network.
End of explanation
"""
import csv
import gtfs_plus
GTFS_PLUS_LOC = "GoRaleigh_GTFS_PLUS"
OUTPUT_DIR = os.path.join(BASE_DIR,GTFS_PLUS_LOC)
# start with the GTFS files if you don't have these already
try:
shutil.copytree(GTFS_LOC, "GoRaleigh_GTFS_PLUS")
# copy over the config file from the earlier tutorials
shutil.copy(os.path.join(BASE_DIR,"tta","input","demand-single","config_ft.txt"),
os.path.join(OUTPUT_DIR, "config_ft.txt"))
except:
# hopefully this is ok and you're just doing this multiple times
pass
DEFAULT_MODE = "local_bus"
DEFAULT_VEHICLE = "standard_bus"
SEATED_CAPACITY = 30
STANDING_CAPACITY = 20
MAX_SPEED = 45
ACCELERATION = 3
DECELERATION = 4
DWELL = r'"3 + 2*[boards] + 1.5*[alights]"'
"""
Explanation: Add needed data to turn GTFS to GTFS-PLUS
There are files that we need to add:
* routes_ft.txt
* vehicles_ft.txt
* trips_ft.txt
* transfers_ft.txt
* walk_access_ft.txt
End of explanation
"""
route_modes_dict = gtfs_plus.routesft_assume_mode(schedule, DEFAULT_MODE)
with open(os.path.join(OUTPUT_DIR,'routes_ft.txt'),'wb') as f:
f.write("route_id,mode\n")
w = csv.writer(f)
w.writerows(route_modes_dict.items())
"""
Explanation: Create routes_ft.txt
For now, assume a default mode
End of explanation
"""
trip_vehicle_dict = dict(zip(schedule.trips.keys(),[DEFAULT_VEHICLE]*len(schedule.trips.keys())))
with open(os.path.join(OUTPUT_DIR,'trips_ft.txt'),'wb') as f:
f.write("trip_id,vehicle_name\n")
w = csv.writer(f)
w.writerows(trip_vehicle_dict.items())
"""
Explanation: Create trips_ft.txt
For now, assume a default vehicle
End of explanation
"""
with open(os.path.join(OUTPUT_DIR,'vehicles_ft.txt'),'wb') as f:
f.write("vehicle_name,seated_capacity,standing_capacity,max_speed,acceleration,deceleration,dwell_formula\n")
f.write("%s,%d,%d,%4.2f,%4.2f,%4.2f,%s\n"%(DEFAULT_VEHICLE,SEATED_CAPACITY,STANDING_CAPACITY,MAX_SPEED,ACCELERATION,DECELERATION,DWELL))
"""
Explanation: Create vehicles_ft.txt
For now, assume mostly defaults
End of explanation
"""
xfer_dict = gtfs_plus.create_tranfers(schedule,max_xfer_dist=0.6)
with open(os.path.join(OUTPUT_DIR,'transfers_ft.txt'),'wb') as f:
f.write("from_stop_id,to_stop_id,dist\n")
for k,v in xfer_dict.iteritems():
f.write("%s,%s,%4.2f\n" % (k[0],k[1],v))
#and reverse link
f.write("%s,%s,%4.2f\n" % (k[1],k[0],v))
"""
Explanation: Create transfers_ft.txt
End of explanation
"""
import numpy as np
import logging
import sys
import espressomd
import espressomd.accumulators
import espressomd.observables
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
# Constants
KT = 1.1
STEPS = 400000
# System setup
system = espressomd.System(box_l=[16] * 3)
system.time_step = 0.01
system.cell_system.skin = 0.4
system.part.add(pos=[0, 0, 0])
# Run for different friction coefficients
gammas = [1.0, 2.0, 4.0, 10.0]
tau_results = []
msd_results = []
for gamma in gammas:
system.auto_update_accumulators.clear()
system.thermostat.turn_off()
system.thermostat.set_langevin(kT=KT, gamma=gamma, seed=42)
logging.info("Equilibrating the system.")
system.integrator.run(1000)
logging.info("Equilibration finished.")
# Setup observable correlator
correlator = correlator_msd(0, STEPS)
system.auto_update_accumulators.add(correlator)
logging.info("Sampling started for gamma = {}.".format(gamma))
system.integrator.run(STEPS)
correlator.finalize()
tau_results.append(correlator.lag_times())
msd_results.append(np.sum(correlator.result().reshape([-1, 3]), axis=1))
logging.info("Sampling finished.")
"""
Explanation: The Lattice-Boltzmann Method in ESPResSo - Part 2
Diffusion of a single particle
In these exercises we want to reproduce a classic result of polymer physics: the dependence
of the diffusion coefficient of a polymer on its chain length. If no hydrodynamic interactions
are present, one expects a scaling law $D \propto N ^{- 1}$ and if they are present, a scaling law
$D \propto N^{- \nu}$ is expected. Here $\nu$ is the Flory exponent that plays a very prominent
role in polymer physics. It has a value of $\sim 3/5$ in good solvent conditions in 3D.
Discussions on these scaling laws can be found in polymer physics textbooks like [4–6].
The reason for the different scaling law is the following: when being transported, every monomer
creates a flow field that follows the direction of its motion. This flow field makes it easier for
other monomers to follow its motion. This makes a polymer (given it is sufficiently long) diffuse
more like a compact object including the fluid inside it, although it does not have clear boundaries.
It can be shown that its motion can be described by its hydrodynamic radius. It is defined as:
\begin{equation}
\left\langle \frac{1}{R_h} \right\rangle = \left\langle \frac{1}{N^2}\sum_{i\neq j} \frac{1}{\left| r_i - r_j \right|} \right\rangle
\end{equation}
This hydrodynamic radius exhibits the scaling law $R_h \propto N^{\nu}$
and the diffusion coefficient of a long polymer is proportional to its inverse $R_h$.
For shorter polymers there is a transition region. It can be described
by the Kirkwood–Zimm model:
\begin{equation}
D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle
\end{equation}
Here $D_0$ is the monomer diffusion coefficient and $\eta$ the
viscosity of the fluid. For a finite system size the second part of the
diffusion is subject to a $1/L$ finite size effect, because
hydrodynamic interactions are proportional to the inverse
distance and thus long ranged. It can be taken into account
by a correction:
\begin{equation}
D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle \left( 1- \left\langle\frac{R_h}{L} \right\rangle \right)
\end{equation}
It is quite difficult to prove this formula computationally with good accuracy.
It will need quite some computational effort and a careful analysis. So please don't be
too disappointed if you don't manage to do so.
We want to determine the long-time self diffusion coefficient from the mean square
displacement of the center-of-mass of a single polymer. For large $t$ the mean square displacement is
proportional to the time and the diffusion coefficient occurs as a
prefactor:
\begin{equation}
D = \lim_{t\to\infty}\left[ \frac{1}{6t} \left\langle \left(\vec{r}(t) - \vec{r}(0)\right)^2 \right\rangle \right].
\end{equation}
This equation can be found in virtually any simulation textbook, like [7]. We will set up a
polymer in an implicit solvent, simulate for an appropriate amount of time, calculate the mean square
displacement as a function of time and obtain the diffusion coefficient from a linear
fit. However we will have a couple of steps inbetween and divide the full problem into
subproblems that allow to (hopefully) fully understand the process.
1. Setting up the observable
Write a function with signature correlator_msd(pid, tau_max) that returns a
mean-squared displacement correlator that is updated every time step.
```python
def correlator_msd(pid, tau_max):
    pos = espressomd.observables.ParticlePositions(ids=(pid,))
    pos_cor = espressomd.accumulators.Correlator(
        obs1=pos, tau_lin=16, tau_max=tau_max, delta_N=1,
        corr_operation="square_distance_componentwise", compress1="discard1")
    return pos_cor
```
2. Simulating the Brownian motion
We will simulate the diffusion of a single particle that is coupled to an implicit solvent.
End of explanation
"""
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
plt.figure(figsize=(10, 10))
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
for index, (tau, msd) in enumerate(zip(tau_results, msd_results)):
# We skip the first entry since it's zero by definition and cannot be displayed
# in a loglog plot. Furthermore, we only look at the first 100 entries due to
# the high variance for larger lag times.
plt.loglog(tau[1:100], msd[1:100], label=r'$\gamma=${:.1f}'.format(gammas[index]))
plt.legend()
plt.show()
"""
Explanation: 3. Data analysis
3.1 Plotting the results
End of explanation
"""
import scipy.optimize
def quadratic(x, a, b, c):
return a * x**2 + b * x + c
# cutoffs for the ballistic regime (different for each gamma value)
tau_p_values = [14, 12, 10, 7]
plt.figure(figsize=(10, 10))
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
for index, (tau_p, tau, msd) in enumerate(zip(tau_p_values, tau_results, msd_results)):
(a, b, c), _ = scipy.optimize.curve_fit(quadratic, tau[:tau_p], msd[:tau_p])
x = np.linspace(tau[0], tau[max(tau_p_values) - 1], 50)
p = plt.plot(x, quadratic(x, a, b, c), '-')
plt.plot(tau[:max(tau_p_values)], msd[:max(tau_p_values)], 'o', color=p[0].get_color(),
label=r'$\gamma=${:.1f}'.format(gammas[index]))
plt.legend()
plt.show()
"""
Explanation: 3.2 Calculating the diffusion coefficient
In this script an implicit solvent and a single particle are created and thermalized.
The random forces on the particle will cause the particle to move.
The mean squared displacement is calculated during the simulation via a multiple-tau
correlator.
Can you give an explanation for the quadratic time dependency for short times?
The MSD of a Brownian motion can be decomposed in three main regimes [8]:
* for short lag times $\tau < \tau_p$, the particle motion is not
significantly impeded by solvent collisions: it's in the ballistic mode
(collision-free regime) where $\operatorname{MSD}(t) \sim (k_BT / m) t^2$
* for long lag times $\tau > \tau_f$, the particle motion is determined by
numerous collisions with the solvent: it's in the diffusive mode where
  $\operatorname{MSD}(t) \sim 6Dt$
* for lag times between $\tau_p$ and $\tau_f$, there is a crossover mode
The values $\tau_p$ and $\tau_f$ can be obtained manually through visual
inspection of the MSD plot, or more accurately by non-linear fitting [9].
The cutoff lag time $\tau_p$ between the ballistic and crossover modes is proportional
to the particle mass and inversely proportional to the friction coefficient.
In the graph below, a parabola is fitted to the data points in the ballistic mode for
each $\gamma$ and plotted beyond the crossover region to reveal the deviation from the
ballistic mode. This deviation is clearly visible in the $\gamma = 10$ case, because
the assumption of a collision-free regime quickly breaks down when a particle is
coupled to its surrounding fluid with a high friction coefficient.
End of explanation
"""
def linear(x, a, b):
return a * x + b
# cutoffs for the diffusive regime (different for each gamma value)
tau_f_values = [24, 22, 20, 17]
# cutoff for the data series (larger lag times have larger variance due to undersampling)
cutoff_limit = 90
diffusion_results = []
plt.figure(figsize=(10, 8))
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
for index, (tau_f, tau, msd) in enumerate(zip(tau_f_values, tau_results, msd_results)):
(a, b), _ = scipy.optimize.curve_fit(linear, tau[tau_f:cutoff_limit], msd[tau_f:cutoff_limit])
x = np.linspace(tau[tau_f], tau[cutoff_limit - 1], 50)
p = plt.plot(x, linear(x, a, b), '-')
plt.plot(tau[tau_f:cutoff_limit], msd[tau_f:cutoff_limit], 'o', color=p[0].get_color(),
label=r'$\gamma=${:.1f}'.format(gammas[index]))
diffusion_results.append(a / 6)
plt.legend()
plt.show()
"""
Explanation: Use the function <tt>curve_fit()</tt> from the module <tt>scipy.optimize</tt> to produce a fit for the linear regime and determine the diffusion coefficients for the different $\gamma$s.
For large $t$ the diffusion coefficient can be expressed as:
$$6D = \lim_{t\to\infty} \frac{\partial \operatorname{MSD}(t)}{\partial t}$$
which is simply the slope of the MSD in the diffusive mode.
End of explanation
"""
plt.figure(figsize=(10, 8))
plt.xlabel(r'$\gamma$')
plt.ylabel(r'Diffusion coefficient [$\sigma^2/t$]')
x = np.linspace(0.9 * min(gammas), 1.1 * max(gammas), 50)
y = KT / x
plt.plot(x, y, '-', label=r'$k_BT\gamma^{-1}$')
plt.plot(gammas, diffusion_results, 'o', label='D')
plt.legend()
plt.show()
"""
Explanation: Calculate the diffusion coefficient for all cases and plot them as a function of $\gamma$. What relation do you observe?
In the diffusive mode, one can derive $D = k_BT / \gamma$ from the Stokes–Einstein relation [8].
End of explanation
"""
|
avehtari/BDA_py_demos | demos_ch2/demo2_4.ipynb | gpl-3.0 | # Import necessary packages
import numpy as np
from scipy.stats import beta
%matplotlib inline
import matplotlib.pyplot as plt
import arviz as az
# add utilities directory to path
import os, sys
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
az.style.use("arviz-grayscale")
"""
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 2, demo 4
Authors:
- Aki Vehtari aki.vehtari@aalto.fi
- Tuomas Sivula tuomas.sivula@aalto.fi
Probability of a girl birth given placenta previa (BDA3 p. 37).
Calculate the posterior distribution on a discrete grid of points by multiplying the likelihood and a non-conjugate prior at each point, and normalizing over the points. Simulate samples from the resulting non-standard posterior distribution by inverse-CDF sampling on the discrete grid.
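The inverse-CDF step can be illustrated in isolation on a toy discrete density (the grid and weights below are made up, not the placenta previa posterior):

```python
import numpy as np

rng = np.random.default_rng(0)

# discrete grid and an (unnormalized) density on it
x = np.linspace(0, 1, 5)
p = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
p /= p.sum()
cdf = np.cumsum(p)

# inverse-CDF sampling: map each uniform r to the first grid point
# whose cdf value is >= r; the min() guards against float round-off
# making cdf[-1] land fractionally below 1
r = rng.random(100_000)
idx = np.minimum(np.searchsorted(cdf, r), len(x) - 1)
samples = x[idx]

# empirical frequencies approach the target probabilities p
freq = np.array([(samples == xi).mean() for xi in x])
```

The notebook below does the same thing with `np.sum(pc[:, np.newaxis] < r, axis=0)`, which is an equivalent way of locating each `r` in the cumulative grid.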
End of explanation
"""
# data (437,543)
a = 437
b = 543
# grid of nx points
nx = 1000
x = np.linspace(0, 1, nx)
# compute density of non-conjugate prior in grid
# this non-conjugate prior is same as in Figure 2.4 in the book
pp = np.ones(nx)
ascent = (0.385 <= x) & (x <= 0.485)
descent = (0.485 <= x) & (x <= 0.585)
pm = 11
pp[ascent] = np.linspace(1, pm, np.count_nonzero(ascent))
pp[descent] = np.linspace(pm, 1, np.count_nonzero(descent))
# normalize the prior
pp /= np.sum(pp)
# unnormalised non-conjugate posterior in grid
po = beta.pdf(x, a, b)*pp
po /= np.sum(po)
# cumulative
pc = np.cumsum(po)
# inverse-cdf sampling
# get n uniform random numbers from [0,1]
n = 10000
r = np.random.rand(n)
# map each r into corresponding grid point x:
# [0, pc[0]) map into x[0] and [pc[i-1], pc[i]), i>0, map into x[i]
rr = x[np.sum(pc[:,np.newaxis] < r, axis=0)]
"""
Explanation: Calculate results
End of explanation
"""
# plot 3 subplots
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8), constrained_layout=False)
# show only x-axis
plot_tools.modify_axes.only_x(axes)
# manually adjust spacing
fig.subplots_adjust(hspace=0.5)
# posterior with uniform prior Beta(1,1)
axes[0].plot(x, beta.pdf(x, a+1, b+1))
axes[0].set_title('Posterior with uniform prior')
# non-conjugate prior
axes[1].plot(x, pp)
axes[1].set_title('Non-conjugate prior')
# posterior with non-conjugate prior
axes[2].plot(x, po)
axes[2].set_title('Posterior with non-conjugate prior')
# cosmetics
#for ax in axes:
# ax.set_ylim((0, ax.get_ylim()[1]))
# set custom x-limits
axes[0].set_xlim((0.35, 0.65));
plt.figure(figsize=(8, 6))
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8))
plot_tools.modify_axes.only_x(axes)
axes[0].plot(x, po)
axes[0].set_xlim((0.38, 0.52))
axes[0].set_title("Non-conjugate posterior")
axes[1].plot(x, pc)
axes[1].set_title("Posterior-cdf")
az.plot_posterior(rr, kind="hist", point_estimate=None, hdi_prob="hide", ax=axes[2], bins=30)
axes[2].set_title("Histogram of posterior samples")
"""
Explanation: Plot results
End of explanation
"""
|
BrainIntensive/OnlineBrainIntensive | resources/matplotlib/Examples/specialplots.ipynb | mit | %load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
"""
Explanation: Sebastian Raschka
back to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery
End of explanation
"""
%matplotlib inline
"""
Explanation: <font size="1.5em">More info about the %watermark extension</font>
End of explanation
"""
from matplotlib import pyplot as plt
import numpy as np
plt.pie(
(10,5),
labels=('spam','ham'),
shadow=True,
colors=('yellowgreen', 'lightskyblue'),
explode=(0,0.15), # space between slices
    startangle=90,    # rotate counter-clockwise by 90 degrees
autopct='%1.1f%%',# display fraction as percentage
)
plt.legend(fancybox=True)
plt.axis('equal') # draw the pie as a circle
plt.tight_layout()
plt.show()
"""
Explanation: Special plots in matplotlib
Sections
Basic pie chart
Basic triangulation
xkcd-style plots
<br>
<br>
Basic pie chart
[back to top]
End of explanation
"""
from matplotlib import pyplot as plt
import matplotlib.tri as tri
import numpy as np
rand_data = np.random.randn(50, 2)
triangulation = tri.Triangulation(rand_data[:,0], rand_data[:,1])
plt.triplot(triangulation)
plt.show()
"""
Explanation: <br>
<br>
Basic triangulation
[back to top]
End of explanation
"""
import matplotlib.pyplot as plt
x = [1, 2, 3]
y_1 = [50, 60, 70]
y_2 = [20, 30, 40]
with plt.xkcd():
plt.plot(x, y_1, marker='x')
plt.plot(x, y_2, marker='^')
plt.xlim([0, len(x)+1])
plt.ylim([0, max(y_1+y_2) + 10])
plt.xlabel('x-axis label')
plt.ylabel('y-axis label')
plt.title('Simple line plot')
plt.legend(['sample 1', 'sample2'], loc='upper left')
plt.show()
import numpy as np
import random
from matplotlib import pyplot as plt
data = np.random.normal(0, 20, 1000)
bins = np.arange(-100, 100, 5) # fixed bin size
with plt.xkcd():
plt.xlim([min(data)-5, max(data)+5])
plt.hist(data, bins=bins, alpha=0.5)
plt.title('Random Gaussian data (fixed bin size)')
plt.xlabel('variable X (bin size = 5)')
plt.ylabel('count')
plt.show()
from matplotlib import pyplot as plt
import numpy as np
with plt.xkcd():
    X = np.random.randint(1, 6, 5) # 5 random integers within 1-5 (random_integers was removed from NumPy)
cols = ['b', 'g', 'r', 'y', 'm']
plt.pie(X, colors=cols)
plt.legend(X)
plt.show()
"""
Explanation: <br>
<br>
xkcd-style plots
[back to top]
End of explanation
"""
|
vasco-da-gama/ros_hadoop | doc/Rosbag larger than 2 GB.ipynb | apache-2.0 | %%bash
ls -tralFh /root/project/doc/el_camino_north.bag
%%bash
# same size, no worries, just the -h (human) formatting differs in rounding
hdfs dfs -ls -h
"""
Explanation: Let us have a look at a 20 GB Rosbag file
Note data can be found for instance at https://github.com/udacity/self-driving-car/tree/master/datasets published under MIT License.
The file is not distributed with the Dockerfile, but you can download it and put it into HDFS.
End of explanation
"""
%%time
out = !java -jar ../lib/rosbaginputformat.jar -f /root/project/doc/el_camino_north.bag
%%bash
ls -tralFh /root/project/doc/el_camino_north.bag*
"""
Explanation: Show that the we can read the index
Solved the issue https://github.com/valtech/ros_hadoop/issues/6
The issue was due to ByteBuffer being limited by the JVM Integer size and has nothing to do with Spark or how the RosbagMapInputFormat works within Spark. It was only problematic to extract the conf index with the jar.
Integer.MAX_VALUE is 2,147,483,647 bytes, just under 2 GiB!
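The 2 GB figure follows directly from the signed 32-bit index used by the JVM; a quick arithmetic check:

```python
# a JVM ByteBuffer is indexed with a signed 32-bit int, so a single
# buffer tops out at Integer.MAX_VALUE bytes, just under 2 GiB
INT_MAX = 2**31 - 1
size_gib = INT_MAX / 1024**3
```

Any rosbag larger than that cannot be mapped into a single buffer, which is why the index extraction had to be reworked.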
End of explanation
"""
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sparkConf = SparkConf()
sparkConf.setMaster("local[*]")
sparkConf.setAppName("ros_hadoop")
sparkConf.set("spark.jars", "../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar")
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
"""
Explanation: Create the Spark Session or get an existing one
End of explanation
"""
fin = sc.newAPIHadoopFile(
path = "hdfs://127.0.0.1:9000/user/root/el_camino_north.bag",
inputFormatClass = "de.valtech.foss.RosbagMapInputFormat",
keyClass = "org.apache.hadoop.io.LongWritable",
valueClass = "org.apache.hadoop.io.MapWritable",
conf = {"RosbagInputFormat.chunkIdx":"/root/project/doc/el_camino_north.bag.idx.bin"})
fin
"""
Explanation: Create an RDD from the Rosbag file
Note: your HDFS address might differ.
End of explanation
"""
|
thewtex/ieee-nss-mic-scipy-2014 | 4_Cython.ipynb | apache-2.0 | import numpy as np
"""
Explanation: Cython
The Cython language is a superset of the Python language that additionally
supports calling C functions and declaring C types on variables and class
attributes.
This allows the compiler to generate very efficient C code from Cython code.
Write Python code that calls back and forth from and to C or C++ code natively at any point.
Easily tune readable Python code into plain C performance by adding static type declarations.
Use combined source code level debugging to find bugs in your Python, Cython and C code.
Interact efficiently with large data sets, e.g. using multi-dimensional NumPy arrays.
Quickly build your applications within the large, mature and widely used CPython ecosystem.
Integrate natively with existing code and data from legacy, low-level or high-performance libraries and applications.
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Accelerating Python code with Cython
We use Cython to accelerate the generation of the Mandelbrot fractal.
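For comparison, the same escape-time iteration can also be vectorized with plain NumPy; the sketch below mirrors the loop version that follows, including its grid layout and escape threshold of 10, but is not part of the original recipe.

```python
import numpy as np

def mandelbrot_numpy(size, iterations):
    # grid matching the loop version: c = -2 + 3/size*j + 1j*(1.5 - 3/size*i)
    re = -2 + 3.0 / size * np.arange(size)
    im = 1.5 - 3.0 / size * np.arange(size)
    c = re[np.newaxis, :] + 1j * im[:, np.newaxis]
    z = np.zeros_like(c)
    m = np.zeros(c.shape, dtype=np.int32)
    for n in range(iterations):
        # update only the points that have not yet escaped
        alive = np.abs(z) <= 10
        z[alive] = z[alive]**2 + c[alive]
        m[alive] = n
    return m
```

Vectorization removes the two inner Python loops; Cython, shown next, removes the interpreter overhead of the remaining iteration loop as well.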
End of explanation
"""
size = 200
iterations = 100
"""
Explanation: We initialize the simulation and generate the grid
in the complex plane.
End of explanation
"""
def mandelbrot_python(m, size, iterations):
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if np.abs(z) <= 10:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size))
mandelbrot_python(m, size, iterations)
"""
Explanation: Pure Python
End of explanation
"""
# note: in current IPython/Cython releases the extension is loaded as
# "Cython"; "cythonmagic" is the legacy name and has been removed
%load_ext Cython
"""
Explanation: Cython versions
We first import Cython.
End of explanation
"""
%%cython -a
import numpy as np
def mandelbrot_cython(m, size, iterations):
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if np.abs(z) <= 10:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size), dtype=np.int32)
mandelbrot_cython(m, size, iterations)
"""
Explanation: Take 1
First, we just add the %%cython magic.
End of explanation
"""
%%cython -a
import numpy as np
def mandelbrot_cython(int[:,::1] m,
int size,
int iterations):
cdef int i, j, n
cdef complex z, c
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if z.real**2 + z.imag**2 <= 100:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size), dtype=np.int32)
mandelbrot_cython(m, size, iterations)
"""
Explanation: Virtually no speedup.
Take 2
Now, we add type information, using memory views for NumPy arrays.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb | apache-2.0 | # You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data:
Class weighting and
Oversampling
Introduction
This lab how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data.
PENDING LINK UPDATE: Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Start by importing the necessary libraries for this lab.
End of explanation
"""
# Customize our Matplot lib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
"""
Explanation: In the next cell, we're going to customize our Matplot lib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
End of explanation
"""
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
"""
Explanation: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and the page of the DefeatFraud project
End of explanation
"""
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
"""
Explanation: Now, let's view the statistics of the raw dataframe.
End of explanation
"""
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
"""
Explanation: Examine the class label imbalance
Let's look at the dataset imbalance:
End of explanation
"""
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
"""
Explanation: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
End of explanation
"""
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# train_test_split() method split arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
"""
Explanation: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
End of explanation
"""
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clip (limit) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
"""
Explanation: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The StandardScaler is only fit using the train_features to be sure the model is not peeking at the validation or test sets.
End of explanation
"""
# pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn’s jointplot displays a relationship between 2 variables (bivariate) as well as the marginal distribution of each on separate axes
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
"""
Explanation: Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way to implement them as layers, and attach them to your model before export.
Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
Do these distributions make sense?
Yes. You've normalized the input and these are mostly concentrated in the +/- 2 range.
Can you see the difference between the distributions?
Yes the positive examples contain a much higher rate of extreme values.
End of explanation
"""
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
"""
Explanation: Define the model and metrics
Define a function that creates a simple neural network with a densly connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
End of explanation
"""
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
"""
Explanation: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* True vs. False and Positive vs. Negative
* Accuracy
* Precision and Recall
* ROC-AUC
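As a quick sanity check, the definitions above can be evaluated on made-up counts (illustrative numbers, not results from this dataset):

```python
# illustrative confusion-matrix counts, not taken from the credit card data
tp, fp, tn, fn = 80, 20, 890, 10

accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction classified correctly
precision = tp / (tp + fp)                   # predicted positives that are right
recall = tp / (tp + fn)                      # actual positives that are found
```

Note how an imbalanced negative class inflates accuracy while precision and recall stay informative about the minority class.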
Baseline model
Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, batches would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
End of explanation
"""
# use the model to do prediction with model.predict()
model.predict(train_features[:10])
"""
Explanation: Test run the model:
End of explanation
"""
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
"""
Explanation: Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (see: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence.
With the default bias initialization the loss should be about math.log(2) = 0.69314
End of explanation
"""
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
"""
Explanation: The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -\log_e(1/p_0 - 1) $$
$$ b_0 = \log_e(pos/neg)$$
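A quick numeric check of this derivation, using the class counts quoted earlier (492 positives out of 284,807 examples):

```python
import numpy as np

# class counts quoted earlier in the notebook
pos, neg = 492, 284_807 - 492

b0 = np.log(pos / neg)          # initial output bias
p0 = 1 / (1 + np.exp(-b0))      # sigmoid of that bias
# p0 recovers the positive fraction pos / (pos + neg)
```

So a model initialized with this bias starts out predicting the base rate instead of 0.5.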
End of explanation
"""
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
"""
Explanation: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: pos/total = 0.0018
End of explanation
"""
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
"""
Explanation: With this initialization the initial loss should be approximately:
$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) \approx 0.01317$$
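Plugging the positive fraction of roughly 0.0018 noted above into this expression reproduces the quoted value:

```python
import numpy as np

p0 = 0.0018   # approximate positive fraction, as noted above
initial_loss = -p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)
# evaluates to roughly 0.0132, matching the value quoted above
```

This is the binary cross-entropy of a constant prediction equal to the base rate, which is the best a bias-only model can do.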
End of explanation
"""
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
"""
Explanation: This initial loss is about 50 times less than if would have been with naive initilization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
End of explanation
"""
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
"""
Explanation: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
End of explanation
"""
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
"""
Explanation: The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
Train the model
End of explanation
"""
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
# subplots() which acts as a utility wrapper and helps in creating common layouts of subplots
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
"""
Explanation: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
End of explanation
"""
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
"""
Explanation: Note: That the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
Evaluate metrics
You can use a confusion matrix to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
End of explanation
"""
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
"""
Explanation: Evaluate your model on the test dataset and display the results for the metrics you created above.
End of explanation
"""
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
"""
Explanation: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
End of explanation
"""
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
"""
Explanation: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
End of explanation
"""
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
"""
Explanation: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like optimizers.SGD, may fail. The optimizer used here, optimizers.Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
End of explanation
"""
plot_metrics(weighted_history)
"""
Explanation: Check training history
End of explanation
"""
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
"""
Explanation: Evaluate metrics
End of explanation
"""
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# legend() places a legend on the axes.
plt.legend(loc='lower right')
"""
Explanation: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
Plot the ROC
End of explanation
"""
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
"""
Explanation: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
End of explanation
"""
# np.arange() returns evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# np.random.choice() draws random samples (with replacement) from a 1-D array.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# numpy.concatenate() concatenates a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# numpy.random.shuffle() modifies a sequence in-place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
"""
Explanation: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
End of explanation
"""
BUFFER_SIZE = 100000
def make_ds(features, labels):
# tf.data.Dataset.from_tensor_slices() creates a dataset whose elements
# are slices of the given (features, labels) tensors.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
"""
Explanation: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
End of explanation
"""
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
"""
Explanation: Each dataset provides (feature, label) pairs:
End of explanation
"""
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
"""
Explanation: Merge the two together using experimental.sample_from_datasets:
End of explanation
"""
# np.ceil() returns the ceiling of the input array elements.
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
"""
Explanation: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
End of explanation
"""
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
"""
Explanation: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
End of explanation
"""
plot_metrics(resampled_history)
"""
Explanation: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
End of explanation
"""
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
"""
Explanation: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
End of explanation
"""
plot_metrics(resampled_history)
"""
Explanation: Re-check training history
End of explanation
"""
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
"""
Explanation: Evaluate metrics
End of explanation
"""
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
"""
Explanation: Plot the ROC
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Install TF Quant Finance
!pip install tf-quant-finance
"""
Explanation: Interest rate tools in TFF
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Cashflows_Rate_Curves.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Cashflows_Rate_Curves.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
End of explanation
"""
#@title Imports { display-mode: "form" }
import datetime
from dateutil.relativedelta import relativedelta
import holidays
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay
import seaborn as sns
import tensorflow as tf
import time
# TFF for Tensorflow Finance
import tf_quant_finance as tff
from tf_quant_finance import rates
from IPython.core.pylabtools import figsize
figsize(21, 14) # better graph size for Colab
import warnings
warnings.filterwarnings("ignore",
category=FutureWarning) # suppress printing warnings
"""
Explanation: This notebook demonstrates the use of the TFF toolbox for performing common tasks related to interest rates. These include computing the present values of cashflows for a collection of bonds, and building and interpolating rate curves, with an emphasis on:
Batching: TensorFlow is vectorized out of the box. TensorFlow Finance (TFF) is written to leverage this wherever possible. We illustrate the advantage of batching for computing forward rates.
End of explanation
"""
#@title Implement Business Day Convention
def get_coupon_dates(current_date, maturity_date, months = 6):
# Compute sequence of dates `months` apart starting at maturity_date,
# working backwards to last one after current_date.
cashflow_dates = []
date = maturity_date
while date >= current_date:
cashflow_dates.append(date)
date = date + relativedelta(months=-months)
# Sort into ascending order; holiday/weekend adjustment is applied separately.
return pd.to_datetime(cashflow_dates).sort_values()
def get_modified_following_date(dates):
# Get the modified following business day for the US.
BDayUS = CustomBusinessDay(calendar=USFederalHolidayCalendar(), n = 1)
def is_weekday(dates):
# Identify weekend days
return ~dates.weekday_name.isin(['Saturday', 'Sunday'])
def in_next_month(dates1, dates2):
# True where dates2 falls in the calendar month after dates1 (year-aware).
return (dates2.year * 12 + dates2.month) - (dates1.year * 12 + dates1.month) == 1
def next_bus_day(dates):
# If next business day in a new month shift to previous business day
fwd = dates + BDayUS
return fwd.where(~in_next_month(dates, fwd), dates - BDayUS)
def payment_day(dates):
return dates.where(is_weekday(dates), next_bus_day(dates))
return payment_day(dates)
"""
Explanation: Business Day Convention
The pricing in Example 1 uses the modified following business day convention for US Treasury payment dates. That is, if the coupon date falls on a weekend or holiday, it is paid on the following business day, unless said following business day falls into the next calendar month, in which case we go backwards to the nearest previous business day. It also provides functionality to generate regular coupon payments, before applying business day convention.
End of explanation
"""
#@title Pricing US Treasury Bonds
dtype = np.float64
exp_dates = ['2021-09-30', '2022-09-15', '2024-09-30', '2026-09-30',
'2029-08-15', '2049-08-15']
us_bond_data = {
'present_date': [datetime.datetime.strptime('2019-09-20', '%Y-%m-%d').date()] * 6,
'expiry_date': [datetime.datetime.strptime(date, '%Y-%m-%d').date() for date in exp_dates],
'bond_type': ['2yr_note', '3yr_note', '5yr_note', '7yr_note', '10yr_note',
'30yr_bond'],
'face_value': [100, 100, 100, 100, 100, 1000],
'coupon_rate': [0.015, 0.015, 0.015, 0.01625, 0.01625, 0.02250],
'coupon_frequency': [0.5] * 6
}
us_bond_data = pd.DataFrame.from_dict(us_bond_data)
# Generate times of cashflows (using modified following business day convention)
# for US federal holidays.
payment_dates = list(map(get_coupon_dates, us_bond_data.present_date,
us_bond_data.expiry_date))
number_of_coupons = list(map(len, payment_dates)) # get number of coupons per bond
payment_dates = np.concatenate(payment_dates, axis = 0)
payment_dates_modified = get_modified_following_date(
pd.to_datetime(payment_dates))
current_date = pd.Series(pd.to_datetime(us_bond_data.present_date[0])).repeat(len(payment_dates_modified))
payment_times_days = (payment_dates_modified.values - current_date)
times = payment_times_days.apply(lambda x: float(x.days) / 365) # Days to years
# Generate actual cashflows.
coupon_payments = (us_bond_data.face_value * us_bond_data.coupon_rate *
us_bond_data.coupon_frequency)
coupon_cashflows = np.repeat(coupon_payments, number_of_coupons)
redemption_cashflows = np.zeros(np.sum(number_of_coupons))
redemption_indexes = np.cumsum(number_of_coupons) - 1
redemption_cashflows[redemption_indexes] = us_bond_data.face_value
cashflows = np.array(coupon_cashflows + redemption_cashflows, dtype = dtype)
# Compute groups for bond cashflows.
groups = np.repeat(range(0, us_bond_data.shape[0]), number_of_coupons)
# Bond Yield Curve
# Yields obtained from https://www.wsj.com/market-data/bonds (as on 20/09/2019)
tenor_curve = [2, 3, 5, 10, 30]
rate_curve = [0.017419, 0.016885, 0.016614, 0.017849, 0.02321]
days_to_maturity = (us_bond_data.expiry_date - us_bond_data.present_date)
years_to_maturity = list(days_to_maturity.apply(lambda x: float(x.days) / 365))
# Linearly interpolate the curve to get yields to maturity.
rate_curve_interpolated = tff.math.interpolation.linear.interpolate(
years_to_maturity, tenor_curve,
rate_curve, dtype = np.float64)
with tf.Session() as sess:
rate_curve_interpolated = sess.run(rate_curve_interpolated)
# Create Tensorflow Graph using pv_from_yields in rates.
present_values = rates.cashflows.pv_from_yields(cashflows, times,
rate_curve_interpolated,
groups)
with tf.Session() as sess:
present_values = sess.run(present_values)
us_bond_data['present_value'] = present_values
print("Priced US Treasury Bonds:")
print('\n')
us_bond_data
"""
Explanation: Example 1: Cashflows in TFF: Computing present values for a portfolio of bonds.
### Coupon Bond Valuation
Calculating the value of a coupon bond factors in the present value of annual or semi-annual coupon payments and the face value of the bond.
The present value of expected cash flows is added to the present value (PV) of the face value of the bond as seen in the following formula:
$$
\begin{align}
PV(Bond) &= PV(Coupons) + PV(FaceValue) \\
&= \sum_t \frac{C_t}{(1+i)^t} + \frac{F}{(1+i)^T} \\
\end{align}
$$
where
$C_t$ are the future coupon payments,
$i$ is the yield to maturity (or internal rate of return, IRR) of the bond,
$F$ is the face value of the bond,
$t$ ranges over the times at which the coupon payments occur,
$T$ is the time to maturity of the bond.
Example Data (US Treasury Bonds)
The example below shows how to price a selection of US Treasury Bonds
Source: https://www.wsj.com/market-data/bonds (Close of market on 20/09/2019)
The data represent six US Treasuries:
* 2-Year Note (Coupon: 1.5%, Maturity: 30/09/2021)
* 3-Year Note (Coupon: 1.5%, Maturity: 15/09/2022)
* 5-Year Note (Coupon: 1.5%, Maturity: 30/09/2024)
* 7-Year Note (Coupon: 1.625%, Maturity: 30/09/2026)
* 10-Year Note (Coupon: 1.625%, Maturity: 15/08/2029)
* 30-Year Bond (Coupon: 2.25%, Maturity: 15/08/2049)
We use the Modified Following business day convention (i.e. move to the next business day,
unless it falls in a different month, in which case use the previous business day),
with the US federal holiday calendar from pandas.
End of explanation
"""
#@title Create Bond Data
number_of_bonds = 100000 #@param
min_face_value = 100
max_face_value = 1000
# Face values for bonds
bond_face_values = range(min_face_value, max_face_value + 100, 100)
coupon_frequencies = [0.5, 1]
coupon_rates = [0.02, 0.04, 0.06, 0.08, 0.10]
# Range of bond maturities.
bond_maturities = [1, 2, 3, 5, 7, 10, 15, 20, 30]
# Create a mix of 100,000 bonds.
large_bond_data = {
'face_value': np.random.choice(bond_face_values, number_of_bonds),
'coupon_frequency': np.random.choice(coupon_frequencies, number_of_bonds),
'coupon_rate': np.random.choice(coupon_rates, number_of_bonds,
p=[0.1, 0.2, 0.3, 0.3, 0.1]),
'maturity': np.random.choice(bond_maturities, number_of_bonds,
p=[0.1, 0.1, 0.1, 0.2, 0.3, 0.1, 0.05,
0.025, 0.025])
}
large_bond_data = pd.DataFrame.from_dict(large_bond_data)
# Rate curve interpolation
curve_required_tenors2 = np.arange(0.5, 30.5, 0.5)
rate_curve_interpolated2 = tff.math.interpolation.linear.interpolate(
curve_required_tenors2, tenor_curve,
rate_curve, dtype = np.float64)
with tf.Session() as sess:
rate_curve_interpolated2 = sess.run(rate_curve_interpolated2)
# Plot distribution of bonds by face value, coupon rate, and yield.
plt.figure(figsize=(16,12))
col_palette = sns.color_palette("Blues")
plt.subplot(2, 2, 1)
# Plot Rate Curve
sns.set()
sns.lineplot(curve_required_tenors2, rate_curve_interpolated2,
color=col_palette[2])
plt.title('Rate Curve', fontsize=14)
plt.xlabel('Tenor', fontsize=12)
plt.ylabel('Rate', fontsize=12)
# Coupon rate distribution
plt.subplot(2, 2, 2)
sns.set()
sns.distplot(large_bond_data['coupon_rate'], kde=False,
color = col_palette[3], bins=5)
plt.title('Bond Mix by Coupon Rate', fontsize=14)
plt.xlabel('Coupon Rate', fontsize=12)
plt.ylabel('Frequency', fontsize=12)
# Nominal value distribution
plt.subplot(2, 2, 3)
sns.set()
sns.distplot(large_bond_data['face_value'], kde=False,
color = col_palette[4], bins=9)
plt.title('Bond Mix by Nominal', fontsize=14)
plt.xlabel('Nominal', fontsize=12)
plt.ylabel('Frequency', fontsize=12)
# Nominal value distribution
plt.subplot(2, 2, 4)
sns.set()
sns.distplot(large_bond_data['maturity'], kde=False,
color = col_palette[5], bins=9)
plt.title('Bond Mix by Maturity', fontsize=14)
plt.xlabel('Maturity', fontsize=12)
plt.ylabel('Frequency', fontsize=12)
plt.show()
#@title Compute the present value for portfolio of bonds
dtype = np.float64
tf.reset_default_graph()
rate_curve_df = pd.DataFrame.from_dict({
'tenor': curve_required_tenors2,
'rate': rate_curve_interpolated2
})
# Create inputs (cashflows, times, groups) for `pv_from_yields`
large_number_of_coupons = large_bond_data.maturity / large_bond_data.coupon_frequency
large_number_of_coupons = large_number_of_coupons.astype(int)
large_coupon_payments = (large_bond_data.face_value * large_bond_data.coupon_rate *
large_bond_data.coupon_frequency)
large_coupon_cashflows = np.repeat(large_coupon_payments, large_number_of_coupons)
large_redemption_cashflows = np.zeros(np.sum(large_number_of_coupons))
large_redemption_indexes = np.cumsum(large_number_of_coupons) - 1
large_redemption_cashflows[large_redemption_indexes] = large_bond_data.face_value
large_cashflows = np.array(large_coupon_cashflows +
large_redemption_cashflows, dtype = dtype)
# The times of the cashflows.
large_times = list(map(np.arange, large_bond_data.coupon_frequency,
large_bond_data.maturity + large_bond_data.coupon_frequency,
large_bond_data.coupon_frequency))
large_times = np.concatenate(large_times, axis = 0)
large_groups = np.repeat(range(0, large_bond_data.shape[0]),
large_number_of_coupons)
# Create Tensorflow Graph using pv_from_yields in rates.
present_values = rates.cashflows.pv_from_yields(large_cashflows, large_times,
rate_curve_interpolated2,
groups = large_groups)
with tf.Session() as sess:
present_values = sess.run(present_values)
# Plot distribution of present values of portfolio of bonds.
plt.figure(figsize=(12,8))
sns.set_context("talk")
col_palette = sns.color_palette("Blues")
sns.set()
ax = sns.distplot(present_values, kde=True)
plot_label = "Present Value Disribution of {} Priced Bonds.".format(number_of_bonds)
plt.title(plot_label, fontsize=16)
plt.xlabel('Present Value', fontsize=14)
plt.show()
"""
Explanation: Generating large bond portfolio
To demonstrate scale, we simulate a mix of number_of_bonds bonds with face values between min_face_value and max_face_value (in increments of 100), paying either semi-annual or annual coupons, with coupon rates of 2%, 4%, 6%, 8%, and 10%. We reuse the US Treasury yield curve from above.
End of explanation
"""
#@title Create Bond Data
num_zero_rate_bonds = 100000 #@param
num_tenors = [2, 3, 4, 5, 6, 7, 8, 10]
marked_tenors = [0.25, 0.5, 1, 1.5, 2, 3, 5, 10, 20, 30]
# Create a mix of `num_zero_rate_bonds` bonds.
set_num_tenors = np.random.choice(num_tenors, num_zero_rate_bonds)
def get_slice(n):
return marked_tenors[slice(n)]
times = np.concatenate(list(map(get_slice, set_num_tenors)), axis = 0)
# Set up a grouping argument for implementing batching. See
# `forward_rates_from_yields` in tff.forwards.
groups = np.repeat(range(0, num_zero_rate_bonds), set_num_tenors)
# Construct Rate Curve to generate Zero Rates
tf.reset_default_graph()
curve_required_tenors3 = marked_tenors
rate_curve_interpolated3 = tff.math.interpolation.linear.interpolate(
curve_required_tenors3, tenor_curve,
rate_curve, dtype = np.float64)
with tf.Session() as sess:
rate_curve_interpolated3 = sess.run(rate_curve_interpolated3)
def get_rates(n):
# Perturb rate curve
rates = rate_curve_interpolated3[0:n]
rates = rates + np.random.uniform(-0.0005, 0.0005, n)
return rates
rates = np.concatenate(list(map(get_rates, set_num_tenors)), axis = 0)
zero_rate_data = {
'times': times,
'groups': groups,
'rates': rates
}
zero_rate_data_df = pd.DataFrame.from_dict(zero_rate_data)
#@title Compute forward rates for sets with different number of tenors of zero rates with batching.
import tf_quant_finance.rates.forwards as forwards
dtype = np.float64
tf.reset_default_graph()
forward_rates = forwards.forward_rates_from_yields(
rates, times, groups=groups, dtype=dtype)
t = time.time()
with tf.Session() as sess:
forward_rates = sess.run(forward_rates)
time_batch = time.time() - t
zero_rate_data_df['forward_rates'] = forward_rates
# Plot forward rates for a random sample sets of zero rates
sample_groups = np.random.choice(np.unique(groups), 5)
plt.figure(figsize=(14,6))
col_palette = sns.color_palette("Blues", 5)
mask = list(zero_rate_data_df.groups.isin(sample_groups))
plot_data = zero_rate_data_df.iloc[mask]
sns.set()
sns.set_context("talk")
sns.lineplot(x='times', y='forward_rates', data=plot_data,
hue='groups',legend='full', palette=col_palette)
plt.title('Sample of estimated forward rate sets', fontsize=16)
plt.xlabel('Marked Tenor', fontsize=14)
plt.ylabel('Forward Rate', fontsize=14)
legend = plt.legend()
legend.texts[0].set_text("Fwd Rate Group")
plt.show()
"""
Explanation: Example 2: Compute forward rates given a set of zero rates
Denote the price of a zero coupon bond maturing at time $t$ by $Z(t)$. Then the zero rate to time $t$ is defined as
$$
\begin{equation}
r(t) = - \ln(Z(t)) / t
\end{equation}
$$
This is the (continuously compounded) interest rate that applies between time $0$ and time $t$ as seen at time $0$. The forward rate between times $t_1$ and $t_2$ is defined as the interest rate $f(t_1, t_2)$ that applies to the period $[t_1, t_2]$ as seen from today, so that $\exp(-f(t_1, t_2)(t_2 - t_1)) = Z(t_2) / Z(t_1)$. It follows that
$$\begin{align}
f(t_1, t_2) &= - (\ln Z(t_2) - \ln Z(t_1)) / (t_2 - t_1) \\
&= (t_2 \, r(t_2) - t_1 \, r(t_1)) / (t_2 - t_1) \\
\end{align}$$
Given a sequence of increasing times $[t_1, t_2, \ldots, t_n]$ and the zero rates for those times, this function computes the forward rates that apply to the consecutive time intervals i.e. $[0, t_1], [t_1, t_2], \ldots, [t_{n-1}, t_n]$ using the last equation above. Note that for the interval $[0, t_1]$ the forward rate is the same as the zero rate.
Generating zero rates data
We generate num_zero_rate_bonds sets of zero rates, each with between 2 and 10 marked tenors from $[0.25, 0.5, 1, 1.5, 2, 3, 5, 10, 20, 30]$, always starting at $0.25$. The rates for each set are obtained as follows:
Interpolate the US Treasury yield curve from Example 1 at the marked tenors.
Perturb each interpolated rate by a uniform random draw from $[-0.0005, 0.0005]$.
End of explanation
"""
#@title Compare forward rate computation: batching vs non-batching.
num_zero_rate_bonds2 = 100
num_tenors = [2, 3, 4, 5, 6, 7, 8, 10]
marked_tenors = [0.25, 0.5, 1, 1.5, 2, 3, 5, 10, 20, 30]
# Create a mix of `num_zero_rate_bonds2` sets of zero rates.
set_num_tenors = np.random.choice(num_tenors, num_zero_rate_bonds2)
def get_slice(n):
# Function to get marked tenors for a bond with 'n' tenors.
return marked_tenors[slice(n)]
times = np.concatenate(list(map(get_slice, set_num_tenors)), axis = 0)
# Set up a grouping argument for implementing batching. See
# `forward_rates_from_yields` in tff.forwards.
groups = np.repeat(range(0, num_zero_rate_bonds2), set_num_tenors)
# Construct Rate Curve to generate Zero Rates
tf.reset_default_graph()
curve_required_tenors3 = marked_tenors
rate_curve_interpolated3 = tff.math.interpolation.linear.interpolate(
curve_required_tenors3, tenor_curve,
rate_curve, dtype = np.float64)
with tf.Session() as sess:
rate_curve_interpolated3 = sess.run(rate_curve_interpolated3)
def get_rates(n):
# Perturb rate curve
rates = rate_curve_interpolated3[0:n]
rates = rates + np.random.uniform(-0.0005, 0.0005, n)
return rates
rates = np.concatenate(list(map(get_rates, set_num_tenors)), axis = 0)
# Non-batch.
tf.reset_default_graph()
time_non_batch = 0
with tf.Session() as sess:
for group in np.unique(groups):
forward_rates_non_batch = forwards.forward_rates_from_yields(
rates[groups == group], times[groups == group], dtype=dtype)
t = time.time()
forward_rates_non_batch = sess.run(forward_rates_non_batch)
time_non_batch += time.time() - t
print('wall time to compute forward rates for {} bonds without batching: '.format(num_zero_rate_bonds2), time_non_batch)
print('wall time to compute forward rates for {} bonds with batching: '.format(num_zero_rate_bonds), time_batch)
output_string = """Computing forward rates for {} bonds without batching is {} times
slower than for {} bonds with batching."""
print(output_string.format(num_zero_rate_bonds2,
round(time_non_batch/time_batch, 1),
num_zero_rate_bonds))
"""
Explanation: Forward rates (batching vs non-batching)
Below we compare the computation of forward rates with and without batching. We see that computing the forward rates for 100 bonds is about 2.5 times slower than computing the forward rates for 100000 bonds with batching.
End of explanation
"""
# @title Create bond data
# The following example demonstrates the usage by building the implied curve
# from four coupon bearing bonds.
dtype=np.float64
# These need to be sorted by expiry time.
cashflow_times = [
np.array([0.25, 0.5, 0.75, 1.0], dtype=dtype),
np.array([0.5, 1.0, 1.5, 2.0], dtype=dtype),
np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0], dtype=dtype),
np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0],
dtype=dtype)
]
cashflows = [
# 1 year bond with 5% three monthly coupon.
np.array([12.5, 12.5, 12.5, 1012.5], dtype=dtype),
# 2 year bond with 6% semi-annual coupon.
np.array([30, 30, 30, 1030], dtype=dtype),
# 3 year bond with 8% semi-annual coupon.
np.array([40, 40, 40, 40, 40, 1040], dtype=dtype),
# 4 year bond with 3% semi-annual coupon.
np.array([15, 15, 15, 15, 15, 15, 15, 1015], dtype=dtype)
]
# The present values of the above cashflows.
pvs = np.array([
999.68155223943393, 1022.322872470043, 1093.9894418810143,
934.20885689015677
], dtype=dtype)
#@title Build and plot the bond curve
tf.reset_default_graph()
from tf_quant_finance.rates import hagan_west
results = hagan_west.bond_curve(cashflows, cashflow_times, pvs)
with tf.Session() as sess:
results = sess.run(results)
# Plot Rate Curve
plt.figure(figsize=(14,6))
col_palette = sns.color_palette("Blues", 2)
sns.set()
sns.set_context("talk")
sns.lineplot(x=results.times, y=results.discount_rates, palette=col_palette)
plt.title('Estimated Discount Rates', fontsize=16)
plt.xlabel('Marked Tenor', fontsize=14)
plt.ylabel('Discount Rate', fontsize=14)
plt.show()
"""
Explanation: Pricing 100 bonds without batching is about 10 times slower than pricing 100000 bonds with batching.
Example 3: Constructing a bond discount curve
Building discount curves is a core problem in mathematical finance. Discount curves are built using the available market data in liquidly traded rates
products. These include bonds, swaps, forward rate agreements (FRAs) or eurodollar futures contracts.
Here we show how to build a bond discount rate curve. A discount curve is a function of time which gives the interest rate that applies to a unit of currency deposited today for a period of time $t$. The traded price of bonds implicitly contains the market view on the discount rates. The purpose of discount curve construction is to extract this information.
The algorithm we use here is based on the Monotone Convex Interpolation method described by Hagan and West (2006, 2008).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/b36af73820a7a52a4df3c42b66aef8a5/source_power_spectrum_opm.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans = mne.transforms.Transform('head', 'mri') # use identity transform
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
"""
Explanation: Compute source power spectral density (PSD) of VectorView and OPM data
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm :footcite:TadelEtAl2011
OMEGA resting tutorial pipeline <bst_omega_>_.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
Preprocessing
End of explanation
"""
raws = dict()
raw_erms = dict()
new_sfreq = 60. # Nyquist frequency (30 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)
# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
"""
Explanation: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
End of explanation
"""
titles = dict(vv='VectorView', opm='OPM')
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
fig.suptitle(titles[kind])
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
"""
Explanation: Explore data
End of explanation
"""
# Here we use a reduced size source space (oct5) just for speed
src = mne.setup_source_space(
subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)
# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['dist'], src[1]['dist']
bem = mne.read_bem_solution(bem_fname)
# For speed, let's just use a 1-layer BEM
bem = mne.make_bem_solution(bem['surfs'][-1:])
fwd = dict()
# check alignment and generate forward for VectorView
kwargs = dict(azimuth=0, elevation=90, distance=0.6, focalpoint=(0., 0., 0.))
fig = mne.viz.plot_alignment(
raws['vv'].info, trans=vv_trans_fname, subject=subject,
subjects_dir=subjects_dir, dig=True, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, **kwargs)
fwd['vv'] = mne.make_forward_solution(
raws['vv'].info, vv_trans_fname, src, bem, eeg=False, verbose=True)
"""
Explanation: Alignment and forward
End of explanation
"""
with mne.use_coil_def(opm_coil_def_fname):
fig = mne.viz.plot_alignment(
raws['opm'].info, trans=opm_trans, subject=subject,
subjects_dir=subjects_dir, dig=False, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, **kwargs)
fwd['opm'] = mne.make_forward_solution(
raws['opm'].info, opm_trans, src, bem, eeg=False, verbose=True)
del src, bem
"""
Explanation: And for OPM:
End of explanation
"""
freq_bands = dict(alpha=(8, 12), beta=(15, 29))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())
snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
noise_cov = mne.compute_raw_covariance(raw_erms[kind])
inverse_operator = mne.minimum_norm.make_inverse_operator(
raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
raws[kind], inverse_operator, lambda2=lambda2,
n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs
# Normalize each source point by the total power across freqs
for band, limits in freq_bands.items():
data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
topos[kind][band] = mne.EvokedArray(
100 * data / topo_norm, sensor_psd.info)
stcs[kind][band] = \
100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
del inverse_operator
del fwd, raws, raw_erms
"""
Explanation: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
End of explanation
"""
def plot_band(kind, band):
"""Plot activity within a frequency band on the subject's brain."""
title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
    fig = topos[kind][band].plot_topomap(
        times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
        time_format=title)
brain = stcs[kind][band].plot(
subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
time_label=title, title=title, colormap='inferno',
time_viewer=False, show_traces=False,
clim=dict(kind='percent', lims=(70, 85, 99)), smoothing_steps=10)
brain.show_view(azimuth=0, elevation=0, roll=0)
return fig, brain
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
"""
Explanation: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Alpha
End of explanation
"""
fig_beta, brain_beta = plot_band('vv', 'beta')
"""
Explanation: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors. VectorView first:
End of explanation
"""
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
"""
Explanation: Then OPM:
End of explanation
"""
|
xpmanoj/content | HW5.ipynb | mit | %matplotlib inline
import json
import numpy as np
import networkx as nx
import requests
from pattern import web
import matplotlib.pyplot as plt
from bs4 import BeautifulSoup as bs
# set some nicer defaults for matplotlib
from matplotlib import rcParams
#these colors come from colorbrewer2.org. Each is an RGB triplet
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843),
(0.4, 0.4, 0.4)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark2_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.grid'] = False
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'none'
def remove_border(axes=None, top=False, right=False, left=True, bottom=True):
"""
Minimize chartjunk by stripping out unnecessary plot borders and axis ticks
The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn
"""
ax = axes or plt.gca()
ax.spines['top'].set_visible(top)
ax.spines['right'].set_visible(right)
ax.spines['left'].set_visible(left)
ax.spines['bottom'].set_visible(bottom)
#turn off all ticks
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_ticks_position('none')
#now re-enable visibles
if top:
ax.xaxis.tick_top()
if bottom:
ax.xaxis.tick_bottom()
if left:
ax.yaxis.tick_left()
if right:
ax.yaxis.tick_right()
"""
Explanation: Homework 5: Networks and Congress
Due Friday, November 15, 11:59pm
<img src="http://img.washingtonpost.com/rf/image_1024w/2010-2019/WashingtonPost/2011/08/05/National-Politics/Images/uscap.JPG">
<br>
End of explanation
"""
"""
Function
--------
get_senate_vote
Scrapes a single JSON page for a particular Senate vote, given by the vote number
Parameters
----------
vote : int
The vote number to fetch
Returns
-------
vote : dict
The JSON-decoded dictionary for that vote
Examples
--------
>>> get_senate_vote(11)['bill']
{u'congress': 113,
u'number': 325,
u'title': u'A bill to ensure the complete and timely payment of the obligations of the United States Government until May 19, 2013, and for other purposes.',
u'type': u'hr'}
"""
#your code here
def get_senate_vote(vote):
url = 'https://www.govtrack.us/data/congress/113/votes/2013/s%i/data.json' %vote
vote_data = requests.get(url)
return json.loads(vote_data.text)
get_senate_vote(10)
"""
Function
--------
get_all_votes
Scrapes all the Senate votes from http://www.govtrack.us/data/congress/113/votes/2013,
and returns a list of dicts
Parameters
-----------
None
Returns
--------
votes : list of dicts
List of JSON-parsed dicts for each senate vote
"""
#Your code here
def get_all_votes():
url = 'http://www.govtrack.us/data/congress/113/votes/2013'
response = requests.get(url)
soup = bs(response.content, "lxml")
s_votes = [a['href'][1:-1] for a in soup.find_all('a')
if a['href'].startswith('s')]
return [get_senate_vote(int(vote)) for vote in s_votes]
vote_data = get_all_votes()
vote_data[0]['votes']['Yea'][0]['display_name']
"""
Explanation: The website govtrack.us collects data on activities in the Senate and House of Representatives. It's a great source of information for making data-driven assessments about Congress.
Problem 1.
The directories at http://www.govtrack.us/data/congress/113/votes/2013 contain JSON information about every vote cast for the current (113th) Congress. Subdirectories beginning with "S" correspond to Senate votes, while subdirectories beginning with "H" correspond to House votes.
Write two functions: one that downloads and parses a single Senate vote page given the vote number, and another that repeatedly calls this function to build a full collection of Senate votes from the 113th Congress.
End of explanation
"""
"""
Function
--------
vote_graph
Parameters
----------
data : list of dicts
The vote database returned from get_vote_data
Returns
-------
graph : NetworkX Graph object, with the following properties
1. Each node in the graph is labeled using the `display_name` of a Senator (e.g., 'Lee (R-UT)')
2. Each node has a `color` attribute set to 'r' for Republicans,
'b' for Democrats, and 'k' for Independent/other parties.
3. The edges between two nodes are weighted by the number of
times two senators have cast the same Yea or Nay vote
4. Each edge also has a `difference` attribute, which is set to `1 / weight`.
Examples
--------
>>> graph = vote_graph(vote_data)
>>> graph.node['Lee (R-UT)']
{'color': 'r'} # attributes for this senator
>>> len(graph['Lee (R-UT)']) # connections to other senators
101
>>> graph['Lee (R-UT)']['Baldwin (D-WI)'] # edge relationship between Lee and Baldwin
{'difference': 0.02, 'weight': 50}
"""
def _color(s):
if '(R' in s:
return 'r'
if '(D' in s:
return 'b'
return 'k'
def vote_graph(data):
senators = set(x['display_name'] for d in data for vote_grp in d['votes'].values() for x in vote_grp)
weights = {s: {ss: 0 for ss in senators if ss != s} for s in senators}
for d in data:
for grp in ['Yea', 'Nay']:
if grp not in d['votes']:
continue
vote_grp = d['votes'][grp]
for i in range(len(vote_grp)):
for j in range(i + 1, len(vote_grp)):
sen1 = vote_grp[i]['display_name']
sen2 = vote_grp[j]['display_name']
weights[min(sen1, sen2)][max(sen1, sen2)] += 1
g = nx.Graph()
for s in senators:
g.add_node(s)
g.node[s]['color'] = _color(s)
for s1, neighbors in weights.items():
for s2, weight in neighbors.items():
if weight == 0:
continue
g.add_edge(s1, s2, weight= weight, difference = 1. / weight)
return g
votes = vote_graph(vote_data)
"""
Explanation: Problem 2
Now, turn these data into a NetworkX graph, according to the spec below. For details on using NetworkX, consult the lab materials for November 1, as well as the NetworkX documentation.
End of explanation
"""
#this makes sure draw_spring results are the same at each call
np.random.seed(1)
color = [votes.node[senator]['color'] for senator in votes.nodes()]
#determine position of each node using a spring layout
pos = nx.spring_layout(votes, iterations=200)
#plot the edges
nx.draw_networkx_edges(votes, pos, alpha = .05)
#plot the nodes
nx.draw_networkx_nodes(votes, pos, node_color=color)
#draw the labels
lbls = nx.draw_networkx_labels(votes, pos, alpha=.5, font_size=8)
#coordinate information is meaningless here, so let's remove it
plt.xticks([])
plt.yticks([])
remove_border(left=False, bottom=False)
"""
Explanation: How (and how not) to visualize networks
Network plots often look impressive, but creating sensible network plots is tricky. From Ben Fry, the author of the Processing program:
<blockquote>
Usually a graph layout isn’t the best option for data sets larger than a few dozen nodes. You’re most likely to wind up with enormous spider webs or balls of string, and the mess seen so far is more often the case than not. Graphs can be a powerful way to represent relationships between data, but they are also a very abstract concept, which means that they run the danger of meaning something only to the creator of the graph. Often, simply showing the structure of the data says very little about what it actually means, even though it’s a perfectly accurate means of representing the data. Everything looks like a graph, but almost nothing should ever be drawn as one.
</blockquote>
Let's look at bad and better ways of visualizing the senate vote network.
First, consider the "default" plot from networkx.
End of explanation
"""
#Your code here
plt.figure(figsize=(15, 10))
np.random.seed(5)
mst = nx.minimum_spanning_tree(votes, weight='difference')
pos = nx.spring_layout(mst, iterations=900, k=.008, weight='difference')
mst_edges = list(nx.minimum_spanning_edges(votes, weight='difference'))
nl = votes.nodes()
c = [votes.node[n]['color'] for n in nl]
nx.draw_networkx_edges(votes, pos, edgelist=mst_edges, alpha=.2)
nx.draw_networkx_nodes(votes, pos, nodelist = nl, node_color = c, node_size=60)
for p in pos.values():
p[1] += .02
nx.draw_networkx_labels(votes, pos, font_color='k', font_size=7)
plt.title("MST of Vote Disagreement", fontsize=18)
plt.xticks([])
plt.yticks([])
remove_border(left=False, bottom=False)
"""
Explanation: The spring layout tries to group nodes with large edge-weights near to each other. In this context, that means it tries to organize the Senate into similarly-voting cliques. However, there's simply too much going on in this plot -- we should simplify the representation.
Problem 3
Compute the Minimum Spanning Tree of this graph, using the difference edge attribute as the weight to minimize. A Minimum Spanning Tree is the subset of edges which trace at least one path through all nodes ("spanning"), with minimum total edge weight. You can think of it as a simplification of a network.
Plot this new network, making modifications as necessary to prevent the graph from becoming too busy.
End of explanation
"""
#Your code here
bet = nx.closeness_centrality(votes, distance='difference')
bipartisans = sorted(bet, key=lambda x: -bet[x])
print "Highest closeness"
for senator in bipartisans[:5]:
print "%20.20s\t%0.3f" % (senator, bet[senator])
print
print "Lowest closeness"
for senator in bipartisans[-5:]:
print "%20.20s\t%0.3f" % (senator, bet[senator])
plt.figure(figsize=(15, 4))
x = np.arange(len(nl))
y = np.array([bet[n] for n in nl])
c = np.array([votes.node[n]['color'] for n in nl])
ind = np.argsort(y)
y = y[ind]
c = c[ind]
plt.bar(x, y, color=c, align='center', width=.8)
remove_border(left=None, bottom=None)
ticks = plt.xticks(x, [nl[i] for i in x[ind]],
rotation='vertical', fontsize=7)
limits = plt.xlim(-1, x[-1] + 1)
"""
Explanation: Problem 4
While this graph has less information, the remaining information is easier to digest. What does the Minimum Spanning Tree mean in this context? How does this graph relate to partisanship in the Senate? Which nodes in this graph are the most and least bi-partisan?
Your answer here
The minimum spanning tree keeps the edges between the pairs of senators who voted most similarly across all votes. Specifically, each edge in the tree above connects a pair of senators with a small `difference` weight, meaning they frequently cast the same Yea or Nay vote. The graph also indicates clear partisanship in the Senate, with strong cohesion within each of the two parties and very few exceptions.
Official answer:
The edges of a minimum spanning tree trace a path of low resistance through the network. In the present context, this has the effect of moving bipartisan Senators like Hagan towards the center of the graph -- it is much easier to connect Hagan to a Republican node than, say, a partisan Democrat like Al Franken. Partisan Senators are pushed away from the center of the graph and deeper into the party cliques.
This scheme also moves outlier senators to the outside of the graph. For example, John Kerry cast very few votes before becoming Secretary of State. Most of the edges connected to John Kerry have large difference values, so the fewest possible number of edges (1) remain in the MST.
Problem 5
(For this problem, use the full graph for centrality computation, and not the Minimum Spanning Tree)
Networkx can easily compute centrality measurements.
Briefly discuss what closeness_centrality means, both mathematically and in the context of the present graph -- how does the centrality relate to partisanship? Choose a way to visualize the closeness_centrality score for each member of the Senate, using edge difference as the distance measurement. Determine the 5 Senators with the highest and lowest centralities.
Comment on your results. In particular, note the outliers John Kerry (who recently resigned his Senate seat when he became Secretary of State), Mo Cowan (Kerry's interim replacement) and Ed Markey (Kerry's permanent replacement) have low centrality scores -- why?
Your discussion here
The closeness centrality is the reciprocal of the average `difference` distance between a Senator and all other Senators. Bipartisan voters share more votes with members of the opposite party, which tends to increase their centrality. However, these senators also vote less often with their own party, which can decrease centrality.
Centrality scores are also small for people who haven't cast many votes (like John Kerry, Mo Cowan, and Ed Markey). This says nothing about bipartisanship.
End of explanation
"""
#your code here
"""
Here, we compute the mean weight for the edges that connect a Senator
to a node in the other party (we consider Independents to be Democrats
for this analysis).
This only considers how similarly a Senator votes with the other party.
The scatter plot shows that the betweenness centrality and bipartisan score
correlate with each other. However, the betweenness centrality judges Democrats
to be more bipartisan as a whole. Part of this is a bias due to the fact
that Democrats are the majority party in the Senate right now, so their
votes are considered more "central" due to their bigger numbers.
"""
def bipartisan_score(graph, node):
party = graph.node[node]['color']
other = 'r' if party != 'r' else 'b'
return np.mean([v['weight'] for k, v in graph[node].items() if graph.node[k]['color'] == other])
bp_score = {node: bipartisan_score(votes, node) for node in votes.nodes()}
bp2 = sorted(bp_score, key=lambda x: -1 * bp_score[x])
print "Most Bipartisan"
for senator in bp2[:5]:
print "%20.20s\t%0.3f" % (senator, bp_score[senator])
print
print "Least Bipartisan"
for senator in bp2[-5:]:
print "%20.20s\t%0.3f" % (senator, bp_score[senator])
senators = bp_score.keys()
x = [bet[s] for s in senators]
y = [bp_score[s] for s in senators]
c = [votes.node[s]['color'] for s in senators]
plt.scatter(x, y, 80, color=c,
alpha=.5, edgecolor='white')
plt.xlabel("Betweenness Centrality")
plt.ylabel("Bipartisan Score")
remove_border()
"""
Explanation: Problem 6
Centrality isn't a perfect proxy for bipartisanship, since it gauges how centralized a node is to the network as a whole, and not how similar a Democrat node is to the Republican sub-network (and vice versa).
Can you come up with another measure that better captures bipartisanship than closeness centrality? Develop your own metric -- how does it differ from the closeness centrality? Use visualizations to support your points.
End of explanation
"""
"""
Function
--------
get_senate_bill
Scrape the bill data from a single JSON page, given the bill number
Parameters
-----------
bill : int
Bill number to fetch
Returns
-------
A dict, parsed from the JSON
Examples
--------
>>> bill = get_senate_bill(10)
>>> bill['sponsor']
{u'district': None,
u'name': u'Reid, Harry',
u'state': u'NV',
u'thomas_id': u'00952',
u'title': u'Sen',
u'type': u'person'}
>>> bill['short_title']
u'Agriculture Reform, Food, and Jobs Act of 2013'
"""
#your code here
#your code here
def get_senate_bill(bill):
url = 'https://www.govtrack.us/data/congress/113/bills/s/s%i/data.json' %bill
bill_data = requests.get(url)
return json.loads(bill_data.text)
"""
Function
--------
get_all_bills
Scrape all Senate bills at http://www.govtrack.us/data/congress/113/bills/s
Parameters
----------
None
Returns
-------
A list of dicts, one for each bill
"""
#your code here
def get_all_bills():
url = 'http://www.govtrack.us/data/congress/113/bills/s'
response = requests.get(url)
soup = bs(response.content, "lxml")
s_bills = [a['href'][1:-1] for a in soup.find_all('a') if a['href'].startswith('s')]
n_bills = len(s_bills)
return [get_senate_bill(i) for i in range(1, n_bills+1)]
#write json data in to a file
bills = json.dumps(get_all_bills())
with open('./data/bills.json','w') as fp:
fp.write(bills)
#read data from file
bill_list = json.load(open('./data/bills.json'))
bill_list[0]['cosponsors']
"""
Explanation: Your discussion here
Leadership in the Senate
There are many metrics to quantify the leadership in the Senate.
Senate leaders sponsor and co-sponsor lots of bills
Leaders sit on many committees, as well as more important committees
Leaders usually have been in office for a long time
Another approach uses the philosophy behind how Google ranks search results. The core idea behind Google's PageRank algorithm is:
A "good" website (i.e. one to rank highly in search results) is linked to by many other websites
A link found on a "good" website is more important than a link found on a "bad" website
The PageRank algorithm thus assigns scores to nodes in a graph based on how many neighbors a node has, as well as the score of those neighbors.
This technique can be adapted to rank Senate leadership. Here, nodes correspond to Senators, and edges correspond to a senator co-sponsoring a bill sponsored by another Senator. The weight of each edge from node A to B is the number of times Senator A has co-sponsored a bill whose primary sponsor is Senator B. If you interpret the PageRank scores of such a network to indicate Senate leadership, you are then assuming:
Leaders sponsor more bills
Leaders attract co-sponsorship from other leaders
Problem 7
Govtrack stores information about each Senate bill in the current congress at http://www.govtrack.us/data/congress/113/bills/s/. As in problem 1, write two functions to scrape these data -- the first function downloads a single bill, and the second function calls the first to loop over all bills.
End of explanation
"""
"""
Function
--------
bill_graph
Turn the bill graph data into a NetworkX Digraph
Parameters
----------
data : list of dicts
The data returned from get_all_bills
Returns
-------
graph : A NetworkX DiGraph, with the following properties
* Each node is a senator. For a label, use the 'name' field
from the 'sponsor' and 'cosponsors' dict items
* Each edge from A to B is assigned a weight equal to how many
bills are sponsored by B and co-sponsored by A
"""
#Your code here
bg = nx.DiGraph()
def bill_graph(data):
sp = nx.DiGraph()
for bill in data:
sponsor = bill['sponsor']['name']
sponsor_data = bill['sponsor']
cosponsors = [cs['name'] for cs in bill['cosponsors']]
if sponsor not in sp:
sp.add_node(sponsor, **sponsor_data)
for cosponsor in bill['cosponsors']:
if cosponsor['name'] not in sp:
sp.add_node(cosponsor['name'], **cosponsor)
cosponsor = cosponsor['name']
try:
w = sp[cosponsor][sponsor]['weight'] + 1
except KeyError:
                w = 1
sp.add_edge(cosponsor, sponsor, weight=w)
return sp
bills = bill_graph(bill_list)
"""
Explanation: Problem 8
Write a function to build a Directed Graph (DiGraph) from these data, according to the following spec:
End of explanation
"""
#Your code here
pagerank = nx.pagerank_numpy(bills)
names = np.array(pagerank.keys())
vals = np.array([pagerank[n] for n in names])
ind = np.argsort(vals)
names = names[ind]
vals = vals[ind]
print "Highest Scores"
for n, v in zip(names, vals)[-5:][::-1]:
print "%20.20s\t%0.3f" % (n, v)
print
print "Lowest Scores"
for n, v in zip(names, vals)[:5]:
print "%20.20s\t%0.3f" % (n, v)
#Your code here
deg = nx.degree(bills)
plt.scatter([deg[n] for n in bills.nodes()],
[pagerank[n] for n in bills.nodes()], 80, alpha=.8,
color='k', edgecolor='white')
labels = ['Reid, Harry', 'Lautenberg, Frank R.', 'Menendez, Robert', 'Harkin, Tom']
for lbl in labels:
plt.annotate(lbl, (deg[lbl], pagerank[lbl] + .002), fontsize=10, rotation=10)
plt.xlabel("Degree")
plt.ylabel("PageRank")
remove_border()
"""
Explanation: Problem 9
Using nx.pagerank_numpy, compute the PageRank score for each senator in this graph. Visualize the results. Determine the 5 Senators with the highest
PageRank scores. How effective is this approach at identifying leaders? How does the PageRank rating compare to the degree of each node?
Note: you can read about individual Senators by searching for them on the govtrack website.
End of explanation
"""
nx.write_gexf(votes, 'votes.gexf')
"""
Explanation: Your discussion here
The PageRank approach does seem to be effective at identifying influential Senators like Tom Harkin and Harry Reid (the Majority Leader). We see in particular that Harry Reid's PageRank score is relatively higher than his degree -- he seems to sponsor fewer bills overall, but those bills appear to be more important. This makes sense, since he is the figurehead of the Democratic party in the Senate, and thus probably focuses on the highest-profile legislation.
Interactive Visualization
Producing a good node link layout is not quite so simple. Nevertheless, we will give it a try.
We will use Gephi for interactive graph visualization. Gephi supports a wide variety of graph file formats, and NetworkX exports to several of them. We'll use the Graph Exchange XML Format (GEXF).
End of explanation
"""
from IPython.display import Image
path = 'name_of_your_screenshot'
Image(path)
"""
Explanation: Problem 10: Analysis with Gephi
Download and install Gephi. See the lab for a brief introduction. Load the exported votes file. Try to produce a layout that clearly separates Democrats from Republicans (hint: filter on edge weight and re-layout once you filtered). Run PageRank and some other statistics and try encoding them with node color and node size. Run the "Modularity" statistic and encode the results in color.
Include a screenshot of your "best" visualization and embed the image here with IPython.display.Image. Make sure to include this image in your submission.
Explain your observations. Is the network visualization very helpful? Try to visualize your LinkedIn network (see the lab) or the one provided in the lab. Which dataset is more suitable for visualization and why is there a difference?
End of explanation
"""
|
Nathx/think_stats | resolved/chap05ex.ipynb | gpl-3.0 | from __future__ import print_function, division
import thinkstats2
import thinkplot
from brfss import *
import populations as p
import random
import pandas as pd
import test_models
%matplotlib inline
"""
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
End of explanation
"""
import scipy.stats
"""
Explanation: Exercise 5.1
In the BRFSS (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the U.S. male population is in this range? Hint: use scipy.stats.norm.cdf.
<tt>scipy.stats</tt> contains objects that represent analytic distributions
End of explanation
"""
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
"""
Explanation: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
End of explanation
"""
dist.mean(), dist.std()
"""
Explanation: A "frozen random variable" can compute its mean and standard deviation.
End of explanation
"""
dist.cdf(mu-sigma)
"""
Explanation: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
End of explanation
"""
dist.cdf(185.42) - dist.cdf(177.8)
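The same interval probability can be cross-checked without scipy, since the normal CDF can be written with math.erf. A small standard-library sketch (the helper name is mine; the heights in cm come from the exercise):

```python
import math

def norm_cdf(x, mu=178.0, sigma=7.7):
    # Normal CDF via the error function: 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# 5'10" = 177.8 cm, 6'1" = 185.42 cm
frac = norm_cdf(185.42) - norm_cdf(177.8)
print(frac)  # roughly 0.34, so about a third of U.S. men are in the eligible range
```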
thinkstats2.RandomSeed(17)
nrows = 10000
df = ReadBrfss(nrows=nrows)
MakeNormalPlot(df.age)
p.MakeFigures()
"""
Explanation: How many people are between 5'10" and 6'1"?
End of explanation
"""
alpha = 1.7
xmin = 1
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(xlabel='height (m)', ylabel='CDF')
"""
Explanation: Exercise 5.2
To get a feel for the Pareto distribution, let’s see how different the world would be if the distribution of human height were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum, 1 m, and median, 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a pareto distribution. In Pareto world, the distribution of human heights has parameters alpha=1.7 and xmin=1 meter. So the shortest person is 100 cm and the median is 150.
End of explanation
"""
dist.mean()
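These values agree with the closed-form Pareto expressions, median = $x_m \, 2^{1/\alpha}$ and mean = $\alpha x_m / (\alpha - 1)$ for $\alpha > 1$. A quick check with plain Python:

```python
alpha, xmin = 1.7, 1.0
median = xmin * 2 ** (1 / alpha)      # ~1.50 m, matching dist.median()
mean = alpha * xmin / (alpha - 1)     # ~2.43 m, matching dist.mean()
print(median, mean)
```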
"""
Explanation: What is the mean height in Pareto world?
End of explanation
"""
dist.cdf(dist.mean())
"""
Explanation: What fraction of people are shorter than the mean?
End of explanation
"""
(1 - dist.cdf(1000))*7000000000
"""
Explanation: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
End of explanation
"""
dist.isf(1/7000000000)
"""
Explanation: How tall do we expect the tallest person to be? Hint: find the height that yields about 1 person.
End of explanation
"""
alpha = 100
lam = 1
# random.weibullvariate(alpha, beta) takes the scale first and the shape second
sample = [random.weibullvariate(alpha, lam) for i in xrange(1000)]
cdf = thinkstats2.Cdf(sample)
# under a log-log transform of the complementary CDF, a Weibull CDF plots as a straight line
thinkplot.Cdf(cdf, transform='weibull')
thinkplot.Show(xlabel='x', ylabel='log(-log(1-CDF))')
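One transformation that works: for a Weibull CDF, log(-log(1 - CDF(x))) = k log(x) - k log(lambda), so plotting that quantity against log(x) gives a line whose slope is the shape k and whose intercept is -k log(lambda). A self-contained numpy sketch (the shape and scale values below are assumptions for the demo, not part of the exercise):

```python
import math
import random
import numpy as np

random.seed(42)
k, lam, n = 2.0, 1.5, 10000          # shape, scale (assumed demo values)
# note: random.weibullvariate(alpha, beta) takes the scale first, then the shape
xs = np.array(sorted(random.weibullvariate(lam, k) for _ in range(n)))

# empirical complementary CDF; drop the last point, where it reaches 0
ccdf = 1.0 - np.arange(1, n + 1, dtype=float) / n
xs, ccdf = xs[:-1], ccdf[:-1]

# the transform: log(-log(CCDF)) vs log(x) should be close to a straight line
ys = np.log(-np.log(ccdf))
slope, intercept = np.polyfit(np.log(xs), ys, 1)
print(slope, intercept)  # slope ~ k = 2, intercept ~ -k*log(lam) ~ -0.81
```

The slope recovers the shape parameter and the intercept recovers the scale, which is what the exercise asks about.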
"""
Explanation: Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is
$CDF(x) = 1 − \exp(−(x / λ)^k)$
Can you find a transformation that makes a Weibull distribution look like a straight line? What do the slope and intercept of the line indicate?
Use random.weibullvariate to generate a sample from a Weibull distribution and use it to test your transformation.
End of explanation
"""
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(yscale='log')
sample = [random.expovariate(1/33) for i in xrange(44)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(yscale='log')
test_models.main("test_models.py", "mystery2.dat")
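As a quick numeric sanity check (a sketch, using a larger sample so the noise is small): the mean of values drawn with random.expovariate(1/33) should come out near 33 minutes.

```python
import random

random.seed(18)
big_sample = [random.expovariate(1.0 / 33) for _ in range(10000)]
mean = sum(big_sample) / len(big_sample)
print(mean)  # close to 33
```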
"""
Explanation: Exercise 5.4
For small values of n, we don’t expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random values and compare it to the actual distribution. You can use random.expovariate to generate the values.
End of explanation
"""
|
datascience-practice/data-quest | python_introduction/beginner/.ipynb_checkpoints/Functions and Debugging-checkpoint.ipynb | mit | # The story is stored in the file "story.txt".
f = open("story.txt", "r")
story = f.read()
print(story)
"""
Explanation: 2: Reading the file in
Instructions
The story is stored in the "story.txt" file. Open the file and read the contents into the story variable.
Answer
End of explanation
"""
# We can split strings into lists with the .split() method.
# If we use a space as the input to .split(), it will split based on the space.
text = "Bears are probably better than sharks, but I can't get close enough to one to be sure."
tokenized_text = text.split(" ")
tokenized_story = story.split(" ")
print(tokenized_story)
"""
Explanation: 3: Tokenizing the file
Instructions
The story is loaded into the story variable.
Tokenize the story, and store the tokens into the tokenized_story variable.
Answer
End of explanation
"""
# We can use the .replace function to replace punctuation in a string.
text = "Who really shot John F. Kennedy?"
text = text.replace("?", "?!")
# The question mark has been replaced with ?!.
##print(text)
# We can replace strings with blank spaces, meaning that they are just removed.
text = text.replace("?", "")
# The question mark is gone now.
##print(text)
no_punctuation_tokens = []
for token in tokenized_story:
for p in [".", ",", "\n", "'", ";", "?", "!", "-", ":"]:
token = token.replace(p, "")
no_punctuation_tokens.append(token)
print(no_punctuation_tokens)
"""
Explanation: 4: Replacing punctuation
Instructions
The story has been loaded into tokenized_story.
Replace all of the punctuation in each of the tokens.
You'll need to loop through tokenized_story to do so.
You'll need to use multiple replace statements, one for each punctuation character to replace.
Append the token to no_punctuation_tokens once you are done replacing characters.
Don't forget to remove newlines!
Print out no_punctuation_tokens if you want to see which types of punctuation are still in the data.
Answer
End of explanation
"""
# We can make strings all lowercase using the .lower() method.
text = "MY CAPS LOCK IS STUCK"
text = text.lower()
# The text is much nicer to read now.
print(text)
lowercase_tokens = []
for token in no_punctuation_tokens:
lowercase_tokens.append(token.lower())
print(lowercase_tokens)
"""
Explanation: 5: Lowercasing the words
Instructions
The tokens without punctuation have been loaded into no_punctuation_tokens.
Loop through the tokens and lowercase each one.
Append each token to lowercase_tokens when you're done lowercasing.
Answer
End of explanation
"""
# A simple function that takes in a number of miles, and turns it into kilometers
# The input at position 0 will be put into the miles variable.
def miles_to_km(miles):
# return is a special keyword that indicates that the function will output whatever comes after it.
return miles/0.62137
# Returns the number of kilometers equivalent to one mile
print(miles_to_km(1))
# Convert a from 10 miles to kilometers
a = 10
a = miles_to_km(a)
# We can convert and assign to a different variable
b = 50
c = miles_to_km(b)
fahrenheit = 80
celsius = (fahrenheit - 32)/1.8
def f2c(f):
c = (f - 32)/1.8
return c
celsius_100 = f2c(100)
celsius_150 = f2c(150)
print(celsius_100, celsius_150)
"""
Explanation: 7: Making a basic function
Instructions
Define a function that takes degrees in fahrenheit as an input, and return degrees celsius
Use it to convert 100 degrees fahrenheit to celsius. Assign the result to celsius_100.
Use it to convert 150 degrees fahrenheit to celsius. Assign the result to celsius_150.
Answer
End of explanation
"""
def split_string(text):
return text.split(" ")
sally = "Sally sells seashells by the seashore."
# This splits the string into a list.
print(split_string(sally))
# We can assign the output of a function to a variable.
sally_tokens = split_string(sally)
lowercase_me = "I wish I was in ALL lowercase"
def to_lowercase(text):
return text.lower()
lowercased_string = to_lowercase(lowercase_me)
print(lowercased_string)
"""
Explanation: 8: Practice: functions
Instructions
Make a function that takes a string as input and outputs a lowercase version.
Then use it to turn the string lowercase_me to lowercase.
Assign the result to lowercased_string.
Answer
End of explanation
"""
# Sometimes, you will have problems with your code that cause python to throw an exception.
# Don't worry, it happens to all of us many times a day.
# An exception means that the program can't run, so you'll get an error in the results view instead of the normal output.
# There are a few different types of exceptions.
# The first we'll look at is a SyntaxError.
# This means that something is typed incorrectly (statements misspelled, quotes missing, and so on)
a = ["Errors are no fun!", "But they can be fixed", "Just fix the syntax and everything will be fine"]
b = 5
for item in a:
if b == 5:
print(item)
"""
Explanation: 9: Types of errors
Instructions
There are multiple syntax errors in the code cell below. You can tell because of the error showing up in the results panel. Fix the errors and get the code running properly. It should print all of the items in a.
Answer
End of explanation
"""
a = 5
if a == 6:
print("6 is obviously the best number")
print("What's going on, guys?")
else:
print("I never liked that 6")
"""
Explanation: 10: More syntax errors
Instructions
The code below has multiple syntax errors. Fix them so the code prints out "I never liked that 6"
Answer
End of explanation
"""
|
texib/spark_tutorial | 2.ProcessText Data.ipynb | gpl-2.0 | urllist = ['http://chahabi77.pixnet.net/blog/post/436715527',
'http://chahabi77.pixnet.net/blog/post/403682269',
'http://chahabi77.pixnet.net/blog/post/354943724',
'http://chahabi77.pixnet.net/blog/post/386442944',
'http://chahabi77.pixnet.net/blog/post/235296791',
]
"""
Explanation: The list of web pages to scrape
End of explanation
"""
import urllib2
import json
f = open('./pixnet.txt',"w")
for u in urllist:
line = {}
response = urllib2.urlopen(u)
html = response.read()
html = html.replace('\r','').replace('\n','')
line['html'] = html
line['url'] =u
line_str = json.dumps(line)
f.write(line_str+"\r\n")
f.close()
"""
Explanation: Download the pages and combine them into a single file
End of explanation
"""
import json
pixnet = sc.textFile('./pixnet.txt',use_unicode=False).map(
lambda x : json.loads(x)).map(lambda x : (x['url'],x['html']))
print "URL:", pixnet.first()[0]
print "Number of records: ", pixnet.count()
print "First 200 characters of HTML:", pixnet.first()[1][:200]
"""
Explanation: Let's take a look at the actual file contents
Click this link
Load the pages and parse each line as JSON
End of explanation
"""
count_number = pixnet.filter(lambda x : u"好吃" in x[1] ).count()
if count_number == 4 : print "Correct!"
"""
Explanation: Commonly used RDD functions:
map(func) - apply an operation to every element of the RDD
mapValues(func) - apply an operation to the values only, leaving the keys untouched
reduceByKey(func) - merge together the values that share the same key
count() - count the number of elements in the RDD
filter(func) - keep only the elements that satisfy the condition
first() - return the first element of the RDD
<span style="color: blue">Fill in the ?? to count how many pages contain "好吃" ("delicious")</span>
End of explanation
"""
def word_count(text):
return text.count(u"好吃")
print "'好吃' appears", word_count(u"老師好吃好吃好吃好吃!!!!"), "times"
pixnet.mapValues(word_count).collect()
total_count = pixnet.mapValues(word_count).map(lambda x : x[1]).reduce(lambda x,y: x+y)
if total_count == 23 : print "Correct!"
else : print "Wrong! Your answer is %d; the expected answer is 23" % (total_count)
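For intuition, the same filter/map/reduce pipeline can be sketched in plain Python without Spark, on a toy list of (url, html) pairs (the data below are made up):

```python
pages = [('u1', 'aaa bbb'), ('u2', 'bbb bbb'), ('u3', 'ccc')]

# filter + count: how many pages contain the substring 'bbb'
page_hits = len([p for p in pages if 'bbb' in p[1]])

# mapValues + reduce: total number of occurrences of 'bbb' across all pages
total = sum(html.count('bbb') for _, html in pages)
print(page_hits, total)  # 2 pages, 3 occurrences
```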
"""
Explanation: <span style="color: blue">Modify the code below to count the total number of occurrences of "好吃" (note: total occurrences, not the number of pages)</span>
<span style="color:red">Hint: modify the word_count function</span>
End of explanation
"""
|
angelmtenor/deep-learning | tensorboard/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
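A toy version of the same windowing scheme (the sizes are made up so the shapes are easy to read):

```python
import numpy as np

chars_demo = np.arange(21)                  # pretend "text" of 21 tokens
batch_size, num_steps = 2, 5
slice_size = batch_size * num_steps
n_batches = len(chars_demo) // slice_size   # 2 full slices; the last token is dropped
x = chars_demo[:n_batches * slice_size]
y = chars_demo[1:n_batches * slice_size + 1]
x = np.stack(np.split(x, batch_size))       # shape (2, 10): one row per batch element
y = np.stack(np.split(y, batch_size))       # y is x shifted by one character
print(x)
print(y)
```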
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size, state_is_tuple=False)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers, state_is_tuple=False)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
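A toy check of what pick_top_n does: zero out everything but the top N probabilities, renormalize, and sample only from those (the probabilities below are made up):

```python
import numpy as np

np.random.seed(0)
preds_demo = np.array([[0.05, 0.40, 0.10, 0.25, 0.20]])
p = np.squeeze(preds_demo).copy()
p[np.argsort(p)[:-3]] = 0          # keep only the 3 largest probabilities
p = p / np.sum(p)                  # renormalize so they sum to 1
c = np.random.choice(5, 1, p=p)[0]
print(p, c)                        # c is always one of the top-3 indices: 1, 3, or 4
```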
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
NEONScience/NEON-Data-Skills | tutorials/Python/Hyperspectral/indices/Calc_NDVI_Extract_Spectra_Masks_Tiles_py/Calc_NDVI_Extract_Spectra_Masks_Tiles_py.ipynb | agpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') #don't display warnings
# %load ../neon_aop_hyperspectral.py
"""
Created on Wed Jun 20 10:34:49 2018
@author: bhass
"""
import matplotlib.pyplot as plt
import numpy as np
import h5py, os, copy
def aop_h5refl2array(refl_filename):
"""aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns
1. reflectance array (with the no data value and reflectance scale factor applied)
2. dictionary of metadata including spatial information, and wavelengths of the bands
--------
Parameters
refl_filename -- full or relative path and name of reflectance hdf5 file
--------
Returns
--------
reflArray:
array of reflectance values
metadata:
dictionary containing the following metadata:
bad_band_window1 (tuple)
bad_band_window2 (tuple)
bands: # of bands (float)
data ignore value: value corresponding to no data (float)
epsg: coordinate system code (float)
map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)
reflectance scale factor: factor by which reflectance is scaled (float)
wavelength: wavelength values (float)
wavelength unit: 'm' (string)
--------
NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should be used for
data acquired 2016 and after. Data in earlier NEON hdf5 format (collected prior to 2016) is
expected to be re-processed after the 2018 flight season.
--------
Example Execution:
--------
sercRefl, sercRefl_metadata = h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') """
import h5py
#Read in reflectance hdf5 file
hdf5_file = h5py.File(refl_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
#Extract the reflectance & wavelength datasets
refl = hdf5_file[sitename]['Reflectance']
reflData = refl['Reflectance_Data']
reflRaw = refl['Reflectance_Data'].value
#Create dictionary containing relevant metadata information
metadata = {}
metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value
metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value
#Extract no data value & scale factor
metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value'])
metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor'])
#metadata['interleave'] = reflData.attrs['Interleave']
#Apply no data value
reflClean = reflRaw.astype(float)
arr_size = reflClean.shape
if metadata['data ignore value'] in reflRaw:
print('% No Data: ',np.round(np.count_nonzero(reflClean==metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]),1))
nodata_ind = np.where(reflClean==metadata['data ignore value'])
reflClean[nodata_ind]=np.nan
#Apply scale factor
reflArray = reflClean/metadata['reflectance scale factor']
#Extract spatial extent from attributes
metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters']
#Extract bad band windows
metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers'])
#Extract projection information
#metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['epsg'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
#Extract map information: spatial extent & resolution (pixel size)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
hdf5_file.close()
return reflArray, metadata
def plot_aop_refl(band_array,refl_extent,colorlimit=(0,1),ax=plt.gca(),title='',cbar ='on',cmap_title='',colormap='Greys'):
'''plot_refl_data reads in and plots a single band or 3 stacked bands of a reflectance array
--------
Parameters
--------
band_array: array of reflectance values, created from aop_h5refl2array
refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax)
use metadata['spatial extent'] from aop_h5refl2array function
colorlimit: optional, range of values to plot (min,max).
- helpful to look at the histogram of reflectance values before plotting to determine colorlimit.
ax: optional, default = current axis
title: optional; plot title (string)
cmap_title: optional; colorbar title
colormap: optional (string, see https://matplotlib.org/examples/color/colormaps_reference.html) for list of colormaps
--------
Returns
--------
plots flightline array of single band of reflectance data
--------
Examples:
--------
plot_aop_refl(sercb56,
sercMetadata['spatial extent'],
colorlimit=(0,0.3),
title='SERC Band 56 Reflectance',
cmap_title='Reflectance',
colormap='Greys_r') '''
import matplotlib.pyplot as plt
plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
if cbar == 'on':
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation for ticklabels
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
def stack_rgb(reflArray,bands):
red = reflArray[:,:,bands[0]-1]
green = reflArray[:,:,bands[1]-1]
blue = reflArray[:,:,bands[2]-1]
stackedRGB = np.stack((red,green,blue),axis=2)
return stackedRGB
def plot_aop_rgb(rgbArray,ext,ls_pct=5,plot_title=''):
from skimage import exposure
pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct,100-ls_pct))
img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=ext)
plt.title(plot_title + '\n Linear ' + str(ls_pct) + '% Contrast Stretch');
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
"""
Explanation: syncID: 19e0b890b3c64f46b2189c8273a2e0a4
title: "Calculate NDVI & Extract Spectra Using Masks in Python - Tiled Data"
description: "Learn to calculate Normalized Difference Vegetation Index (NDVI) and extract spectral using masks with Python and NEON tiled hyperspectral data products."
dateCreated: 2018-07-05
authors: Bridget Hass
contributors: Donal O'Leary
estimatedTime: 0.5 hours
packagesLibraries: numpy, h5py, gdal, matplotlib.pyplot
topics: hyperspectral-remote-sensing, HDF5, remote-sensing,
languagesTool: python
dataProduct: NEON.DP3.30006, NEON.DP3.30008
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/indices/Calc_NDVI_Extract_Spectra_Masks_Tiles_py/Calc_NDVI_Extract_Spectra_Masks_Tiles_py.ipynb
tutorialSeries: intro-hsi-py-series
urlTitle: calc-ndvi-tiles-py
In this tutorial, we will calculate the Normalized Difference Vegetation Index
(NDVI).
This tutorial uses the mosaiced or tiled NEON data product. For a tutorial
using the flightline data, please see <a href="/calc-ndvi-py" target="_blank"> Calculate NDVI & Extract Spectra Using Masks in Python - Flightline Data</a>.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Calculate NDVI from hyperspectral data in Python.
### Install Python Packages
* **numpy**
* **pandas**
* **gdal**
* **matplotlib**
* **h5py**
### Download Data
To complete this tutorial, you will use data available from the NEON 2017 Data
Institute.
This tutorial uses the following files:
<ul>
<li> <a href="https://www.neonscience.org/sites/default/files/neon_aop_spectral_python_functions_tiled_data.zip">neon_aop_spectral_python_functions_tiled_data.zip (10 KB)</a> <- Click to Download</li>
<li><a href="https://ndownloader.figshare.com/files/25752665" target="_blank">NEON_D02_SERC_DP3_368000_4306000_reflectance.h5 (618 MB)</a> <- Click to Download</li>
</ul>
<a href="https://ndownloader.figshare.com/files/25752665" class="link--button link--arrow">
Download Dataset</a>
The LiDAR and imagery data used to create this raster teaching data subset
were collected over the
<a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a>
and processed at NEON headquarters.
The entire dataset can be accessed on the
<a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>.
</div>
Calculate NDVI & Extract Spectra with Masks
Background:
The Normalized Difference Vegetation Index (NDVI) is a standard band-ratio calculation frequently used to analyze ecological remote sensing data. NDVI indicates whether the remotely-sensed target contains live green vegetation. When sunlight strikes objects, certain wavelengths of the electromagnetic spectrum are absorbed and other wavelengths are reflected. The pigment chlorophyll in plant leaves strongly absorbs visible light (with wavelengths in the range of 400-700 nm) for use in photosynthesis. The cell structure of the leaves, however, strongly reflects near-infrared light (wavelengths ranging from 700 - 1100 nm). Plants reflect up to 60% more light in the near infrared portion of the spectrum than they do in the green portion of the spectrum. By calculating the ratio of Near Infrared (NIR) to Visible (VIS) bands in hyperspectral data, we can obtain a metric of vegetation density and health.
The formula for NDVI is: $$NDVI = \frac{(NIR - VIS)}{(NIR+ VIS)}$$
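As a minimal numeric sketch (made-up reflectance values, not NEON data), the band ratio behaves as expected at the extremes:

```python
def ndvi(nir, vis):
    """Normalized Difference Vegetation Index for scalar reflectances."""
    return (nir - vis) / (nir + vis)

# Healthy vegetation reflects strongly in the NIR and absorbs visible light,
# so its NDVI approaches +1; bare surfaces sit near 0 and water can go negative.
dense = ndvi(0.50, 0.08)    # NIR >> VIS -> high positive NDVI
bare = ndvi(0.30, 0.30)     # equal reflectance -> NDVI of 0
water = ndvi(0.05, 0.10)    # VIS > NIR -> negative NDVI
print(dense, bare, water)
```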
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-indices/ndvi_tree.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-indices/ndvi_tree.png"></a>
<figcaption> NDVI is calculated from the visible and near-infrared light
reflected by vegetation. Healthy vegetation (left) absorbs most of the
visible light that hits it, and reflects a large portion of
near-infrared light. Unhealthy or sparse vegetation (right) reflects more
visible light and less near-infrared light. Source: <a href="https://www.researchgate.net/figure/266947355_fig1_Figure-1-Green-vegetation-left-absorbs-visible-light-and-reflects-near-infrared-light" target="_blank">Figure 1 in Wu et. al. 2014. PLOS. </a>
</figcaption>
</figure>
Start by setting plot preferences and loading the neon_aop_refl_hdf5_functions module:
End of explanation
"""
# Note you will need to update this filepath for your local machine
sercRefl, sercRefl_md = aop_h5refl2array('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
"""
Explanation: Read in SERC Reflectance Tile
End of explanation
"""
print('band 58 center wavelength (nm): ',sercRefl_md['wavelength'][57])
print('band 90 center wavelength (nm) : ', sercRefl_md['wavelength'][89])
"""
Explanation: Extract NIR and VIS bands
Now that we have loaded all the required functions, we can calculate NDVI and plot it.
Below we print the center wavelengths that these bands correspond to:
End of explanation
"""
vis = sercRefl[:,:,57]
nir = sercRefl[:,:,89]
ndvi = np.divide((nir-vis),(nir+vis))
"""
Explanation: Calculate & Plot NDVI
Here we see that band 58 represents red visible light, while band 90 is in the NIR portion of the spectrum. Let's extract these two bands from the reflectance array and calculate the ratio using the numpy.divide which divides arrays element-wise.
End of explanation
"""
plot_aop_refl(ndvi,sercRefl_md['spatial extent'],
colorlimit = (np.min(ndvi),np.max(ndvi)),
title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='seismic')
"""
Explanation: We can use the function plot_aop_refl to plot this, and choose the seismic color palette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1.
End of explanation
"""
import copy
ndvi_gtpt6 = copy.copy(ndvi)
#set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6
ndvi_gtpt6[ndvi<0.6] = np.nan
print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2))
plot_aop_refl(ndvi_gtpt6,
sercRefl_md['spatial extent'],
colorlimit=(0.6,1),
title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='RdYlGn')
"""
Explanation: Extract Spectra Using Masks
In the second part of this tutorial, we will learn how to extract the average spectra of pixels whose NDVI exceeds a specified threshold value. There are several ways to do this with numpy, including the masked-array module numpy.ma, numpy.where, and boolean indexing.
To start, let's copy the NDVI calculated above and use booleans to create an array containing only NDVI > 0.6.
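The three approaches give equivalent results; a standalone sketch on a toy array (values made up):

```python
import numpy as np
import numpy.ma as ma

toy_ndvi = np.array([0.2, 0.7, 0.9, 0.4])

# 1. Masked array: values <= 0.6 are excluded from computations
masked = ma.masked_where(toy_ndvi <= 0.6, toy_ndvi)

# 2. np.where: replace values <= 0.6 with NaN, then use nan-aware statistics
with_nan = np.where(toy_ndvi > 0.6, toy_ndvi, np.nan)

# 3. Boolean indexing: keep only the values > 0.6
subset = toy_ndvi[toy_ndvi > 0.6]

print(masked.mean(), np.nanmean(with_nan), subset.mean())  # all three agree
```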
End of explanation
"""
import numpy.ma as ma
def calculate_mean_masked_spectra(reflArray,ndvi,ndvi_threshold,ineq='>'):
mean_masked_refl = np.zeros(reflArray.shape[2])
for i in np.arange(reflArray.shape[2]):
refl_band = reflArray[:,:,i]
if ineq == '>':
ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
elif ineq == '<':
ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
        else:
            print('ERROR: Invalid inequality. Enter < or >')
            return None
masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask)
mean_masked_refl[i] = ma.mean(masked_refl)
return mean_masked_refl
"""
Explanation: Calculate the mean spectra, thresholded by NDVI
Below we will demonstrate how to calculate statistics on arrays where you have applied a mask numpy.ma. In this example, the function calculates the mean spectra for values that remain after masking out values by a specified threshold.
End of explanation
"""
sercSpectra_ndvi_gtpt6 = calculate_mean_masked_spectra(sercRefl,ndvi,0.6)
sercSpectra_ndvi_ltpt3 = calculate_mean_masked_spectra(sercRefl,ndvi,0.3,ineq='<')
"""
Explanation: We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3).
End of explanation
"""
import pandas
#Remove water vapor bad band windows & last 10 bands
w = copy.copy(sercRefl_md['wavelength'])
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan
w[-10:]=np.nan;
nan_ind = np.argwhere(np.isnan(w))
sercSpectra_ndvi_gtpt6[nan_ind] = np.nan
sercSpectra_ndvi_ltpt3[nan_ind] = np.nan
#Create dataframe with masked NDVI mean spectra
sercSpectra_ndvi_df = pandas.DataFrame()
sercSpectra_ndvi_df['wavelength'] = w
sercSpectra_ndvi_df['mean_refl_ndvi_gtpt6'] = sercSpectra_ndvi_gtpt6
sercSpectra_ndvi_df['mean_refl_ndvi_ltpt3'] = sercSpectra_ndvi_ltpt3
"""
Explanation: Finally, we can use pandas to plot the mean spectra. First set up the pandas dataframe.
End of explanation
"""
ax = plt.gca();
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green',
edgecolor='none',kind='scatter',label='NDVI > 0.6',legend=True);
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red',
edgecolor='none',kind='scatter',label='NDVI < 0.3',legend=True);
ax.set_title('Mean Spectra of Reflectance Masked by NDVI')
ax.set_xlim([np.nanmin(w),np.nanmax(w)]); ax.set_ylim(0,0.45)
ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance")
ax.grid(True);
"""
Explanation: Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3:
End of explanation
"""
|
KitwareMedical/ITKUltrasound | examples/PlotPowerSpectra.ipynb | apache-2.0 | import sys
!"{sys.executable}" -m pip install itk matplotlib scipy numpy
import os
import itk
import matplotlib.pyplot as plt
from scipy import signal
import numpy as np
"""
Explanation: Plot Power Spectra
Power spectra are used to analyze the average frequency content across signals in an RF image such as that produced by a transducer. This example relies on scipy and matplotlib to generate the power spectral density plot for sample RF data.
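A minimal standalone sketch of the same idea on a synthetic 5 MHz tone sampled at 60 MHz (made-up parameters, not the RF sample data); the spectral peak should land at the tone frequency:

```python
import numpy as np
from scipy import signal

fs = 60e6                              # sampling frequency, Hz
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 5e6 * t)     # 5 MHz tone

freq, Pxx = signal.periodogram(tone, fs, window='hamming', detrend='linear')
peak_hz = freq[np.argmax(Pxx)]
print(peak_hz / 1e6)                   # close to 5 MHz
```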
End of explanation
"""
RF_IMAGE_PATH = './MouseLiverRF.mha'
SAMPLING_FREQUENCY = 60e6 # Hz
assert os.path.exists(RF_IMAGE_PATH)
rf_image = itk.imread(RF_IMAGE_PATH)
rf_array = itk.array_view_from_image(rf_image)
print(rf_array.shape)
"""
Explanation: Load Data
End of explanation
"""
plt.figure(1, figsize=(10,8))
for frame_idx in range(rf_image.shape[0]):
arr = rf_array[frame_idx,:,:]
freq, Pxx = signal.periodogram(arr,
SAMPLING_FREQUENCY,
window='hamming',
detrend='linear',
axis=1)
# Take mean spectra across lateral dimension
Pxx = np.mean(Pxx,0)
plt.semilogy([f / 1e6 for f in freq], Pxx, label=frame_idx)
plt.title('RF Image Power Spectra')
plt.xlabel('Frequency [MHz]')
plt.ylabel('Power spectral density [V**2/Hz]')
plt.legend([f'Frame {idx}' for idx in range(rf_image.shape[0])],loc='upper right')
os.makedirs('./Output',exist_ok=True)
plt.savefig('./Output/PowerSpectralDensity.png',dpi=300)
plt.show()
"""
Explanation: Plot Power Spectral Density
End of explanation
"""
|
joelowj/Udacity-Projects | Udacity-Deep-Learning-Foundation-Nanodegree/Project-1/dlnd-your-first-neural-network.ipynb | apache-2.0 | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
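On a toy column, get_dummies expands each category into its own 0/1 indicator column (a sketch of the idea applied below to season, weathersit, and the rest):

```python
import pandas as pd

toy = pd.DataFrame({'season': [1, 2, 2, 3]})
dummies = pd.get_dummies(toy['season'], prefix='season')
print(dummies.columns.tolist())            # ['season_1', 'season_2', 'season_3']
print(dummies.astype(int).values.tolist())  # one indicator row per record
```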
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
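The scale/unscale round trip can be sketched on a toy series (made-up counts); note that pandas' .std() uses the sample standard deviation (ddof=1):

```python
import numpy as np

cnt = np.array([16.0, 40.0, 32.0, 13.0])      # toy ridership counts
mean, std = cnt.mean(), cnt.std(ddof=1)       # match pandas' default .std()

scaled = (cnt - mean) / std                   # zero mean, unit std
restored = scaled * std + mean                # invert for later predictions

print(np.allclose(restored, cnt))             # True: the transform is reversible
```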
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork:
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
"""
Constructor for NeuralNetwork.
input_nodes : int
the number of input nodes
hidden_nodes : int
the number of hidden nodes
output_nodes : int
the number of output nodes
learning_rate : float
the learning rate
"""
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(
0.0,
self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes)
)
self.weights_hidden_to_output = np.random.normal(
0.0,
self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes)
)
self.lr = learning_rate
# Hidden Layer activation function
# sigmoid function
self.activation_function = lambda x: ( 1 / (1 + np.exp(-x)) )
# f'(h) = f(h)*(1 - f(h))
self.activation_derivative = lambda x: ( x * ( 1 - x) )
# f(h) = h
self.output_activation_function = lambda x: ( x )
# f'(h) = 1
self.output_activation_derivative = lambda x: ( 1 )
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin = 2).T
### Forward pass ###
# Note - this code is identical to what is executed in run() and could potentially be refactored
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = self.output_activation_function(final_inputs)
### Backward pass ###
# Output error
output_errors = targets - final_outputs
# Hidden Layer Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = self.activation_derivative(hidden_outputs)
# Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden Layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = self.output_activation_function(final_inputs)
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
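A quick standalone sanity check of the two activations used above (independent of the class; the values are easy to verify by hand):

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))
# Backprop uses the derivative expressed in terms of the *output* f(h):
sigmoid_deriv = lambda f: f * (1 - f)

out = sigmoid(0.0)
print(out)                   # 0.5: the sigmoid is centred on zero
print(sigmoid_deriv(out))    # 0.25: its derivative peaks at the centre

# The output activation is the identity f(x) = x, whose derivative is the
# slope of the line y = x, i.e. 1 everywhere.
```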
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.01
hidden_nodes = 25
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more capacity the model has to fit the data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, the network can overfit the training data. The trick here is to find the right balance in the number of hidden units you choose.
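The random-batch sampling used in the training loop below can be sketched in isolation (toy data, hypothetical batch size of 4):

```python
import numpy as np

n_records = 20
toy_features = np.arange(n_records * 3).reshape(n_records, 3)

rng = np.random.default_rng(0)
# Sample record indices with replacement, as np.random.choice does by default
batch_idx = rng.choice(n_records, size=4)
batch = toy_features[batch_idx]
print(batch.shape)   # (4, 3): one row of features per sampled record
```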
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model was able to predict the data consistently. However, there is a consistent underperformance starting from Dec 22, where the model predicted higher usage than the actual data over the period of Dec 22 to Dec 26. The decrease in usage during this period can be attributed to Christmas. Since we only provided a year's worth of training data, the model fails to predict the usage during the Christmas period accurately.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
mauroalberti/geocouche | pygsf/docs/notebooks/General 2 - orientations.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 1
"""
Explanation: pygsf 2: vectors and orientations
March-April, 2018, Mauro Alberti, alberti.m65@gmail.com
Developement code:
End of explanation
"""
%matplotlib inline
"""
Explanation: 1. Introduction
Since we will plot geometric data into stereonets, prior to any other operation, we import mplstereonet and run the IPython command %matplotlib inline, that allows to incorporate Matplotlib plots in the Jupyter notebook.
End of explanation
"""
from pygsf.plotting.stereonets import splot
"""
Explanation: After that, we import the splot function from the plotting.stereonets module.
End of explanation
"""
from pygsf.mathematics.vectors import *
"""
Explanation: 2. Cartesian vectors
Cartesian vectors represent the base to analytically process orientational data, such as lineations and planes.
The reference axis orientations used in pygsf are the x axis parallel to East, y parallel to North and z vertical, upward-directed.
To use vectors, we import the submodule:
End of explanation
"""
v1, v2 = Vect(3.1, 7.2, 5.6), Vect(4.2, 9.17, 8.0)
"""
Explanation: Vectors can be created by providing their Cartesian components:
End of explanation
"""
v1 + v2 # vector addition
v1 - v2 # vector subtraction
"""
Explanation: Vectors addition and subtraction are expressed with the usual operators:
End of explanation
"""
v1.vDot(v2) # scalar product
v1.vCross(v2) # vector product
"""
Explanation: Scalar and vector products are obtained with the methods:
End of explanation
"""
v1.angle(v2) # angle in degrees bwtween two Cartesian vectors
"""
Explanation: The angle (in degrees) between two vectors can be derived:
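Under the hood this is the standard dot-product formula; a sketch with plain numpy (not the pygsf internals):

```python
import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two 3D vectors via the dot product."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against tiny floating-point overshoots outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angle_deg([1, 0, 0], [0, 1, 0]))   # 90 degrees
print(angle_deg([1, 0, 0], [1, 1, 0]))   # 45 degrees
```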
End of explanation
"""
from pygsf.orientations.orientations import *
"""
Explanation: 3. Orientations
Orientations can refer to linear or planar features. Linear orientations are further subdivided into directional ones, which have a defined direction, and axial ones, which do not.
We import all classes/methods from the orientations sub-module:
End of explanation
"""
or1 = Direct.fromAzPl(130, 10) # orientation defined by its trend and plunge. Pointing downward, since positive plunge.
or2 = Direct.fromAzPl(312, -45) # or2 points upwards, since it has a negative plunge
or3 = Direct.fromAzPl(300, -20) # as or2
"""
Explanation: A Direct is equivalent to a unit vector in the 3D space, with orientation expressed by polar coordinates: a direction defined by a trend (from the North, 0°-360°) and a plunge (-90° to 90°, where positive values are downward-directed while negative ones are upward-directed):
End of explanation
"""
splot([or1, or2, or3]) # we provide the function arguments in a list
"""
Explanation: We can plot geological vectors in a stereoplot. As previously said, we need to have imported the mplstereonet module, as well as run the %matplotlib inline command.
We can plot using the splot function. The data to plot have to be inserted into a list.
End of explanation
"""
splot([or1, (or2, or3, "m=s,c=red")]) # or2 and or3 are customized. "s" stands for "square"
"""
Explanation: Note the default color, as well as the default symbol used for upward-pointing geological vectors, which differs from the one used for downward-pointing geological vectors.
We can customize the colors/symbols used for certain values, by inserting them into round brackets (i.e., creating Python tuples) and adding at the end of the tuple a string defining the color "c=xxx" and/or the marker: "m=x", where color and marker names are those standard for Matplotlib. See example below:
End of explanation
"""
splot([or1, (or2, or3, "m=o, c=red")], force='lower') # "o" means circle
"""
Explanation: If we want to force all measures to plot in the upper or lower hemisphere, instead of plotting with their original orientations, we can add the keyword force after the data list, setting it to force='upper' or force='lower'.
End of explanation
"""
Direct.fromAzPl(130, 4).isSubHoriz()
Direct.fromAzPl(110, 88).isSubVert()
"""
Explanation: It is possible to check if a geological vector is subhorizontal or subvertical:
End of explanation
"""
or1.angle(or2)
"""
Explanation: The default dip angle threshold for subhorizontal and subvertical orientations is 5°, so in the previous examples the geological vectors are considered to be respectively subhorizontal and subvertical.
As for the Cartesian vectors, we can calculate the angle between two geological vectors:
End of explanation
"""
or2.isSubParallel(or3)
Direct.fromAzPl(90, 10).isSubAParallel(Direct.fromAzPl(270, -10.5))
Direct.fromAzPl(90, 0).isSubOrthog(Direct.fromAzPl(0, 89.5))
"""
Explanation: Since geological vectors are oriented, the angle range is from 0° to 180°.
We can check if two geological vectors are parallel, anti-parallel or sub-orthogonal:
End of explanation
"""
norm_or = or2.normDirect(or3) # orientation normal to or2 and or3
splot([or2, or3, (norm_or, "m=s,c=red")])
"""
Explanation: We calculate and plot the vector normal to two geological vectors:
End of explanation
"""
or2.angle(norm_or), or3.angle(norm_or)
"""
Explanation: We check that the two source geological vectors are normal to the calculated one:
End of explanation
"""
ax1 = Axis.fromAzPl(130, 10) # creating a geological axis given trend and plunge (same as previous or1)
ax2 = or2.asAxis() # converting the geological vector to a geological axis
print(ax1, ax2)
splot([ax1, ax2]) # note that we provide the arguments inside a list
"""
Explanation: 3.1 Polar axes
Polar axes are similar to directions (in fact they inherit from the Direct class), but do not have a specific direction, i.e., they have only an orientation. As for orientations, they are defined by a trend and a plunge, but the two possible, opposite directions are both considered in the calculations (e.g., the angles between axes).
We can create an axis given its trend and plunge values, or alternatively converting to an axis from an orientation:
End of explanation
"""
splot([(ax1, ax2, "c=blue,m=x")], force="upper")
"""
Explanation: Since Axis instances are inherently bi-directional, when plotting in stereonets upward-pointing cases are automatically converted to their downward-pointing equivalents, unless explicitly forced to be projected in the upper hemisphere:
End of explanation
"""
or_angle = or1.angle(or2) # angle (in degrees) between two geological vectors
print(or_angle)
"""
Explanation: Their original, stored value is not modified.
The difference between orientations and axes is evident for instance in the calculation of the angle between two orientations or two axes:
End of explanation
"""
ax1, ax2 = or1.asAxis(), or2.asAxis()
axis_angle = ax1.angle(ax2) # angle (in degrees) between two geological axes
print(axis_angle)
"""
Explanation: We convert the geological vectors to axes and then calculate the angle:
End of explanation
"""
pl1 = Plane(112, 67) # dip direction and dip angle input
print(pl1)
"""
Explanation: The angle between the two axes is the complement to 180° of the angle between the two geological vectors.
3.2 Planes
A plane's orientation is expressed by the azimuth (measured from geographic North) of its strike or dip direction, and by the dip angle. The default in pygsf is to use the dip direction.
End of explanation
"""
pl2 = Plane(24, 56, is_rhr_strike=True) # RHR strike and dip angle inputs
print(pl2) # output is always expressed as dip direction and dip angle
"""
Explanation: It is however possible to define a geological plane by providing the right-hand rule strike, instead of the dip direction:
End of explanation
"""
splot([(pl1, pl2, "c=brown")])
splot([pl1, pl2], force="upper") # note in the plot the default line color and dashing are used
"""
Explanation: We plot the two planes in the default (lower) hemisphere:
End of explanation
"""
norm_or = pl1.normDirect()
splot([pl1, norm_or])
"""
Explanation: The orientation normal to a plane is calculated as in this example:
End of explanation
"""
plane = or2.commonPlane(or3) # geological plane common to two geological vectors
"""
Explanation: It is possible to derive the plane that is common to two orientations (or axes):
End of explanation
"""
splot([or2, or3, plane], force="lower")
"""
Explanation: We plot the three geological measures:
End of explanation
"""
norm_pl = or1.normPlane() # geological plane normal to a given geological vector
splot([or1, norm_pl])
"""
Explanation: Considering just a single geological vector, the geological plane normal to the vector is obtained as follows:
End of explanation
"""
|
bollwyvl/ipylivecoder | examples/Three Little Circles.ipynb | bsd-2-clause | from livecoder.widgets import Livecoder
from IPython.utils import traitlets as T
"""
Explanation: Three Little Circles
The "Hello World" (or Maxwell's Equations) of d3, Three Little Circles introduces all of the main concepts in d3, which gives you a pretty good grounding in data visualization, JavaScript, and SVG. Let's try out some circles in livecoder.
First, we need Livecoder, and traitlets, which implements the Observer/Observable pattern used in building widgets.
End of explanation
"""
class ThreeCircles(Livecoder):
x = T.Tuple([1, 2, 3], sync=True)
"""
Explanation: Livecoder by itself doesn't do much. Let's add a traitlet for where we want to draw the circles (the cx attribute).
End of explanation
"""
circles = ThreeCircles(description="three-circles")
circles.description
"""
Explanation: Notice the sync argument: this tells IPython that it should propagate changes to the front-end. No REST for the wicked?
End of explanation
"""
circles
"""
Explanation: Almost there! To view our widget, we need to display it, which happens by default when the widget is the last line of a code cell.
End of explanation
"""
|
hainesr/tdd-fibonacci-example | walkthrough-notebook.ipynb | bsd-3-clause | import unittest
def run_tests():
suite = unittest.TestLoader().loadTestsFromTestCase(TestFibonacci)
unittest.TextTestRunner().run(suite)
"""
Explanation: Agile and Test-Driven Development
TDD Worked Example
Robert Haines, University of Manchester, UK
Adapted from "Test-Driven Development By Example", Kent Beck
Introduction
Very simple example
Implement a function to return the nth number in the Fibonacci sequence
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...
http://oeis.org/A000045
$
F_0 = 0, \\
F_1 = 1, \\
F_n = F_{n-1} + F_{n-2}
$
Step 0a: Local python setup
You need to do this if you're using python on the command line.
Create two directories
src
test
Add the src directory to PYTHONPATH
$ export PYTHONPATH=`pwd`/src
Step 0b: IPython setup
You need to do this if you're using this IPython Notebook.
The run_tests() method, below, is called at the end of each step to run the tests.
End of explanation
"""
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
run_tests()
"""
Explanation: Step 1: Write a test (and run it)
End of explanation
"""
def fibonacci(n):
return 0
run_tests()
"""
Explanation: Step 1: Implement and re-test
End of explanation
"""
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
run_tests()
"""
Explanation: Step 2: Write a test (and run it)
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
return 1
run_tests()
"""
Explanation: Step 2: Implement and re-test
End of explanation
"""
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
run_tests()
"""
Explanation: Step 3: Write a test (and run it)
End of explanation
"""
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
run_tests()
"""
Explanation: Step 3: It works!
The current code outputs 1 whenever n is not 0. So this behaviour is correct.
Step 4: Write a test (and run it)
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return 2
run_tests()
"""
Explanation: Step 4: Implement and re-test
End of explanation
"""
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
self.assertEqual(3, fibonacci(4), "fibonacci(4) should equal 3")
run_tests()
"""
Explanation: Step 5: Write a test (and run it)
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 3
run_tests()
"""
Explanation: Step 5: Implement and re-test
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 2 + 1
run_tests()
"""
Explanation: Pause
How many tests are we going to write?
Just how big is the set of if statements going to get if we carry on like this?
Where do we stop?
Remember:
$
F_0 = 0, \\
F_1 = 1, \\
F_n = F_{n-1} + F_{n-2}
$
Can we reflect that in the code?
Step 6: Refactor and test
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
"""
Explanation: Step 7: Refactor and test
End of explanation
"""
def fibonacci(n):
if n == 0: return 0
if n == 1: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
"""
Explanation: Step 8: Refactor and test (and done)
End of explanation
"""
|
unpingco/Python-for-Probability-Statistics-and-Machine-Learning | chapters/statistics/notebooks/Hypothesis_Testing.ipynb | mit | from __future__ import division
%pylab inline
"""
Explanation: Python for Probability, Statistics, and Machine Learning
End of explanation
"""
%matplotlib inline
from matplotlib.pylab import subplots
import numpy as np
fig,ax=subplots()
fig.set_size_inches((6,3))
xi = np.linspace(0,1,50)
_=ax.plot(xi, (xi)**5,'-k',label='all heads')
_=ax.set_xlabel(r'$\theta$',fontsize=22)
_=ax.plot(0.5,(0.5)**5,'ko')
fig.tight_layout()
#fig.savefig('fig-statistics/Hypothesis_Testing_001.png')
"""
Explanation: It is sometimes very difficult to unequivocally attribute outcomes to causal
factors. For example, did your experiment generate the outcome you were hoping
for or not? Maybe something did happen, but the effect is not pronounced
enough to separate it from inescapable measurement errors or other
factors in the ambient environment? Hypothesis testing is a powerful
statistical method to address these questions. Let's begin by again
considering our coin-tossing experiment with unknown parameter $p$. Recall
that the individual coin-flips are Bernoulli distributed. The first step is
to establish separate hypotheses. First, $H_0$ is the so-called null
hypothesis. In our case this can be
$$
H_0 \colon \theta < \frac{1}{2}
$$
and the alternative hypothesis is then
$$
H_1 \colon \theta \geq \frac{1}{2}
$$
With this set up, the question now boils down to figuring out which
hypothesis the data is most consistent with. To choose between these, we need
a statistical test that is a function, $G$, of the sample set
$\mathbf{X}_n=\left\{ X_i \right\}_n$ into the real line, where $X_i$ is the
heads or tails outcome ($X_i \in \lbrace 0,1 \rbrace$). In other words, we
compute $G(\mathbf{X}_n)$ and check if it exceeds a threshold $c$. If not, then
we declare $H_0$ (otherwise, declare $H_1$). Notationally, this is the
following:
$$
\begin{align}
G(\mathbf{X}_n) < c & \Rightarrow H_0 \\
G(\mathbf{X}_n) \geq c & \Rightarrow H_1
\end{align}
$$
In summary, we have the observed data $\mathbf{X}_n$ and a function
$G$ that maps that data onto the real line. Then, using the
constant $c$ as a threshold, the inequality effectively divides the real line
into two parts, one corresponding to each of the hypotheses.
Whatever this test $G$ is, it will make mistakes of two types --- false
negatives and false positives. The false positives arise from the case where we
declare $H_1$ even though $H_0$ is actually true. This is
summarized in Table \ref{tbl:decision}.
<!-- Equation labels as ordinary links -->
<div id="tbl:decision"></div>
$$
\begin{table}
\footnotesize
\centering
\begin{tabular}{l|p{1.3in}|p{1.3in}}
\multicolumn{1}{c}{ } & \multicolumn{1}{c}{Declare $H_0$ } & \multicolumn{1}{c}{ Declare $H_1$ } \\
\hline
$H_0\:$ True & Correct & False positive (Type I error) \\
$H_1\:$ True & False negative (Type II error) & Correct (true-detect) \\
\hline
\end{tabular}
\caption{Truth table for hypotheses testing.}
\label{tbl:decision} \tag{1}
\end{table}
$$
For this example, here are the false positives (aka false alarms):
$$
P_{FA} = \mathbb{P}\left( G(\mathbf{X}_n) > c \mid \theta \leq \frac{1}{2} \right)
$$
Or, equivalently,
$$
P_{FA} = \mathbb{P}\left( G(\mathbf{X}_n) > c \mid H_0 \right)
$$
Likewise, the other error is a false negative, which we can write
analogously as
$$
P_{FN} = \mathbb{P}\left( G(\mathbf{X}_n) < c \vert H_1\right)
$$
By choosing some acceptable values for either of these errors,
we can solve for the other one. The practice is usually to pick a value of
$P_{FA}$ and then find the corresponding value of $P_{FN}$. Note that it is
traditional in engineering to speak about detection probability, which is
defined as
$$
P_{D} = 1- P_{FN} = \mathbb{P}\left( G(\mathbf{X}_n) > c \mid H_1\right)
$$
In other words, this is the probability of declaring $H_1$ when the
test exceeds the threshold. This is otherwise known as the probability of a
true detection or true-detect.
Back to the Coin Flipping Example
In our previous maximum likelihood discussion, we wanted to derive an
estimator for the value of the probability of heads for the coin
flipping experiment. For hypthesis testing, we want to ask a softer
question: is the probability of heads greater or less than $\nicefrac{1}{2}$? As we
just established, this leads to the two hypotheses:
$$
H_0 \colon \theta < \frac{1}{2}
$$
versus,
$$
H_1 \colon \theta > \frac{1}{2}
$$
Let's assume we have five observations. Now we need the $G$ function
and a threshold $c$ to help pick between the two hypotheses. Let's count the
number of heads observed in five observations as our
criterion. Thus, we have
$$
G(\mathbf{X}_5) := \sum_{i=1}^5 X_i
$$
and, suppose further that we pick $H_1$ only if exactly five out of
five observations are heads. We'll call this the all-heads test.
Now, because all of the $X_i$ are random variables, so is $G$ and we must
find the corresponding probability mass function for $G$. Assuming the
individual coin tosses are independent, the probability of five heads is $\theta^5$.
This means that the probability of rejecting the $H_0$ hypothesis (and choosing
$H_1$, because there are only two choices here) based on the unknown underlying
probability is $\theta^5$. In the parlance, this is known as the power function
and is denoted by $\beta$, as in
$$
\beta(\theta) = \theta^5
$$
Let's get a quick plot of this in Figure.
<!-- @@@CODE src-statistics/Hypothesis_Testing.py fromto: import numpy as np@plt.savefig -->
End of explanation
"""
fig,ax=subplots()
fig.set_size_inches((6,3))
from sympy.abc import theta,k # get some variable symbols
import sympy as S
xi = np.linspace(0,1,50)
expr=S.Sum(S.binomial(5,k)*theta**(k)*(1-theta)**(5-k),(k,3,5)).doit()
_=ax.plot(xi, (xi)**5,'-k',label='all heads')
_=ax.plot(xi, S.lambdify(theta,expr)(xi),'--k',label='majority vote')
_=ax.plot(0.5, (0.5)**5,'ko')
_=ax.plot(0.5, S.lambdify(theta,expr)(0.5),'ko')
_=ax.set_xlabel(r'$\theta$',fontsize=22)
_=ax.legend(loc=0)
fig.tight_layout()
#fig.savefig('fig-statistics/Hypothesis_Testing_002.png')
"""
Explanation: <!-- dom:FIGURE: [fig-statistics/Hypothesis_Testing_001.png, width=500 frac=0.85] Power function for the all-heads test. The dark circle indicates the value of the function indicating $\alpha$. <div id="fig:Hypothesis_testing_001"></div> -->
<!-- begin figure -->
<div id="fig:Hypothesis_testing_001"></div>
<p>Power function for the all-heads test. The dark circle indicates the value of the function indicating $\alpha$.</p>
<img src="fig-statistics/Hypothesis_Testing_001.png" width=500>
<!-- end figure -->
Now, we have the following false alarm probability,
$$
P_{FA} = \mathbb{P}( G(\mathbf{X}_n)= 5 \vert H_0) =\mathbb{P}( \theta^5 \vert H_0)
$$
Notice that this is a function of $\theta$, which means there are
many false alarm probability values that correspond to this test. To be on the
conservative side, we'll pick the supremum (i.e., maximum) of this function,
which is known as the size of the test, traditionally denoted by $\alpha$,
$$
\alpha = \sup_{\theta \in \Theta_0} \beta(\theta)
$$
with domain $\Theta_0 = \lbrace \theta < 1/2 \rbrace$ which in our case is
$$
\alpha = \sup_{\theta < \frac{1}{2}} \theta^5 = \left(\frac{1}{2}\right)^5 = 0.03125
$$
Likewise, for the detection probability,
$$
\mathbb{P}_{D}(\theta) = \mathbb{P}( \theta^5 \vert H_1)
$$
which is again a function of the parameter $\theta$. The problem with
this test is that the $P_{D}$ is pretty low for most of the domain of
$\theta$. For instance, values in the nineties for $P_{D}$
only happen when $\theta > 0.98$. In other words, if the coin produces
heads 98 times out of 100, then we can detect $H_1$ reliably. Ideally, we want
a test that is zero for the domain corresponding to $H_0$ (i.e., $\Theta_0$) and
equal to one otherwise. Unfortunately, even if we increase the length of the
observed sequence, we cannot escape this effect with this test. You can try
plotting $\theta^n$ for larger and larger values of $n$ to see this.
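For instance, a quick sketch of this effect:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 1, 100)
for n in (5, 10, 20, 50):
    # the all-heads power function flattens toward zero over
    # more and more of the domain as n grows
    plt.plot(theta, theta**n, label='n=%d' % n)
plt.xlabel(r'$\theta$')
plt.legend(loc=0)
```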
Majority Vote Test
Due to the problems with the detection probability in the all-heads test, maybe
we can think of another test that will have the performance we want? Suppose we
reject $H_0$ if the majority of the observations are heads. Then, using the
same reasoning as above, we have
$$
\beta(\theta) = \sum_{k=3}^5 \binom{5}{k} \theta^k(1-\theta)^{5-k}
$$
Figure shows the power function
for both the majority vote and the all-heads tests.
End of explanation
"""
>>> from sympy.stats import P, Binomial
>>> theta = S.symbols('theta',real=True)
>>> X = Binomial('x',100,theta)
>>> beta_function = P(X>60)
>>> print beta_function.subs(theta,0.5) # alpha
0.0176001001088524
>>> print beta_function.subs(theta,0.70)
0.979011423996075
"""
Explanation: <!-- dom:FIGURE: [fig-statistics/Hypothesis_Testing_002.png, width=500 frac=0.85] Compares the power function for the all-heads test with that of the majority-vote test. <div id="fig:Hypothesis_testing_002"></div> -->
<!-- begin figure -->
<div id="fig:Hypothesis_testing_002"></div>
<p>Compares the power function for the all-heads test with that of the majority-vote test.</p>
<img src="fig-statistics/Hypothesis_Testing_002.png" width=500>
<!-- end figure -->
In this case, the new test has size
$$
\alpha = \sup_{\theta < \frac{1}{2}} \theta^{5} + 5 \theta^{4} \left(- \theta + 1\right) + 10 \theta^{3} \left(- \theta + 1\right)^{2} = \frac{1}{2}
$$
As before, we only get upwards of 90% detection
probability when the underlying parameter $\theta > 0.75$.
Let's see what happens when we consider more than five samples. For
example, let's suppose that we have $n=100$ samples and we want to
vary the threshold for the majority vote test. For example, let's have
a new test where we declare $H_1$ when $k=60$ out of the 100 trials
turns out to be heads. What is the $\beta$ function in this case?
$$
\beta(\theta) = \sum_{k=60}^{100} \binom{100}{k} \theta^k(1-\theta)^{100-k}
$$
This is too complicated to write by hand, but the statistics module
in Sympy has all the tools we need to compute this.
End of explanation
"""
from scipy import stats
rv=stats.bernoulli(0.5) # true p = 0.5
# number of false alarms ~ 0.018
print sum(rv.rvs((1000,100)).sum(axis=1)>60)/1000.
"""
Explanation: These results are much better than before because the $\beta$
function is much steeper. If we declare $H_1$ when we observe 60 out of 100
trials are heads, then we wrongly declare $H_1$ approximately 1.8% of the
time. Otherwise, if it happens that the true value for $p>0.7$, we will
conclude correctly approximately 97% of the time. A quick simulation can sanity
check these results as shown below:
End of explanation
"""
import sympy as S
from sympy import stats
s = stats.Normal('s',1,1) # signal+noise
n = stats.Normal('n',0,1) # noise
x = S.symbols('x',real=True)
L = stats.density(s)(x)/stats.density(n)(x)
"""
Explanation: The above code is pretty dense so let's unpack it. In the first line, we use the scipy.stats module to define the
Bernoulli random variable for the coin flip. Then, we use the rvs method of
the variable to generate 1000 trials of the experiment where each trial
consists of 100 coin flips. This generates a $1000 \times 100$ matrix where the
rows are the individual trials and the columns are the outcomes of each
respective set of 100 coin flips. The sum(axis=1) part computes the sum across the
columns. Because the values of the embedded matrix are only 1 or 0 this
gives us the count of flips that are heads per row. The next >60 part
computes the boolean 1000-long vector of values that are bigger than 60. The
final sum adds these up. Again, because the entries in the array are True
or False the sum computes the count of times the number of heads has
exceeded 60 per 100 coin flips in each of 1000 trials. Then, dividing this
number by 1000 gives a quick approximation of false alarm probability we
computed above for this case where the true value of $p=0.5$.
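The exact false alarm probability is also available directly from scipy.stats, which cross-checks both the simulation and the earlier Sympy computation:

```python
from scipy.stats import binom

# P(X > 60) for X ~ Binomial(100, 0.5): the exact false alarm
# probability of the declare-H1-above-60-heads test
alpha = binom.sf(60, 100, 0.5)
print(alpha)  # approximately 0.0176
```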
Receiver Operating Characteristic
Because the majority vote test is a binary test, we can compute the Receiver
Operating Characteristic (ROC) which is the graph of the $(P_{FA},
P_D)$. The term comes from radar systems but is a very general method for
consolidating all of these issues into a single graph. Let's consider a typical
signal processing example with two hypotheses. In $H_0$, there is noise but no
signal present at the receiver,
$$
H_0 \colon X = \epsilon
$$
where $\epsilon \sim \mathcal{N}(0,\sigma^2)$ represents additive
noise. In the alternative hypothesis, there is a deterministic signal at the receiver,
$$
H_1 \colon X = \mu + \epsilon
$$
Again, the problem is to choose between these two hypotheses. For
$H_0$, we have $X \sim \mathcal{N}(0,\sigma^2)$ and for $H_1$, we have $ X \sim
\mathcal{N}(\mu,\sigma^2)$. Recall that we only observe values for $x$ and
must pick either $H_0$ or $H_1$ from these observations. Thus, we need a
threshold, $c$, to compare $x$ against in order to distinguish the two
hypotheses. Figure shows the probability density
functions under each of the hypotheses. The dark vertical line is the threshold
$c$. The gray shaded area is the probability of detection, $P_D$ and the shaded
area is the probability of false alarm, $P_{FA}$. The test evaluates every
observation of $x$ and concludes $H_0$ if $x<c$ and $H_1$ otherwise.
<!-- dom:FIGURE: [fig-statistics/Hypothesis_Testing_003.png, width=500 frac=0.85] The two density functions for the $H_0$ and $H_1$ hypotheses. The shaded gray area is the detection probability and the shaded blue area is the probability of false alarm. The vertical line is the decision threshold. <div id="fig:Hypothesis_testing_003"></div> -->
<!-- begin figure -->
<div id="fig:Hypothesis_testing_003"></div>
<p>The two density functions for the $H_0$ and $H_1$ hypotheses. The shaded gray area is the detection probability and the shaded blue area is the probability of false alarm. The vertical line is the decision threshold.</p>
<img src="fig-statistics/Hypothesis_Testing_003.png" width=500>
<!-- end figure -->
Programming Tip.
The shading shown in Figure comes from
Matplotlib's fill_between function. This function has a where keyword
argument to specify which part of the plot to apply shading with specified
color keyword argument. Note there is also a fill_betweenx function that
fills horizontally. The text function can place formatted
text anywhere in the plot and can utilize basic \LaTeX{} formatting.
See the IPython notebook corresponding to this section for the source code.
As we slide the threshold left and right along the horizontal axis, we naturally change the corresponding areas under
each of the curves shown in Figure and thereby
change the values of $P_D$ and $P_{FA}$. The contour that emerges from sweeping
the threshold this way is the ROC as shown in Figure. This figure also shows the diagonal line which
corresponds to making decisions based on the flip of a fair coin. Any
meaningful test must do better than coin flipping so the more the ROC bows up
to the top left corner of the graph, the better. Sometimes ROCs are quantified
into a single number called the area under the curve (AUC), which varies from
0.5 to 1.0 as shown. In our example, what separates the two probability density
functions is the value of $\mu$. In a real situation, this would be determined
by signal processing methods that include many complicated trade-offs. The key
idea is that whatever those trade-offs are, the test itself boils down to the
separation between these two density functions --- good tests separate the two
density functions and bad tests do not. Indeed, when there is no separation, we
arrive at the diagonal-line coin-flipping situation we just discussed.
What values for $P_D$ and $P_{FA}$ are considered acceptable depends on the
application. For example, suppose you are testing for a fatal disease. It could
be that you are willing to accept a relatively high $P_{FA}$ value if that
corresponds to a good $P_D$ because the test is relatively cheap to administer
compared to the alternative of missing a detection. On the other hand,
maybe a false alarm triggers an expensive response, so that minimizing
these alarms is more important than potentially missing a detection. These
trade-offs can only be determined by the application and design factors.
<!-- dom:FIGURE: [fig-statistics/Hypothesis_Testing_004.png, width=500 frac=0.65] The Receiver Operating Characteristic (ROC) corresponding to [Figure](#fig:Hypothesis_testing_003). <div id="fig:Hypothesis_testing_004"></div> -->
<!-- begin figure -->
<div id="fig:Hypothesis_testing_004"></div>
<p>The Receiver Operating Characteristic (ROC) corresponding to [Figure](#fig:Hypothesis_testing_003).</p>
<img src="fig-statistics/Hypothesis_Testing_004.png" width=500>
<!-- end figure -->
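A minimal numerical sketch of this ROC for the $\mu=1$, $\sigma=1$ setup above: sweeping the threshold $c$ gives $P_{FA}$ and $P_D$ as survival-function values of the two normal densities, and integrating the resulting curve gives the AUC.

```python
import numpy as np
from scipy.stats import norm

c = np.linspace(-5, 6, 1000)   # threshold sweep
pfa = norm.sf(c)               # P(X > c | H0), X ~ N(0,1)
pd = norm.sf(c - 1)            # P(X > c | H1), X ~ N(1,1)
auc = abs(np.trapz(pd, pfa))   # area under the ROC curve
print(auc)                     # approximately 0.76 for this separation
```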
P-Values
There are a lot of moving parts in hypothesis testing. What we need
is a way to consolidate the findings. The idea is that we want to find
the minimum level at which the test rejects $H_0$. Thus, the p-value
is the probability, under $H_0$, that the test-statistic is at least
as extreme as what was actually observed. Informally, this means
that smaller values imply that $H_0$ should be rejected, although
this doesn't mean that large values imply that $H_0$ should be
retained. This is because a large p-value can arise from either $H_0$
being true or the test having low statistical power.
If $H_0$ is true, the p-value is uniformly distributed in the interval $(0,1)$.
If $H_1$ is true, the distribution of the p-value will concentrate closer to
zero. For continuous distributions, this can be proven rigorously and implies
that if we reject $H_0$ when the corresponding p-value is less than $\alpha$,
then the probability of a false alarm is $\alpha$. Perhaps it helps to
formalize this a bit before computing it. Suppose $\tau(X)$ is a test
statistic that rejects $H_0$ as it gets bigger. Then, for each sample $x$,
corresponding to the data we actually have on-hand, we define
$$
p(x) = \sup_{\theta \in \Theta_0} \mathbb{P}_{\theta}(\tau(X) > \tau(x))
$$
This equation states that the supremum (i.e., maximum)
probability that the test statistic, $\tau(X)$, exceeds the value for
the test statistic on this particular data ($\tau(x)$) over the
domain $\Theta_0$ is defined as the p-value. Thus, this embodies a
worst-case scenario over all values of $\theta$.
Here's one way to think about this. Suppose you rejected $H_0$, and someone
says that you just got lucky and somehow just drew data that happened to
correspond to a rejection of $H_0$. What p-values provide is a way to address
this by capturing the odds of just a favorable data-draw. Thus, suppose that
your p-value is 0.05. Then, what you are showing is that the odds of just
drawing that data sample, given $H_0$ is in force, is just 5%. This means that
there's a 5% chance that you somehow lucked out and got a favorable draw of
data.
Let's make this concrete with an example. Given, the majority-vote rule above,
suppose we actually do observe three of five heads. Given the $H_0$, the
probability of observing this event is the following:
$$
p(x) =\sup_{\theta \in \Theta_0} \sum_{k=3}^5\binom{5}{k} \theta^k(1-\theta)^{5-k} = \frac{1}{2}
$$
For the all-heads test, the corresponding computation is the following:
$$
p(x) =\sup_{\theta \in \Theta_0} \theta^5 = \frac{1}{2^5} = 0.03125
$$
From just looking at these p-values, you might get the feeling that the second
test is better, but we still have the same detection probability issues we
discussed above; so, p-values help in summarizing some aspects of our
hypothesis testing, but they do not summarize all the salient aspects of the
entire situation.
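Both p-values can be double-checked numerically with scipy.stats (a quick sketch):

```python
from scipy.stats import binom

# majority-vote test: P(at least 3 heads out of 5) at theta = 1/2
p_majority = binom.sf(2, 5, 0.5)   # = 1/2
# all-heads test: P(exactly 5 heads out of 5) at theta = 1/2
p_allheads = binom.pmf(5, 5, 0.5)  # = 1/32 = 0.03125
print(p_majority, p_allheads)
```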
Test Statistics
As we have seen, it is difficult to derive good test statistics for hypothesis
testing without a systematic process. The Neyman-Pearson Test is derived from
fixing a false-alarm value ($\alpha$) and then maximizing the detection
probability. This results in the Neyman-Pearson Test,
$$
L(\mathbf{x}) = \frac{f_{X|H_1}(\mathbf{x})}{f_{X|H_0}(\mathbf{x})} \stackrel[H_0]{H_1}{\gtrless} \gamma
$$
where $L$ is the likelihood ratio and where the threshold
$\gamma$ is chosen such that
$$
\int_{x:L(\mathbf{x})>\gamma} f_{X|H_0}(\mathbf{x}) d\mathbf{x}=\alpha
$$
The Neyman-Pearson Test is one of a family of tests that use
the likelihood ratio.
Example. Suppose we have a receiver and we want to distinguish
whether just noise ($H_0$) or signal pluse noise ($H_1$) is received.
For the noise-only case, we have $x\sim \mathcal{N}(0,1)$ and for the
signal pluse noise case we have $x\sim \mathcal{N}(1,1)$. In other
words, the mean of the distribution shifts in the presence of the
signal. This is a very common problem in signal processing and
communications. The Neyman-Pearson Test then boils down to the
following,
$$
L(x)= e^{-\frac{1}{2}+x}\stackrel[H_0]{H_1}{\gtrless}\gamma
$$
Now we have to find the threshold $\gamma$ that solves the
maximization problem that characterizes the Neyman-Pearson Test. Taking
the natural logarithm and re-arranging gives,
$$
x\stackrel[H_0]{H_1}{\gtrless} \frac{1}{2}+\log\gamma
$$
The next step is find $\gamma$ corresponding to the desired
$\alpha$ by computing it from the following,
$$
\int_{1/2+\log\gamma}^{\infty} f_{X|H_0}(x)dx = \alpha
$$
For example, taking $\alpha=1/100$, gives
$\gamma\approx 6.21$. To summarize the test in this case, we have,
$$
x\stackrel[H_0]{H_1}{\gtrless} 2.32
$$
Thus, if we measure $X$ and see that its value
exceeds the threshold above, we declare $H_1$ and otherwise
declare $H_0$. The following code shows how to
solve this example using Sympy and Scipy. First, we
set up the likelihood ratio,
End of explanation
"""
g = S.symbols('g',positive=True) # define gamma
v=S.integrate(stats.density(n)(x),
(x,S.Rational(1,2)+S.log(g),S.oo))
"""
Explanation: Next, to find the $\gamma$ value,
End of explanation
"""
print S.nsolve(v-0.01,3.0) # approx 6.21
"""
Explanation: Programming Tip.
Providing additional information regarding the Sympy variable by using the
keyword argument positive=True helps the internal simplification algorithms
work faster and better. This is especially useful when dealing with complicated
integrals that involve special functions. Furthermore, note that we used the
Rational function to define the 1/2 fraction, which is another way of
providing hints to Sympy. Otherwise, it's possible that the floating-point
representation of the fraction could disguise the simple fraction and
thereby miss internal simplification opportunities.
We want to solve for g in the above expression. Sympy has some
built-in numerical solvers as in the following,
End of explanation
"""
from scipy.stats import binom, chi2
import numpy as np
# some sample parameters
p0,p1,p2 = 0.3,0.4,0.5
n0,n1,n2 = 50,180,200
brvs= [ binom(i,j) for i,j in zip((n0,n1,n2),(p0,p1,p2))]
def gen_sample(n=1):
'generate samples from separate binomial distributions'
if n==1:
return [i.rvs() for i in brvs]
else:
return [gen_sample() for k in range(n)]
"""
Explanation: Note that in this situation it is better to use the numerical
solvers because Sympy solve may grind along for a long time to
resolve this.
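Alternatively, because $x\sim\mathcal{N}(0,1)$ under $H_0$, the same numbers follow from Scipy's normal quantile function (a quick cross-check of the symbolic route):

```python
import numpy as np
from scipy.stats import norm

c = norm.isf(0.01)       # x-threshold for alpha = 1/100, approx 2.326
gamma = np.exp(c - 0.5)  # invert x = 1/2 + log(gamma), approx 6.21
print(c, gamma)
```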
Generalized Likelihood Ratio Test
The likelihood ratio test can be generalized using the following statistic,
$$
\Lambda(\mathbf{x})= \frac{\sup_{\theta\in\Theta_0} L(\theta)}{\sup_{\theta\in\Theta} L(\theta)}=\frac{L(\hat{\theta}_0)}{L(\hat{\theta})}
$$
where $\hat{\theta}_0$ maximizes $L(\theta)$ subject to
$\theta\in\Theta_0$ and $\hat{\theta}$ is the maximum likelihood estimator.
The intuition behind this generalization of the Likelihood Ratio Test is that
the denominator is the usual maximum likelihood estimator and the numerator is
the maximum likelihood estimator, but over a restricted domain ($\Theta_0$).
This means that the ratio is always less than unity because the maximum
likelihood estimator over the entire space will always be at least as maximal
as that over the more restricted space. When this $\Lambda$ ratio gets small
enough, it means that the maximum likelihood estimator over the entire domain
($\Theta$) is larger which means that it is safe to reject the null hypothesis
$H_0$. The tricky part is that the statistical distribution of $\Lambda$ is
usually eye-wateringly difficult. Fortunately, Wilks' theorem says that with
sufficiently large $n$, the distribution of $-2\log\Lambda$ is approximately
chi-square with $r-r_0$ degrees of freedom, where $r$ is the number of free
parameters for $\Theta$ and $r_0$ is the number of free parameters in
$\Theta_0$. With this result, if we want an approximate test at level
$\alpha$, we can reject $H_0$ when $-2\log\Lambda \ge \chi^2_{r-r_0}(\alpha)$
where $\chi^2_{r-r_0}(\alpha)$ denotes the $1-\alpha$ quantile of the
$\chi^2_{r-r_0}$ chi-square distribution. However, the problem with this
result is that there is no definite way of knowing how big $n$ should be. The
advantage of this generalized likelihood ratio test is that it
can test multiple hypotheses simultaneously, as illustrated
in the following example.
Example. Let's return to our coin-flipping example, except now we have
three different coins. The likelihood function is then,
$$
L(p_1,p_2,p_3) = \texttt{binom}(k_1;n_1,p_1)\texttt{binom}(k_2;n_2,p_2)\texttt{binom}(k_3;n_3,p_3)
$$
where $\texttt{binom}$ is the binomial distribution with
the given parameters. For example,
$$
\texttt{binom}(k;n,p) = \binom{n}{k} p^k(1-p)^{n-k}
$$
The null hypothesis is that all three coins have the
same probability of heads, $H_0:p=p_1=p_2=p_3$. The alternative hypothesis is
that at least one of these probabilities is different. Let's consider the
numerator of the $\Lambda$ first, which will give us the maximum likelihood
estimator of $p$. Because the null hypothesis is that all the $p$ values are
equal, we can just treat this as one big binomial distribution with
$n=n_1+n_2+n_3$ and $k=k_1+k_2+k_3$ is the total number of heads observed for
any coin. Thus, under the null hypothesis, the distribution of $k$ is binomial
with parameters $n$ and $p$. Now, what is the maximum likelihood estimator for
this distribution? We have worked this problem before and have the following,
$$
\hat{p}_0= \frac{k}{n}
$$
In other words, the maximum likelihood estimator under the null
hypothesis is the proportion of ones observed in the sequence of $n$ trials
total. Now, we have to substitute this in for the likelihood under the null
hypothesis to finish the numerator of $\Lambda$,
$$
L(\hat{p}_0,\hat{p}_0,\hat{p}_0) = \texttt{binom}(k_1;n_1,\hat{p}_0)\texttt{binom}(k_2;n_2,\hat{p}_0)\texttt{binom}(k_3;n_3,\hat{p}_0)
$$
For the denominator of $\Lambda$, which represents the case of maximizing over
the entire space, the maximum likelihood estimator for each separate binomial
distribution is likewise,
$$
\hat{p}_i= \frac{k_i}{n_i}
$$
which makes the likelihood in the denominator the following,
$$
L(\hat{p}_1,\hat{p}_2,\hat{p}_3) = \texttt{binom}(k_1;n_1,\hat{p}_1)\texttt{binom}(k_2;n_2,\hat{p}_2)\texttt{binom}(k_3;n_3,\hat{p}_3)
$$
for each of the $i\in \lbrace 1,2,3 \rbrace$ binomial distributions. The
$\Lambda$ statistic is then the following,
$$
\Lambda(k_1,k_2,k_3) = \frac{L(\hat{p}_0,\hat{p}_0,\hat{p}_0)}{L(\hat{p}_1,\hat{p}_2,\hat{p}_3)}
$$
Wilks' theorem states that $-2\log\Lambda$ is asymptotically chi-square
distributed. We can compute this example with the statistics tools in Sympy and
Scipy.
End of explanation
"""
from __future__ import division
np.random.seed(1234)
k0,k1,k2 = gen_sample()
print k0,k1,k2
pH0 = sum((k0,k1,k2))/sum((n0,n1,n2))
numer = np.sum([np.log(binom(ni,pH0).pmf(ki))
for ni,ki in
zip((n0,n1,n2),(k0,k1,k2))])
print numer
"""
Explanation: Programming Tip.
Note the recursion in the definition of the gen_sample function where a
conditional clause of the function calls itself. This is a quick way to reuse
code and generate vectorized output. Using np.vectorize is another way, but
the code is simple enough in this case to use the conditional clause. In
Python, it is generally bad for performance to have code with nested recursion
because of how the stack frames are managed. However, here we are only
recursing once so this is not an issue.
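The gen_sample helper used in this section is defined earlier in the chapter; a sketch consistent with both call signatures here (the n_i and p_i values below are assumptions, not necessarily the chapter's) looks like the following:

```python
from scipy.stats import binom

n0, n1, n2 = 100, 100, 100   # assumed trials per coin
p0, p1, p2 = 0.3, 0.4, 0.5   # assumed true heads probabilities

def gen_sample(num=1):
    # draw one (k0, k1, k2) triple of head-counts, or recurse once
    # to return a vectorized list of num such triples
    if num == 1:
        return (binom(n0, p0).rvs(),
                binom(n1, p1).rvs(),
                binom(n2, p2).rvs())
    else:
        return [gen_sample() for _ in range(num)]
```

With this in place, gen_sample() returns a single triple and gen_sample(100) returns a list of 100 triples, matching both usages in this section.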
Next, we compute the logarithm of the numerator of the $\Lambda$
statistic,
End of explanation
"""
denom = np.sum([np.log(binom(ni,pi).pmf(ki))
for ni,ki,pi in
zip((n0,n1,n2),(k0,k1,k2),(p0,p1,p2))])
print denom
"""
Explanation: Note that we used the null hypothesis estimate $\hat{p}_0$.
Likewise, for the logarithm of the denominator we have the following,
End of explanation
"""
chsq=chi2(2)
logLambda =-2*(numer-denom)
print logLambda
print 1- chsq.cdf(logLambda)
"""
Explanation: Now, we can compute the logarithm of the $\Lambda$ statistic as
follows and see what the corresponding value is according to Wilks' theorem,
End of explanation
"""
c= chsq.isf(.05) # 5% significance level
out = []
for k0,k1,k2 in gen_sample(100):
pH0 = sum((k0,k1,k2))/sum((n0,n1,n2))
numer = np.sum([np.log(binom(ni,pH0).pmf(ki))
for ni,ki in
zip((n0,n1,n2),(k0,k1,k2))])
denom = np.sum([np.log(binom(ni,pi).pmf(ki))
for ni,ki,pi in
zip((n0,n1,n2),(k0,k1,k2),(p0,p1,p2))])
out.append(-2*(numer-denom)>c)
print np.mean(out) # estimated probability of detection
"""
Explanation: Because the value reported above is less than the 5% significance
level, we reject the null hypothesis that all the coins have the same
probability of heads. Note that there are two degrees of freedom because the
difference in the number of parameters between the null hypothesis ($p$) and
the alternative ($p_1,p_2,p_3$) is two. We can build a quick Monte
Carlo simulation to check the probability of detection for this example using
the following code, which is just a combination of the last few code blocks,
End of explanation
"""
x=binom(10,0.3).rvs(5) # p=0.3
y=binom(10,0.5).rvs(3) # p=0.5
z = np.hstack([x,y]) # combine into one array
t_o = abs(x.mean()-y.mean())
out = [] # output container
for k in range(1000):
perm = np.random.permutation(z)
T=abs(perm[:len(x)].mean()-perm[len(x):].mean())
out.append((T>t_o))
print 'p-value = ', np.mean(out)
"""
Explanation: The above simulation shows the estimated probability of
detection for this set of example parameters. This relatively low
probability of detection means that while the test is unlikely (i.e.,
at the 5% significance level) to mistakenly pick the null hypothesis,
it is likewise missing many of the $H_1$ cases (i.e., low probability
of detection). The trade-off between which is more important is up to
the particular context of the problem. In some situations, we may
prefer additional false alarms in exchange for missing fewer $H_1$
cases.
Permutation Test
<!-- p 475, Essential_Statistical_Inference_Boos.pdf -->
<!-- p. 35, Applied_adaptive_statistical_methods_OGorman.pdf -->
<!-- p. 80, Introduction_to_Statistics_Through_Resampling_Methods_and_R_Good.pdf -->
<!-- p. 104, Statistical_inference_for_data_science_Caffo.pdf -->
<!-- p. 178, All of statistics -->
The Permutation Test is a good way to test whether or not two sets of
samples come from the same distribution. For example, suppose that
$$
X_1, X_2, \ldots, X_m \sim F
$$
and also,
$$
Y_1, Y_2, \ldots, Y_n \sim G
$$
That is, the $X_i$ and $Y_i$ potentially come from different distributions. Suppose
we have some test statistic, for example
$$
T(X_1,\ldots,X_m,Y_1,\ldots,Y_n) = \vert\overline{X}-\overline{Y}\vert
$$
Under the null hypothesis for which $F=G$, any of the
$(n+m)!$ permutations are equally likely. Thus, suppose for
each of the $(n+m)!$ permutations, we have the computed
statistic,
$$
\lbrace T_1,T_2,\ldots,T_{(n+m)!} \rbrace
$$
Then, under the null hypothesis, each of these values is equally
likely. The distribution of $T$ under the null hypothesis is the permutation
distribution that puts weight $1/(n+m)!$ on each $T$-value. Suppose $t_o$ is
the observed value of the test statistic and assume that large $T$ rejects the
null hypothesis, then the p-value for the permutation test is the following,
$$
P(T>t_o)= \frac{1}{(n+m)!} \sum_{j=1}^{(n+m)!} I(T_j>t_o)
$$
where $I()$ is the indicator function. For large $(n+m)!$, we can
sample randomly from the set of all permutations to estimate this p-value.
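For samples this small, the sum over all $(n+m)!$ permutations can be enumerated exactly instead of sampled; a sketch with illustrative data:

```python
from itertools import permutations
import numpy as np

x = np.array([1.0, 2.0, 5.0])   # illustrative "X" sample (m=3)
y = np.array([3.0, 4.0])        # illustrative "Y" sample (n=2)
z = np.hstack([x, y])
t_o = abs(x.mean() - y.mean())  # observed statistic

# count how often the permuted statistic strictly exceeds t_o
count, total = 0, 0
for perm in permutations(z):
    perm = np.array(perm)
    T = abs(perm[:len(x)].mean() - perm[len(x):].mean())
    count += T > t_o
    total += 1
p_value = count / total         # exact permutation p-value
```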
Example. Let's return to our coin-flipping example from last time, but
now we have only two coins. The hypothesis is that both coins
have the same probability of heads. We can use the built-in
function in Numpy to compute the random permutations.
End of explanation
"""
from scipy import stats
theta0 = 0.5 # H0
k=np.random.binomial(1000,0.3)
theta_hat = k/1000. # MLE
W = (theta_hat-theta0)/np.sqrt(theta_hat*(1-theta_hat)/1000)
c = stats.norm().isf(0.05/2) # z_{alpha/2}
print abs(W)>c # if true, reject H0
"""
Explanation: Note that the size of the total permutation space is
$8!=40320$, so we are taking relatively few (i.e., 1,000) random
permutations from this space.
Wald Test
The Wald Test is an asymptotic test. Suppose we have $H_0:\theta=\theta_0$
versus $H_1:\theta\ne\theta_0$; the corresponding statistic is defined as
the following,
$$
W=\frac{\hat{\theta}_n-\theta_0}{\texttt{se}}
$$
where $\hat{\theta}$ is the maximum likelihood estimator and
$\texttt{se}$ is the standard error,
$$
\texttt{se} = \sqrt{\mathbb{V}(\hat{\theta}_n)}
$$
Under general conditions, $W\overset{d}{\to} \mathcal{N}(0,1)$.
Thus, an asymptotic test at level $\alpha$ rejects when $\vert W\vert>
z_{\alpha/2}$ where $z_{\alpha/2}$ corresponds to $\mathbb{P}(\vert
Z\vert>z_{\alpha/2})=\alpha$ with $Z \sim \mathcal{N}(0,1)$. For our favorite
coin-flipping example, if $H_0:\theta=\theta_0$, then
$$
W = \frac{\hat{\theta}-\theta_0}{\sqrt{\hat{\theta}(1-\hat{\theta})/n}}
$$
We can simulate this using the following code at the usual
5% significance level,
End of explanation
"""
theta0 = 0.5 # H0
c = stats.norm().isf(0.05/2.) # z_{alpha/2}
out = []
for i in range(100):
k=np.random.binomial(1000,0.3)
theta_hat = k/1000. # MLE
W = (theta_hat-theta0)/np.sqrt(theta_hat*(1-theta_hat)/1000.)
out.append(abs(W)>c) # if true, reject H0
print np.mean(out) # detection probability
"""
Explanation: This rejects $H_0$ because the true $\theta=0.3$ and the null hypothesis
is that $\theta=0.5$. Note that $n=1000$ in this case, which puts us well inside the
asymptotic range of the result. We can re-do this example to estimate
the detection probability for this example as in the following code,
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_batch_predictions.ipynb | apache-2.0 | import os
from google.cloud import aiplatform
REGION = "us-central1"
PROJECT = !(gcloud config get-value project)
PROJECT = PROJECT[0]
os.environ["PROJECT"] = PROJECT
# Set `PATH` to include the directory containing KFP CLI
PATH = %env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
"""
Explanation: Continuous Training with AutoML Vertex Pipelines with Batch Predictions
Learning Objectives:
1. Learn how to use Vertex AutoML pre-built components
1. Learn how to build a Vertex AutoML pipeline with these components using BigQuery as a data source
1. Learn how to compile, upload, and run the Vertex AutoML pipeline
1. Serve batch predictions with BigQuery source from the AutoML pipeline
In this lab, you will build, deploy, and run a Vertex AutoML pipeline that orchestrates Vertex AI AutoML services to train and tune a model, and then serve batch predictions to BigQuery.
Setup
End of explanation
"""
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT mk --dataset $DATASET_ID
bq --project_id=$PROJECT --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
"""
Explanation: BigQuery Data
If you have not gone through the KFP Walkthrough lab, you will need to run the following cell to create a BigQuery dataset and table containing the data required for this lab.
NOTE If you already have the covertype data in a BigQuery table at <PROJECT_ID>.covertype_dataset.covertype, you may skip to Understanding the pipeline design.
End of explanation
"""
%%writefile ./pipeline_vertex/pipeline_vertex_automl_batch_preds.py
"""Kubeflow Covertype Pipeline."""
import os
from google_cloud_pipeline_components.aiplatform import (
AutoMLTabularTrainingJobRunOp,
TabularDatasetCreateOp,
ModelBatchPredictOp
)
from kfp.v2 import dsl
PIPELINE_ROOT = os.getenv("PIPELINE_ROOT")
PROJECT = os.getenv("PROJECT")
DATASET_SOURCE = os.getenv("DATASET_SOURCE")
PIPELINE_NAME = os.getenv("PIPELINE_NAME", "covertype")
DISPLAY_NAME = os.getenv("MODEL_DISPLAY_NAME", PIPELINE_NAME)
TARGET_COLUMN = os.getenv("TARGET_COLUMN", "Cover_Type")
BATCH_PREDS_SOURCE_URI = os.getenv("BATCH_PREDS_SOURCE_URI")
@dsl.pipeline(
name=f"{PIPELINE_NAME}-vertex-automl-pipeline-batch-preds",
description=f"AutoML Vertex Pipeline for {PIPELINE_NAME}",
pipeline_root=PIPELINE_ROOT,
)
def create_pipeline():
dataset_create_task = TabularDatasetCreateOp(
display_name=DISPLAY_NAME,
bq_source=DATASET_SOURCE,
project=PROJECT,
)
automl_training_task = AutoMLTabularTrainingJobRunOp(
project=PROJECT,
display_name=DISPLAY_NAME,
optimization_prediction_type="classification",
dataset=dataset_create_task.outputs["dataset"],
target_column=TARGET_COLUMN,
)
batch_predict_op = ModelBatchPredictOp(
project=PROJECT,
job_display_name="batch_predict_job",
model=automl_training_task.outputs["model"],
bigquery_source_input_uri=BATCH_PREDS_SOURCE_URI,
instances_format="bigquery",
predictions_format="bigquery",
bigquery_destination_output_uri=f'bq://{PROJECT}',
)
"""
Explanation: Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl_batch_preds.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
Building and deploying the pipeline
Let us write the pipeline to disk:
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE covertype_dataset.newdata AS
SELECT * EXCEPT(Cover_Type)
FROM covertype_dataset.covertype
LIMIT 10000
"""
Explanation: Understanding the ModelBatchPredictOp
When working with an AutoML Tabular model, the ModelBatchPredictOp can take the following inputs:
* model: The model resource to serve batch predictions with
* bigquery_source_input_uri: A URI to a BigQuery table containing examples to serve batch predictions on, in the format bq://PROJECT.DATASET.TABLE
* instances_format: "bigquery" to serve batch predictions on BigQuery data.
* predictions_format: "bigquery" to store the results of the batch prediction in BigQuery.
* bigquery_destination_output_uri: In the format bq://PROJECT_ID. This is the project that the results of the batch prediction will be stored. The ModelBatchPredictOp will create a dataset in this project.
Upon completion of the ModelBatchPredictOp you will see a new BigQuery dataset with name prediction_<model-display-name>_<job-create-time>. Inside this dataset you will see a predictions table, containing the batch prediction examples and predicted labels. If there were any errors in the batch prediction, you will also see an errors table. The errors table contains rows for which the prediction has failed.
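The two URI formats can be assembled from your project, dataset, and table names like so (the identifiers below are placeholders, not values from this lab's environment):

```python
# Placeholder identifiers; substitute your own project, dataset, and table.
project = "my-project"
dataset = "covertype_dataset"
table = "newdata"

# Input table of examples, in the bq://PROJECT.DATASET.TABLE format:
bigquery_source_input_uri = f"bq://{project}.{dataset}.{table}"

# Output destination, in the bq://PROJECT_ID format; the op creates the
# prediction_<model-display-name>_<job-create-time> dataset there itself.
bigquery_destination_output_uri = f"bq://{project}"
```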
Create BigQuery table with data for batch predictions
Before we compile and run the pipeline, let's create a BigQuery table with data we want to serve batch predictions on. To simulate "new" data we will simply query the existing table for all columns except the label and create a table called newdata. The URI to this table will be the bigquery_source_input_uri input to the ModelBatchPredictOp.
End of explanation
"""
ARTIFACT_STORE = f"gs://{PROJECT}-kfp-artifact-store"
PIPELINE_ROOT = f"{ARTIFACT_STORE}/pipeline"
DATASET_SOURCE = f"bq://{PROJECT}.covertype_dataset.covertype"
BATCH_PREDS_SOURCE_URI = f"bq://{PROJECT}.covertype_dataset.newdata"
%env PIPELINE_ROOT={PIPELINE_ROOT}
%env PROJECT={PROJECT}
%env REGION={REGION}
%env DATASET_SOURCE={DATASET_SOURCE}
%env BATCH_PREDS_SOURCE_URI={BATCH_PREDS_SOURCE_URI}
"""
Explanation: Compile the pipeline
Let's start by defining the environment variables that will be passed to the pipeline compiler:
End of explanation
"""
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
"""
Explanation: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not:
End of explanation
"""
PIPELINE_JSON = "covertype_automl_vertex_pipeline_batch_preds.json"
!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl_batch_preds.py --output $PIPELINE_JSON
"""
Explanation: Use the CLI compiler to compile the pipeline
We compile the pipeline from the Python file we generated into a JSON description using the following command:
End of explanation
"""
!head {PIPELINE_JSON}
"""
Explanation: Note: You can also use the Python SDK to compile the pipeline:
```python
from kfp.v2 import compiler
compiler.Compiler().compile(
pipeline_func=create_pipeline,
package_path=PIPELINE_JSON,
)
```
The result is the pipeline file.
End of explanation
"""
aiplatform.init(project=PROJECT, location=REGION)
pipeline = aiplatform.PipelineJob(
display_name="automl_covertype_kfp_pipeline_batch_predictions",
template_path=PIPELINE_JSON,
enable_caching=True,
)
pipeline.run()
"""
Explanation: Deploy the pipeline package
End of explanation
"""
|
ColeLab/informationtransfermapping | MasterScripts/ManuscriptS5b_PerformanceDecoding_withITE.ipynb | gpl-3.0 | import sys
import os
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import sys
import multiprocessing as mp
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import nibabel as nib
os.environ['OMP_NUM_THREADS'] = str(1)
from matplotlib.colors import Normalize
from matplotlib.colors import LogNorm
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
class MidpointNormalize2(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
t1 = (self.midpoint - self.vmin)/2.0
t2 = (self.vmax - self.midpoint)/30.0 + self.midpoint
x, y = [self.vmin, t1, self.midpoint, t2, self.vmax], [0, 0.25, .5, .75, 1.0]
return np.ma.masked_array(np.interp(value, x, y))
"""
Explanation: ManuscriptS5b - Predicting task performance using information transfer estimates
Analysis for Supplementary Figure 5
Master code for Ito et al., 2017¶
Takuya Ito (takuya.ito@rutgers.edu)
End of explanation
"""
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
glasserparcels = lg.loadGlasserParcels()
networkdef = lg.loadGlasserNetworks()
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Define new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11,
'prem':5, 'pcc':10, 'none':12, 'hipp':13, 'pmulti':14}
netkeys = {0:'fpn', 1:'dan', 2:'con', 3:'dmn', 4:'vis', 5:'aud', 6:'smn'}
nParcels = 360
# Import network reordering
networkdir = '/projects/AnalysisTools/netpartitions/ColeLabNetPartition_v1/'
networkorder = np.asarray(sorted(range(len(networkdef)), key=lambda k: networkdef[k]))
order = networkorder
order.shape = (len(networkorder),1)
# Construct xticklabels and xticks for plotting figures
networks = networkmappings.keys()
xticks = {}
reorderednetworkaffil = networkdef[order]
for net in networks:
netNum = networkmappings[net]
netind = np.where(reorderednetworkaffil==netNum)[0]
tick = np.max(netind)
xticks[tick] = net
# Load in Glasser parcels in their native format
glasserfile2 = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'
glasser2 = nib.load('/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii')
glasser2 = glasser2.get_data()
glasser2 = glasser2[0][0][0][0][0]
def convertCSVToCIFTI64k(inputfilename,outputfilename):
ciftitemplate = glasserfile2
wb_command = 'wb_command -cifti-convert -from-text'
wb_command += ' ' + inputfilename
wb_command += ' ' + ciftitemplate
wb_command += ' ' + outputfilename
wb_command += " -col-delim ','"
wb_command += ' -reset-scalars'
os.system(wb_command)
# print wb_command
"""
Explanation: 0.0 Basic parameters
End of explanation
"""
behavdata = {}
for subj in subjNums:
behavdata[subj] = {}
behavdir = basedir + 'data/resultsMaster/behavresults/'
behavdata[subj]['acc'] = np.loadtxt(behavdir + subj + '_accuracy.csv',dtype='str',delimiter=',')
behavdata[subj]['rt'] = np.loadtxt(behavdir + subj + '_RT.csv')
"""
Explanation: 0.1 Load in behavioral data (performance + RT)
End of explanation
"""
n_mbs = 128
ntrialspermb = 3
for subj in subjNums:
filename = basedir + 'data/resultsMaster/behavresults/' + subj+'_acc_by_mb.txt'
tmp = behavdata[subj]['acc']=='Correct'
tmp = tmp.astype(int)
mb_tmp = []
count = 0
for mb in range(n_mbs):
mb_tmp.append(np.mean(tmp[count:(count+ntrialspermb)]))
count += ntrialspermb
mb_tmp = np.asarray(mb_tmp)
np.savetxt(filename, mb_tmp)
"""
Explanation: Write out miniblock-by-miniblock accuracy performance for within-subject region-to-region logistic regression with information transfer estimates (for revision)
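The per-miniblock averaging loop above is equivalent to a single reshape, assuming the trial vector has exactly n_mbs * ntrialspermb entries (synthetic 0/1 data stands in for the accuracy vector here):

```python
import numpy as np

n_mbs, ntrialspermb = 128, 3
rng = np.random.default_rng(0)
# stand-in for the 0/1 correct-vector derived from behavdata
tmp = rng.integers(0, 2, size=n_mbs * ntrialspermb).astype(float)

# reshape to (miniblocks, trials-per-miniblock) and average within rows
mb_means = tmp.reshape(n_mbs, ntrialspermb).mean(axis=1)
```

Each entry of mb_means matches the running-count loop's np.mean over the corresponding three trials.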
End of explanation
"""
n_mbs = 128
ntrialspermb = 3
for subj in subjNums:
filename = '/projects2/ModalityControl2/data/resultsMaster/behavresults/' + subj+'_rt_by_mb.txt'
tmp = behavdata[subj]['rt']
mb_tmp = []
count = 0
for mb in range(n_mbs):
mb_tmp.append(np.mean(tmp[count:(count+ntrialspermb)]))
count += ntrialspermb
mb_tmp = np.asarray(mb_tmp)
np.savetxt(filename, mb_tmp)
"""
Explanation: Do the same for RT
End of explanation
"""
acc = []
rt = []
for subj in subjNums:
acc.append(np.mean(behavdata[subj]['acc']=='Correct'))
rt.append(np.mean(behavdata[subj]['rt']))
plt.figure()
plt.hist(acc)
plt.title('Distribution of accuracies across 32 subjects\nNormalTest p = '+str(stats.normaltest(acc)[1]),
y=1.04,fontsize=16)
plt.xlabel('Avg Accuracy Per Subj')
plt.ylabel('# in bin')
plt.figure()
plt.hist(rt,bins=10)
plt.title('Distribution of RTs across 32 subjects\nNormalTest p = '+str(stats.normaltest(rt)[1]),
y=1.04,fontsize=16)
plt.xlabel('Avg RT per Subj')
plt.ylabel('RT')
"""
Explanation: 0.2 Verify normality of behavioral data
End of explanation
"""
## Load in NM3 Data
datadir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
# Load in RSA matrices
logitBetas= np.zeros((nParcels,nParcels,len(subjNums)))
scount = 0
for subj in subjNums:
filename = datadir +subj+'_RegionToRegionActFlowGlasserParcels_BehavioralAcc.csv'
logitBetas[:,:,scount] = np.loadtxt(filename, delimiter=',')
scount += 1
"""
Explanation: 1.0 Run Information transfer mapping
Due to obvious computational constraints, all region-to-region ActFlow procedures and RSA analyses were run on NM3
Code: ./SupercomputerScripts/Fig6_RegionToRegionInformationTransferMapping/ActFlow_ITE_DecodePerformance_LogRegression_v2.m
2.0 Identify information transfers that are correlated with behavior
Load in logistic regression betas
End of explanation
"""
df_stats = {}
df_stats['logit_t'] = np.zeros((nParcels,nParcels))
df_stats['logit_p'] = np.ones((nParcels,nParcels))
df_stats['logit_q'] = np.ones((nParcels,nParcels))
df_stats['logit_avg'] = np.mean(logitBetas,axis=2)
for i in range(nParcels):
for j in range(nParcels):
if i==j: continue
t, p = stats.ttest_1samp(logitBetas[i,j,:],.5)
if t > 0:
p = p/2.0
else:
p = 1.0 - p/2.0
df_stats['logit_t'][i,j] = t
df_stats['logit_p'][i,j] = p
roi = 260
# indices = np.where((networkdef==networkmappings['fpn']) | (networkdef==networkmappings['con']))[0]
# # indices = np.where(networkdef==networkmappings['con'])[0]
fpn_ind = np.where(networkdef==networkmappings['fpn'])[0]
fpn_ind.shape = (len(fpn_ind),1)
con_ind = np.where(networkdef==networkmappings['con'])[0]
con_ind.shape = (len(con_ind),1)
# indices = np.arange(nParcels)
mat = np.zeros((nParcels,nParcels))
# mat[260,:] = 1
# mat[:,260] = 1
mat[260,fpn_ind.T] = 1
mat[260,con_ind.T] = 1
# mat[fpn_ind,con_ind.T] = 1
# mat[fpn_ind,fpn_ind.T] = 1
# mat[con_ind,con_ind.T] = 1
# mat[con_ind,fpn_ind.T] = 1
# mat = np.ones((nParcels,nParcels))
# mat[:,roi] = 1
np.fill_diagonal(mat,0)
indices = np.where(mat==1)
# Perform multiple comparisons
# df_stats['logit_q'][sig_ind] = mc.fdrcorrection0(df_stats['logit_p'][sig_ind])[1]
df_stats['logit_q'] = np.ones((nParcels,nParcels))
df_stats['logit_q'][indices] = mc.fdrcorrection0(df_stats['logit_p'][indices])[1]
ind = np.where(df_stats['logit_q']<0.05)
print 'Significant transfers predictive of performance:'
count = 0
for i in range(len(ind[0])):
print 'Transfers from', ind[0][count], 'to', ind[1][count]
print 'Effect size =', df_stats['logit_avg'][ind[0][count],ind[1][count]]
print 'p =', df_stats['logit_p'][ind[0][count],ind[1][count]]
print 'q =', df_stats['logit_q'][ind[0][count],ind[1][count]]
count += 1
"""
Explanation: Perform group statistics
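The one-sided p-value conversion used above (halve the two-sided p when t favors the alternative, otherwise take 1 - p/2) agrees with reading the t distribution's survival function directly; a standalone check on synthetic decoding accuracies:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.55, scale=0.05, size=32)  # illustrative accuracies
chance = 0.5

t, p = stats.ttest_1samp(sample, chance)
p_one_sided = p / 2.0 if t > 0 else 1.0 - p / 2.0
# same quantity straight from the t distribution with n-1 degrees of freedom
p_direct = stats.t(len(sample) - 1).sf(t)
```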
End of explanation
"""
df_stats = {}
df_stats['logit_t'] = np.zeros((nParcels,nParcels))
df_stats['logit_p'] = np.ones((nParcels,nParcels))
df_stats['logit_q'] = np.ones((nParcels,nParcels))
df_stats['logit_avg'] = np.mean(logitBetas,axis=2)
for i in range(nParcels):
for j in range(nParcels):
if i==j: continue
t, p = stats.ttest_1samp(logitBetas[i,j,:],.5)
if t > 0:
p = p/2.0
else:
p = 1.0 - p/2.0
df_stats['logit_t'][i,j] = t
df_stats['logit_p'][i,j] = p
roi = 260
# indices = np.where((networkdef==networkmappings['fpn']) | (networkdef==networkmappings['con']))[0]
# # indices = np.where(networkdef==networkmappings['con'])[0]
fpn_ind = np.where(networkdef==networkmappings['fpn'])[0]
fpn_ind.shape = (len(fpn_ind),1)
con_ind = np.where(networkdef==networkmappings['con'])[0]
con_ind.shape = (len(con_ind),1)
# indices = np.arange(nParcels)
mat = np.zeros((nParcels,nParcels))
# mat[260,:] = 1
# mat[:,260] = 1
mat[260,fpn_ind.T] = 1
mat[260,con_ind.T] = 1
# mat[fpn_ind,con_ind.T] = 1
# mat[fpn_ind,fpn_ind.T] = 1
# mat[con_ind,con_ind.T] = 1
# mat[con_ind,fpn_ind.T] = 1
# mat = np.ones((nParcels,nParcels))
# mat[:,roi] = 1
np.fill_diagonal(mat,0)
indices = np.where(mat==1)
# Permutation testing
import permutationTesting as pt
chance = .5
tmp = logitBetas - chance # Subtract decoding accuracies by chance
t, p = pt.permutationFWE(tmp[indices[0],indices[1],:], permutations=1000, nproc=15)
pfwe = np.zeros((nParcels,nParcels))
tfwe = np.zeros((nParcels,nParcels))
pfwe[indices] = p
tfwe[indices] = t
ind = np.where(pfwe>0.95)
print 'Significant transfers predictive of performance:'
count = 0
for i in range(len(ind[0])):
print 'Transfers from', ind[0][count], 'to', ind[1][count]
print 'Effect size =', df_stats['logit_avg'][ind[0][count],ind[1][count]]
print 'T-statistic =', tfwe[ind[0][count],ind[1][count]]
print 'p (FWE) =', 1.0 - pfwe[ind[0][count],ind[1][count]]
count += 1
"""
Explanation: FWE correction on results
Perform group statistics
End of explanation
"""
|
SeismicPi/SeismicPi | Lessons/Stethoscope/Piezo Stethoscope.ipynb | mit | import SensDisLib as s
plot = s.SensorDisplay()
"""
Explanation: Piezo Stethoscope
In this module we will be creating a stethoscope to monitor our heartbeats via a piezo sensor and learn how to log and read data. The idea is that if you tape a contact microphone directly to your skin, it will pick up on your pulse and generate a voltage every time your heart beats. We can then view this voltage-plot in real time as well as logging the data to an SD card so it can be analysed later. Using a piezoelectric sensor has been trialed in real clinical settings as an inexpensive method to monitor heart rate, respiration rate, and other vital functions of the body.
Materials Needed
Raspberry Pi
HAT Board //insert actual name here
Contact microphones
One SD card
Steps Outline
Connect the HAT Board with the Raspberry Pi and insert the SD card
Insert the contact microphones into the inputs of the board (up to four)
Set up any code necessary to view the plots in real time
Run the sensor display and wait until all the channels produce a steady signal
Start the data logger and wait about one minute
Stop the logging and remove the SD card
Analyse the data to find out the heart rates of the connected students
Write code to detect peaks and automatically calculate the heart rates
Set up the device as indicated in the outline above. Have the students whose heart rate will be monitored tape the piezo sensors onto the pads of their thumbs. Do not tape it too loosely, or else it won't correctly pick up the pulse. Do not tape it too tightly as to break or crack the sensor either.
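As a preview of the final step in the outline, here is a minimal peak-detection sketch on a synthetic pulse signal (the sample rate, threshold, and bump shape are all illustrative; real sensor data will need tuning):

```python
import numpy as np

fs = 100.0                       # assumed sample rate, Hz
bpm_true = 72                    # synthetic heart rate
t = np.arange(0, 30, 1 / fs)     # 30 seconds of "recording"
# narrow Gaussian bumps, one per beat, stand in for pulse spikes
beat_times = np.arange(0, 30, 60.0 / bpm_true)
signal = sum(np.exp(-((t - b) ** 2) / (2 * 0.02 ** 2)) for b in beat_times)

# a sample counts as a peak if it exceeds a threshold and both neighbors
thresh = 0.5
peaks = np.flatnonzero((signal[1:-1] > thresh)
                       & (signal[1:-1] > signal[:-2])
                       & (signal[1:-1] > signal[2:])) + 1

# heart rate from the mean interval between detected peaks
intervals = np.diff(t[peaks])
bpm_est = 60.0 / intervals.mean()
```

On the synthetic signal, bpm_est recovers the 72 beats-per-minute rate to within a fraction of a beat.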
In this section we will set up the code to view the voltage in real time and learn how to use the sensor display library. Firstly, we have to import the library (SensDisLib) and create a new instance of it. Do this below.
End of explanation
"""
plot.add_sensor_one("Person one")
plot.add_sensor_two("Person two")
plot.add_sensor_three("Person three")
plot.add_sensor_four("Person four")
"""
Explanation: Now we have to add the connected channels to our display. We can do this with the add_sensor_X("Name") method. For example, if we wanted to view channel one and call it "My Heartbeat", we would call .add_sensor_one("My Heartbeat"). The argument we pass to the method ("My Heartbeat") will be the name of the graph that is displayed. Write the code to view the sensors that you want to use below.
End of explanation
"""
#plot.runPlot()
"""
Explanation: Now run the it via the runPlot() method.
End of explanation
"""
plot.setYRange_sensor_one(1800, 2100);
plot.setYRange_sensor_two(1800, 2100);
plot.setYRange_sensor_three(1800, 2100);
plot.setYRange_sensor_four(1800, 2100);
plot.runPlot()
"""
Explanation: You may notice that you don't see a very strong signal and the line is flat when you are sitting still. This is because the default y-axis range is zoomed out to (0, 4096). However, we can modify this with the setYRange_sensor_x(low, high) method. For example, if I wanted to zoom in on the range (500, 700) on channel two, I would call setYRange_sensor_two(500, 700). Look at the plot and estimate a reasonable range to zoom in on (about ±150 of the steady reading) and set this for the necessary plots. Make sure you write this code before you call runPlot(), or comment it out above and write it again.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/59a29cf7eb53c7ab95857dfb2e3b31ba/plot_40_sensor_locations.ipynb | bsd-3-clause | import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True, verbose=False)
"""
Explanation: Working with sensor locations
This tutorial describes how to read and plot sensor locations, and how
the physical location of sensors is handled in MNE-Python.
As usual we'll start by importing the modules we need and loading some
example data <sample-dataset>:
End of explanation
"""
montage_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'montages')
print('\nBUILT-IN MONTAGE FILES')
print('======================')
print(sorted(os.listdir(montage_dir)))
"""
Explanation: About montages and layouts
:class:Montages <mne.channels.DigMontage> contain sensor
positions in 3D (x, y, z, in meters), and can be used to set
the physical positions of sensors. By specifying the location of sensors
relative to the brain, :class:Montages <mne.channels.DigMontage> play an
important role in computing the forward solution and computing inverse
estimates.
In contrast, :class:Layouts <mne.channels.Layout> are idealized 2-D
representations of sensor positions, and are primarily used for arranging
individual sensor subplots in a topoplot, or for showing the approximate
relative arrangement of sensors as seen from above.
Working with built-in montages
The 3D coordinates of MEG sensors are included in the raw recordings from MEG
systems, and are automatically stored in the info attribute of the
:class:~mne.io.Raw file upon loading. EEG electrode locations are much more
variable because of differences in head shape. Idealized montages for many
EEG systems are included during MNE-Python installation; these files are
stored in your mne-python directory, in the
:file:mne/channels/data/montages folder:
End of explanation
"""
ten_twenty_montage = mne.channels.make_standard_montage('standard_1020')
print(ten_twenty_montage)
"""
Explanation: .. sidebar:: Computing sensor locations
If you are interested in how standard ("idealized") EEG sensor positions
are computed on a spherical head model, the `eeg_positions`_ repository
provides code and documentation to this end.
These built-in EEG montages can be loaded via
:func:mne.channels.make_standard_montage. Note that when loading via
:func:~mne.channels.make_standard_montage, provide the filename without
its file extension:
End of explanation
"""
# these will be equivalent:
# raw_1020 = raw.copy().set_montage(ten_twenty_montage)
# raw_1020 = raw.copy().set_montage('standard_1020')
"""
Explanation: Once loaded, a montage can be applied to data via one of the instance methods
such as :meth:raw.set_montage <mne.io.Raw.set_montage>. It is also possible
to skip the loading step by passing the filename string directly to the
:meth:~mne.io.Raw.set_montage method. This won't work with our sample
data, because its channel names don't match the channel names in the
standard 10-20 montage, so these commands are not run here:
End of explanation
"""
fig = ten_twenty_montage.plot(kind='3d')
fig.gca().view_init(azim=70, elev=15)
ten_twenty_montage.plot(kind='topomap', show_names=False)
"""
Explanation: :class:Montage <mne.channels.DigMontage> objects have a
:meth:~mne.channels.DigMontage.plot method for visualization of the sensor
locations in 3D; 2D projections are also possible by passing
kind='topomap':
End of explanation
"""
biosemi_montage = mne.channels.make_standard_montage('biosemi64')
biosemi_montage.plot(show_names=False)
"""
Explanation: Controlling channel projection (MNE vs EEGLAB)
Channel positions in 2D space are obtained by projecting their actual 3D
positions using a sphere as a reference. Because the 'standard_1020' montage
contains realistic, not spherical, channel positions, we will use a different
montage to demonstrate controlling how channels are projected to 2D space.
End of explanation
"""
biosemi_montage.plot(show_names=False, sphere=0.07)
"""
Explanation: By default a sphere with its origin at (0, 0, 0) (x, y, z coordinates) and a
radius of 0.095 meters (9.5 cm) is used. You can use a different sphere
radius by passing a single value to the sphere argument of any function that
plots channels in 2D (like :meth:~mne.channels.DigMontage.plot that we use
here, but also for example :func:mne.viz.plot_topomap):
End of explanation
"""
biosemi_montage.plot(show_names=False, sphere=(0.03, 0.02, 0.01, 0.075))
"""
Explanation: To control not only the radius, but also the sphere origin, pass an
(x, y, z, radius) tuple to the sphere argument:
End of explanation
"""
biosemi_montage.plot()
"""
Explanation: In mne-python the head center and therefore the sphere center are calculated
using fiducial points. Because of this the head circle represents head
circumference at the nasion and ear level, and not where it is commonly
measured in the 10-20 EEG system: above the nasion at the T4/T8, T3/T7, Oz, Fz level.
Notice below that by default T7 and Oz channels are placed within the head
circle, not on the head outline:
End of explanation
"""
biosemi_montage.plot(sphere=(0, 0, 0.035, 0.094))
"""
Explanation: If you have previous EEGLAB experience you may prefer its convention to
represent 10-20 head circumference with the head circle. To get EEGLAB-like
channel layout you would have to move the sphere origin a few centimeters
up on the z dimension:
End of explanation
"""
fig = plt.figure()
ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(122, projection='3d')
raw.plot_sensors(ch_type='eeg', axes=ax2d)
raw.plot_sensors(ch_type='eeg', axes=ax3d, kind='3d')
ax3d.view_init(azim=70, elev=15)
"""
Explanation: Instead of approximating the EEGLAB-esque sphere location as above, you can
calculate the sphere origin from the positions of the Oz, Fpz, T3/T7 or T4/T8
channels. This is easier once the montage has been applied to the data and
channel positions are in the head space - see
this example <ex-topomap-eeglab-style>.
Reading sensor digitization files
In the sample data, setting the digitized EEG montage was done prior to
saving the :class:~mne.io.Raw object to disk, so the sensor positions are
already incorporated into the info attribute of the :class:~mne.io.Raw
object (see the documentation of the reading functions and
:meth:~mne.io.Raw.set_montage for details on how that works). Because of
that, we can plot sensor locations directly from the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.plot_sensors method, which provides
similar functionality to
:meth:montage.plot() <mne.channels.DigMontage.plot>.
:meth:~mne.io.Raw.plot_sensors also allows channel selection by type, can
color-code channels in various ways (by default, channels listed in
raw.info['bads'] will be plotted in red), and allows drawing into an
existing matplotlib axes object (so the channel positions can easily be
made as a subplot in a multi-panel figure):
End of explanation
"""
fig = mne.viz.plot_alignment(raw.info, trans=None, dig=False, eeg=False,
surfaces=[], meg=['helmet', 'sensors'],
coord_frame='meg')
mne.viz.set_3d_view(fig, azimuth=50, elevation=90, distance=0.5)
"""
Explanation: It's probably evident from the 2D topomap above that there is some
irregularity in the EEG sensor positions in the sample dataset
<sample-dataset> — this is because the sensor positions in that dataset are
digitizations of the sensor positions on an actual subject's head, rather
than idealized sensor positions based on a spherical head model. Depending on
what system was used to digitize the electrode positions (e.g., a Polhemus
Fastrak digitizer), you must use different montage reading functions (see
dig-formats). The resulting :class:montage <mne.channels.DigMontage>
can then be added to :class:~mne.io.Raw objects by passing it to the
:meth:~mne.io.Raw.set_montage method (just as we did above with the name of
the idealized montage 'standard_1020'). Once loaded, locations can be
plotted with :meth:~mne.channels.DigMontage.plot and saved with
:meth:~mne.channels.DigMontage.save, like when working with a standard
montage.
<div class="alert alert-info"><h4>Note</h4><p>When setting a montage with :meth:`~mne.io.Raw.set_montage`
the measurement info is updated in two places (the ``chs``
and ``dig`` entries are updated). See `tut-info-class`.
``dig`` may contain HPI, fiducial, or head shape points in
addition to electrode locations.</p></div>
Rendering sensor position with mayavi
It is also possible to render an image of a MEG sensor helmet in 3D, using
mayavi instead of matplotlib, by calling :func:mne.viz.plot_alignment
End of explanation
"""
layout_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'layouts')
print('\nBUILT-IN LAYOUT FILES')
print('=====================')
print(sorted(os.listdir(layout_dir)))
"""
Explanation: :func:~mne.viz.plot_alignment requires an :class:~mne.Info object, and
can also render MRI surfaces of the scalp, skull, and brain (by passing
keywords like 'head', 'outer_skull', or 'brain' to the
surfaces parameter) making it useful for assessing coordinate frame
transformations <plot_source_alignment>. For examples of various uses of
:func:~mne.viz.plot_alignment, see plot_montage,
:doc:../../auto_examples/visualization/plot_eeg_on_scalp, and
:doc:../../auto_examples/visualization/plot_meg_sensors.
Working with layout files
As with montages, many layout files are included during MNE-Python
installation, and are stored in the :file:mne/channels/data/layouts folder:
End of explanation
"""
biosemi_layout = mne.channels.read_layout('biosemi')
biosemi_layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout)
"""
Explanation: You may have noticed that the file formats and filename extensions of the
built-in layout and montage files vary considerably. This reflects different
manufacturers' conventions; to make loading easier the montage and layout
loading functions in MNE-Python take the filename without its extension so
you don't have to keep track of which file format is used by which
manufacturer.
To load a layout file, use the :func:mne.channels.read_layout function, and
provide the filename without its file extension. You can then visualize the
layout using its :meth:~mne.channels.Layout.plot method, or (equivalently)
by passing it to :func:mne.viz.plot_layout:
End of explanation
"""
midline = np.where([name.endswith('z') for name in biosemi_layout.names])[0]
biosemi_layout.plot(picks=midline)
"""
Explanation: Similar to the picks argument for selecting channels from
:class:~mne.io.Raw objects, the :meth:~mne.channels.Layout.plot method of
:class:~mne.channels.Layout objects also has a picks argument. However,
because layouts only contain information about sensor name and location (not
sensor type), the :meth:~mne.channels.Layout.plot method only allows
picking channels by index (not by name or by type). Here we find the indices
we want using :func:numpy.where; selection by name or type is possible via
:func:mne.pick_channels or :func:mne.pick_types.
End of explanation
"""
layout_from_raw = mne.channels.make_eeg_layout(raw.info)
# same result as: mne.channels.find_layout(raw.info, ch_type='eeg')
layout_from_raw.plot()
"""
Explanation: If you're working with a :class:~mne.io.Raw object that already has sensor
positions incorporated, you can create a :class:~mne.channels.Layout object
with either the :func:mne.channels.make_eeg_layout function or
(equivalently) the :func:mne.channels.find_layout function.
End of explanation
"""
|
NuGrid/NuPyCEE | regression_tests/Stellab_tests.ipynb | bsd-3-clause | # Import the needed packages
import matplotlib
import matplotlib.pyplot as plt
# Import the observational data module
import stellab
import sys
# Trigger interactive or non-interactive depending on command line argument
__RUNIPY__ = sys.argv[0]
if __RUNIPY__:
%matplotlib inline
else:
%pylab nbagg
"""
Explanation: STELLAB test notebook
The STELLAB module (a contraction of Stellar Abundances) makes it possible to plot observational data for comparison with galactic chemical evolution (GCE) predictions. The abundance ratios are presented in the following spectroscopic notation:
$$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_\odot.$$
The following sections describe how to use the code.
End of explanation
"""
# Create an instance of Stellab
s = stellab.stellab()
# Plot observational data (you can try all the ratios you want)
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Eu/Fe]')
plt.xlim(-4.5,0.75)
plt.ylim(-1.6,1.6)
"""
Explanation: Simple Plot
In order to plot observed stellar abundances, you just need to enter the desired ratios with the xaxis and yaxis parameters. Stellab is written so that any abundance ratio can be plotted (see Appendix A below), as long as the considered data sets contain the elements. In this example, we consider the Milky Way.
End of explanation
"""
# First, you can see the list of the available solar abundances
s.list_solar_norm()
"""
Explanation: Solar Normalization
By default, the solar normalization $\log(n_A/n_B)_\odot$ is taken from the reference paper that provides the data set. But every data point can be re-normalized to any other solar values (see Appendix B), using the norm parameter. This is highly recommended, since the original data sets may not share the same solar normalization.
End of explanation
"""
# Plot using the default solar normalization of each data set
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]')
plt.xlim(-4.5,0.75)
plt.ylim(-1.4,1.6)
# Plot using the same solar normalization for all data sets
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]',norm='Asplund_et_al_2009')
plt.xlim(-4.5,0.75)
plt.ylim(-1.4,1.6)
"""
Explanation: Here is an example of how the observational data can be re-normalized.
End of explanation
"""
# First, you can see the list of the available reference papers
s.list_ref_papers()
# Create a list of reference papers
obs = ['stellab_data/milky_way_data/Jacobson_et_al_2015_stellab',\
'stellab_data/milky_way_data/Venn_et_al_2004_stellab',\
'stellab_data/milky_way_data/Yong_et_al_2013_stellab',\
'stellab_data/milky_way_data/Bensby_et_al_2014_stellab']
# Plot data using your selection of data points
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]', norm='Asplund_et_al_2009', obs=obs)
plt.xlim(-4.5,0.7)
plt.ylim(-1.4,1.6)
"""
Explanation: Important Note
In some papers, I had a hard time finding the solar normalization used by the authors. This means I cannot apply the re-normalization for their data sets. When that happens, I print a warning below the plot and add two asterisks after the reference paper in the legend.
Personal Selection
You can select a subset of the observational data implemented in Stellab.
End of explanation
"""
# Plot data using a specific galaxy
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Si/Fe]',norm='Asplund_et_al_2009', galaxy='fornax')
plt.xlim(-4.5,0.75)
plt.ylim(-1.4,1.4)
"""
Explanation: Galaxy Selection
The Milky Way (milky_way) is the default galaxy. But you can select another galaxy among Sculptor, Fornax, and Carina (use lower case letters).
End of explanation
"""
# Plot error bars for a specific galaxy
s.plot_spectro(xaxis='[Fe/H]',yaxis='[Ti/Fe]',\
norm='Asplund_et_al_2009', galaxy='sculptor', show_err=True, show_mean_err=True)
plt.xlim(-4.5,0.75)
plt.ylim(-1.4,1.4)
"""
Explanation: Plot Error Bars
It is possible to plot error bars with the show_err parameter, and print the mean errors with the show_mean_err parameter.
End of explanation
"""
# Everything should be on a horizontal line
s.plot_spectro(xaxis='[Mg/H]', yaxis='[Ti/Ti]')
plt.xlim(-1,1)
plt.ylim(-1,1)
# Everything should be on a vertical line
s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Mg]')
plt.xlim(-1,1)
plt.ylim(-1,1)
# Everything should be at zero
s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Ti]')
plt.xlim(-1,1)
plt.ylim(-1,1)
"""
Explanation: Appendix A - Abundance Ratios
Let's consider that a data set provides stellar abundances in the form of [X/Y], where Y is the reference element (often H or Fe) and X represents any element. It is possible to change the reference element by using simple substractions and additions.
Subtraction
Let's say we want [Ca/Mg] from [Ca/Fe] and [Mg/Fe].
$$[\mathrm{Ca}/\mathrm{Mg}]=\log(n_\mathrm{Ca}/n_\mathrm{Mg})-\log(n_\mathrm{Ca}/n_\mathrm{Mg})_\odot$$
$$=\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)_\odot$$
$$=\log(n_\mathrm{Ca}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{Ca}/n_\mathrm{Fe})_\odot+\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot$$
$$=[\mathrm{Ca}/\mathrm{Fe}]-[\mathrm{Mg}/\mathrm{Fe}]$$
Addition
Let's say we want [Mg/H] from [Fe/H] and [Mg/Fe].
$$[\mathrm{Mg}/\mathrm{H}]=\log(n_\mathrm{Mg}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{H})_\odot$$
$$=\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)_\odot$$
$$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{H}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot+\log(n_\mathrm{H}/n_\mathrm{Fe})_\odot$$
$$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})+\log(n_\mathrm{Fe}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot-\log(n_\mathrm{Fe}/n_\mathrm{H})_\odot$$
$$=[\mathrm{Mg}/\mathrm{Fe}]+[\mathrm{Fe}/\mathrm{H}]$$
Test
End of explanation
"""