# choderalab/assaytools | examples/direct-fluorescence-assay/1b Simulating fluorescence binding data - protein concentration design a la Nick Levinson.ipynb | lgpl-2.1
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%pylab inline
"""
Explanation: In this notebook we will explore how varying the protein concentration affects our fluorescence assay results.
We will simulate the expected fluorescence for ligands binding a protein, given known Kd values and protein concentrations.
End of explanation
"""
# Now we can use this to define a function that gives us PL from Kd, Ptot, and Ltot.
def two_component_binding(Kd, Ptot, Ltot):
"""
Parameters
----------
Kd : float
Dissociation constant
Ptot : float
Total protein concentration
Ltot : float
Total ligand concentration
Returns
-------
P : float
Free protein concentration
L : float
Free ligand concentration
PL : float
Complex concentration
"""
PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (M)
P = Ptot - PL  # free protein concentration (M)
L = Ltot - PL  # free ligand concentration (M)
return [P, L, PL]
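As a quick sanity check (an addition, not part of the original notebook), we can evaluate the function in a limiting case that is easy to reason about: with total ligand far in excess of both Kd and total protein, essentially all of the protein should end up in complex, and mass should be conserved.

```python
import numpy as np

def two_component_binding(Kd, Ptot, Ltot):
    # Same quadratic solution as in the cell above, repeated so this check runs standalone
    PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot))
    return [Ptot - PL, Ltot - PL, PL]

P, L, PL = two_component_binding(Kd=100e-9, Ptot=500e-9, Ltot=20e-6)
print(PL / 500e-9)  # fraction of protein bound; close to 1 since Ltot >> Kd, Ptot
```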
"""
Explanation: Here we are using the same setup as 'Simulating Experimental Fluorescence Binding Data'.
End of explanation
"""
Kd_high = 0.1e-9 # M
Kd_mid_high = 1e-9 # M
Kd_mid_low = 10e-9 # M
Kd_low = 100e-9 # M
"""
Explanation: Let's say we have four ligands with 0.1, 1, 10, and 100 nM binding affinities.
End of explanation
"""
Ptot = 500e-9 # M
"""
Explanation: In this case we will define our protein concentration as higher than the Kd of our lowest affinity ligand.
End of explanation
"""
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
[P, L, PL] = two_component_binding(Kd_low, Ptot, Ltot)
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot, PL, 'ko')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL))
plt.axvline(Kd_low,color='r',linestyle='--',label='K_d')
plt.legend(loc=0);
[P_high, L_high, PL_high] = two_component_binding(Kd_high, Ptot, Ltot)
[P_mid_high, L_mid_high, PL_mid_high] = two_component_binding(Kd_mid_high, Ptot, Ltot)
[P_mid_low, L_mid_low, PL_mid_low] = two_component_binding(Kd_mid_low, Ptot, Ltot)
[P_low, L_low, PL_low] = two_component_binding(Kd_low, Ptot, Ltot)
plt.semilogx(Ltot, PL_high, 'bo',label = 'high')
plt.semilogx(Ltot, PL_mid_high, 'ro',label = 'mid_high')
plt.semilogx(Ltot, PL_mid_low, 'go',label = 'mid_low')
plt.semilogx(Ltot, PL_low, 'ko',label = 'low')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL))
plt.legend(loc=0);
plt.semilogx(Ltot, PL_high, 'bo',label = 'high')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL))
plt.axvline(Kd_high,color='r',linestyle='--',label='K_d')
plt.legend(loc=0);
"""
Explanation: We will define the ligand concentration as previously done. (Half log dilution from 20 uM.)
End of explanation
"""
# Our high-affinity ligand has an affinity of 0.1e-9 M
Ptot_dilute = 0.5e-9 # M
Ptot_mid_dilute = 5e-9 # M
Ptot_mid_conc = 50e-9 # M
Ptot_conc = 500e-9 # M
[P_dilute, L_dilute, PL_dilute] = two_component_binding(Kd_high, Ptot_dilute, Ltot)
[P_mid_dilute, L_mid_dilute, PL_mid_dilute] = two_component_binding(Kd_high, Ptot_mid_dilute, Ltot)
[P_mid_conc, L_mid_conc, PL_mid_conc] = two_component_binding(Kd_high, Ptot_mid_conc, Ltot)
[P_conc, L_conc, PL_conc] = two_component_binding(Kd_high, Ptot_conc, Ltot)
plt.semilogx(Ltot, PL_dilute, 'bo',label = 'dilute')
plt.semilogx(Ltot, PL_mid_dilute, 'ro',label = 'mid_dilute')
plt.semilogx(Ltot, PL_mid_conc, 'go',label = 'mid_conc')
plt.semilogx(Ltot, PL_conc, 'ko',label = 'conc')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL))
plt.legend(loc=0);
#This will be easier to see if we do fraction bound
plt.semilogx(Ltot, PL_dilute/Ptot_dilute, 'bo',label = 'dilute')
plt.semilogx(Ltot, PL_mid_dilute/Ptot_mid_dilute, 'ro',label = 'mid_dilute')
plt.semilogx(Ltot, PL_mid_conc/Ptot_mid_conc, 'go',label = 'mid_conc')
plt.semilogx(Ltot, PL_conc/Ptot_conc, 'ko',label = 'conc')
plt.axvline(Kd_high,color='0.3',linestyle='--',label='K_d')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]/[P_tot]$')
plt.ylim(0, 1.05)
plt.legend(loc=0);
"""
Explanation: Hmm. Let's see if varying our protein concentration for our high affinity ligand so that it's closer to our Kd can give us better fluorescence data.
End of explanation
"""
[P_dilute_mid_high, L_dilute_mid_high, PL_dilute_mid_high] = two_component_binding(Kd_mid_high, Ptot_dilute, Ltot)
[P_mid_dilute_mid_high, L_mid_dilute_mid_high, PL_mid_dilute_mid_high] = two_component_binding(Kd_mid_high, Ptot_mid_dilute, Ltot)
[P_mid_conc_mid_high, L_mid_conc_mid_high, PL_mid_conc_mid_high] = two_component_binding(Kd_mid_high, Ptot_mid_conc, Ltot)
[P_conc_mid_high, L_conc_mid_high, PL_conc_mid_high] = two_component_binding(Kd_mid_high, Ptot_conc, Ltot)
Ptot_dilute = 0.5e-9 # M
Ptot_mid_dilute = 5e-9 # M
Ptot_mid_conc = 50e-9 # M
Ptot_conc = 500e-9 # M
plt.figure(figsize=(12,8));
sns.set(style='white')
sns.set_context('talk')
plt.subplot(221)
plt.semilogx(Ltot, PL_dilute/Ptot_dilute, 'bo',label = 'dilute')
plt.semilogx(Ltot, PL_dilute_mid_high/Ptot_dilute, 'bo',alpha=0.3,label = 'dilute mid_high')
plt.axvline(Kd_high,color='0.3',linestyle='--',label='K_d = 0.1 nM')
plt.axvline(Kd_mid_high,color='0.7',linestyle='--',label='K_d = 1.0 nM')
plt.title('$[Ptot] = 0.5 nM$')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]/[P_tot]$')
plt.ylim(0, 1.05)
plt.legend(loc=0,frameon=True);
plt.subplot(222)
plt.semilogx(Ltot, PL_mid_dilute/Ptot_mid_dilute, 'ro')
plt.semilogx(Ltot, PL_mid_dilute_mid_high/Ptot_mid_dilute, 'ro',alpha=0.3)
plt.axvline(Kd_high,color='0.3',linestyle='--',label='K_d = 0.1 nM')
plt.axvline(Kd_mid_high,color='0.7',linestyle='--',label='K_d = 1.0 nM')
plt.title('$[Ptot] = 5 nM$')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]/[P_tot]$')
plt.ylim(0, 1.05)
plt.legend(loc=0,frameon=True);
plt.subplot(223)
plt.semilogx(Ltot, PL_mid_conc/Ptot_mid_conc, 'go')
plt.semilogx(Ltot, PL_mid_conc_mid_high/Ptot_mid_conc, 'go',alpha=0.3)
plt.axvline(Kd_high,color='0.3',linestyle='--',label='K_d = 0.1 nM')
plt.axvline(Kd_mid_high,color='0.7',linestyle='--',label='K_d = 1.0 nM')
plt.title('$[Ptot] = 50 nM$')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]/[P_tot]$')
plt.ylim(0, 1.05)
plt.legend(loc=0,frameon=True);
plt.subplot(224)
plt.semilogx(Ltot, PL_conc/Ptot_conc, 'ko')
plt.semilogx(Ltot, PL_conc_mid_high/Ptot_conc, 'ko',alpha=0.3)
plt.axvline(Kd_high,color='0.3',linestyle='--',label='K_d = 0.1 nM')
plt.axvline(Kd_mid_high,color='0.7',linestyle='--',label='K_d = 1.0 nM')
plt.title('$[Ptot] = 500 nM$')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]/[P_tot]$')
plt.ylim(0, 1.05)
plt.legend(loc=0,frameon=True);
plt.tight_layout();
"""
Explanation: Let's see how well this helps us discern between our 'high' and 'mid_high' affinity ligands.
End of explanation
"""
# The other question here is whether the molar fluorescence of our complex
# will allow us to go to this low a concentration.
plt.figure(figsize=(12,4));
plt.subplot(121)
plt.semilogx(Ltot, PL_dilute, 'bo',label = 'dilute')
plt.semilogx(Ltot, PL_mid_dilute, 'ro',label = 'mid_dilute')
plt.semilogx(Ltot, PL_mid_conc, 'go',label = 'mid_conc')
plt.semilogx(Ltot, PL_conc, 'ko',label = 'conc \n max=%s'%np.max(PL_conc))
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL_conc))
plt.legend(loc=0);
plt.subplot(122)
plt.semilogx(Ltot, PL_dilute, 'bo',label = 'dilute \n max=%s' %np.max(PL_dilute))
plt.semilogx(Ltot, PL_mid_dilute, 'ro',label = 'mid_dilute')
plt.semilogx(Ltot, PL_mid_conc, 'go',label = 'mid_conc')
plt.semilogx(Ltot, PL_conc, 'ko',label = 'conc')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL_dilute))
plt.legend(loc=0);
"""
Explanation: "As long as the affinity is much tighter than the concentration of macromolecule your binding curve will look similar to this hypothetical extreme case, and you will only be able to place a lower bound on the affinity." - Nick Levinson
There you go!
End of explanation
"""
# mulhod/spaCy_demo | Cython_demo_notebook.ipynb | mit
import timeit
# There are two packages, one containing regular Python modules and
# the other containing corresponding Cython modules
"""
Explanation: Cython Demo
End of explanation
"""
# Let's create a C extension from the `hello` module
! rm -f awesome_cython_stuff/hello.c awesome_cython_stuff/hello*.so awesome_cython_stuff/hello.html
! ls awesome_cython_stuff/hello*
# The Cython module contains code that is unmodified Python code (in this case)
! more awesome_cython_stuff/hello.pyx
"""
Explanation: Cythonization
End of explanation
"""
# Example setup.py
! head -25 setup.py
# After generating the C extension, there will be a .c file and a .so file,
# the latter being the more important since that is what actually gets
# imported
%%bash
ROOTENV=$(conda info | grep "root environment :" | awk '{print $4}')
PYTHON_HEADER_DIR=${ROOTENV}/pkgs/python-3.4.3-2/include/python3.4m
cython -a awesome_cython_stuff/hello.pyx
gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I${PYTHON_HEADER_DIR} -o awesome_cython_stuff/hello.so awesome_cython_stuff/hello.c
ls awesome_cython_stuff/hello*
"""
Explanation: C Extensions
To generate the C extension, there are a couple ways:
1. Run cython manually to generate the .c file and then run gcc to generate the .so:
cython -a awesome_cython_stuff/hello.pyx
gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I${PYTHON_HEADER_DIR} -o awesome_cython_stuff/hello.so awesome_cython_stuff/hello.c
2. Run setup.py (using distutils.extension.Extension and Cython.Distutils.build_ext or Cython.Build.cythonize, etc.) to generate the extensions:
python setup.py install
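A minimal setup.py along these lines might look as follows. This is a sketch, not the repository's actual setup.py (which you can inspect with the head command above); the package name is assumed.

```python
# Hypothetical minimal setup.py using Cython.Build.cythonize
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="awesome_cython_stuff",
    ext_modules=cythonize("awesome_cython_stuff/*.pyx", annotate=True),
)
```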
End of explanation
"""
from regular_old_yet_fine_python_stuff.hello import say_hello as say_hello_python
from awesome_cython_stuff.hello import say_hello as say_hello_cython
say_hello_python()
say_hello_cython()
# Let's see what kind of difference there is in terms of speed
t = timeit.Timer("say_hello_python()", "from regular_old_yet_fine_python_stuff.hello import say_hello as say_hello_python")
print("Python function: {} seconds".format(t.timeit(100000)))
t = timeit.Timer("say_hello_cython()", "from awesome_cython_stuff.hello import say_hello as say_hello_cython")
print("Cython function: {} seconds".format(t.timeit(100000)))
# The C extension version is *usually* at least marginally faster even though
# the original code is unchanged
"""
Explanation: Hello World in Python vs. Cython
End of explanation
"""
# The "great circle" function calculates the distance between two points on
# the surface of the earth
# Source: http://blog.perrygeo.net/2008/04/19/a-quick-cython-introduction/
# Python version
! cat regular_old_yet_fine_python_stuff/great_circle.py
# Cython version (with modified Python code)
# All that's different is we're using C data types
! cat awesome_cython_stuff/great_circle.pyx
from regular_old_yet_fine_python_stuff.great_circle import great_circle as great_circle_python
from awesome_cython_stuff.great_circle import great_circle as great_circle_cython
lon1, lat1, lon2, lat2 = -72.345, 34.323, -61.823, 54.826
args = "lon1, lat1, lon2, lat2"
print("great_circle_python({1}) = {0}".format(great_circle_python(lon1, lat1, lon2, lat2),
args))
print("great_circle_cython({1}) = {0}".format(great_circle_cython(lon1, lat1, lon2, lat2),
args))
# Test
num = 100000
t1 = timeit.Timer("great_circle_python(%f, %f, %f, %f)" % (lon1, lat1, lon2, lat2),
"from regular_old_yet_fine_python_stuff.great_circle import great_circle as great_circle_python")
t2 = timeit.Timer("great_circle_cython(%f, %f, %f, %f)" % (lon1, lat1, lon2, lat2),
"from awesome_cython_stuff.great_circle import great_circle as great_circle_cython")
print("Pure python function: {} seconds".format(t1.timeit(num)))
print("Cython function: {} seconds".format(t2.timeit(num)))
# The C extension function is faster by a small, but consistent amount
# (though the difference decreases as num increases)
# Let's try to speed up the C extension by using more C code
! cat awesome_cython_stuff/great_circle_2.pyx
# Now, it does not use the Python `math` library at all to do the calculations
from awesome_cython_stuff.great_circle_2 import great_circle_2
print("great_circle_2({1}) = {0}".format(great_circle_2(lon1, lat1, lon2, lat2),
args))
# Test
t3 = timeit.Timer("great_circle_2(%f, %f, %f, %f)" % (lon1, lat1, lon2, lat2),
"from awesome_cython_stuff.great_circle_2 import great_circle_2")
print("Pure python function: {} seconds".format(t1.timeit(num)))
print("Cython function #2: {} seconds".format(t3.timeit(num)))
# Now the difference really shows as the Cython module is 10x faster
"""
Explanation: "Great Circle" Function in Python vs. Cython
End of explanation
"""
# liganega/Gongsu-DataSci | notebooks/GongSu06_Errors_and_Exception_Handling.ipynb | gpl-3.0
input_number = input("A number please: ")
number = int(input_number)
print("The square is", number**2, ".")
input_number = input("A number please: ")
number = int(input_number)
print("The square is", number**2, ".")
"""
Explanation: Errors and Exception Handling
Overview
A look at the various errors that can occur while coding
How to read the information in error messages
An introduction to exception handling, i.e. preparing in advance for exceptional situations in which errors may occur
Today's main example
The code below uses the input() function to read a number from the user
and returns the square of that number.
Running the code brings up a prompt asking for a number;
entering the number 3 works as expected.
But entering, for example, 3.2 raises a value error (ValueError).
End of explanation
"""
sentence = 'I am a sentence
"""
Explanation: The code above computes the squares of integers.
However, if the user enters a value that is not an integer, the program crashes.
We now want to address this problem.
Error examples
First, let us look at various examples of errors.
Each of the following code snippets raises an error.
Example: division-by-zero error
python
4.6/0
Error explanation: you cannot divide by 0.
Example: syntax error
python
sentence = 'I am a sentence
Error explanation: the quotation marks at both ends of a string must match,
* i.e. both single quotes or both double quotes.
Example: indentation error
python
for i in range(3):
    j = i * 2
  print(i, j)
Error explanation: lines 2 and 3 must be indented by the same amount.
Example: type error
All of the operations below raise errors.
```python
new_string = 'cat' - 'dog'
new_string = 'cat' * 'dog'
new_string = 'cat' / 'dog'
new_string = 'cat' + 3
new_string = 'cat' - 3
new_string = 'cat' / 3
```
Reason: only concatenation of two strings, and multiplication of a string by an integer, are defined.
Example: name error
python
print(party)
Error explanation: only variables that have been declared can be used.
Example: index error
python
a_string = 'abcdefg'
a_string[12]
Error explanation: an index must be a number smaller than the length of the string.
Example: value error
python
int(a_string)
Error explanation: the int() function can only handle strings made up entirely of digits.
Example: attribute error
python
print(a_string.len())
Error explanation: the string type has no len() method.
Note: there is a len() function that returns the length of a string, but it is not a string method.
It can also be used with lists, tuples, and other types covered later.
Checking errors
Running the code snippets above raises errors, and the Python interpreter
immediately reports where the error occurred and what kind of error it is.
Example
End of explanation
"""
a = 0
4/a
"""
Explanation: Error messages look very unfamiliar at first.
Briefly, the message above can be read as follows.
File "<ipython-input-3-a6097ed4dc2e>", line 1
The error occurred on line 1
sentence = 'I am a sentence
^
The caret marks where the error was found
SyntaxError: EOL while scanning string literal
The kind of error: a syntax error (SyntaxError)
Example
The example below shows the error that occurs when dividing by 0.
Examine the error information carefully and check what it contains.
End of explanation
"""
number_to_square = input("Enter an integer: ")
# Note that the number_to_square variable is of string (str) type.
# To do arithmetic with it, we first have to convert it to an integer (int).
number = int(number_to_square)
print("The square is", number**2, ".")
number_to_square = input("Enter an integer: ")
# Note that the number_to_square variable is of string (str) type.
# To do arithmetic with it, we first have to convert it to an integer (int).
number = int(number_to_square)
print("The square is", number**2, ".")
"""
Explanation: Kinds of errors
As the examples above have shown, many different kinds of errors occur,
and the longer or more complex the code becomes, the more likely errors are.
Identifying the kind of error makes it easier to figure out where and why
the error occurred, so that the code can be fixed.
You should therefore be able to determine the cause of an error right away,
and for that you need to be able to read error messages properly.
Here, however, we only go as far as the examples covered above.
As you code you will inevitably encounter all sorts of errors, and the only
way forward is to gain experience by working out the content and cause of
each error yourself as it comes up.
Exception handling
If the code contains a syntax error, it does not run at all.
Otherwise it starts running, and stops as soon as an error occurs.
Anticipating and preparing for the cases in which an error may occur midway
is called exception handling.
For example, exception handling is used to save the results produced before an error occurred, to treat the reason for the error in more detail, or to give the user more detailed information about why the error occurred.
It is used as follows:
python
try:
    code1
except:
    code2
* First, the code1 block is executed.
* If no error occurs while running code1, code2 is skipped and execution moves on.
* If an error occurs while running code1, execution goes no further and code2 is run immediately.
Example
The code below uses the input() function to read a number from the user and return its square, and it contains no syntax errors.
Running the code brings up a prompt asking for a number.
Entering the number 3 works as expected,
but entering, say, 3.2 raises a value error (ValueError).
End of explanation
"""
number_to_square = input("Enter an integer: ")
try:
    number = int(number_to_square)
    print("The square is", number ** 2, ".")
except:
    print("You must enter an integer.")
"""
Explanation: Entering 3.2 causes an error because the int() function can only
handle strings that look like integers.
We wrote a program that computes the squares of integers, but users will
occasionally enter values that are not integers, and we must prepare for that case.
That is, we must anticipate that an error may occur and be ready to respond;
the try ... except ... statement provides a way to handle such exceptions.
End of explanation
"""
while True:
    try:
        number = int(input("Enter an integer: "))
        print("The square is", number**2, ".")
        break
    except:
        print("You must enter an integer.")
"""
Explanation: We can keep demanding input until a valid value is entered.
End of explanation
"""
number_to_square = input("Enter an integer: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please enter a number other than 4.")
"""
Explanation: To respond appropriately to each kind of error, specify the error type when handling the exception.
The code below raises different errors depending on the input value and handles each with a matching response.
The value error (ValueError) case
End of explanation
"""
number_to_square = input("A number please: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please enter a number other than 4.")
"""
Explanation: The division-by-zero error (ZeroDivisionError) case
End of explanation
"""
try:
    a = 1/0
except ValueError:
    print("This program stops here.")
"""
Explanation: Note: writing a program that anticipates, as far as possible,
every exception that might occur is a very difficult task.
As we saw earlier, it requires knowing the error types precisely.
As the next example shows, if the wrong error type is specified,
the exception handling does not work as intended.
End of explanation
"""
def to_define():
    """Quite complicated, but not needed right now."""
    raise NotImplementedError("Not yet implemented")
print(to_define())
"""
Explanation: The raise statement
Used to raise an error deliberately.
Example
When you want to put off defining a function precisely and handle other,
more important work first,
you can declare the function as shown below and move on.
If the function is then called elsewhere without having been properly defined,
the message
"Not yet implemented"
lets you know what happened.
End of explanation
"""
def to_define1():
    """Quite complicated, but not needed right now."""
print(to_define1())
"""
Explanation: Note: be aware that if you do not use error handling, an error message may not appear at all.
End of explanation
"""
def square(number):
    """
    Takes an integer as argument and returns its square.
    """
    square_of_number = number * 2
    return square_of_number
"""
Explanation: The problem of code safety
Even when no syntax or runtime error occurs, the safety of the code is not guaranteed.
Code safety here means a guarantee that running the code produces the expected result.
Example
The code below deals with a case in which the square() function, meant to return the square of a number, has not been implemented correctly.
End of explanation
"""
square(3)
"""
Explanation: Calling the function as below raises no error at all,
yet it returns a completely wrong value.
End of explanation
"""
help(square)
"""
Explanation: Note: help() can be used to check what a function does.
It shows the documentation string (docstring) written when the function was defined.
Therefore, when defining a function, put as much useful information as possible in its docstring.
End of explanation
"""
number_to_square = input("Enter a number to divide 100 by: ")
number = int(number_to_square)
print("100 divided by the entered value is", 100/number, ".")
"""
Explanation: More detailed information on errors
More detailed information on the errors handled by Python can be found at the sites below.
Python built-in exceptions reference:
https://docs.python.org/3.4/library/exceptions.html
Python exception handling tutorial:
https://docs.python.org/3.4/tutorial/errors.html
Exercises
Exercise
The code below is a program that divides 100 by the entered value.
However, entering 0 raises a division-by-zero error (ZeroDivisionError).
End of explanation
"""
number_to_square = input("A number to divide 100: ")
try:
    number = float(number_to_square)
    print("100 divided by the entered value is", 100/number, ".")
except ZeroDivisionError:
    raise ZeroDivisionError('Enter a nonzero number.')
except ValueError:
    raise ValueError('Enter a number.')
number_to_square = input("A number to divide 100: ")
try:
    number = float(number_to_square)
    print("100 divided by the entered value is", 100/number, ".")
except ZeroDivisionError:
    raise ZeroDivisionError('Enter a nonzero number.')
except ValueError:
    raise ValueError('Enter a number.')
"""
Explanation: Modify the code above so that the following hold.
The division is computed in floating point.
If a nonzero number is entered, 100 is divided by that number.
If 0 is entered, the user is told to enter a nonzero number.
If a value that is not a number is entered, the user is told to enter a number.
Sample answer:
End of explanation
"""
while True:
    try:
        a, b = input("Enter two integers, separated by a comma.\n").split(',')
        a, b = int(a), int(b)
        print("The result is", a/b, ".")
        break
    except ValueError:
        print("You must enter two integers separated by a comma.\n")
    except ZeroDivisionError:
        print("The second number must not be 0.\n")
"""
Explanation: Exercise
Write code that reads two integers a and b and prints the result of a/b.
Sample answer 1:
End of explanation
"""
while True:
    try:
        a, b = map(int, input("Enter two integers, separated by a comma.\n").split(','))
        print("The result is", a/b, ".")
        break
    except ValueError:
        print("You must enter two integers separated by a comma.\n")
    except ZeroDivisionError:
        print("The second number must not be 0.\n")
"""
Explanation: Sample answer 2: using the map function, the int function can be applied to both a and b automatically.
For an explanation of the map function, see here.
End of explanation
"""
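As a standalone illustration (using a fixed string in place of input(), so it runs without user interaction), the parsing pattern from sample answer 2 works like this:

```python
# map applies int to each comma-separated piece; int() tolerates surrounding spaces
text = "6, 3"
a, b = map(int, text.split(','))
print(a, b, a / b)  # 6 3 2.0
```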
while True:
    try:
        print("Enter your height and weight: ")
        a, b = map(float, input().split(", "))
        BMI = b/(a**2)
        if BMI <= 18.5:
            print("Your BMI is", BMI, "- underweight.")
        elif 18.5 < BMI <= 23:
            print("Your BMI is", BMI, "- normal.")
        elif 23 < BMI <= 25:
            print("Your BMI is", BMI, "- overweight.")
        elif 25 < BMI <= 30:
            print("Your BMI is", BMI, "- obese.")
        else:
            print("Your BMI is", BMI, "- severely obese.")
        break
    except ValueError:
        print("Enter numbers.")
    except ZeroDivisionError:
        print("Enter a nonzero height.")
"""
Explanation: Exercise
Write code that takes a height and a weight as arguments and computes the body mass index (BMI).
Refer to the following.
$$BMI = \frac{weight}{height^2}$$
Units:
weight: kg
height: m
Weight classification by BMI
BMI <= 18.5: underweight
18.5 < BMI <= 23: normal
23 < BMI <= 25: overweight
25 < BMI <= 30: obese
BMI > 30: severely obese
Sample answer:
End of explanation
"""
# torgebo/deep_learning_workshop | 4-gan/1-gan-multimodal-distribution.ipynb | mit
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt  # used by the plotting cells below
import time                      # used to pace the live plot updates
import admin.tools as tools
"""
Explanation: Generative Adversarial Networks 1
<div class="alert alert-warning">
In this notebook we will use what we have learned about artificial neural networks to explore generative modelling with generative adversarial networks.
</div>
Generative adversarial networks, or GANs, is a generative modelling methodology by Goodfellow et al. [1] from 2014 that has garnered much interest these past few years. It is based on the idea of transforming samples of latent variables $\mathbf{z}$ to samples $\mathbf{x}$ of a probability distribution that we would like to learn. The transformation is done via a differentiable function, which typically is defined as an artifical neural network.
When viewed through the lens of game theory, a GAN consists of a generator and an adversary called the discriminator. The generator network $\mathbf{G}$ produces samples $\mathbf{x}$ by transforming latent variables $\mathbf{z}$ with the help of a neural network. The adversary, the discriminator network $\mathbf{D}$, attempts to discriminate between the samples $\mathbf{x}$ generated by $\mathbf{G}$ and the training data. In other words, the discriminator seeks to detect whether the input data is fake or real. At the same time, the generator attempts to fool the discriminator by generating plausible samples. A GAN has converged when the discriminator can no longer differentiate between real data and samples generated by the generator.
Distinguishing between fake and real data sounds like something we have done several times before; indeed, it is a binary classification problem. The original formulation of GANs as a zero-sum game can be seen below:
$$
\begin{equation}
\underset{\mathbf{G}}{\arg\min}\max_{\mathbf{D}}
\frac{1}{N}\sum_{i=1}^{N} \ln\mathbf{D}(\mathbf{x}_i)+\ln(1-\mathbf{D}(\mathbf{G}(\mathbf{z}_i)))
\end{equation}
$$
We can see that the discriminator tries to maximise the log-likelihood of giving the correct prediction, whilst the generator tries to minimise this quantity. In practice, the training is done by alternating optimisation between the generator and the discriminator.
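One aside worth noting here (an addition, not from the original workshop text): the generator's half of the minimax objective tends to saturate early in training, because the gradient of $\ln(1-\mathbf{D}(\mathbf{G}(\mathbf{z})))$ vanishes once the discriminator confidently rejects generated samples. Goodfellow et al. therefore also propose a non-saturating heuristic in which the generator instead maximises

$$
\frac{1}{N}\sum_{i=1}^{N} \ln\mathbf{D}(\mathbf{G}(\mathbf{z}_i))
$$

This is equivalent to training the generator on the discriminator's loss with the labels of generated samples flipped to "real", which is exactly the trick the training loop in this notebook uses.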
GANs are notoriously difficult to train, so for this and the next notebook we are going to have to do some simplifications. First, we are going to train on what we consider to be easy datasets:
Notebook 1: A 1-d multimodal distribution
Notebook 2: The MNIST database of handwritten digits
Secondly, most of the implementation will already be done for you; the focus will be on testing out different kinds of network definitions for the generator and the discriminator. This notebook will start with the 1-d multimodal distribution dataset, while the next one will handle the familiar MNIST dataset.
[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio “Generative Adversarial Nets” in: Advances in neural information processing systems 2014, pp. 2672–2680
Example: Multimodal distiribution
In this notebook we will use a GAN to generate sample from a 1-d multimodal distribution.
We will start loading our uni-dimensional data from a CSV file.
<div class="alert alert-info">
<strong>In the following snippet of code we will:</strong>
<ul>
<li>Load data from a CSV file</li>
<li>Plot the normalised data histogram</li>
</ul>
</div>
End of explanation
"""
X_data = data.values
"""
Explanation: Unpack the Pandas DataFrames to NumPy arrays:
End of explanation
"""
fig, ax = plt.subplots(1,1)
tools.plot_normalized_histogram(ax,X_data)
ax.set_title('Data Histogram')
ax.set_xlabel('Data Value')
ax.set_ylabel('Normalized frequency')
ax.grid()
fig.canvas.draw()
time.sleep(0.04)
"""
Explanation: Plot normalised histogram:
End of explanation
"""
# Import some useful Keras libraries
import keras
from keras.models import Model
from keras.layers import *
def generator(z_dim, nb_outputs, output_shape):
# Define the input_noise as a function of Input()
latent_var = None
# Insert the desired amount of layers for your network
x = None
# Map you latest layer to nb_outputs
x = None
# Reshape you data
x = Reshape(output_shape)(x)
model = Model(inputs=latent_var, outputs=x)
return model
"""
Explanation: Task I: Implement a Generator Network
As previously mentioned, the generator network is built to map a latent space to a specific data distribution.
In this task we will make a network that has as input a vector of zdim dimensions and is mapped to a pre-defined number of outputs. The number of outputs and its shape is defined by the data distribution we are learning.
<div class="alert alert-success">
<strong>Task:</strong> :
<ul>
<li>Make a network that accepts inputs where the shape is defined by `zdim` $\rightarrow$ `shape=(z_dim,)`</li>
<li>The number of outputs of your network need to be defined as `nb_outputs`</li>
<li>Reshape the final layer to be in the shape of `output_shape`</li>
</ul>
</div>
Since the data lies in the range [-1,1] try using the 'tanh' as the final activation function.
Keras references: Reshape()
End of explanation
"""
# Define the dimension of the latent vector
z_dim = 100
# Dimension of our sample
sample_dimentions = X_data[0].shape
# Calculate the number of dimensions in a sample
n_dimensions=1
for x in list(sample_dimentions):
n_dimensions *= x
print('A sample of data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))
# Create the generative network
G = generator(z_dim, n_dimensions, sample_dimentions)
# We recommend the following optimiser
g_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)
# Compile network
G.compile (loss='binary_crossentropy', optimizer=g_optim)
# Network Summary
G.summary()
"""
Explanation: Now, let's build a generative network using the function you just made.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Define the number of dimensions of the latent vector $\mathbf{z}$</li>
<li>Find out the shape of a sample of data</li>
<li>Compute the number of dimensions in a sample of data</li>
<li>Create the network using your function</li>
<li>Display a summary of your generator network</li>
</ul>
</div>
End of explanation
"""
def discriminator(input_shape, nb_inputs):
# Define the network input to have input_shape shape
input_x = None
# Reshape the input
x = None
# Implement the rest of you classifier
x = None
# Get the output activation (binary classification)
probabilities = Dense(1, activation='sigmoid')(x)
model = Model(inputs=input_x, outputs=probabilities)
return model
"""
Explanation: Task II: Implement a Discriminative Network
The discriminator network is a simple binary classifier where the output indicates the probability of the input data being real or fake.
<div class="alert alert-success">
<strong>Task:</strong>
<ul>
<li> Create a network where the input shape is `input_shape`
<li> We recomend reshaping your network just after input. This way you can have a vector with shape `(None, nb_inputs)`</li>
<li> Implement a simple network that can classify data</li>
</ul>
</div>
Keras references: Reshape()
End of explanation
"""
# We already computed the shape and number of dimensions in a data sample
print('The data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))
# Discriminative network
D = discriminator(sample_dimentions, n_dimensions)
# Recommended optimiser
d_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)
# Compile network
D.compile(loss='binary_crossentropy', optimizer=d_optim)
# Network summary
D.summary()
"""
Explanation: Now, let's build a discriminator network using the function you just made.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Create the network using your function</li>
<li>Display a summary of your generator network</li>
</ul>
</div>
End of explanation
"""
from keras.models import Sequential
def build(generator, discriminator):
"""Build a base model for a Generative Adversarial Networks.
Parameters
----------
generator : keras.engine.training.Model
A keras model built either with keras.models ( Model, or Sequential ).
This is the model that generates the data for the Generative Adversarial networks.
Discriminator : keras.engine.training.Model
A keras model built either with keras.models ( Model, or Sequential ).
This is the model that is a binary classifier for REAL/GENERATED data.
Returns
-------
(keras.engine.training.Model)
It returns a Sequential Keras Model by connecting a Generator model to a
Discriminator model. [ generator-->discriminator]
"""
model = Sequential()
model.add(generator)
discriminator.trainable = False
model.add(discriminator)
return model
# Create GAN
G_plus_D = build(G, D)
G_plus_D.compile(loss='binary_crossentropy', optimizer=g_optim)
D.trainable = True
"""
Explanation: Putting the GAN Together
In the following code we will put the generator and discriminator together so we can train our adversarial model.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Use the generator and discriminator to construct a GAN</li>
</ul>
</div>
End of explanation
"""
BATCH_SIZE = 32
NB_EPOCHS = 50
"""
Explanation: Task III: Define Hyperparameters
Please define the following hyper-parameters to train your GAN.
<br>
<div class="alert alert-success">
<strong>Task:</strong> Please define the following hyperparameters to train your GAN:
<ul>
<li> Batch size</li>
<li>Number of training epochs</li>
</ul>
</div>
End of explanation
"""
# Figure for live plot
fig, ax = plt.subplots(1,1)
# Allocate space for noise variable
z = np.zeros((BATCH_SIZE, z_dim))
# number of batches
number_of_batches = int(X_data.shape[0] / BATCH_SIZE)
for epoch in range(NB_EPOCHS):
for index in range(number_of_batches):
# Sample minibatch m=BATCH_SIZE from the data generating distribution
# in other words :
# Grab a batch of the real data
data_batch = X_data[index*BATCH_SIZE:(index+1)*BATCH_SIZE]
# Sample minibatch of m= BATCH_SIZE noise samples
# in other words, we sample from a uniform distribution
z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))
        # Sample minibatch of m=BATCH_SIZE points from the generator's distribution
        # in other words,
        # use the generator to create new fake samples
generated_batch = G.predict(z, verbose=0)
# Update/Train discriminator D
X = np.concatenate((data_batch, generated_batch))
        y = [1] * BATCH_SIZE + [0] * BATCH_SIZE  # labels: real = 1, generated = 0
d_loss = D.train_on_batch(X, y)
# Sample minibatch of m= BATCH_SIZE noise samples
# in other words, we sample from a uniform distribution
z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))
#Update Generator while not updating discriminator
D.trainable = False
# to do gradient ascent we just flip the labels ...
g_loss = G_plus_D.train_on_batch(z, [1] * BATCH_SIZE)
D.trainable = True
# Plot data every 10 mini batches
if index % 10 == 0:
ax.clear()
# Histogram of generated data
tools.plot_normalized_histogram(ax , generated_batch.flatten(), color='b',label='Generated')
# Histogram of real data
tools.plot_normalized_histogram(ax , X_data, color='y',label='Real')
# Plot details
ax.legend()
ax.grid()
ax.set_xlim([-1,1])
fig.canvas.draw()
time.sleep(0.01)
# End of epoch ....
print("epoch %d : g_loss : %f | d_loss : %f" % (epoch, g_loss, d_loss))
"""
Explanation: <div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Train the constructed GAN</li>
<li>Live plot the histogram of the generated data</li>
</ul>
</div>
End of explanation
"""
|
ucsdlib/python-novice-inflammation | 4-files & conditionals-short.ipynb | cc0-1.0 | print(glob.glob('data/inflammation*.csv'))
"""
Explanation: the glob library contains a function, also called glob, that finds files whose names match a pattern
* matches zero or more characters; ? matches exactly one character
End of explanation
"""
# loop here
counter = 0
for filename in glob.glob('data/*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
data = numpy.loadtxt(fname=filename, delimiter=',')
print(filename, "mean is: ", data.mean())
counter = counter + 1
print("number of files:", counter)
"""
Explanation: glob.glob returns a list of strings that we can loop over
we want to create sets of plots
End of explanation
"""
#We use an if statement to take different actions
#based on conditions
num = 37
if num > 100:
print('greater')
else:
print('not greater')
print('done')
"""
Explanation: We can ask Python to take different actions, depending on a condition, with an if statement:
Making choices
last lesson we discovered something suspicious in our inflammation data by drawing plots
how can Python recognize these different features and act on them?
we will write code that only runs under certain conditions
End of explanation
"""
num = 53
print('before conditional...')
if num > 100:
print('53 is greater than 100')
print('...after conditional')
"""
Explanation: the second line of code above uses the keyword if to denote a choice
if the test after if is true, the body of the if is executed
if the test is false, the body of the else is executed
conditional statements don't have to include an else - if it's not present, Python does nothing when the test is false
End of explanation
"""
num = -3
if num > 0:
print(num, "is positive")
elif num == 0:
print(num, "is zero")
else:
print(num, "is negative")
"""
Explanation: we can also chain several tests together using elif, short for else if
End of explanation
"""
if (1 > 0) and (-1 > 0):
print('both parts are true')
else:
print('at least one part is false')
"""
Explanation: NOTE: we use == to test for equality rather than a single = because the latter is the assignment operator
we can also combine tests using and and or. and is only true if both parts are true
End of explanation
"""
if (1 < 0) or (-1 < 0):
print('at least one test is true')
"""
Explanation: while or is true if at least one part is true:
End of explanation
"""
if 4 > 5:
print('A')
elif 4 == 5:
print('B')
elif 4 < 5:
print('C')
"""
Explanation: Challenge - making choices:
Which of the following would be printed if you were to run this code? Why did you pick this answer?
A
B
C
B and C
python
if 4 > 5:
print('A')
elif 4 == 5:
print('B')
elif 4 < 5:
print('C')
End of explanation
"""
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
"""
Explanation: Challenge - making choices 2:
True and False are special words in Python called booleans which represent true and false statements. However, they aren’t the only values in Python that are true and false. In fact, any value can be used in an if or elif. After reading and running the code below, explain what the rule is for which values are considered true and which are considered false.
python
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
End of explanation
"""
|
rebeccabilbro/rebeccabilbro.github.io | _drafts/words-in-space-nb.ipynb | mit | import os
from sklearn.datasets.base import Bunch
from yellowbrick.download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"hobbies": os.path.join(FIXTURES, "hobbies")
}
def load_data(name, download=True):
"""
Loads and wrangles the passed in text corpus by name.
If download is specified, this method will download any missing files.
"""
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Read the directories in the directory as the categories.
categories = [
cat for cat in os.listdir(path)
if os.path.isdir(os.path.join(path, cat))
]
files = [] # holds the file names relative to the root
data = [] # holds the text read from the file
target = [] # holds the string of the category
# Load the data from the files in the corpus
for cat in categories:
for name in os.listdir(os.path.join(path, cat)):
files.append(os.path.join(path, cat, name))
target.append(cat)
with open(os.path.join(path, cat, name), 'r') as f:
data.append(f.read())
# Return the data bunch for use similar to the newsgroups example
return Bunch(
categories=categories,
files=files,
data=data,
target=target,
)
corpus = load_data('hobbies')
"""
Explanation: Words in Space
Text analysis tasks like vectorization, document classification, and topic modeling require us to define what it means for any two documents to be similar. There are a number of different measures you can use to determine similarity for text (e.g. Minkowski, Cosine, Levenstein, Jaccard, etc.). In this post, we'll take a look a couple of these ways of measuring distance between documents in a specific corpus, and then visualize the result of our decisions on the distribution of the data.
Words in Space: Introduction to Document Similarity for Text Analysis
While there are a number of different ways to quantify "distance" between documents (e.g. Minkowski, Cosine, Levenstein, Jaccard, etc.), fundamentally, each relies on our ability to imagine documents as points in space, where the relative closeness of any two is a measure of their similarity.
Nevertheless, our choice of distance metric is very important!
Why? Because text data:
is very high dimensional
is often sparsely distributed
has some features that are more important than others
has some feature variations that matter more than others
So... in this post we'll experiment with a few different distance measures and visualize the results with t-distributed stochastic neighbor embedding, or t-SNE. Later on (or in a separate post) we'll also visualize feature importances and explore some feature distributions with dispersion plots.
Load the corpus
End of explanation
"""
import matplotlib.pyplot as plt
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
docs = vectorizer.fit_transform(corpus.data)
labels = corpus.target
"""
Explanation: Vectorize the Documents
End of explanation
"""
tsne = TSNEVisualizer()
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: About t-SNE
Scikit-learn implements this decomposition method as the sklearn.manifold.TSNE transformer. By decomposing high-dimensional document vectors into 2 dimensions using probability distributions from both the original dimensionality and the decomposed dimensionality, t-SNE is able to effectively cluster similar documents. By decomposing to 2 or 3 dimensions, the documents can be visualized with a scatter plot.
Unfortunately, TSNE is very expensive, so typically a simpler decomposition method such as SVD or PCA is applied ahead of time. In Yellowbrick, the TSNEVisualizer creates an inner transformer pipeline that applies such a decomposition first (SVD with 50 components by default), then performs the t-SNE embedding. The visualizer then plots the scatter plot, coloring by cluster or by class, or neither if a structural analysis is required.
Squared Euclidean Distance
By default, sklearn.manifold.TSNE (and therefore also TSNEVisualizer) uses Euclidean distance, interpreted as squared Euclidean distance (in Scipy, this is "sqeuclidean").
End of explanation
"""
tsne = TSNEVisualizer(metric="euclidean")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Euclidean Distance
Euclidean computes the distance between m points using 2-norm as the distance metric between the points. The points are arranged as m n-dimensional row vectors in the matrix X.
End of explanation
"""
tsne = TSNEVisualizer(metric="cityblock")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: As you've probably heard, Euclidean distance is not an ideal choice for any sparse data (also this). That's because when we vectorize a corpus, we end up with huge, sparse vectors. That means that it's sort of a crapshoot as to whether the most informative features (e.g. words) will vary in a way that will be captured by Euclidean distance. We can see in the above that Euclidean distance hasn't done a terrible job of spacially differentiating the different categories of documents; the "sports" and "cooking" clusters look pretty clear. However there's a lot of overlap and muddling of the other categories.
This makes more sense when we think in terms of informative features, which in the case of text are words. Some of the most informative features in the cooking category might occur only in a chunk of the corpus, others (maybe words from the "cinema" and "gaming" categories) might be spread out through the whole corpus, but more frequently in some documents than others. Some of our documents might be really long, while others are short, which means they'll be hard to compare; the short one will have a much more sparse vector representation.
What to do?
Distance Metrics for Non-numerical Inputs
Per sklearn:
"scikit-learn estimators assume you’ll feed them real-valued feature vectors. This assumption is hard-coded in pretty much all of the library. However, you can feed non-numerical inputs to estimators in several ways."
In our case, the sklearn.manifold.TSNE has a metric param that will allow us to leverage any of the distance metrics available in Scipy!
We could try them all!
```
distance_functions = [
"braycurtis", "canberra", "chebyshev", "cityblock", "correlation", "cosine",
"dice", "euclidean", "hamming", "jaccard", "kulsinski", "mahalanobis",
"matching", "minkowski", "rogerstanimoto", "russellrao", "seuclidean",
"sokalmichener", "sokalsneath", "sqeuclidean", "yule"
]
for metric in distance_functions:
tsne = TSNEVisualizer(metric=metric)
tsne.fit(docs, labels)
tsne.poof()
```
... but some are a bit finicky, so let's walk through them all individually.
We'll start to see pretty quickly that there are several that just don't make sense at all in the context of text data.
Manhattan (aka "Taxicab" or "City Block") Distance
Manhattan distance between two points is computed as the sum of the absolute differences of their Cartesian coordinates.
End of explanation
"""
tsne = TSNEVisualizer(metric="minkowski")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Minkowski Distance
Minkowski distance is a generalization of Euclidean and Manhattan distance, and defines the distance
between two points in a normalized vector space.
End of explanation
"""
tsne = TSNEVisualizer(metric="braycurtis")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Bray Curtis Dissimilarity
Bray–Curtis dissimilarity is a statistic used to quantify the compositional dissimilarity between two different sites, based on counts at each site. The Bray–Curtis dissimilarity is bounded between 0 and 1, where 0 means the two sites have the same composition (that is they share all the species), and 1 means the two sites do not share any species. Note: not technically a distance measure because it doesn't satisfy the triangle inequality.
End of explanation
"""
tsne = TSNEVisualizer(metric="canberra")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Note the weird beauty mark in the plot above. We'll see this in several of the plots below.
Canberra Distance
Canberra distance is a numerical measure of the distance between pairs of points in a vector space. It is a weighted version of Manhattan distance and is sometimes used as a metric for comparing ranked lists and for intrusion detection in computer security.
End of explanation
"""
tsne = TSNEVisualizer(metric="chebyshev")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Another beauty mark!
Chebyshev Distance
Chebyshev distance between two n-vectors u and v is the maximum absolute difference between their respective elements (the infinity norm of their difference).
End of explanation
"""
tsne = TSNEVisualizer(metric="cosine")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Chebyshev does seem to produce a beauty mark also, but it's less distinct from the rest of the points.
Cosine Distance
We can also measure vector similarity with cosine distance, using the cosine of the angle between the two vectors to assess the degree to which they share the same orientation. In effect, the more parallel any two vectors are, the more similar the documents will be (regardless of their magnitude). Cosine distance is not technically a distance measure because it doesn't satisfy the triangle inequality. Nevertheless, cosine distance is often an excellent option for text data because it corrects for any variation in the length of the documents (since we're measuring the angle between vectors rather than their magnitudes). Moreover, it can be a very efficient way to compute distance with sparse vectors because it considers only the non-zero elements.
End of explanation
"""
tsne = TSNEVisualizer(metric="jaccard")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Danger!
There are also a boatload of distance metrics that make no sense at all for sparse, non-numeric data.
Let's take a look!
Jaccard Distance
Jaccard distance is built on the Jaccard index, which defines similarity between finite sets as the quotient of their intersection and their union. For instance, we could measure the Jaccard similarity of two documents A and B by dividing the number of unique words that appear in both A and B by the total number of unique words that appear in either; the Jaccard distance is then one minus that ratio. A distance of 0 would indicate that the two documents share all their words, a 1 that they have nothing in common, and values between 0 and 1 their relative degree of dissimilarity.
Jaccard distance is actually not a bad metric for text distance, but it's much more effective for detecting things like document duplication. It's much less helpful for finding more nuanced similarities and patterns.
End of explanation
"""
tsne = TSNEVisualizer(metric="dice")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Notice the crescent shape!
Dice Dissimilarity
Dice distance computes the Dice dissimilarity between two boolean 1-D arrays. It is similar to Jaccard distance.
End of explanation
"""
tsne = TSNEVisualizer(metric="kulsinski")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Again we see the crescent shape we saw in Jaccard.
Kulsinski Dissimilarity
Like Jaccard and Dice, Kulsinski distance is a set distance, computed as the Kulsinski dissimilarity between two boolean 1-D arrays, in this case where nonzero entries are treated as True, zero entries are treated as False.
End of explanation
"""
# russellrao
tsne = TSNEVisualizer(metric="russellrao")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Another crescent!
Russell-Rao Dissimilarity
Like the last few examples, Russell-Rao dissimilarity quantifies the difference between two boolean 1-D arrays.
End of explanation
"""
tsne = TSNEVisualizer(metric="sokalsneath")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Another crescent!
Sokal Sneath Distance
Computes the Sokal-Sneath distance between each pair of boolean vectors.
End of explanation
"""
tsne = TSNEVisualizer(metric="sokalmichener")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Sokal Michener
End of explanation
"""
tsne = TSNEVisualizer(metric="rogerstanimoto")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Rogers Tanimoto
End of explanation
"""
tsne = TSNEVisualizer(metric="hamming")
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Hamming
Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences.
End of explanation
"""
# tsne = TSNEVisualizer(metric="correlation")
# tsne.fit(docs, labels)
# tsne.poof()
"""
Explanation: Hehe.
Errors to Sort Out Later
Correlation Coefficient Distance
The correlation coefficient distance between vectors u and v.
End of explanation
"""
tsne = TSNEVisualizer(metric="mahalanobis", method='exact')
tsne.fit(docs, labels)
tsne.poof()
"""
Explanation: Mahalanobis Distance
End of explanation
"""
# tsne = TSNEVisualizer(metric="seuclidean")
# tsne.fit(docs, labels)
# tsne.poof()
"""
Explanation: Standardized Euclidean Distance
End of explanation
"""
# tsne = TSNEVisualizer(metric="yule")
# tsne.fit(docs, labels)
# tsne.poof()
"""
Explanation: Yule
End of explanation
"""
|
DamienIrving/ocean-analysis | development/calc_ensemble.ipynb | mit | sample_points_grid1 = [('depth', cube1_grid1.coord('depth').points),
('latitude', cube1_grid1.coord('latitude').points)]
cube2_grid1 = cube2_grid2.interpolate(sample_points_grid1, iris.analysis.Linear())
cube2_grid1.coord('latitude').bounds = cube1_grid1.coord('latitude').bounds
cube2_grid1.coord('depth').bounds = cube1_grid1.coord('depth').bounds
sample_points_grid2 = [('depth', cube2_grid2.coord('depth').points),
('latitude', cube2_grid2.coord('latitude').points)]
cube1_grid2 = cube1_grid1.interpolate(sample_points_grid2, iris.analysis.Linear())
cube1_grid2.coord('latitude').bounds = cube2_grid2.coord('latitude').bounds
cube1_grid2.coord('depth').bounds = cube2_grid2.coord('depth').bounds
cf = iplt.contourf(cube1_grid1, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cf = iplt.contourf(cube1_grid2, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cf = iplt.contourf(cube2_grid2, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cf = iplt.contourf(cube2_grid1, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
"""
Explanation: Regrid to a common grid
The regrid function can only be used for remapping to a different horizontal grid.
Instead, we need to use interpolate.
End of explanation
"""
cube1_grid1_mask = cube1_grid1.data.mask
cube2_grid1_mask = cube2_grid1.data.mask
grid1_mask = cube1_grid1_mask + cube2_grid1_mask
cube1_grid2_mask = cube1_grid2.data.mask
cube2_grid2_mask = cube2_grid2.data.mask
grid2_mask = cube1_grid2_mask + cube2_grid2_mask
cube1_grid1.data.mask = grid1_mask
cf = iplt.contourf(cube1_grid1, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cube2_grid1.data.mask = grid1_mask
cf = iplt.contourf(cube2_grid1, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cube1_grid2.data.mask = grid2_mask
cf = iplt.contourf(cube1_grid2, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cube2_grid2.data.mask = grid2_mask
cf = iplt.contourf(cube2_grid2, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
"""
Explanation: Apply a common mask
End of explanation
"""
def ensmean(cube_list):
    """Calculate the ensemble mean"""
    cube_list = iris.cube.CubeList(cube_list)
    equalise_attributes(cube_list)
    # Each cube needs a distinct scalar ensemble_member coordinate so that
    # merge_cube can stack the cubes along a new ensemble dimension
    for num, cube in enumerate(cube_list):
        cube.add_aux_coord(iris.coords.AuxCoord(num, long_name='ensemble_member'))
    ensemble_cube = cube_list.merge_cube()
    ensemble_mean = ensemble_cube.collapsed('ensemble_member', iris.analysis.MEAN)
    return ensemble_mean
cube1_grid1.data = cube1_grid1.data.astype(numpy.float32)
cube2_grid1.data = cube2_grid1.data.astype(numpy.float32)
cube1_grid1.cell_methods = ()
cube2_grid1.cell_methods = ()
ensmean_grid1 = ensmean((cube1_grid1, cube2_grid1))
cf = iplt.contourf(ensmean_grid1, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
cube1_grid2.data = cube1_grid2.data.astype(numpy.float32)
cube2_grid2.data = cube2_grid2.data.astype(numpy.float32)
cube1_grid2.cell_methods = ()
cube2_grid2.cell_methods = ()
ensmean_grid2 = ensmean((cube1_grid2, cube2_grid2))
cf = iplt.contourf(ensmean_grid2, cmap=cmocean.cm.thermal, levels=numpy.arange(270, 290, 1), extend='both')
cbar = plt.colorbar(cf)
plt.ylim(7000, 0)
plt.show()
"""
Explanation: Ensemble mean
End of explanation
"""
|
kgrodzicki/machine-learning-specialization | course-3-classification/module-8-boosting-assignment-1-blank.ipynb | mit | import graphlab
"""
Explanation: Exploring Ensemble Methods
In this assignment, we will explore the use of boosting. We will use the pre-implemented gradient boosted trees in GraphLab Create. You will:
Use SFrames to do some feature engineering.
Train a boosted ensemble of decision-trees (gradient boosted trees) on the LendingClub dataset.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Evaluate the trained model and compare it with a baseline.
Find the most positive and negative loans using the learned model.
Explore how the number of trees influences classification performance.
Let's get started!
Fire up Graphlab Create
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Load LendingClub dataset
We will be using the LendingClub data. As discussed earlier, the LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors.
Just like we did in previous assignments, we will build a classification model to predict whether or not a loan provided by lending club is likely to default.
Let us start by loading the data.
End of explanation
"""
loans.column_names()
"""
Explanation: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
End of explanation
"""
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
"""
Explanation: Modifying the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column, 1 means a risky (bad) loan and 0 means a safe loan.
As in past assignments, in order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
"""
target = 'safe_loans'
features = ['grade', # grade of the loan (categorical)
'sub_grade_num', # sub-grade of the loan as a number from 0 to 1
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'payment_inc_ratio', # ratio of the monthly payment to income
            'delinq_2yrs', # number of delinquencies
            'delinq_2yrs_zero', # no delinquencies in last 2 years
            'inq_last_6mths', # number of creditor inquiries in last 6 months
            'last_delinq_none', # has borrower had a delinquency
            'last_major_derog_none', # has borrower had 90 day or worse rating
            'open_acc', # number of open credit accounts
            'pub_rec', # number of derogatory public records
            'pub_rec_zero', # no derogatory public records
            'revol_util', # percent of available credit being used
            'total_rec_late_fee', # total late fees received to date
'int_rate', # interest rate of the loan
'total_rec_int', # interest received to date
'annual_inc', # annual income of borrower
'funded_amnt', # amount committed to the loan
'funded_amnt_inv', # amount committed by investors for the loan
'installment', # monthly payment owed by the borrower
]
"""
Explanation: Selecting features
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
The features we will be using are described in the code comments below:
End of explanation
"""
loans, loans_with_na = loans[[target] + features].dropna_split()
# Count the number of rows with missing data
num_rows_with_na = loans_with_na.num_rows()
num_rows = loans.num_rows()
print 'Dropping %s observations; keeping %s ' % (num_rows_with_na, num_rows)
"""
Explanation: Skipping observations with missing values
Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values.
We run the following code to do so:
End of explanation
"""
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Fortunately, there are not too many missing values. We are retaining most of the data.
Make sure the classes are balanced
We saw in an earlier assignment that this dataset is also imbalanced. We will undersample the larger class (safe loans) in order to balance out our dataset. We used seed=1 to make sure everyone gets the same results.
End of explanation
"""
train_data, validation_data = loans_data.random_split(.8, seed=1)
"""
Explanation: Checkpoint: You should now see that the dataset is balanced (approximately 50-50 safe vs risky loans).
Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Split data into training and validation sets
We split the data into training data and validation data. We used seed=1 to make sure everyone gets the same results. We will use the validation data to help us select model parameters.
End of explanation
"""
model_5 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 5)
"""
Explanation: Gradient boosted tree classifier
Gradient boosted trees are a powerful variant of boosting methods; they have been used to win many Kaggle competitions, and have been widely used in industry. We will explore the predictive power of multiple decision trees as opposed to a single decision tree.
Additional reading: If you are interested in gradient boosted trees, here is some additional reading material:
* GraphLab Create user guide
* Advanced material on boosted trees
We will now train models to predict safe_loans using the features above. In this section, we will experiment with training an ensemble of 5 trees. To cap the ensemble classifier at 5 trees, we call the function with max_iterations=5 (recall that each iteration corresponds to adding a tree). We set validation_set=None to make sure everyone gets the same results.
End of explanation
"""
# Select all positive and negative examples.
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
# Select 2 examples from the validation set for positive & negative loans
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
# Append the 4 examples into a single dataset
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
"""
Explanation: Making predictions
Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following:
* Predict whether or not a loan is likely to default.
* Predict the probability with which the loan is likely to default.
End of explanation
"""
model_5.predict(dataset=sample_validation_data)
"""
Explanation: Predicting on sample validation data
For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan.
Hint: Use the predict method in model_5 for this.
End of explanation
"""
model_5.predict(dataset=sample_validation_data, output_type='probability')
"""
Explanation: Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct?
Prediction probabilities
For each row in the sample_validation_data, what is the probability (according model_5) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using model_5 on sample_validation_data:
End of explanation
"""
e = model_5.evaluate(validation_data)
e
"""
Explanation: Quiz Question: According to model_5, which loan is the least likely to be a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
Evaluating the model on the validation data
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
Evaluate the accuracy of the model_5 on the validation_data.
Hint: Use the .evaluate() method in the model.
End of explanation
"""
confusion_matrix = e['confusion_matrix']
confusion_matrix[(confusion_matrix['target_label']==-1) & (confusion_matrix['predicted_label']==1)]
"""
Explanation: Calculate the number of false positives made by the model.
End of explanation
"""
confusion_matrix[(confusion_matrix['target_label']==1) & (confusion_matrix['predicted_label']==-1)]
"""
Explanation: Quiz question: What is the number of false positives on the validation_data?
Calculate the number of false negatives made by the model.
End of explanation
"""
cost = 10000 * 1463 + 20000 * 1618
cost
"""
Explanation: Comparison with decision trees
In the earlier assignment, we saw that the prediction accuracy of the decision trees was around 0.64 (rounded). In this assignment, we saw that model_5 has an accuracy of 0.67 (rounded).
Here, we quantify the benefit of the extra 3% increase in accuracy of model_5 in comparison with a single decision tree from the original decision tree assignment.
As we explored in the earlier assignment, we calculated the cost of the mistakes made by the model. We again consider the same costs as follows:
False negatives: Assume a cost of \$10,000 per false negative.
False positives: Assume a cost of \$20,000 per false positive.
Assume that the number of false positives and false negatives for the learned decision tree was
False negatives: 1936
False positives: 1503
Using the costs defined above and the number of false positives and false negatives for the decision tree, we can calculate the total cost of the mistakes made by the decision tree model as follows:
cost = $10,000 * 1936 + $20,000 * 1503 = $49,420,000
The total cost of the mistakes of the model is $49.42M. That is a lot of money!
Quiz Question: Using the same costs of the false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set?
End of explanation
"""
validation_data['predictions'] = model_5.predict(dataset=validation_data, output_type='probability')
"""
Explanation: Reminder: Compare the cost of the mistakes made by the boosted trees model with the decision tree model. The extra 3% improvement in prediction accuracy can translate to several million dollars! And, it was so easy to get by simply boosting our decision trees.
Most positive & negative loans.
In this section, we will find the loans that are most likely to be predicted safe. We can do this in a few steps:
Step 1: Use the model_5 (the model with 5 trees) and make probability predictions for all the loans in the validation_data.
Step 2: Similar to what we did in the very first assignment, add the probability predictions as a column called predictions into the validation_data.
Step 3: Sort the data (in decreasing order) by the probability predictions.
Start here with Step 1 & Step 2. Make predictions using model_5 for examples in the validation_data. Use output_type = probability.
End of explanation
"""
validation_data.sort('predictions', ascending=False)
print "Your loans : %s\n" % validation_data['predictions'].head(4)
print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573,
0.3835981314851436, 0.3693306705994325]
"""
Explanation: Checkpoint: For each row, the probabilities should be a number in the range [0, 1]. We have provided a simple check here to make sure your answers are correct.
End of explanation
"""
s = validation_data.sort('predictions', ascending=True)
"""
Explanation: Now, we are ready to go to Step 3. You can now use the predictions column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
End of explanation
"""
print "Your loans : %s\n" % s['grade'].head(5)
"""
Explanation: Quiz question: What grades are the top 5 loans?
Let us repeat this exercise to find the top 5 loans (in the validation_data) with the lowest probability of being predicted as a safe loan:
End of explanation
"""
model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 10, verbose=False)
"""
Explanation: Checkpoint: You should expect to see 5 loans with the grade ['D', 'C', 'C', 'C', 'B'].
Effect of adding more trees
In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees. We will train models with 10, 50, 100, 200, and 500 trees. We use the max_iterations parameter in the boosted tree module.
Let's get started with a model with max_iterations = 10:
End of explanation
"""
model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 50, verbose=False)
model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 100, verbose=False)
model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 200, verbose=False)
model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 500, verbose=False)
"""
Explanation: Now, train 4 models with max_iterations to be:
* max_iterations = 50,
* max_iterations = 100
* max_iterations = 200
* max_iterations = 500.
Let us call these models model_50, model_100, model_200, and model_500. You can pass in verbose=False in order to suppress the printed output.
Warning: This could take a couple of minutes to run.
End of explanation
"""
e_50 = model_50.evaluate(validation_data)
e_100 = model_100.evaluate(validation_data)
e_200 = model_200.evaluate(validation_data)
e_500 = model_500.evaluate(validation_data)
"""
Explanation: Compare accuracy on entire validation set
Now we will compare the predictive accuracy of our models on the validation set. Evaluate the accuracy of the 10, 50, 100, 200, and 500 tree models on the validation_data. Use the .evaluate method.
End of explanation
"""
print "Model 50 accuracy: %s\n" % e_50['accuracy']
print "Model 100 accuracy: %s\n" % e_100['accuracy']
print "Model 200 accuracy: %s\n" % e_200['accuracy']
print "Model 500 accuracy: %s\n" % e_500['accuracy']
"""
Explanation: Quiz Question: Which model has the best accuracy on the validation_data?
Quiz Question: Is it always true that the model with the most trees will perform best on test data?
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
def make_figure(dim, title, xlabel, ylabel, legend):
plt.rcParams['figure.figsize'] = dim
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if legend is not None:
plt.legend(loc=legend, prop={'size':15})
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
"""
Explanation: Plot the training and validation error vs. number of trees
Recall from the lecture that the classification error is defined as
$$
\mbox{classification error} = 1 - \mbox{accuracy}
$$
In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots.
First, make sure this block of code runs on your computer.
End of explanation
"""
train_err_10 = 1 - model_10.evaluate(train_data)['accuracy']
train_err_50 = 1 - model_50.evaluate(train_data)['accuracy']
train_err_100 = 1 - model_100.evaluate(train_data)['accuracy']
train_err_200 = 1 - model_200.evaluate(train_data)['accuracy']
train_err_500 = 1 - model_500.evaluate(train_data)['accuracy']
"""
Explanation: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate.
Steps to follow:
Step 1: Calculate the classification error for model on the training data (train_data).
Step 2: Store the training errors into a list (called training_errors) that looks like this:
[train_err_10, train_err_50, ..., train_err_500]
Step 3: Calculate the classification error of each model on the validation data (validation_data).
Step 4: Store the validation classification error into a list (called validation_errors) that looks like this:
[validation_err_10, validation_err_50, ..., validation_err_500]
Once that has been completed, the rest of the code should be able to evaluate correctly and generate the plot.
Let us start with Step 1. Write code to compute the classification error on the train_data for models model_10, model_50, model_100, model_200, and model_500.
End of explanation
"""
training_errors = [train_err_10, train_err_50, train_err_100,
train_err_200, train_err_500]
"""
Explanation: Now, let us run Step 2. Save the training errors into a list called training_errors
End of explanation
"""
validation_err_10 = 1 - model_10.evaluate(validation_data)['accuracy']
validation_err_50 = 1 - model_50.evaluate(validation_data)['accuracy']
validation_err_100 = 1 - model_100.evaluate(validation_data)['accuracy']
validation_err_200 = 1 - model_200.evaluate(validation_data)['accuracy']
validation_err_500 = 1 - model_500.evaluate(validation_data)['accuracy']
"""
Explanation: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500.
End of explanation
"""
validation_errors = [validation_err_10, validation_err_50, validation_err_100,
validation_err_200, validation_err_500]
"""
Explanation: Now, let us run Step 4. Save the validation errors into a list called validation_errors
End of explanation
"""
plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error')
plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10,5), title='Error vs number of trees',
xlabel='Number of trees',
ylabel='Classification error',
legend='best')
"""
Explanation: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. We provide some plotting code to visualize the plots within this notebook.
Run the following code to visualize the plots.
End of explanation
"""
|
vvishwa/deep-learning | intro-to-tensorflow/intro_to_tensorflow.ipynb | mit | import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
"""
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels =
# TODO: Set the weights and biases tensors
# weights =
# biases =
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10,), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
"""
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
"""
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
"""
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation
"""
|
duetosymmetry/slimplectic | PostNewtonian_Inspiral_with_RK.ipynb | mit | %matplotlib inline
from __future__ import print_function
import numpy as np, matplotlib.pyplot as plt
import slimplectic, orbit_util as orbit
# Parameters of the compact binary
# One solar mass in seconds
G = 6.67428e-11 #(in m^3/kg/s^2)
c = 2.99792458e8 # (in m/s)
Msun_in_kg = 1.98892e30
Msun_in_sec = G/c**3 * Msun_in_kg
sec_to_km = c/1000.
# Component masses
m1 = 1.4*Msun_in_sec
m2 = 1.4*Msun_in_sec
# Total mass
M = m1+m2
# Symmetric mass ratio
nu = m1*m2/M**2
# Reduced mass
mu = nu*M
"""
Explanation: Compact binary inspirals from gravitational radiation reaction
In this example, we look at highly nonlinear dissipative forces arising in the orbital dynamics of compact binaries that are inspiraling due to the emission of gravitational waves. For demonstration purposes, we focus here on the leading order conservative and non-conservative forces in a post-Newtonian expansion in the binary's relative speed.
The conservative part, $L$, of the total Lagrangian, $\Lambda$, is given by
$$L = \frac{1}{2} \mu \dot{\bf q}^2 + \frac{ \mu M }{ | {\bf q} | }$$
and the nonconservative part, $K$, by
$$K = \frac{16}{5} \nu^2 M^4 \frac{ \dot{\mathbf q}_+ \cdot {\mathbf q}_- }{ |{\mathbf q}_+|^4 } - \frac{48}{5} \nu^2 M^3 \frac{ |\dot{\mathbf q}_+|^2 \, (\dot{\mathbf q}_+ \cdot {\mathbf q}_-) }{ |{\mathbf q}_+|^3 }$$
$$\qquad \qquad + 24 \nu^2 M^3 \frac{ (\dot{\mathbf q}_+ \cdot {\mathbf q}_+)^2 \, (\dot{\mathbf q}_+ \cdot {\mathbf q}_-) }{ |{\mathbf q}_+|^5 } + \frac{16}{15} \nu^2 M^4 \frac{ (\dot{\mathbf q}_+ \cdot {\mathbf q}_+) \, ({\mathbf q}_+ \cdot {\mathbf q}_-) }{ |{\mathbf q}_+|^6 }$$
$$\qquad \qquad + \frac{144}{5} \nu^2 M^3 \frac{ |\dot{\mathbf q}_+|^2 \, (\dot{\mathbf q}_+ \cdot {\mathbf q}_+) \, ({\mathbf q}_+ \cdot {\mathbf q}_-) }{ |{\mathbf q}_+|^5 } - 40 \nu^2 M^3 \frac{ (\dot{\mathbf q}_+ \cdot {\mathbf q}_+)^3 \, ({\mathbf q}_+ \cdot {\mathbf q}_-) }{ |{\mathbf q}_+|^7 } ~.$$
We work in units where $G=c=1$ and in the center-of-mass frame, so that ${\bf q}$ is the relative separation vector in Cartesian coordinates. The quantities $\nu$ and $M$ are the symmetric mass ratio and total mass, respectively.
End of explanation
"""
# Create an instance of the GalerkinGaussLobatto class and call it `pn` for post-Newtonian
pn = slimplectic.GalerkinGaussLobatto('t', ['x', 'y'], ['vx', 'vy'])
# Define the conservative $L$ and nonconservative $K$ parts of the total Lagrangian $\Lambda$
L = 0.5*mu*np.dot(pn.v, pn.v) + mu*M/np.dot(pn.q, pn.q)**0.5
K = 16./5.*nu**2*M**4*np.dot(pn.vp, pn.qm)/np.dot(pn.qp, pn.qp)**2 \
- 48./5.*nu**2*M**3*np.dot(pn.vp, pn.vp)*np.dot(pn.vp, pn.qm)/np.dot(pn.qp, pn.qp)**1.5 \
+ 24.*nu**2*M**3*np.dot(pn.vp, pn.qp)**2*np.dot(pn.vp, pn.qm)/np.dot(pn.qp, pn.qp)**2.5 \
+ 16./15.*nu**2*M**4*np.dot(pn.vp, pn.qp)*np.dot(pn.qp, pn.qm)/np.dot(pn.qp, pn.qp)**3 \
+ 144./5.*nu**2*M**3*np.dot(pn.vp, pn.vp)*np.dot(pn.vp, pn.qp)*np.dot(pn.qp, pn.qm)/np.dot(pn.qp, pn.qp)**2.5 \
- 40.*nu**2*M**3*np.dot(pn.vp, pn.qp)**3*np.dot(pn.qp, pn.qm)/np.dot(pn.qp, pn.qp)**3.5
# Discretize total Lagrangian using a 4th order (r=1) implicit scheme
pn.discretize(L, K, 1, method='implicit')
# Specify time samples at which the numerical solution is to be given initial data.
# We take data for a quasi-circular inspiral
q0 = np.array([100., 0.])*M
v0 = np.asarray([-64./5.*nu*(M/q0[0])**3, np.sqrt(M/q0[0])])
# (The initial velocity is found by calculating the leading correction
# to pure circular motion due to radiation reaction.)
pi0 = mu*v0
# Estimate time to "coalescence" when the orbital separation is zero
r0 = np.sqrt(q0[0]**2+q0[1]**2)
tcoal = 5./256./nu*(r0/M)**4 * M
# Time samples at every 1000M
dt = 1000*M
t_1000 = np.arange(0, tcoal+dt, dt)
# Now integrate the 4th order slimplectic integrator
q_1000, pi_1000 = pn.integrate(q0, pi0, t_1000)
"""
Explanation: Slimplectic integration
End of explanation
"""
# Analytical solution from the adiabatic approximation
def r_ad(time):
"""Orbital separation from the adiabatic approximation"""
return np.abs(r0**4 - 256./5.*nu*M**3*time)**0.25
def phi_ad(time):
"""Orbital phase from the adiabatic approximation"""
return (r0**2.5 - r_ad(time)**2.5) / (32.*nu*M**2.5)
def rdot_ad(time):
"""Time derivative of radial separation, r_ad(t)"""
return - 256./5.*M**3*nu / 4 / r_ad(time)**3
def omega_ad(time):
"""Orbital angular frequency from the adiabatic approximation"""
return np.sqrt(M) / r_ad(time)**1.5
def q_ad(time):
"""Orbital separation vector in Cartesian coordinates"""
return r_ad(time)*np.cos(phi_ad(time)), r_ad(time)*np.sin(phi_ad(time))
def v_ad(time):
"""Orbital velocity vector in Cartesian coordinates"""
return rdot_ad(time)*np.cos(phi_ad(time)) - omega_ad(time)*r_ad(time)*np.sin(phi_ad(time)), \
rdot_ad(time)*np.sin(phi_ad(time)) + omega_ad(time)*r_ad(time)*np.cos(phi_ad(time))
# Plot the numerical and analytical orbital separations
r_1000 = np.sqrt(q_1000[0]**2 + q_1000[1]**2)
plt.plot(t_1000, r_ad(t_1000)*sec_to_km, 'r--')
plt.plot(t_1000, r_1000*sec_to_km, 'b-')
plt.ylim(0, 450)
plt.xlabel('Time, t [sec]', fontsize=14)
plt.ylabel('Orbital separation, $r$ [km]', fontsize=14)
"""
Explanation: Let's plot this numerical solution with the analytical solution obtained by applying the adiabatic approximation to the same equations of motion. In the adiabatic approximation, one evaluates the radiation reaction forces (derived from $K$) on a sequence of circular orbits, ignoring the contribution from the radial velocity. The adiabatic approximation gives the following expressions for the orbital separation and phase
$$r_{\rm ad}(t) = \left( r_0^4 - \frac{ 256 }{ 5 } \nu M^3 t \right)^{1/4}$$
$$\phi_{\rm ad}(t) = \frac{ r_0^{5/2} - r_{\rm ad}^{5/2}(t) }{32 \nu M^{5/2} }$$
so that the Cartesian components of the relative orbital motion are
$$\big( r_{\rm ad}(t) \cos \phi_{\rm ad}(t), r_{\rm ad}(t) \sin \phi_{\rm ad} (t), 0 \big) .$$ The quantity $r_0$ is the initial separation and the initial orbital phase is taken to be zero.
End of explanation
"""
# Time samples with increment given by the approximate orbital period at ISCO
T_isco = 6.**1.5*(2.*np.pi*M)
t_isco = np.arange(0, tcoal+T_isco, T_isco)
# ...and integrate
q_isco, pi_isco = pn.integrate(q0, pi0, t_isco)
"""
Explanation: The numerical solution clearly becomes unstable. The reason is that the period of the orbit eventually becomes comparable to or smaller than the time-step, $dt$, of $1000M$. One can show that adaptive time-step methods are better-suited to this problem, which is what we are exploring currently.
For fixed time-steppers, we can take $dt$ to be increasingly smaller. We'll show how the solution changes with step-size while keeping the integration order fixed (at 4th). In particular, we take $dt$ to be the orbital period of the binary near the innermost stable circular orbit (ISCO).
(WARNING: Integrating this higher-resolution run may take a few minutes to complete.)
End of explanation
"""
# Plot the numerical and analytical orbital separations
r_isco = np.sqrt(q_isco[0]**2 + q_isco[1]**2)
plt.plot(t_isco, r_ad(t_isco)*sec_to_km, 'r--')
plt.plot(t_isco, r_isco*sec_to_km, 'b-')
plt.ylim(0, 450)
plt.xlabel('Time, t [sec]', fontsize=14)
plt.ylabel('Orbital separation, $r$ [km]', fontsize=14)
"""
Explanation: Comparing this higher-resolution solution with the adiabatic approximation gives much better agreement.
End of explanation
"""
# Instantiate the 4th order Runge-Kutta class
rk4 = slimplectic.RungeKutta4()
# Define the derivative operator for leading order radiation reaction in the PN expansion
def dydt(time, y):
deriv = np.zeros(4)
[q_x, q_y, v_x, v_y] = y
r2 = (q_x*q_x + q_y*q_y)
v2 = (v_x*v_x + v_y*v_y)
vdotq = v_x*q_x + v_y*q_y
deriv[0] = v_x
deriv[1] = v_y
deriv[2] = -M*mu*q_x/r2**1.5
deriv[2] += 16./5.*nu**2*M**4*v_x/r2**2 \
- 48./5.*nu**2*M**3*v2*v_x/r2**1.5 \
+ 24.*nu**2*M**3*vdotq**2*v_x/r2**2.5 \
+ 16./15.*nu**2*M**4*vdotq*q_x/r2**3 \
+ 144./5.*nu**2*M**3*v2*vdotq*q_x/r2**2.5 \
- 40.*nu**2*M**3*vdotq**3*q_x/r2**3.5
deriv[2] /= mu
deriv[3] = -mu*M*q_y/r2**1.5
deriv[3] += 16./5.*nu**2*M**4*v_y/r2**2 \
- 48./5.*nu**2*M**3*v2*v_y/r2**1.5 \
+ 24.*nu**2*M**3*vdotq**2*v_y/r2**2.5 \
+ 16./15.*nu**2*M**4*vdotq*q_y/r2**3 \
+ 144./5.*nu**2*M**3*v2*vdotq*q_y/r2**2.5 \
- 40.*nu**2*M**3*vdotq**3*q_y/r2**3.5
deriv[3] /= mu
return deriv
# Integrate
q_1000_rk4, v_1000_rk4 = rk4.integrate(q0, v0, t_1000, dydt)
q_isco_rk4, v_isco_rk4 = rk4.integrate(q0, v0, t_isco, dydt)
# Please note that q and pi are outputs of the slimplectic integration,
# while q and v are output from the Runge-Kutta integrators.
"""
Explanation: Runge-Kutta integration
Generate the 4th order Runge-Kutta solutions at increments of $1000M$ and the orbital period at ISCO to compare below with output from the slimplectic integrators.
End of explanation
"""
# Orbital separations from integrator outputs
r_1000 = np.sqrt(q_1000[0]**2+q_1000[1]**2)
r_Tisco = np.sqrt(q_isco[0]**2+q_isco[1]**2)
r_1000_rk4 = np.sqrt(q_1000_rk4[0]**2 + q_1000_rk4[1]**2)
r_isco_rk4 = np.sqrt(q_isco_rk4[0]**2 + q_isco_rk4[1]**2)
fig1 = plt.figure(figsize=(12,5), dpi=300)
ax1 = fig1.add_subplot(111)
ax1.plot(t_1000, r_1000*sec_to_km, 'r-', linewidth=2.0, rasterized = True, label='4th order Slimplectic $\Delta t=1000M$');
ax1.plot(t_isco, r_isco*sec_to_km, '-', color = 'orange', linewidth=2.0, rasterized = True, label='4th order Slimplectic $\Delta t=T_{\\rm ISCO}$');
ax1.plot(t_1000, r_1000_rk4*sec_to_km, 'g--', linewidth=2.0, rasterized = True, label='RK4, $\Delta t=1000M$');
ax1.plot(t_isco, r_isco_rk4*sec_to_km, 'b--', linewidth=2.0, rasterized = True, label='RK4, $\Delta t =T_{\\rm ISCO}$');
ax1.plot(t_isco, r_ad(t_isco)*sec_to_km, 'k:', linewidth=2.0, rasterized = True, label='Adiabatic approx.');
ax1.set_xlim(0, 1.05*tcoal);
ax1.set_ylim(0, 1.05*r0*sec_to_km);
ax1.set_xlabel('Time [sec]', fontsize = 18);
ax1.set_ylabel('Orbital separation [km]', fontsize=18);
ax1.legend(loc='lower left', prop={'size':15});
ax1.tick_params(axis='both', which='major', labelsize=16)
"""
Explanation: Comparison plots
Plot the orbital separation for the 4th order slimplectic and RK integrators for the numerical solutions generated at increments in time of $1000M$ and the orbital period at ISCO, respectively.
End of explanation
"""
fig2 = plt.figure(figsize=(12,5), dpi=300)
ax2 = fig2.add_subplot(111)
ax2.loglog(t_1000, np.abs(r_1000/r_ad(t_1000)-1.), 'r-', linewidth=2.0, rasterized = True);
ax2.loglog(t_isco, np.abs(r_isco/r_ad(t_isco)-1.), '-', color = 'orange', linewidth=2.0, rasterized = True);
ax2.loglog(t_1000, np.abs(r_1000_rk4/r_ad(t_1000)-1.), 'g--', linewidth=2.0, rasterized = True);
ax2.loglog(t_isco, np.abs(r_isco_rk4/r_ad(t_isco)-1.), 'b--', linewidth=2.0, rasterized = True);
ax2.text(1, 7e-4, r'Slim4, $\Delta t = 1000M$', fontsize = 15, color = 'black')
ax2.text(1, 5e-8, r'Slim4, $\Delta t = T_{\rm ISCO}$', fontsize = 15, color = 'black')
ax2.text(0.0125, 1.25e0, r'RK4 $\Delta t = 1000M$', fontsize = 15, color = 'green', rotation=16)
ax2.text(0.01, 1.5e-5, r'RK4 $\Delta t = T_{\rm ISCO}$', fontsize = 15, color = 'blue', rotation=16)
ax2.set_xlim(0, 1.05*tcoal);
ax2.set_ylim(1e-10, 1e2)
ax2.set_xlabel('Time [sec]', fontsize = 18);
ax2.set_ylabel('Fractional separation error $|\delta r/ r_{\\rm ad}|$', fontsize = 18);
ax2.set_yticks([1e-10, 1e-8, 1e-6, 1e-4, 1e-2, 1e0, 1e2])
ax2.tick_params(axis='both', which='major', labelsize=16)
"""
Explanation: Next, plot the fractional error in the orbital separation comparing the numerical solutions to the analytic adiabatic approximation. The errors in the slimplectic integration arise because the adiabatic approximation is breaking down towards the end of the inspiral.
End of explanation
"""
# Orbital phases
phi_1000 = orbit.phase(*q_1000)
phi_isco = orbit.phase(*q_isco)
phi_1000_rk4 = orbit.phase(*q_1000_rk4)
phi_isco_rk4 = orbit.phase(*q_isco_rk4)
fig3 = plt.figure(figsize=(12,5), dpi=300)
ax3 = fig3.add_subplot(111)
ax3.plot(t_1000, phi_1000, 'r-', linewidth=2.0, rasterized = True, label='4th order Slimplectic $\Delta t=1000M$');
ax3.plot(t_isco, phi_isco, '-', color = 'orange', linewidth=2.0, rasterized = True, label='4th order Slimplectic $\Delta t=T_{\\rm ISCO}$');
ax3.plot(t_1000, phi_1000_rk4, 'g--', linewidth=2.0, rasterized = True, label='RK4, $\Delta t=1000M$');
ax3.plot(t_isco, phi_isco_rk4, 'b--', linewidth=2.0, rasterized = True, label='RK4, $\Delta t =T_{\\rm ISCO}$');
ax3.plot(t_isco, phi_ad(t_isco), 'k:', linewidth=2.0, rasterized = True, label='Adiabatic approx.');
ax3.set_xlim(0, 1.05*tcoal);
ax3.set_xlabel('Time [sec]', fontsize = 18);
ax3.set_ylabel('Orbital phase [rad]', fontsize=18);
ax3.legend(loc='upper left', prop={'size':15});
ax3.tick_params(axis='both', which='major', labelsize=16)
fig4 = plt.figure(figsize=(12,5), dpi=300)
ax4 = fig4.add_subplot(111)
ax4.loglog(t_1000, np.abs(phi_1000 - phi_ad(t_1000)), 'r-', linewidth=2.0, rasterized = True);
ax4.loglog(t_isco, np.abs(phi_isco - phi_ad(t_isco)), '-', color = 'orange', linewidth=2.0, rasterized = True);
ax4.loglog(t_1000, np.abs(phi_1000_rk4 - phi_ad(t_1000)), 'g--', linewidth=2.0, rasterized = True);
ax4.loglog(t_isco, np.abs(phi_isco_rk4 - phi_ad(t_isco)), 'b--', linewidth=2.0, rasterized = True);
ax4.text(2, 2e1, r'Slim4, $\Delta t = 1000M$', fontsize = 15, color = 'r', rotation=8)
ax4.text(2, 1.5e-3, r'Slim4, $\Delta t = T_{\rm ISCO}$', fontsize = 15, color = 'orange', rotation=9)
ax4.text(2, 3e3, r'RK4 $\Delta t = 1000M$', fontsize = 15, color = 'green', rotation=9)
ax4.text(2, 3e-1, r'RK4 $\Delta t = T_{\rm ISCO}$', fontsize = 15, color = 'blue', rotation=15)
ax4.set_xlim(0, 1.05*tcoal);
ax4.set_yticks([1e-8, 1e-4, 1, 1e4]);
ax4.tick_params(axis='both', which='major', labelsize=16)
ax4.set_xlabel('Time [sec]', fontsize = 18);
ax4.set_ylabel('Absolute phase error, $|\delta \phi|$ [rad]', fontsize = 18);
"""
Explanation: Similar plots can be made for the orbital phase.
End of explanation
"""
# Energy function
def Energy(q, v):
return 0.5*mu*(v[0]**2+v[1]**2) - mu*M/np.sqrt(q[0]**2+q[1]**2)
# Energies from the analytic adiabatic approximation and from different integrators
E_1000_ad = Energy(q_ad(t_1000), v_ad(t_1000))
E_isco_ad = Energy(q_ad(t_isco), v_ad(t_isco))
E_1000 = Energy(q_1000, pi_1000/mu)
E_isco = Energy(q_isco, pi_isco/mu)
E_1000_rk4 = Energy(q_1000_rk4, v_1000_rk4)
E_isco_rk4 = Energy(q_isco_rk4, v_isco_rk4)
fig5 = plt.figure(figsize=(12,5), dpi=500)
ax5 = fig5.add_subplot(1,1,1)
ax5.loglog(t_1000, np.abs(E_1000/E_1000_ad-1.), 'r-', linewidth=2.0, rasterized=True)
ax5.loglog(t_isco, np.abs(E_isco/E_isco_ad-1.), color='orange', linestyle='-', linewidth=2.0, rasterized=True)
ax5.loglog(t_1000, np.abs(E_1000_rk4/E_1000_ad-1.), 'g-', linewidth=2.0, rasterized=True)
ax5.loglog(t_isco, np.abs(E_isco_rk4/E_isco_ad-1.), 'b-', linewidth=2.0, rasterized=True)
ax5.set_xlabel('Time, $t$ [sec]', fontsize=18)
ax5.set_ylabel('Fractional energy error, $\delta E/E$', fontsize=18)
ax5.text(50, 1e-3, r'Slim4, $\Delta t = 1000M$', fontsize=15, color='r', rotation=5)
ax5.text(50, 5e-8, r'Slim4, $\Delta t = T_{\rm ISCO}$', fontsize=15, color='orange', rotation=5)
ax5.text(50, 3e0, r'RK4, $\Delta t = 1000M$', fontsize=15, color='g', rotation=0)
ax5.text(50, 1e-2, r'RK4, $\Delta t = T_{\rm ISCO}$', fontsize=15, color='b', rotation=5)
ax5.tick_params(axis='both', which='major', labelsize=16)
ax5.set_ylim(1e-10, 1e2)
ax5.set_xlim(0, 1.02*tcoal)
ax5.set_yticks([1e-10, 1e-8, 1e-6, 1e-4, 1e-2, 1e0, 1e2]);
"""
Explanation: Let's see how the orbit's energy changes with time according to the different integration schemes and time increments. The energy is
$$E = \frac{1}{2} \mu \dot{\mathbf q}^2 - \frac{\mu M}{ |{\mathbf q}^2|} $$
and changes in time according to Einstein's quadrupole formula (when time-averaging over the orbital period).
To quantify the errors incurred by discretization and subsequent numerical integration, we define the fractional or relative energy difference as $\delta E / E = ( E_X - E )/ E$, where $E_X$ is the energy as measured by case $X$ with $X \in \{ {\rm Slim_{1000},~Slim_{ISCO},~RK4_{1000},~RK4_{ISCO}} \}$.
End of explanation
"""
|
pdebuyl-lab/colloidal_chemotaxis_companion | diffusion.ipynb | bsd-3-clause | %matplotlib inline
import h5py
import matplotlib.pyplot as plt
from matplotlib.figure import SubplotParams
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import leastsq, curve_fit
from scipy.integrate import simps, cumtrapz
from glob import glob
plt.rcParams['figure.figsize'] = (12, 6)
plt.rcParams['figure.subplot.hspace'] = 0.25
plt.rcParams['figure.subplot.wspace'] = 0.25
plt.rcParams['figure.subplot.left'] = 0.17
plt.rcParams['axes.labelsize'] = 16
def expfitfunc(t, f0, tau):
"""Exponential fitting function"""
return f0*np.exp(-t/tau)
def fitfunc(p, t):
"""Linear fitting function"""
return p[0] + p[1]*t
def errfunc(p, t, y):
"""Error function for `fitfunc`"""
return fitfunc(p, t) - y
def get_block_data(group, name, dim=3):
"""Return the time and correlation function for the data
read from RMPCDMD output files."""
block = group[name]['value'][:]
count = group[name]['count'][:]
block /= count.reshape((-1, 1, 1, 1))
t_data = [np.array([0])]
data = [block[0,:1,:,:].reshape((-1,dim))]
dt = group[name]['time'][()]
for i in range(block.shape[0]):
t = dt*np.arange(block.shape[1])*block.shape[1]**i
t_data.append(t[1:])
data.append(block[i,1:,:,:].reshape((-1,dim)))
return np.concatenate(t_data), np.concatenate(data)
# Collect simulation data
runs = glob('cceq_*.h5')
runs.sort()
msd_all = []
vacf_all = []
tvacf_all = []
pvacf_all = []
wacf_all = []
for f in runs:
a = h5py.File(f, 'r')
group = a['block_correlators']
msd_t, msd_data = get_block_data(group, 'mean_square_displacement')
msd_all.append(msd_data)
vacf_t, vacf_data = get_block_data(group, 'velocity_autocorrelation')
vacf_all.append(vacf_data)
do_pvacf = 'parallel_velocity_autocorrelation' in group
if do_pvacf:
pvacf_t, pvacf_data = get_block_data(group, 'parallel_velocity_autocorrelation')
pvacf_all.append(pvacf_data)
do_tvacf = 'transverse_velocity_autocorrelation' in group
if do_tvacf:
tvacf_t, tvacf_data = get_block_data(group, 'transverse_velocity_autocorrelation')
tvacf_all.append(tvacf_data)
do_wacf = 'planar_angular_velocity_autocorrelation' in group
if do_wacf:
wacf_t, w_data = get_block_data(group, 'planar_angular_velocity_autocorrelation', dim=1)
wacf_all.append(w_data.flatten())
a.close()
msd_all = np.array(msd_all)
vacf_all = np.array(vacf_all)
pvacf_all = np.array(pvacf_all)
tvacf_all = np.array(tvacf_all)
wacf_all = np.array(wacf_all)
"""
Explanation: Passive and active colloidal chemotaxis in a microfluidic channel: mesoscopic and stochastic models
Author: Pierre de Buyl
Supplemental information to the article by L. Deprez and P. de Buyl
This notebook reports the characterization of the diffusion coefficients for a rigid dimer
confined between plates.
The data originates from the RMPCDMD simulation program. Please read its documentation and the
published paper for meaningful use of this notebook.
The correlation functions are computed online in RMPCDMD and stored in the H5MD files. They are read here
and integrated to obtain the diffusion coefficients. A time limit on the integral is set for all integrals,
and displayed in the figures, to obtain the value of the plateau of the running integral for D.
End of explanation
"""
# Plot and fit the mean-squared displacement
plt.ylabel(r'$\langle (\mathbf{r}(\tau) - \mathbf{r}(0))^2 \rangle$')
m = msd_all.mean(axis=0)
# Plot all three components
plt.plot(msd_t, m, marker='o')
# Sum only xy components
m = m[...,:2].sum(axis=-1)
# Fit the data for t > 100
mask = msd_t>100
solution, ierr = leastsq(errfunc, [0, 0.1], args=(msd_t[mask], m[mask]))
intercept, D = solution
# MSD = 2 d D t with d = 2 here, since diffusion in z is bounded by the
# confining plates -> the slope of the linear fit must be divided by 4.
D = D/4
plt.plot(msd_t, fitfunc((intercept, 2*D), msd_t))
plt.xlabel(r'$\tau$')
plt.loglog()
# Via the MSD, we can only access the sum of D_parallel and D_perp
print("D_parallel + D_perp = ", 2*D)
"""
Explanation: Below, we plot the mean-square displacement (MSD) of the dimer in cartesian coordinates.
There are thus three components. The z component saturates because of the confinement.
The x and y components result from a mixing of the parallel and transverse diffusion
coefficients.
The fit is for the long-time behaviour of the x-y MSD.
End of explanation
"""
# Integrate the VACF
limit = 800
params = SubplotParams(hspace=0.08, wspace=0.15)
plt.figure(figsize=(14,8), subplotpars=params)
# Transverse VACF
m = tvacf_all[...,:2].sum(axis=-1).mean(axis=0)
ax1 = plt.subplot(221)
plt.plot(tvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
plt.ylabel(r'Transv. VACF')
# Integral of transverse VACF
ax1_int = plt.subplot(222)
plt.plot(tvacf_t, cumtrapz(m, tvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
idx = np.searchsorted(tvacf_t, limit)
integrated_Dt = simps(m[:idx], tvacf_t[:idx])
plt.axhline(integrated_Dt)
ax1_int.yaxis.tick_right()
ax1_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of transv. VACF')
plt.ylim(-0.0002,0.0025)
# Parallel VACF
ax2 = plt.subplot(223)
m = pvacf_all[...,:2].sum(axis=-1).mean(axis=0)
plt.plot(pvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
plt.ylabel(r'Parallel VACF')
# Integral of parallel VACF
ax2_int = plt.subplot(224)
plt.plot(pvacf_t, cumtrapz(m, pvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
idx = np.searchsorted(pvacf_t, limit)
integrated_Dp = simps(m[:idx], pvacf_t[:idx])
plt.axhline(integrated_Dp)
plt.ylim(-0.0002,0.0025)
ax2_int.yaxis.tick_right()
ax2_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of parallel VACF')
print('Transverse D:', integrated_Dt)
print('Parallel D:', integrated_Dp)
print("Sum of the D's", integrated_Dt+integrated_Dp)
plt.figure(figsize=(14,4), subplotpars=params)
m = wacf_all.mean(axis=0)
s = wacf_all.std(axis=0)
ax1 = plt.subplot(121)
plt.xscale('log')
plt.plot(wacf_t, m, marker='o')
plt.axvline(limit)
plt.xlim(.5, 1e4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Orientational ACF')
ax2 = plt.subplot(122)
plt.xscale('log')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position('right')
plt.plot(wacf_t, cumtrapz(m, wacf_t, initial=0))
plt.xlim(.5, 1e4)
plt.ylim(-1e-6, 2e-4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Integral of orientational ACF')
limit = 800
idx = np.searchsorted(wacf_t, limit)
plt.axvline(limit)
D_integral = simps(m[:idx], wacf_t[:idx])
print('Integrated rotational diffusion coefficient', D_integral)
plt.axhline(D_integral)
plt.xlabel(r'$\tau$')
"""
Explanation: We use the velocity autocorrelation function (VACF) of the transverse and
parallel components of the velocity.
Integrating those functions yields the transverse and parallel diffusion
coefficients.
The integration is stopped when it reaches a plateau. This is done by setting
a limit in time, that is highlighted by reference lines in the plots.
We proceed in the same fashion for the planar angle diffusion coefficient.
End of explanation
"""
|
chetnapriyadarshini/deep-learning | batch-norm/Batch_Normalization_Lesson.ipynb | mit | # Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning, a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
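As a concrete sketch of the normalization step described above (a minimal NumPy version, with the learnable scale `gamma` and shift `beta` at their usual initial values of 1 and 0; in practice `tf.layers.batch_normalization` handles all of this, plus population statistics, for you):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift."""
    mean = x.mean(axis=0)                  # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.RandomState(0)
batch = rng.normal(loc=10.0, scale=3.0, size=(64, 4))  # hypothetical activations
out = batch_norm(batch)
print(out.mean(axis=0))   # ~0 for every feature
print(out.std(axis=0))    # ~1 for every feature
```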
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
"""
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
"""
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
"""
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
"""
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
"""
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
"""
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
"""
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
"""
Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
"""
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calculated while training the model.
pred, corr = session.run([tf.argmax(self.output_layer, 1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
"""
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
"""
def plot_training_accuracies(*args, **kwargs):
"""
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train` on
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
"""
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
"""
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
"""
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
"""
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
python
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu)
"""
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
"""
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.sigmoid)
"""
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid)
"""
Explanation: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
"""
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
"""
train_and_test(False, 2, tf.nn.relu)
"""
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid)
"""
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
"""
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.relu)
"""
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.sigmoid)
"""
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
"""
train_and_test(True, 1, tf.nn.sigmoid)
"""
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
"""
train_and_test(True, 2, tf.nn.sigmoid)
"""
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
"""
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
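The full forward pass described above can be sketched in plain NumPy. This is only a minimal illustration of the math, not the TensorFlow implementation; the helper name `batch_norm_forward` is hypothetical, and the epsilon value mirrors the 0.001 used in this notebook:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, epsilon=1e-3):
    """Batch-normalize a (batch_size, num_features) array of layer outputs."""
    mu = x.mean(axis=0)                        # mu_B: per-feature batch mean
    var = x.var(axis=0)                        # sigma_B^2: per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # normalized values
    return gamma * x_hat + beta                # scale and shift with learnable params

# With gamma=1 and beta=0 (the initial values used in the notebook),
# the output is just the normalized input: per-feature mean ~0, std ~1.
x = np.random.normal(loc=5.0, scale=2.0, size=(128, 100))
y = batch_norm_forward(x, np.ones(100), np.zeros(100))
```

Note that the per-feature standard deviation of the output comes out slightly below 1 because of the epsilon term, matching the "(almost) standard deviation" discussion above.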
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
"""
def batch_norm_test(test_training_accuracy):
"""
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
"""
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
"""
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
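For intuition, the two equations above can be checked numerically outside of TensorFlow. The following is just an illustrative NumPy sketch of the same math (the array values are made up); with gamma initialized to one and beta to zero, the layer's output is simply the normalized batch:

```python
import numpy as np

# A toy "linear_output" batch: 4 samples, 3 features.
linear_output = np.array([[1.0, 2.0, 3.0],
                          [2.0, 4.0, 6.0],
                          [3.0, 6.0, 9.0],
                          [4.0, 8.0, 12.0]])
epsilon = 1e-3

# Equivalent of tf.nn.moments: per-feature mean and variance over the batch axis.
batch_mean = linear_output.mean(axis=0)
batch_variance = linear_output.var(axis=0)

# First equation: x_hat = (x - mu_B) / sqrt(sigma_B^2 + epsilon)
normalized = (linear_output - batch_mean) / np.sqrt(batch_variance + epsilon)

# Second equation: y = gamma * x_hat + beta (gamma = 1, beta = 0 at
# initialization, so the initial output equals the normalized values).
gamma, beta = np.ones(3), np.zeros(3)
output = gamma * normalized + beta

print(output.mean(axis=0))  # ~0 per feature
print(output.std(axis=0))   # ~1 per feature (slightly less, because of epsilon)
```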
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we test only one network, and instead of plotting its accuracy, we perform 200 predictions on test inputs, one input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
"""
batch_norm_test(True)
"""
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
"""
batch_norm_test(False)
"""
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because that is the value the network produces when it applies its learned weights to zeros at every layer.
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
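This single-input effect is easy to reproduce outside of TensorFlow — a NumPy sketch applying the same normalization equation to a "batch" containing one made-up sample:

```python
import numpy as np

epsilon = 1e-3
single_input = np.array([[0.37, -1.2, 5.0]])  # a "batch" of one sample

# With one sample, the batch mean IS the sample, and the batch variance is 0...
batch_mean = single_input.mean(axis=0)
batch_variance = single_input.var(axis=0)

# ...so every value normalizes to exactly zero.
normalized = (single_input - batch_mean) / np.sqrt(batch_variance + epsilon)
print(normalized)  # [[0. 0. 0.]]
```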
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculated during training.
End of explanation
"""
|
FeitengLab/EmotionMap | 2StockEmotion/3. 主成份分析(PCA)(伦敦).ipynb | mit | import numpy as np
from sklearn.decomposition import PCA
import pandas as pd
df = pd.read_csv('London.txt', sep='\s+')
# df.drop('id', axis=1, inplace=True)  # unlike the Manhattan data, the id column was already removed in preprocessing
df.tail()
"""
Explanation: Here I will use scikit-learn to perform PCA in a Jupyter notebook.
First, I need some examples to get familiar with it.
Get our data and analyze it
End of explanation
"""
tdf = df.iloc[:, 1:-2]
tdf.tail()
"""
Explanation: How to index a given part of a DataFrame has been a problem for me.
Refer to pandas/html/10min.html#selection-by-position to keep it in mind (links to files outside this dir don't work well)
file:///C:/work/python/%E6%96%87%E6%A1%A3/pandas/html/10min.html#selection-by-position
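For a quick reminder of how that positional slice behaves, here is a small self-contained example (the toy DataFrame is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2], 'a': [10, 20], 'b': [30, 40],
                   'x': [5, 6], 'y': [7, 8]})

# .iloc selects by integer position: all rows, columns from position 1 up to
# (but not including) the second-to-last -- the same pattern used for tdf above.
sub = df.iloc[:, 1:-2]
print(list(sub.columns))  # ['a', 'b']
```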
End of explanation
"""
pca = PCA(n_components=8)
pca.fit(tdf)
np.set_printoptions(precision=6, suppress=True)
print('Explained variance ratio of each principal component:', end=' ')
print(pca.explained_variance_ratio_)
emotion_score = pd.DataFrame(pca.transform(tdf))
# First principal component
pd.concat([df, emotion_score.loc[:, 0]], axis=1, join='inner').rename(index=str, columns={0: 'emotion_score'}).to_csv('London_score_raw.txt', index=None, sep='\t')
"""
Explanation: Take a single principal component; it explains 0.917864 of the variance.
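The explained-variance ratios reported by pca.explained_variance_ratio_ are each component's covariance-matrix eigenvalue divided by the sum of all eigenvalues. A minimal NumPy sketch on synthetic data (not the London data) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with one dominant direction, so the first component dominates.
base = rng.normal(size=(200, 1))
data = np.hstack([base * w for w in (3.0, 2.0, 0.5)])
data += rng.normal(scale=0.1, size=(200, 3))

# PCA via eigendecomposition of the covariance matrix.
cov = np.cov(data - data.mean(axis=0), rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
ratios = eigvals / eigvals.sum()
print(ratios)  # the first ratio should dominate
```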
End of explanation
"""
|
metpy/MetPy | v0.12/_downloads/e3a381e26c1f7c055ae74476848708cb/Station_Plot_with_Layout.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import pandas as pd
from metpy.calc import wind_components
from metpy.cbook import get_test_data
from metpy.plots import (add_metpy_logo, simple_layout, StationPlot,
StationPlotLayout, wx_code_map)
from metpy.units import units
"""
Explanation: Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting various parameters
(i.e. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
End of explanation
"""
with get_test_data('station_data.txt') as f:
data_arr = pd.read_csv(f, header=0, usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
names=['stid', 'lat', 'lon', 'slp', 'air_temperature',
'cloud_fraction', 'dew_point_temperature', 'weather',
'wind_dir', 'wind_speed'],
na_values=-99999)
data_arr.set_index('stid', inplace=True)
"""
Explanation: The setup
First read in the data. We use pandas.read_csv, naming the columns we need and
treating -99999 as a missing value. Setting the station ID as the index makes it
easy to pull out the stations we want below.
End of explanation
"""
# Pull out these specific stations
selected = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Pull out the rows for the whitelisted sites
data_arr = data_arr.loc[selected]
# Drop rows with missing winds
data_arr = data_arr.dropna(how='any', subset=['wind_dir', 'wind_speed'])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
"""
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
"""
# This is our container for the data
data = {}
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon'].values
data['latitude'] = data_arr['lat'].values
data['air_temperature'] = data_arr['air_temperature'].values * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'].values * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'].values * units('mbar')
"""
Explanation: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand to the plotting function later:
End of explanation
"""
# Get the wind components; they will be converted from m/s to knots by the
# layout, as is appropriate for the station plot
u, v = wind_components(data_arr['wind_speed'].values * units('m/s'),
data_arr['wind_dir'].values * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).fillna(10).values.astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = data_arr['weather'].fillna('')
data['present_weather'] = [wx_code_map[s.split()[0] if ' ' in s else s] for s in wx_text]
"""
Explanation: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions:
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
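The first conversion follows the meteorological convention, in which the direction is where the wind blows from. A rough NumPy sketch of the trig involved (an illustration only, not MetPy's actual implementation, which also carries units):

```python
import numpy as np

def wind_components(speed, direction_deg):
    """u (eastward) and v (northward) from speed and meteorological direction
    (degrees, the direction the wind is blowing FROM)."""
    wdir = np.deg2rad(direction_deg)
    u = -speed * np.sin(wdir)
    v = -speed * np.cos(wdir)
    return u, v

# A 10 m/s westerly wind (from 270 deg) blows toward the east: u = +10, v ~ 0.
u, v = wind_components(10.0, 270.0)
print(round(u, 6), round(v, 6))  # 10.0 0.0
```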
End of explanation
"""
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
"""
Explanation: All the data wrangling is finished, just need to set up plotting and go:
Set up the map projection and set up a cartopy feature for state borders
End of explanation
"""
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
"""
Explanation: The payoff
End of explanation
"""
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Fahrenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
"""
Explanation: or instead, a custom layout can be used:
End of explanation
"""
|
nicoguaro/AdvancedMath | notebooks/vector_calculus-mayavi.ipynb | mit | from mayavi import mlab
import numpy as np
mlab.init_notebook()
red = (0.9, 0.1, 0.1)
blue = (0.2, 0.5, 0.7)
green = (0.3, 0.7, 0.3)
"""
Explanation: Coordinate systems
Introduction
This notebooks provides a tutorial about (curvilinear) coordinate systems. We use Mayavi to do the visualization of some of the surface for constant coordinate values.
Use of Mayavi in this notebook
Mayavi can display either images or X3D elements on the notebook. The images are static and one cannot interact with them. The X3D output produces a fully interactive 3D scene. For information on how to interact with the scene, see here: http://www.x3dom.org/documentation/interaction/
Mayavi ships with some javascript files that can be installed as:
$ jupyter nbextension install --py mayavi --user
This will install the x3dom Javascript and CSS files locally. Note that you do not need to “enable” the extension or anything after you run the above. For more instructions and options see the Installation of Jupyter Extensions. Doing this allows one to view X3D files without a network connection.
To view Mayavi visualizations on the notebook one should first do:
from mayavi import mlab
mlab.init_notebook()
Subsequently, one may simply do:
s = mlab.test_plot3d()
s
When the init_notebook method is called it configures the Mayavi objects so they can be rendered on the Jupyter notebook.
More information: http://docs.enthought.com/mayavi/mayavi/tips.html
End of explanation
"""
cyl = mlab.figure(bgcolor=(1.0, 1.0, 1.0))
mlab.clf()
# Cylinder
phi, z = np.mgrid[0:2*np.pi:31j, -2:2*np.pi:31j]
x = 2*np.cos(phi)
y = 2*np.sin(phi)
z = z
cylinder = mlab.mesh(x, y, z, color=red)
# Vertical plane
rho, z = np.mgrid[0:3:31j, -2:2*np.pi:31j]
x = rho*np.cos(np.pi/4)
y = rho*np.sin(np.pi/4)
z = z
plane = mlab.mesh(x, y, z, color=blue, opacity=0.6)
# Horizontal plane
rho, phi = np.mgrid[0:3:31j, 0:2*np.pi:31j]
x = rho*np.cos(phi)
y = rho*np.sin(phi)
z = np.ones_like(x)
plane = mlab.mesh(x, y, z, color=green, opacity=0.6)
cyl
"""
Explanation: Cylindrical coordinates
The ISO standard 80000-2 recommends the use of $(ρ, φ, z)$, where $ρ$ is the radial coordinate, $\varphi$ the azimuth, and $z$ the height.
For the conversion between cylindrical and Cartesian coordinates, it is convenient to assume that the reference plane of the former is the Cartesian $xy$-plane (with equation $z=0$, and the cylindrical axis is the Cartesian $z$-axis. Then the $z$-coordinate is the same in both systems, and the correspondence between cylindrical $(\rho, \varphi)$ and Cartesian $(x, y)$ are the same as for polar coordinates, namely
\begin{align}
x &= \rho \cos \varphi \
y &= \rho \sin \varphi
\end{align}
in one direction, and
\begin{align}
\rho &= \sqrt{x^2+y^2} \
\varphi &= \begin{cases}
0 & \mbox{if } x = 0 \mbox{ and } y = 0\
\arcsin\left(\frac{y}{\rho}\right) & \mbox{if } x \geq 0 \
\arctan\left(\frac{y}{x}\right) & \mbox{if } x > 0 \
-\arcsin\left(\frac{y}{\rho}\right) + \pi & \mbox{if } x < 0
\end{cases}
\end{align}
in the other. It is also common to use $\varphi = atan2(y, x)$.
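A quick round-trip check of these formulas with NumPy, using atan2 to avoid the case analysis:

```python
import numpy as np

def cart_to_cyl(x, y, z):
    rho = np.hypot(x, y)
    phi = np.arctan2(y, x)   # handles every quadrant, including x = 0
    return rho, phi, z

def cyl_to_cart(rho, phi, z):
    return rho * np.cos(phi), rho * np.sin(phi), z

x, y, z = -1.0, 1.0, 2.0
rho, phi, zc = cart_to_cyl(x, y, z)
x2, y2, z2 = cyl_to_cart(rho, phi, zc)
print(np.allclose([x, y, z], [x2, y2, z2]))  # True
```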
End of explanation
"""
sph = mlab.figure(bgcolor=(1.0, 1.0, 1.0))
mlab.clf()
theta, phi = np.mgrid[0:np.pi:21j, 0:np.pi:21j]
# Sphere
x = np.sin(phi) * np.cos(theta)
y = np.sin(phi) * np.sin(theta)
z = np.cos(phi)
sphere = mlab.mesh(x, y, z, color=red)
sphere2 = mlab.mesh(-x, -y, z, representation='wireframe', color=red)
# Cone
x = theta/3 * np.cos(phi)
y = theta/3 * np.sin(phi)
z = theta/3
cone = mlab.mesh(x, y, z, color=blue, opacity=0.6)
cone2 = mlab.mesh(-x, -y, z, representation='wireframe', color=blue)
# Plane
x = theta/np.pi
y = theta/np.pi
z = phi - np.pi/2
plane = mlab.mesh(x, y, z, color=green)
sph
"""
Explanation: Spherical coordinates
The use of $(r, θ, φ)$ to denote radial distance, inclination (or elevation), and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2.
The spherical coordinates of a point can be obtained from its Cartesian coordinate system $(x, y, z)$
\begin{align}
r&=\sqrt{x^2 + y^2 + z^2} \
\theta &= \arccos\frac{z}{\sqrt{x^2 + y^2 + z^2}} = \arccos\frac{z}{r} \
\varphi &= \arctan \frac{y}{x}
\end{align}
The inverse tangent denoted in $\varphi = \arctan\left(\frac{y}{x}\right)$ must be suitably defined , taking into account the correct quadrant of $(x, y)$ (using the function atan2).
Conversely, the Cartesian coordinates may be retrieved from the spherical coordinates (radius $r$, inclination $\theta$, azimuth $\varphi$), where $r \in [0, \infty)$, $\theta \in [0, \pi]$ and $\varphi \in [0, 2\pi)$, by:
\begin{align}
x&=r \, \sin\theta \, \cos\varphi \
y&=r \, \sin\theta \, \sin\varphi \
z&=r \, \cos\theta
\end{align}
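And the corresponding round-trip for spherical coordinates:

```python
import numpy as np

def cart_to_sph(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)    # inclination in [0, pi]
    phi = np.arctan2(y, x)      # azimuth, quadrant-safe
    return r, theta, phi

def sph_to_cart(r, theta, phi):
    return (r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta))

p = (1.0, -2.0, 2.0)
r, theta, phi = cart_to_sph(*p)
print(r)                                           # 3.0
print(np.allclose(p, sph_to_cart(r, theta, phi)))  # True
```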
End of explanation
"""
ellip = mlab.figure(bgcolor=(1.0, 1.0, 1.0))
mlab.clf()
v, theta = np.mgrid[0:np.pi/2:21j, 0:np.pi:21j]
a = 3
b = 2
c = 1
# Ellipsoid
lam = 3
x = np.sqrt(a**2 + lam) * np.cos(v) * np.cos(theta)
y = np.sqrt(b**2 + lam)* np.cos(v) * np.sin(theta)
z = np.sqrt(c**2 + lam) * np.sin(v)
ellipsoid = mlab.mesh(x, y, z, color=red)
ellipsoid2 = mlab.mesh(x, y, -z, color=red)
ellipsoid3 = mlab.mesh(x, -y, z, representation='wireframe', color=red)
ellipsoid4 = mlab.mesh(x, -y, -z, representation='wireframe', color=red)
# Hyperboloid of one sheet
mu = 2
x = np.sqrt(a**2 + mu) * np.cosh(v) * np.cos(theta)
y = np.sqrt(b**2 + mu) * np.cosh(v) * np.sin(theta)
z = np.sqrt(c**2 + mu) * np.sinh(v)
hyper = mlab.mesh(x, y, z, color=blue)
hyper2 = mlab.mesh(x, y, -z, color=blue)
hyper3 = mlab.mesh(x, -y, z, representation='wireframe', color=blue)
hyper4 = mlab.mesh(x, -y, -z, representation='wireframe', color=blue)
# Hyperboloid of two sheets
nu = 1
x = np.sqrt(a**2 + nu) * np.cosh(v)
y = np.sqrt(c**2 + nu) * np.sinh(v) * np.sin(theta)
z = np.sqrt(b**2 + nu) * np.sinh(v) * np.cos(theta)
hyper_up = mlab.mesh(x, y, z, color=green)
hyper_down = mlab.mesh(-x, y, z, color=green)
hyper_up2 = mlab.mesh(x, -y, z, representation='wireframe', color=green)
hyper_down2 = mlab.mesh(-x, -y, z, representation='wireframe', color=green)
ellip
"""
Explanation: Ellipsoidal coordinates
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
"""
Explanation: References
Wikipedia contributors. "Cylindrical coordinate system." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 12 Dec. 2016. Web. 9 Feb. 2017.
Wikipedia contributors. "Spherical coordinate system." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 29 Jan. 2017. Web. 9 Feb. 2017.
Enthought Inc. Mayavi: Tips and Trick. Web. 9 Feb. 2017.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb | apache-2.0 | #@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
"""
Explanation: High-performance Simulation with Kubernetes
This tutorial will describe how to set up high-performance simulation using a
TFF runtime running on Kubernetes. The model is the same as in the previous
tutorial, High-performance simulations with TFF. The only difference is that
here we use a worker pool instead of a local executor.
This tutorial refers to Google Cloud's GKE to create the Kubernetes cluster,
but all the steps after the cluster is created can be used with any Kubernetes
installation.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/high_performance_simulation_with_kubernetes"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Launch TFF workers on GKE
Note: This tutorial assumes the user has an existing GCP project.
Creating a Kubernetes cluster
The following step only needs to be performed once. The cluster can be re-used for future workloads.
Follow the GKE instructions to create a container cluster. The rest of this tutorial assumes the cluster is named tff-cluster, but the actual name isn't important. Stop following the instructions when you get to "Step 5: Deploy your application".
Deploy the TFF worker application
The commands that interact with GCP can be run locally or in the Google Cloud Shell. We recommend the Google Cloud Shell since it doesn't require additional setup.
Run the following command to launch the Kubernetes application.
$ kubectl create deployment tff-workers --image=gcr.io/tensorflow-federated/remote-executor-service:latest
Add a load balancer for the application.
$ kubectl expose deployment tff-workers --type=LoadBalancer --port 80 --target-port 8000
Note: This exposes your deployment to the internet and is for demo purposes only. For production use, a firewall and authentication are strongly advised.
Look up the IP address of the load balancer on the Google Cloud Console. You'll need it later to connect the training loop to the worker app.
(Alternatively) Launch a Docker container locally
$ docker run --rm -p 8000:8000 gcr.io/tensorflow-federated/remote-executor-service:latest
Setting up the TFF environment
End of explanation
"""
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
input_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=input_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for round in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('Round {}: loss {}, round time {}'.format(round, metrics.loss, t2 - t1))
"""
Explanation: Define the model to train
End of explanation
"""
import grpc
ip_address = '0.0.0.0' #@param {type:"string"}
port = 80 #@param {type:"integer"}
channels = [grpc.insecure_channel(f'{ip_address}:{port}') for _ in range(10)]
tff.backends.native.set_remote_execution_context(channels)
"""
Explanation: Set up the remote executors
By default, TFF executes all computations locally. In this step we tell TFF to connect to the Kubernetes service we set up above. Be sure to copy the IP address of your service here.
End of explanation
"""
evaluate()
"""
Explanation: Run the training
End of explanation
"""
|
tleonhardt/CodingPlayground | python/cython/hello/hello_cython.ipynb | mit | %load_ext Cython
"""
Explanation: Using Cython in a Jupyter notebook
Cython can be used conveniently and interactively from a web browser through the Jupyter notebook.
To enable support for Cython compilation, install Cython and load the Cython extension from within Jupyter.
End of explanation
"""
%%cython
cdef int a = 0
for i in range(10):
a += i
print(a)
"""
Explanation: Then prefix a cell with the %%cython marker to compile it:
End of explanation
"""
%%cython -a
def intsum(values):
cdef int value, sum = 0
for value in values:
sum += value
return sum
print(intsum(range(5)))
"""
Explanation: You can show Cython's code analysis by passing the --annotate option (or its short form, -a):
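For reference, the pure-Python equivalent of intsum computes the same result; the annotated view above highlights how the typed cdef loop avoids the Python-object overhead that this version incurs:

```python
def intsum_py(values):
    # Same logic as the Cython intsum, but every addition goes through
    # Python integer objects instead of C ints.
    total = 0
    for value in values:
        total += value
    return total

print(intsum_py(range(5)))  # 10
```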
End of explanation
"""
|
albahnsen/ML_SecurityInformatics | notebooks/13-ModelDeployment.ipynb | mit | import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/phishing.csv.zip', 'r') as z:
f = z.open('phishing.csv')
data = pd.read_csv(f, index_col=False)
data.head()
data.phishing.value_counts()
"""
Explanation: 13 - Model Deployment
by Alejandro Correa Bahnsen
version 0.1, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License]
Agenda:
Creating and saving a model
Running the model in batch
Exposing the model as an API
Part 1: Phishing Detection
Phishing, by definition, is the act of defrauding an online user in order to obtain personal information by posing as a trustworthy institution or entity. Users usually have a hard time differentiating between legitimate and malicious sites because they are made to look exactly the same. Therefore, there is a need to create better tools to combat attackers.
End of explanation
"""
data.url[data.phishing==1].sample(50, random_state=1).tolist()
"""
Explanation: Creating features
End of explanation
"""
keywords = ['https', 'login', '.php', '.html', '@', 'sign']
for keyword in keywords:
data['keyword_' + keyword] = data.url.str.contains(keyword).astype(int)
"""
Explanation: Contain any of the following:
* https
* login
* .php
* .html
* @
* sign
End of explanation
"""
data['lenght'] = data.url.str.len() - 2
domain = data.url.str.split('/', expand=True).iloc[:, 2]
data['lenght_domain'] = domain.str.len()
domain.head(12)
data['isIP'] = (domain.str.replace('.', '') * 1).str.isnumeric().astype(int)
data['count_com'] = data.url.str.count('com')
data.sample(15, random_state=4)
"""
Explanation: Length of the URL
Length of the domain
Is the domain an IP address?
Number of 'com' occurrences
End of explanation
"""
X = data.drop(['url', 'phishing'], axis=1)
y = data.phishing
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
clf = RandomForestClassifier(n_jobs=-1, n_estimators=100)
cross_val_score(clf, X, y, cv=10)
clf.fit(X, y)
"""
Explanation: Create Model
End of explanation
"""
from sklearn.externals import joblib
joblib.dump(clf, '22_clf_rf.pkl', compress=3)
"""
Explanation: Save model
End of explanation
"""
from m22_model_deployment import predict_proba
predict_proba('http://www.vipturismolondres.com/com.br/?atendimento=Cliente&/LgSgkszm64/B8aNzHa8Aj.php')
"""
Explanation: Part 2: Model in batch
See 22_model_deployment.py
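The batch module itself is not shown here, but its feature-extraction step presumably mirrors the notebook code above. The following is a hypothetical, dependency-free sketch of that step only (the real module would also load 22_clf_rf.pkl with joblib and call clf.predict_proba on the resulting features):

```python
# Hypothetical sketch of the feature extraction a batch module like
# m22_model_deployment.py would perform before calling the pickled model.
KEYWORDS = ['https', 'login', '.php', '.html', '@', 'sign']

def extract_features(url):
    features = {'keyword_' + k: int(k in url) for k in KEYWORDS}
    features['lenght'] = len(url) - 2          # same (misspelled) name as training
    domain = url.split('/')[2]
    features['lenght_domain'] = len(domain)
    features['isIP'] = int(domain.replace('.', '').isnumeric())
    features['count_com'] = url.count('com')
    return features

f = extract_features('http://www.vipturismolondres.com/com.br/LgSgkszm64/B8aNzHa8Aj.php')
print(f['keyword_.php'], f['count_com'], f['isIP'])  # 1 2 0
```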
End of explanation
"""
from flask import Flask
from flask.ext.restplus import Api
from flask.ext.restplus import fields
from sklearn.externals import joblib
from flask.ext.restplus import Resource
from sklearn.externals import joblib
import pandas as pd
"""
Explanation: Part 3: API
Flask is considered more Pythonic than Django because Flask web application code is in most cases more explicit. Flask is easy to get started with as a beginner because there is little boilerplate code for getting a simple app up and running.
First we need to install some libraries
pip install flask-restplus
Load Flask
End of explanation
"""
app = Flask(__name__)
api = Api(
app,
version='1.0',
title='Phishing Prediction API',
description='Phishing Prediction API')
ns = api.namespace('predict',
description='Phishing Classifier')
parser = api.parser()
parser.add_argument(
'URL',
type=str,
required=True,
help='URL to be analyzed',
location='args')
resource_fields = api.model('Resource', {
'result': fields.String,
})
"""
Explanation: Create api
End of explanation
"""
clf = joblib.load('22_clf_rf.pkl')
@ns.route('/')
class PhishingApi(Resource):
@api.doc(parser=parser)
@api.marshal_with(resource_fields)
def get(self):
args = parser.parse_args()
result = self.predict_proba(args)
return result, 200
def predict_proba(self, args):
url = args['URL']
url_ = pd.DataFrame([url], columns=['url'])
# Create features
keywords = ['https', 'login', '.php', '.html', '@', 'sign']
for keyword in keywords:
url_['keyword_' + keyword] = url_.url.str.contains(keyword).astype(int)
url_['lenght'] = url_.url.str.len() - 2
domain = url_.url.str.split('/', expand=True).iloc[:, 2]
url_['lenght_domain'] = domain.str.len()
url_['isIP'] = (url_.url.str.replace('.', '') * 1).str.isnumeric().astype(int)
url_['count_com'] = url_.url.str.count('com')
# Make prediction
p1 = clf.predict_proba(url_.drop('url', axis=1))[0,1]
print('url=', url,'| p1=', p1)
return {
"result": p1
}
"""
Explanation: Load model and create function that predicts an URL
End of explanation
"""
app.run(debug=True, use_reloader=False, host='0.0.0.0', port=5000)
"""
Explanation: Run API
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bcc/cmip6/models/sandbox-1/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/82590448493c884f52ea0c7ddc5b446b/plot_publication_figure.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
# Daniel McCloy <dan.mccloy@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable, ImageGrid
import mne
"""
Explanation: Make figures more publication ready
In this example, we take some MNE plots and make some changes to make
a figure closer to publication-ready.
End of explanation
"""
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_stc = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-eeg-lh.stc')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname_evoked, 'Left Auditory')
evoked.pick_types(meg='grad').apply_baseline((None, 0.))
max_t = evoked.get_peak()[1]
stc = mne.read_source_estimate(fname_stc)
"""
Explanation: Suppose we want a figure with an evoked plot on top, and the brain activation
below, with the brain subplot slightly bigger than the evoked plot. Let's
start by loading some example data <sample-dataset>.
End of explanation
"""
evoked.plot()
stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t,
time_viewer=False, show_traces=False)
"""
Explanation: During interactive plotting, we might see figures like this:
End of explanation
"""
colormap = 'viridis'
clim = dict(kind='value', lims=[4, 8, 12])
# Plot the STC, get the brain image, crop it:
brain = stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t, background='w',
colorbar=False, clim=clim, colormap=colormap,
time_viewer=False, show_traces=False)
screenshot = brain.screenshot()
brain.close()
"""
Explanation: To make a publication-ready figure, first we'll re-plot the brain on a white
background, take a screenshot of it, and then crop out the white margins.
While we're at it, let's change the colormap, set custom colormap limits and
remove the default colorbar (so we can add a smaller, vertical one later):
End of explanation
"""
nonwhite_pix = (screenshot != 255).any(-1)
nonwhite_row = nonwhite_pix.any(1)
nonwhite_col = nonwhite_pix.any(0)
cropped_screenshot = screenshot[nonwhite_row][:, nonwhite_col]
# before/after results
fig = plt.figure(figsize=(4, 4))
axes = ImageGrid(fig, 111, nrows_ncols=(2, 1), axes_pad=0.5)
for ax, image, title in zip(axes, [screenshot, cropped_screenshot],
['Before', 'After']):
ax.imshow(image)
ax.set_title('{} cropping'.format(title))
"""
Explanation: Now let's crop out the white margins and the white gap between hemispheres.
The screenshot has dimensions (h, w, 3), with the last axis being R, G, B
values for each pixel, encoded as integers between 0 and 255. (255,
255, 255) encodes a white pixel, so we'll detect any pixels that differ
from that:
End of explanation
"""
# Tweak the figure style
plt.rcParams.update({
'ytick.labelsize': 'small',
'xtick.labelsize': 'small',
'axes.labelsize': 'small',
'axes.titlesize': 'medium',
'grid.color': '0.75',
'grid.linestyle': ':',
})
"""
Explanation: A lot of figure settings can be adjusted after the figure is created, but
many can also be adjusted in advance by updating the
:data:~matplotlib.rcParams dictionary. This is especially useful when your
script generates several figures that you want to all have the same style:
End of explanation
"""
# figsize unit is inches
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4.5, 3.),
gridspec_kw=dict(height_ratios=[3, 4]))
# alternate way #1: using subplot2grid
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [plt.subplot2grid((7, 1), (0, 0), rowspan=3),
# plt.subplot2grid((7, 1), (3, 0), rowspan=4)]
# alternate way #2: using figure-relative coordinates
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [fig.add_axes([0.125, 0.58, 0.775, 0.3]), # left, bot., width, height
# fig.add_axes([0.125, 0.11, 0.775, 0.4])]
# we'll put the evoked plot in the upper axes, and the brain below
evoked_idx = 0
brain_idx = 1
# plot the evoked in the desired subplot, and add a line at peak activation
evoked.plot(axes=axes[evoked_idx])
peak_line = axes[evoked_idx].axvline(max_t, color='#66CCEE', ls='--')
# custom legend
axes[evoked_idx].legend(
[axes[evoked_idx].lines[0], peak_line], ['MEG data', 'Peak time'],
frameon=True, columnspacing=0.1, labelspacing=0.1,
fontsize=8, fancybox=True, handlelength=1.8)
# remove the "N_ave" annotation
axes[evoked_idx].texts = []
# Remove spines and add grid
axes[evoked_idx].grid(True)
axes[evoked_idx].set_axisbelow(True)
for key in ('top', 'right'):
axes[evoked_idx].spines[key].set(visible=False)
# Tweak the ticks and limits
axes[evoked_idx].set(
yticks=np.arange(-200, 201, 100), xticks=np.arange(-0.2, 0.51, 0.1))
axes[evoked_idx].set(
ylim=[-225, 225], xlim=[-0.2, 0.5])
# now add the brain to the lower axes
axes[brain_idx].imshow(cropped_screenshot)
axes[brain_idx].axis('off')
# add a vertical colorbar with the same properties as the 3D one
divider = make_axes_locatable(axes[brain_idx])
cax = divider.append_axes('right', size='5%', pad=0.2)
cbar = mne.viz.plot_brain_colorbar(cax, clim, colormap, label='Activation (F)')
# tweak margins and spacing
fig.subplots_adjust(
left=0.15, right=0.9, bottom=0.01, top=0.9, wspace=0.1, hspace=0.5)
# add subplot labels
for ax, label in zip(axes, 'AB'):
ax.text(0.03, ax.get_position().ymax, label, transform=fig.transFigure,
fontsize=12, fontweight='bold', va='top', ha='left')
"""
Explanation: Now let's create our custom figure. There are lots of ways to do this step.
Here we'll create the figure and the subplot axes in one step, specifying
overall figure size, number and arrangement of subplots, and the ratio of
subplot heights for each row using :mod:GridSpec keywords
<matplotlib.gridspec>. Other approaches (using
:func:~matplotlib.pyplot.subplot2grid, or adding each axes manually) are
shown commented out, for reference.
End of explanation
"""
|
gammapy/PyGamma15 | tutorials/naima/naima_radiative_models.ipynb | bsd-3-clause | #prepare imports
import numpy as np
import astropy.units as u
import naima
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['lines.linewidth'] = 2
"""
Explanation: Naima Radiative Models
Welcome to the naima radiative models tutorial!
Useful references:
naima code at github
naima documentation
naima ICRC proceedings
Monday's talk on naima
2nd part of this tutorial: MCMC model fitting of a galactic non-thermal source
naima is named after a ballad composed by John Coltrane, so a few performances by him and others might be an appropriate soundtrack for the tutorial.
End of explanation
"""
naima.models.Synchrotron?
"""
Explanation: Note: Astropy units are used throughout this tutorial, see a short primer in the naima docs and a longer one in the astropy docs.
Exploring the naima radiative models
naima provides several radiative models and functional models than can be used together to compute the radiative output from populations of relativistic particle populations.
General concepts
In general, the radiative output for a given channel can be computed as:
$$L(\epsilon)=\iiiint E\,N(E, \vec{r})\,c\,n(\epsilon_0)\,\frac{\mathrm{d}
\sigma(E,\epsilon_0,\epsilon,\Omega)}{\mathrm{d} \Omega} \mathrm{d}\Omega\, \mathrm{d}\epsilon_0\, \mathrm{d} E\, \mathrm{d} V,
$$
where $E$ is the particle energy, $\epsilon$ the emitted photon energy, $\sigma$ the cross-section of the process, which might depend on the angle of interaction, $n(\epsilon_0)$ is the density of target particles or photons at energy $\epsilon_0$, and $N(E)$ is the energy distribution of the relativistic particle population. In a one-zone model, the properties are assumed homogeneous over the whole volume, so one fewer integral is needed:
$$L(\epsilon)=\iiint E\,N(E)\,c\,n(\epsilon_0)\,\frac{\mathrm{d}
\sigma(E,\epsilon_0,\epsilon,\Omega)}{\mathrm{d} \Omega} \mathrm{d}\Omega\, \mathrm{d}\epsilon_0\, \mathrm{d} E,
$$
One of the most prevalent mechanisms of particle acceleration is diffusive shock acceleration, which, for a strong shock, results in a power-law particle energy distribution with index $p=2$. The acceleration timescale increases with particle energy, and at some energy it will exceed the timescale of the particle's radiative or non-radiative losses, or the age of the source. At this energy the particle distribution will show a quasi-exponential cutoff.
Energy losses might modify the present-age particle distribution, typically making the distribution softer. This might only happen at higher energies, giving rise to a broken power-law energy distribution.
For a given relativistic particle population, there are different channels that can result in photon emission depending on the target of the interaction:
Synchrotron: Charged particles will radiate as they girate in magnetic fields.
Inverse Compton: Upscattering of seed photon fields by electrons.
Bremsstrahlung: Charged particles radiate as they are accelerated by nearby particles.
Pion Decay: Proton-proton interactions results in pions that decay into gamma-rays.
naima models
naima has the following radiative models available in the naima.models module:
Synchrotron
InverseCompton
PionDecay
Bremsstrahlung
All of them take as first argument a particle distribution function, which can be one of the functional models in naima (currently PowerLaw, BrokenPowerLaw, ExponentialCutoffPowerLaw, ExponentialCutoffBrokenPowerLaw, LogParabola). You can find out the arguments they take by looking at their docstrings:
End of explanation
"""
ECPL = naima.models.ExponentialCutoffBrokenPowerLaw(amplitude=1e36/u.eV, e_0=1*u.TeV, alpha_1=2.0,
alpha_2=3.0, e_break=0.2*u.TeV, e_cutoff=50*u.TeV,
beta=2)
#check the shape of the particle distribution
electron_energy = np.logspace(-3, 3, 1000) * u.TeV
plt.loglog(electron_energy, (electron_energy**2 * ECPL(electron_energy)).to('erg'))
plt.axvline(0.2, ls=':', lw=1, c='k'); plt.axvline(50, ls=':', lw=1, c='k')
plt.gca().set_ylim(bottom=1e42)
plt.xlabel('Particle energy [TeV]')
plt.ylabel('$E^2 N(E)$ [erg]')
"""
Explanation: Leptonic models
The first step is to define the particle distribution we will use, and we'll start with a broken power-law with an exponential cutoff. The amplitude must be in units of particles per unit energy:
End of explanation
"""
SY = naima.models.Synchrotron(ECPL, B=10*u.uG)
photon_energy = np.logspace(-5, 5, 100)*u.eV
SY_sed0 = SY.sed(photon_energy, distance=1*u.kpc)
plt.loglog(photon_energy, SY_sed0)
plt.axvspan(1e3,1e4, fc='k', alpha=0.3)
plt.xlabel('Photon energy [eV]')
plt.ylabel('$E^2 dN/dE\ [\mathrm{erg\ cm^{-2}\ s^{-1}}]$')
"""
Explanation: Now we can define the radiative channels we want to compute with this particle distribution. We will consider it to be an electron population, and compute its synchrotron SED in a magnetic field strength of $B=10\mu G$ at a distance of 1 kpc:
End of explanation
"""
IC = naima.models.InverseCompton(ECPL,
seed_photon_fields=['CMB',
['FIR', 27*u.K, 0.415*u.eV/u.cm**3],
['NIR', 2800*u.K, 0.802*u.eV/u.cm**3]])
# The particle_distribution attribute in IC and SY now point to the same object:
print('Are the SY and IC particle distributions the same object? {0}'.format(
IC.particle_distribution is SY.particle_distribution))
# and we compute the SED from optical to TeV
photon_energy = np.logspace(-5, 15, 100) * u.eV
f, ax = plt.subplots(1)
# plot the total SED from SY and IC
ax.loglog(photon_energy, SY.sed(photon_energy), label='Synchrotron')
ax.loglog(photon_energy, IC.sed(photon_energy), label='IC (total)')
# plot the SEDs from each of the seed photon fields with seed argument
ax.loglog(photon_energy, IC.sed(photon_energy, seed='CMB'),
label='IC (CMB)', ls='--', lw=1)
ax.loglog(photon_energy, IC.sed(photon_energy, seed='FIR'),
label='IC (FIR)', ls='--', lw=1)
ax.loglog(photon_energy, IC.sed(photon_energy, seed='NIR'),
label='IC (NIR)', ls='--', lw=1)
ax.set_xlabel('Photon energy [eV]')
ax.set_ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
ax.legend(loc='lower left')
ax.set_ylim(bottom=1e-14)
"""
Explanation: Now we can use the same particle distribution in an Inverse Compton class to compute its IC emission. We will use the interstellar radiation fields computed from the GALPROP model at a galactocentric distance of ~6 kpc:
Name | Temperature | Energy density
-----|-------------|---------------
CMB | 2.72 K | 0.216 eV/cm$^3$
Far infrared | 27 K | 0.415 eV/cm$^3$
Near infrared | 2800 K | 0.802 eV/cm$^3$
The total IC emission should be slightly lower than the synchrotron emission, given that (in the Thomson regime for IC) the ratio between the synchrotron and IC luminosities corresponds to the ratio of target energy densities:
$$\frac{L_{sy}}{L_{IC}} = \frac{u_B}{u_\gamma} = \frac{B^2}{8\pi u_\gamma} \simeq 1.73$$
End of explanation
"""
ECPL.e_cutoff = 50*u.TeV
plt.loglog(photon_energy, SY.sed(photon_energy) + IC.sed(photon_energy), label='$E_\mathrm{c}$ = 50 TeV')
ECPL.e_cutoff = 100*u.TeV
plt.loglog(photon_energy, SY.sed(photon_energy) + IC.sed(photon_energy), label='$E_\mathrm{c}$ = 100 TeV')
plt.legend(loc='best')
plt.xlabel('Photon energy [eV]')
plt.ylabel('$E^2 dN/dE$ [erg/cm2/s]')
plt.ylim(bottom=1e-12)
"""
Explanation: Now we will explore how to modify the parameters of an already defined radiative model. We can modify the values of the particle distribution parameters and this will propagate to the radiative classes:
End of explanation
"""
f, ax = plt.subplots(1)
print('before setting We -- ECPL.amplitude = {0}'.format(ECPL.amplitude))
IC.set_We(5e48 * u.erg, Eemin=0.1*u.TeV) # 5x10^48 erg in electrons above 100 GeV
print('after setting We -- ECPL.amplitude = {0}'.format(ECPL.amplitude))
ax.loglog(photon_energy, SY.sed(photon_energy) + IC.sed(photon_energy), label=r'$5\times10^{48}$ erg', c='k')
IC.set_We(5e47 * u.erg, Eemin=0.1*u.TeV) # 5x10^47 erg in electrons above 100 GeV
ax.loglog(photon_energy, SY.sed(photon_energy) + IC.sed(photon_energy), label=r'$5\times10^{47}$ erg', c='r')
#plot label stuff
ax.set_xlabel('Photon energy [eV]')
ax.set_ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
ax.legend(loc='lower left')
ax.set_ylim(bottom=1e-13)
f, ax = plt.subplots(1)
for B in [0.1, 1, 10, 100, 1000]*u.uG:
SY.B = B
ax.loglog(photon_energy, SY.sed(photon_energy) + IC.sed(photon_energy),
label=r'$B$ = {0}'.format(B))
#plot label stuff
ax.set_xlabel('Photon energy [eV]')
ax.set_ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
ax.legend(loc='best')
ax.set_ylim(bottom=1e-15)
"""
Explanation: You can also modify the normalization by specifying the total energy in electrons with the set_We function:
End of explanation
"""
PL = naima.models.PowerLaw(1e36/u.eV, e_0=1*u.TeV, alpha=3.0)
IC_KN = naima.models.InverseCompton(PL, seed_photon_fields=[
['CMB', 2.7*u.K, 1*u.eV/u.cm**3],
['FIR', 70*u.K, 1*u.eV/u.cm**3],
['NIR', 1800*u.K, 1*u.eV/u.cm**3]])
# Set maximum and minimum electron energies to a very wide range
IC_KN.Eemax = 10*u.PeV
IC_KN.Eemin = 1*u.MeV
# compute the three IC spectra
photon_energy = np.logspace(-6, 3, 100) * u.TeV
for seed in ['CMB', 'FIR', 'NIR']:
plt.loglog(photon_energy, IC_KN.sed(photon_energy, seed=seed),
               label='$T_{{0}}$ = {0:.1f}'.format(IC_KN.seed_photon_fields[seed]['T']), lw=1)
# plot labels
plt.legend(loc='best')
plt.ylim(bottom=1e-12)
plt.xlabel('Photon energy [TeV]')
plt.ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
"""
Explanation: Klein-Nishina demo
The regime of Inverse Compton radiation depends on the product of the energy of the seed photon $\epsilon_0$ and the energy of the electron $E_e$ in units of $m_e c^2$: $\kappa_0=\epsilon_0 E_e/(m_e^2 c^4)$. For $\kappa_0 \ll 1$ IC proceeds in the Thomson regime (with cross section increasing with energy), and at $\kappa_0 \gg 1$ it proceeds in the Klein-Nishina regime, in which the emission from a given electron energy is close to monochromatic and the cross section diminishes rapidly with energy, leading to a break in the photon spectrum in the transition between the two regimes.
End of explanation
"""
proton_dist = naima.models.ExponentialCutoffPowerLaw(1e36/u.eV, e_0=1*u.TeV, alpha=2.1, e_cutoff=100*u.TeV)
PP = naima.models.PionDecay(proton_dist, nh=0.1*u.cm**-3)
photon_energy = np.logspace(-2, 6, 100) * u.GeV
plt.loglog(photon_energy, PP.sed(photon_energy))
#plot label stuff
plt.xlabel('Photon energy [GeV]')
plt.ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
plt.ylim(bottom=1e-14)
"""
Explanation: Pion Decay
Now we can explore the pion decay emission from a relativistic proton distribution. We define a new particle distribution:
End of explanation
"""
ECPL = naima.models.ExponentialCutoffPowerLaw(amplitude=1e36/u.eV, e_0=1*u.TeV, alpha=2.5,
e_cutoff=50*u.TeV, beta=2)
IC = naima.models.InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 90*u.K, 0.415*u.eV/u.cm**3]])
#IC.nEed /=10
SY = naima.models.Synchrotron(ECPL, B=10*u.uG)
photon_energy = np.logspace(-5, 15, 100)*u.eV
from ipywidgets import interact
from IPython.display import display
@interact(logA=(33.,39.,0.2), alpha=(1.,4.,0.1), log_e_cutoff=(-1,3,0.2), u_FIR=(0.1,5.,0.1), T_FIR=(10,300,10), logB=(-1,2,0.1))
def model(logA=36, alpha=2.5, log_e_cutoff=1.8, u_FIR=0.4, T_FIR=90, logB=1):
ECPL.amplitude = 10**logA /u.eV
ECPL.alpha = alpha
ECPL.e_cutoff = 10**log_e_cutoff * u.TeV
IC.seed_photon_fields['FIR']['u'] = u_FIR * u.eV/u.cm**3
IC.seed_photon_fields['FIR']['T'] = T_FIR * u.K
SY.B = 10**logB*u.uG
f,ax = plt.subplots(1)
ax.loglog(photon_energy, SY.sed(photon_energy))
ax.loglog(photon_energy, IC.sed(photon_energy))
ax.loglog(photon_energy, IC.sed(photon_energy, seed='CMB'), lw=1, ls='--')
ax.loglog(photon_energy, IC.sed(photon_energy, seed='FIR'), lw=1, ls='--')
plt.axvspan(1e2,1e4, fc='0.5', alpha=0.25, lw=0)
plt.axvspan(1e8,1e10, fc='r', alpha=0.25, lw=0)
plt.axvspan(1e11,1e13, fc='b', alpha=0.25, lw=0)
ax.set_ylim(bottom=1e-13)
ax.set_xlabel('Photon energy [eV]')
ax.set_ylabel('$E^2 dN/dE$ [erg s$^{-1}$ cm$^{-2}$]')
plt.show()
print('W_e(E_e>1 TeV) = {0:.3g}'.format(IC.compute_We(Eemin=1*u.TeV)))
print('E_c = {0:.3g}'.format(ECPL.e_cutoff))
print('B = {0:.3g}'.format(SY.B))
print('u_FIR = {0:.3g}'.format(IC.seed_photon_fields['FIR']['u']))
print('T_FIR = {0:.3g}'.format(IC.seed_photon_fields['FIR']['T']))
"""
Explanation: Interactive widget demo
You will need to install ipywidgets for this to work
End of explanation
"""
|
trangel/Insight-Data-Science | general-docs/recommendation-validation/recommender_systems-validation.ipynb | gpl-3.0 | # ipython notebook foo to embed figures
%matplotlib inline
from validation_figs import *
# generate a small, random user-item matrix for illustration
uim, _ = uim_data()
"""
Explanation: Recommender Systems: Validation
The goal of this document is to provide a solid basis for validating recommender systems. Along the way we aim to highlight pitfalls and best practices, as well as trade-offs between different approaches. While the hope is to cover all pertinent points, we may not - please let us know if you find relevant additions. We also welcome comments and reports of typos, errors, or bugs of any kind.
Outline
Introduction
Testing your recsys: holding out data
Leave-k-out
Leave-one-out
M-fold
Validation Sets
Summary: Holdout recommendations and best practices
<br>
Evaluation Metrics
Error Metrics
Classification Metrics
Rank Metrics
<br>
Qualitative Approaches
Heuristics
Visualizations
<br>
Best Practices
Dependencies
This notebook should ship with 'validation_figs.py', 'algorithms.py' and 'utils.py', which will provide some basic algorithms and functions used in this demo. In addition this notebook requires the following libraries:
- numpy
- matplotlib
- seaborn
- scikit-learn
Implementation Details
The code shown below is not optimized for performance, but instead for interpretability. It's left to the user to optimize code for a given problem. Important note - below we work with dense (normal) matrices, but for these problems sparse representations can save a lot of memory and computation!
<a id='intro'></a>
Introduction
Evaluating recommendation systems can be hard! There are a number of issues that don't always arise in other Machine Learning applications.
To start, splitting the data into train and test sets is different than some standard approaches. Almost all recommenders need a user to rate at least one item in order to make predictions. This means one cannot simply holdout entire rows from the user-item data matrix to formulate a test set.
Generally the data used for recommendation has some level of implicitness. Users may like some items but not necessarily buy/click/rate/use them, or they may not have been exposed to the items, or they may be waiting to take some action, ... In short, an empty entry in a user-item matrix does not always have a clear meaning.
Probably the most unique aspect of the recommendation problem is practical constraints - it's often not possible to show all good recommendations to users. Instead, we must provide the top N recommendations to users, where N is a relatively small number of items. This naturally leads us to rank our predictions, and to consider metrics which weigh the quality of those recommendations.
<a id='splitting'></a>
Testing your recsys: holding out data
Evaluating a recommendation system follows a similar pattern to evaluations of most Machine Learning models. The gold standard is to evaluate the model's performance on heldout 'test' data. This means we need to choose a method by which we choose a set of training data and test data.
Some jargon - For a data matrix X with N rows by M columns, ML practitioners usually call N 'samples' and M 'features'. Often in recommenders, the primary form of data is again X where N is called 'users' and M is called 'items'.
Example data
To illustrate the concepts associated with holdout for recommenders, let's generate an example user-item matrix. This matrix has 20 users and 100 items. In addition, we set the matrix to be 10% dense, that is, only 10% of the possible user-item pairs have data. Real-world examples can be far sparser, often at the one percent level or lower.
End of explanation
"""
# Leave-k-out demo
leave_k_out(uim.copy(), _)
"""
Explanation: Defining train and test sets.
For recommenders, the split is often slightly different from the (more) standard approach of splitting the data matrix on rows. Many recommenders, in fact, need to have SOME item-level information for a user, or they can't make any prediction at all. So instead, we need to split up the data in more creative ways.
<a id='leave-k-out'></a>
Leave-k-out
A common strategy for splitting recommendation data into training and test sets is leave-k-out. Here, a split percentage is chosen (e.g., 80% train, 20% test) and the test percentage is selected randomly from the user-item pairs with non-zero entries.
Choosing an 80%/20% split, we can see the test data highlighted in our example below:
End of explanation
"""
# M-fold demo
M_fold(uim.copy(), _)
"""
Explanation: Best practices
* Similar to K-fold cross validation, leave-k-out should be repeated multiple times. A minimal version is to repeat $N = 1/f$ times, where $f$ is the test fraction - so for 20% test splits, repeat $N=5$ times. However, since the strategy is random (not stratified), it is best to repeat many more times. One can look at the variance of metrics to see how well they are estimated.
<a id='leave-one-out'></a>
Leave-one-out
The extreme case of leave-k-out is leave-one-out, where only one user-item entry is held out for testing. If feasible, training and testing are repeated for every user-item entry in the entire data set! The obvious downside of this approach is the computational demand, which can be large for big datasets. Sometimes, this means people will adopt a strategy of doing leave-one-out only once per user, or of randomly selecting one entry at a time and repeating as many times as practical.
Why would anyone do this? In some cases, it's hard to have enough training data to build a good model and one may need to maximize the amount of training data available. Alternatively, another situation where leave-one-out is appropriate is when almost all the users have only one or two ratings. This means that under typical leave-k-out, a large number of users will have no entries by which to make predictions. Using leave-one-out generally requires knowledge of and assumptions about the data and model, so Data Scientist beware! Leave-one-out can be quite prone to overfitting.
<a id='m-fold'></a>
M-fold Cross Validation
A useful variant that combines leave-k-out and standard K-fold cross validation is (sometimes) called M-fold cross validation. The idea is simple - split the data into a training set and a holdout set. On the holdout set, perform leave-k-out.
On our example data, lets perform a 50% training and holdout split, and 50% leave-k-out on the holdout set. This means we are training on 50% of the data, and testing on 25%:
End of explanation
"""
from utils import fetch_movielens_data
# get user-review (i.e., user-item) matrix
urm = fetch_movielens_data()
print 'User-review matrix, shape: (%d, %d)' % urm.shape
"""
Explanation: Best practices
* As with our best practices for leave-k-out, we should repeat the tests on the holdout data for a given fold multiple times.
M-fold cross validation has some nice advantages. First, the training data is complete - that is, the training data for each user is not missing any data (unlike standard leave-k-out). This might be an advantage when the typical number of entries per user is quite low. Second, integrating the stratified nature of K-fold CV gives us more assurance that we are traversing the dataset, possibly leading to fewer test evaluations ultimately. The downside is that one might have to put more care into considering the exact splits of training, holdout, and test proportions.
<a id='validation'></a>
Validation Sets
Many machine learning models require hyperparameter tuning in order to deliver good performance. The same is true for a subset of recommender systems. In these cases, we need a third partitioning of the data - a validation set. The validation set is used during the training process to set the hyperparameters. This strategy is standard practice to ensure the best generalized performance.
Selecting a validation set is similar to the process for any other machine learning application. From the training set, a validation set is selected in a manner similar to how the initial train/test split was generated. Often the only difference is the relative proportions of the train/test and train/validation splits.
<a id='holdout summary'></a>
Summary: Holdout recommendations and best practices
Generally we recommend using leave-k-out or M-fold validation. Use leave-one-out only when necessary.
Leave-k-out might be well suited to more balanced (# of user entries fairly consistent) and more dense (# of user entries fairly large) data, while M-fold might perform better elsewhere.
Think carefully when deciding strategy and split proportions:
How much training data does the model need?
What is the variance of your estimation of performance metrics?
If the data is large, what are the computational and practical trade-offs?
Does your strategy bias performance towards different user populations?
<a id='metrics'></a>
Evaluation Metrics
There is a large variety of evaluation metrics we might consider for different recommendation tasks. There is no single, slam-dunk metric by which recommenders can be evaluated. Instead, choosing one requires a bit of thinking: What is the goal of the system? What do zeros in the matrix mean? How much implicitness and exposure are involved in the data collection?
These are tough questions. In a real-world, production situation, one or more of the below metrics may be used when building out recommenders but ultimately the gold standard is A/B testing. However, A/B testing every possible model is impractical. Moreover, practitioners developing new recommender models need a means by which to iterate.
This section will walk through various options for metrics, with the goal of giving a high level intuition but enough detail when necessary. Here we will evaluate the metrics on one data set and one train/test split, using a simple cosine similarity user-user recommender. To thoroughly evaluate a model, we would do the splitting and metric evaluation multiple times (as needed by the holdout method). Comparing models is then possible using these (more robust) metrics.
Data, Train/Test Sets, and Collaborative Filtering
Below, we will use the MovieLens 100k dataset. This dataset represents a 943-user by 1682-movie matrix, where the entries are non-binary ratings of movies valued 1-5. We ship the necessary data file with this notebook. Let's load the data:
End of explanation
"""
import numpy as np
from utils import train_test_split
train, test = train_test_split(urm)
# note - in this implementation of leave-k-out, the train, test data
# have the same shape, but test data is zeroed out in the training set.
# In addition, we have imputed values where there are no entries in the
# matrix.
#
# Remember - this is MovieLens 100k, meaning 100k ratings.
print 'Train shape: (%d, %d)' % train.shape
print 'Test shape: (%d, %d)' % test.shape
print 'Zero-valued Train entries: %d' % np.where(train == 0)[0].size
print 'Non-zero Train entries: %d' % np.where(train > 0)[0].size
print 'Test entries: %d' % np.where(test > 0)[0].size
print 'Total entries in imputed user-item matrix: %d' % (train.shape[0] * train.shape[1])
"""
Explanation: For demonstration purposes, let's do one split of training and test sets using an 80%/20% ratio.
End of explanation
"""
from algorithms import CosSimilarityRecommender
# init, fit to training data, predict
rec = CosSimilarityRecommender(kind='user')
rec.fit(train)
predictions = rec.predict(train)
"""
Explanation: Finally, let's run our working model - user-user collaborative filtering. This model uses the cosine similarity between users to predict the ratings of items. A detailed discussion is beyond the scope of this document but this blog post has a great discussion, and (coming soon) recommender_systems-algorithms.ipynb will cover the model and possible alternatives.
End of explanation
"""
# alternatively:
# from sklearn.metrics import mean_squared_error, mean_absolute_error
# mean squared error
def mse(pred, actual):
    return np.mean((pred - actual) ** 2.)
# mean absolute error
def mae(pred, actual):
    return np.mean(np.abs(pred - actual))
nonzero_test = test > 0
print 'MSE = %0.3f' % mse(predictions[nonzero_test], test[nonzero_test])
print 'MAE = %0.3f' % mae(predictions[nonzero_test], test[nonzero_test])
"""
Explanation: <a id='error'></a>
Error metrics
Error metrics are among the most used and well-known metrics amongst Machine Learning practitioners. They are useful because they provide (in some sense) a summary of how model predictions perform across a given test set. The most common is the Mean Squared Error (MSE) or, similarly, the Root Mean Squared Error (RMSE). Let's define the MSE:
$$ {\rm MSE} = \frac{1}{N}\sum_{i=1}^{N} (p_i - a_i)^2 $$
Here we sum the squared difference between predicted value $p_i$ and actual value $a_i$ over all $N$ test examples. The RMSE is simply the square root of this value. The RMSE is perhaps more interpretable because it is on the same scale as the data, but it nevertheless contains the same information as the MSE.
While the MSE is easy to compute, it can suffer from very large error contributions from outliers - squaring the error puts emphasis on large deviations. A more robust error metric is the Mean Absolute Error (MAE):
$$ {\rm MAE} = \frac{1}{N}\sum_{i=1}^{N} |p_i - a_i| $$,
which is easy to implement and interpret.
Let's compute the MSE and MAE:
End of explanation
"""
def binarize(arr, tol):
"""
Convert continous valued array to binary.
"""
arr[arr < tol] = 0
arr[arr >= tol] = 1
return arr
from sklearn.metrics import recall_score, precision_score, f1_score, roc_curve, roc_auc_score
# get non-zero test entries, since we can't evaluate for unsubmitted ratings!
y_true = test[nonzero_test]
y_pred = predictions[nonzero_test]
# Continuous version of predictions, scaled to 0, 1
y_pred_scaled = (y_pred - 1.) / 4.
# binarize true values and predictions
y_pred_binarized = binarize(y_pred.copy(), 3)
y_true_binarized = binarize(y_true.copy(), 3)
# recall, precision, and f1 are definied for binarized (non-continuous) values
# of predictions
print 'Recall: %0.3f' % recall_score(y_true_binarized, y_pred_binarized)
print 'Precision: %0.3f' % precision_score(y_true_binarized, y_pred_binarized)
print 'F1 score: %0.3f' % f1_score(y_true_binarized, y_pred_binarized)
# roc-auc is more accurate when using continuous predictions.
auc_score = roc_auc_score(y_true_binarized, y_pred_scaled)
print 'ROC-AUC: %0.3f' % auc_score
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_true_binarized, y_pred_scaled)
pl.figure(figsize=(8, 8))
pl.plot(fpr, tpr, lw=2)
pl.plot([0, 1], [0, 1], 'k--')
pl.title('ROC (AUC=%0.3f)' % auc_score)
pl.xlabel('FPR')
pl.ylabel('TPR')
pl.show()
"""
Explanation: Ok, so what do these numbers mean? It's hard to say, exactly, because they are compressed summaries of the data. Half of the predictions may be doing perfectly and half badly, or all may be performing equally well. This is an argument for always visualizing (or understanding) your errors. MAE is a bit more interpretable; at face value it says the model is making average errors on the order of 0.8. For a scale from 1-5, that seems fairly bad! Again, better dig into the errors.
Binary vs. Non-binary data: MSE and RMSE have clear connections to non-binary data - if we can accurately estimate the user ratings/plays/clicks it would be very useful! These metrics are also valid for binary data, provided the predictions/estimates of the binary labels are continuous. If they are not, classification metrics (below) do just as well.
<a id='class'></a>
Classification Metrics
Error metrics are great as they can provide a nice summary of how well the model generally performs. However, one natural goal of recommender systems is to distinguish good recommendations from bad ones. In the binary case, this is very natural - a "1" is a good recommendation. The distinction between bad/good (0/1) makes perfect sense for binary data. For non-binary data, a threshold must be chosen such that all ratings above the threshold are good and called "1", while the rest are bad with label "0".
This good/bad, positive/negative framework is the same as binary classification in other machine learning settings. Therefore, standard classification metrics could be useful here. Two basic metrics are precision and recall, which are defined by True Positives $TP$, False Positives $FP$, True Negatives $TN$, and False Negatives $FN$:
$$ {\rm Precision} = \frac{TP}{TP + FP}, {\rm Recall} = \frac{TP}{TP + FN}$$
Precision is the fraction of good/positive/one labels you got correct, out of all the samples you label positive. Recall is the fraction of good/positive/one labels you got correct, out of all the true positives.
One metric that combines these two measurements is the $F_1$ score:
$$ F_1 = 2 \frac{{\rm precision} \times {\rm recall}}{{\rm precision} + {\rm recall}}$$
which can be viewed as a weighted average of precision and recall. A perfectly good classifier has an $F_1$ score of 1, while a perfectly bad classifier has an $F_1$ score of zero.
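To make the definitions concrete, here is a small sketch computing all three from raw counts on toy binary labels (hypothetical data, not the MovieLens ratings); it agrees with sklearn's `precision_score`, `recall_score`, and `f1_score` used above:

```python
def precision_recall_f1(y_true, y_pred):
    # y_true, y_pred: lists of 0/1 labels
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```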
All these metrics are clear once we have defined the threshold of good/bad in our predictions. For instance, in a binary situation our labels are 0, 1 while our predictions are (almost always) continuous from 0 - 1. To compare the predictions, we select a threshold (say 0.5), above which we call predictions 1 and below 0.
The choice of the threshold is left to the user, and can be varied depending on desired tradeoffs (for a more in-depth discussion of this threshold, see this blog). Therefore, to summarize classification performance generally, we need metrics that provide summaries across all thresholds. One tool for generating such a metric is the Receiver Operating Characteristic (ROC) curve. It is a plot of the True Positive Rate $TPR$ versus the False Positive Rate $FPR$:
$$ TPR = \frac{TP}{TP + FN}, FPR = \frac{FP}{FP + TN}$$
A summary metric for the ROC curve is its integral - the Area Under the Curve (ROC-AUC).
Lets examine these metrics for the data and model we have been following:
End of explanation
"""
def recall_at_k(true, pred, user_ids, k, tol=3.):
unique_users = np.unique(user_ids)
pred_binarized = binarize(pred.copy(), tol)
recalls = np.zeros(unique_users.size)
for i in range(unique_users.size):
user_ind = user_ids == unique_users[i]
user_true = true[user_ind]
user_pred = pred[user_ind]
user_pred_binarized = pred_binarized[user_ind]
ranked_ind = np.argsort(-user_pred)[:k]
recalls[i] = recall_score(user_true[ranked_ind], user_pred_binarized[ranked_ind])
return np.mean(recalls[recalls > 0])
user_ids = np.tile(np.array([np.arange(test.shape[0])]).T, (1, test.shape[1]))[nonzero_test]
print 'recall @4 : %0.3f' % recall_at_k(y_true_binarized, y_pred, user_ids, 4)
print 'recall @8 : %0.3f' % recall_at_k(y_true_binarized, y_pred, user_ids, 8)
print 'recall @16 : %0.3f' % recall_at_k(y_true_binarized, y_pred, user_ids, 16)
print 'recall @32 : %0.3f' % recall_at_k(y_true_binarized, y_pred, user_ids, 32)
"""
Explanation: Interesting! According to precision, our positive predictions are about 88% pure, and recall says we called 87% of the truly positive examples 'positive'. The F1 score is correspondingly high.
<a id='rank'></a>
Rank-based Metrics
Classification metrics speak highly to recommenders - we want to know if a recommendation is good or bad. However, generic classification metrics assess the overall performance of the recommender across all items. In a realistic setting, one can only recommend $K$ items out of a possible (and large!) number $N$.
Given that we can only show a user a limited number of items, it therefore makes sense to use metrics that rank the best set of possible items. For each user, we can prepare a rank ordered list of items that we might recommend. Under this condition, what metrics might we produce?
There are several options, and the correct (set of) metric(s) is an active area of debate and research. We will stick to those actively used in current research papers and in industry. One such metric is recall@k - which is simply the per-user recall, limited to the top $k$ ranked positive items. Its utility makes obvious sense - if you are limited to showing a user $k$ predictions, you want to have high recall!
Let's try on our running example for a few $k$:
End of explanation
"""
def precision_at_k(true, pred, user_ids, k, tol=3.):
unique_users = np.unique(user_ids)
pred_binarized = binarize(pred.copy(), tol)
precisions = np.zeros(unique_users.size)
for i in range(unique_users.size):
user_ind = user_ids == unique_users[i]
user_true = true[user_ind]
user_pred = pred[user_ind]
user_pred_binarized = pred_binarized[user_ind]
ranked_ind = np.argsort(-user_pred)[:k]
precisions[i] = precision_score(user_true[ranked_ind], user_pred_binarized[ranked_ind])
return precisions
def MAP_at_k(true, pred, user_ids, k, tol=3.):
unique_users = np.unique(user_ids)
precisions_at_ks = np.zeros((k, unique_users.size))
for i in range(k):
precisions_at_ks[i] = precision_at_k(true, pred, user_ids, i+1, tol)
return np.mean(precisions_at_ks[precisions_at_ks > 0])
print 'MAP @4 : %0.3f' % MAP_at_k(y_true_binarized, y_pred, user_ids, 4)
print 'MAP @8 : %0.3f' % MAP_at_k(y_true_binarized, y_pred, user_ids, 8)
print 'MAP @16 : %0.3f' % MAP_at_k(y_true_binarized, y_pred, user_ids, 16)
print 'MAP @32 : %0.3f' % MAP_at_k(y_true_binarized, y_pred, user_ids, 32)
"""
Explanation: So what do these numbers mean? The Recall@4 was very good, meaning that on average the top 4 items shown to each user were predicted to be 'good' and in fact were. Note that by definition, recall@k converges to the global recall when $k$ is sufficiently high.
We also can straightforwardly define the precision@k in a similar manner. However, because recommendation data is of an implicit nature, many people do not like precision@k itself as a metric - just because an item is not in the test set does not mean the user will not like it in the future. This fact tends to make precision@k a bit noisy as a standalone metric.
Instead, another metric that is often used is the Mean Average Precision @k (MAP@k). This is simply the mean across all users of the Average Precision@k.
$$ {\rm Average Precision@}k = \sum_{n=1}^{k} \frac{{\rm Precision@n}}{{\rm min}(n, |\mathbf{y}_u^{\rm test}|)}$$
where $\mathbf{y}_u^{\rm test}$ is the test data for the user. The benefit of the MAP metric is that it accounts for both precision and recall and penalizes bad items which are ranked early. The downside is that it is less interpretable than recall@k and precision@k.
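As a toy sketch of the per-list building blocks (a single user's ranked binary relevances, 1 = relevant; this is a simplified variant, not the exact averaging in `MAP_at_k` above, which also filters out zero precisions):

```python
def precision_at_n(relevances, n):
    # fraction of the top-n ranked items that are relevant
    return sum(relevances[:n]) / float(n)

def average_precision_at_k(relevances, k):
    # mean of precision@n over n = 1..k
    return sum(precision_at_n(relevances, n) for n in range(1, k + 1)) / float(k)
```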
End of explanation
"""
def DCG_at_k(true, pred, user_ids, k):
unique_users = np.unique(user_ids)
dcgs = np.zeros(unique_users.size)
for i in range(unique_users.size):
user_ind = user_ids == unique_users[i]
user_true = true[user_ind]
user_pred = pred[user_ind]
ranked_ind = np.argsort(-user_pred)[:k]
user_relevances = user_true[ranked_ind]
gains = 2. ** (user_relevances)
discounts = np.log2(np.arange(user_relevances.size) + 2)
dcgs[i] = np.sum(gains / discounts)
return dcgs
def NDCG_at_k(true, pred, user_ids, k):
actual_dcgs = DCG_at_k(true, pred, user_ids, k)
best_dcgs = DCG_at_k(true, true, user_ids, k)
return np.mean(actual_dcgs / best_dcgs)
print 'NDCG @4 : %0.3f' % NDCG_at_k(y_true, y_pred, user_ids, 4)
print 'NDCG @8 : %0.3f' % NDCG_at_k(y_true, y_pred, user_ids, 8)
print 'NDCG @16 : %0.3f' % NDCG_at_k(y_true, y_pred, user_ids, 16)
print 'NDCG @32 : %0.3f' % NDCG_at_k(y_true, y_pred, user_ids, 32)
"""
Explanation: The final rank-based metric we will discuss is the Normalized Discounted Cumulative Gain (NDCG). NDCG is a very popular metric, which emphasizes - strongly - that items with high relevance should be placed early in the ranked list. In addition, order is important.
NDCG is based on the Discounted Cumulative Gain (DCG). Simply put, for each user we take the rank ordered list (from our predictions) and look up how relevant the items are (from the true rating). Relevances are simply the value of the entry in the test set, so (0, 1) or (1, 2, 3, 4, 5) in the MovieLens case. We then exponentially weight the relevances, but discount them based on their place in the list. Definitionally:
$${\rm DCG}@k = \sum_{i=1}^{k} \frac{2^{rel_i} - 1}{\log_2(i + 1)}$$
where $rel_i$ is the relevance of the i-th item in the ranked list. To calculate NDCG we need to normalize the DCG to be bounded from 0 to 1. This is done by calculating DCG for a perfectly ordered list, i.e., the rank is determined from the true ordered list.
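A self-contained sketch of these definitions, following the $2^{rel_i} - 1$ gain in the formula above (note the `DCG_at_k` code above uses $2^{rel_i}$ without the $-1$; both conventions are common):

```python
import math

def dcg(relevances):
    # exponential gain with log2 position discount; index i is rank i+1
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    # normalize by the DCG of the ideally ordered (sorted) list
    return dcg(relevances) / dcg(sorted(relevances, reverse=True))
```

A perfectly ordered list scores exactly 1; putting the most relevant items last pushes the score below 1.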
End of explanation
"""
from validation_figs import user_item_hists
# switch log = True if desired
user_counts, item_counts = user_item_hists(urm - test, log=False)
"""
Explanation: When we look at these numbers, we see that (on average) the first 4 items we showed to each user achieved only 0.778 of the DCG of a perfect set of recommendations. As k increases, NDCG increases because the cumulative effect of the rest of the list washes out some errors we made early. The cool thing about NDCG is that the actual score matters! We didn't binarize our predicted ratings.
<a id='qual'></a>
Qualitative Approaches
While quantitative measures are the most rigorous means by which to test recommendation models, qualitative approaches have their place. The goals of these approaches are many, but they often try to speak to common sense or intuition and to generate a better understanding of the data and model.
<a id='touchyfeely'></a>
Heuristics
While not a quantitative measure, heuristics are often very convincing. If you love digitally animated films (say Kung Fu Panda) and your top 5 recommendations include Up, Toy Story, Big Hero 6, Wall-E, and Minions - you are probably going to be convinced the recommender is doing a good job.
Indeed, this heuristic 'feels good' analysis should probably be part of every recommender's validation story. However, it's important not to rely too heavily on it, as there are many downsides to such analyses. Some disadvantages:
* It is often impractical if not impossible to make this assessment for large numbers of users and items.
* Due to its subjective nature, debates can arise about the quality of recommendations.
* Seemingly bad recommendations may actually be good - maybe the user has esoteric tastes!
* Even if the recommendations are good, there might actually be better ones we could give.
<a id='viz'></a>
Visualizations
Visualizations can be a powerful means for conveying your results and understanding the successes/failures of your recommender. Below we give some ideas for what these may look like, but in general these may have to be altered and/or well annotated/described to complete your data-model story.
User-Item rating distributions
To give context, it's often useful to keep in mind the number of ratings users and items have. Let's look at these distributions in the training data:
End of explanation
"""
from validation_figs import se_hists
# histograms of squared errors
se_hists(test, predictions)
"""
Explanation: As expected, the number of user or item ratings is skewed to low numbers of ratings with a long tail that goes to quite high numbers. These distributions very nearly follow a power-law distribution (by setting log=True in user_item_hists one can easily see this). We will keep this in mind when exploring the visualizations below.
Error metric visualizations
A natural plot to make is a histogram of errors, both globally and then averaged for each user or item. Let's do this for the squared error of the predictions.
End of explanation
"""
from validation_figs import se_hists_percentile
# squared error histograms, conditioned on number of user, item ratings
se_hists_percentile(test, predictions, user_counts, item_counts)
"""
Explanation: Cool, there is a lot we can take away here. First, looking at the (global, blue) distribution of squared errors, we can again see a power law distribution. We are doing fairly well on average as there are many entries with low prediction error. However, the long tail of high error entries kills the MSE - resulting in the value of 1.06 we saw before. Remember, the maximum possible squared error here is $5^2$.
Next, let's examine the conditional distributions of squared errors. Looking at the per-user (green) errors, we see a nearly normal distribution with a reasonable mean and a slightly long tail to high errors. The fact that the bins are small means that there are few (if any) bins off the plot. This makes sense if we look at the above plot showing the distribution of user rating counts - most users have a lot of ratings, so large errors are effectively averaged out.
On an item basis (red), we are doing a bit worse. Not only is the (eyeballed) mean larger than that of the per-user distribution, but it clearly has a long tail. A useful next exercise could be an examination of the items with high error - are they bad because they have few ratings? Or because they are eclectic? Or some mix?
Let's get at part of these questions by producing the same plot, limited to the users and items in the top and bottom 10th percentile of the ratings distribution (shown two plots above).
End of explanation
"""
from validation_figs import user_item_rocs
# produce ROC
# re-scale predictions and binarize, we need to retain shape of matrix
user_item_rocs(binarize(test.copy(), 3), (predictions.copy() - 1.) / 4, user_counts, item_counts, np.where(test > 0))
"""
Explanation: As expected, the number of ratings for each user/item has a significant effect. User/items with large numbers of ratings have a low variance distribution. Looking at the difference between top and bottom users, it seems the effect is mostly just to increase the noise of the predictions. For the items, however, the distribution is much more skewed. This makes intuitive sense - items with just a few ratings are going to have bad predictions. Interestingly, note that the squared error for the top 10% has a mean very close to the global mean of 1.06. In an ideal world, the user/items with tons of ratings would have relatively lower error. This may mean the ratings are inherently noisy or inconsistent, but also may mean a better algorithm is needed.
Classification metric visualization
We have already touched on how to visualize classification metrics with ROC curves. It's also possible to visualize precision and recall in a similar manner, detailed in the previously mentioned blog post. For completeness, let's revisit the ROC curve but plot it conditioning on our bottom 10% of users and items.
End of explanation
"""
from validation_figs import ndcg_plot
# plot dcgs for random users and 'best' users
ndcg_plot(y_true, y_pred, user_ids, 30, 30)
"""
Explanation: For users, the above plot should not come as much of a surprise given the histograms we saw above - restricting ourselves to the bottom 10% of users didn't produce a dramatically different error distribution. In addition, binarizing the ratings washes out some of the errors. Hence, the ROC curve for users is pretty similar to the full ROC curve. Our bottom 10th percentile items were much more error-prone, however, and we can see the model is doing very poorly there.
Rank metric visualizations
Similar to error metrics, one can imagine producing histograms of rank metrics (perhaps on a per-user basis). But the most interesting property of these metrics is that they measure the quality of rankings. Therefore, it would be nice to see the rankings themselves. Below is a visualization of DCGs, which is meant to give a better feel for NDCG scores.
End of explanation
"""
|
d00d/quantNotebooks | Notebooks/quantopian_research_public/notebooks/lectures/Variance/notebook.ipynb | unlicense | # Import libraries
import numpy as np
np.random.seed(121)
# Generate 20 random integers < 100
X = np.random.randint(100, size=20)
# Sort them
X = np.sort(X)
print 'X: %s' %(X)
mu = np.mean(X)
print 'Mean of X:', mu
"""
Explanation: Measures of Dispersion
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Dispersion measures how spread out a set of data is. This is especially important in finance because one of the main ways risk is measured is in how spread out returns have been historically. If returns have been very tight around a central value, then we have less reason to worry. If returns have been all over the place, that is risky.
Data with low dispersion is heavily clustered around the mean, while high dispersion indicates many very large and very small values.
Let's generate an array of random integers to work with.
End of explanation
"""
print 'Range of X: %s' %(np.ptp(X))
"""
Explanation: Range
Range is simply the difference between the maximum and minimum values in a dataset. Not surprisingly, it is very sensitive to outliers. We'll use numpy's peak to peak (ptp) function for this.
End of explanation
"""
abs_dispersion = [np.abs(mu - x) for x in X]
MAD = np.sum(abs_dispersion)/len(abs_dispersion)
print 'Mean absolute deviation of X:', MAD
"""
Explanation: Mean Absolute Deviation (MAD)
The mean absolute deviation is the average of the distances of observations from the arithmetic mean. We use the absolute value of the deviation, so that 5 above the mean and 5 below the mean both contribute 5, because otherwise the deviations always sum to 0.
$$ MAD = \frac{\sum_{i=1}^n |X_i - \mu|}{n} $$
where $n$ is the number of observations and $\mu$ is their mean.
End of explanation
"""
print 'Variance of X:', np.var(X)
print 'Standard deviation of X:', np.std(X)
"""
Explanation: Variance and standard deviation
The variance $\sigma^2$ is defined as the average of the squared deviations around the mean:
$$ \sigma^2 = \frac{\sum_{i=1}^n (X_i - \mu)^2}{n} $$
This is sometimes more convenient than the mean absolute deviation because the absolute value function is not differentiable at zero, while squaring is smooth, and some optimization algorithms rely on differentiability.
Standard deviation is defined as the square root of the variance, $\sigma$, and it is the easier of the two to interpret because it is in the same units as the observations.
End of explanation
"""
k = 1.25
dist = k*np.std(X)
l = [x for x in X if abs(x - mu) <= dist]
print 'Observations within', k, 'stds of mean:', l
print 'Confirming that', float(len(l))/len(X), '>', 1 - 1/k**2
"""
Explanation: One way to interpret standard deviation is by referring to Chebyshev's inequality. This tells us that the proportion of samples within $k$ standard deviations (that is, within a distance of $k \cdot$ standard deviation) of the mean is at least $1 - 1/k^2$ for all $k>1$.
Let's check that this is true for our data set.
End of explanation
"""
# Because there is no built-in semideviation, we'll compute it ourselves
lows = [e for e in X if e <= mu]
semivar = np.sum( (lows - mu) ** 2 ) / len(lows)
print 'Semivariance of X:', semivar
print 'Semideviation of X:', np.sqrt(semivar)
"""
Explanation: The bound given by Chebyshev's inequality seems fairly loose in this case. This bound is rarely strict, but it is useful because it holds for all data sets and distributions.
Semivariance and semideviation
Although variance and standard deviation tell us how volatile a quantity is, they do not differentiate between deviations upward and deviations downward. Often, such as in the case of returns on an asset, we are more worried about deviations downward. This is addressed by semivariance and semideviation, which only count the observations that fall below the mean. Semivariance is defined as
$$ \frac{\sum_{X_i < \mu} (X_i - \mu)^2}{n_<} $$
where $n_<$ is the number of observations which are smaller than the mean. Semideviation is the square root of the semivariance.
End of explanation
"""
B = 19
lows_B = [e for e in X if e <= B]
semivar_B = sum((x - B)**2 for x in lows_B) / float(len(lows_B))  # float() avoids integer division
print 'Target semivariance of X:', semivar_B
print 'Target semideviation of X:', np.sqrt(semivar_B)
"""
Explanation: A related notion is target semivariance (and target semideviation), where we average the distance from a target of values which fall below that target:
$$ \frac{\sum_{X_i < B} (X_i - B)^2}{n_{<B}} $$
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/combine_estimates.ipynb | mit | # If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf
"""
Explanation: Think Bayes
Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
from scipy.stats import beta
yes = 8
no = 0
posterior_beta_alice = beta(yes+1, no+1)
posterior_beta_alice.mean()
"""
Explanation: The Decision-Making Box
One day Alice tells her friend, Bob, "I bought a random decision-making box. Every time you press this button, it says 'yes' or 'no'. I've tried it a few times, and I think it says 'yes' 90% of the time."
Bob says he has some important decisions to make and asks if he can borrow the box. The next day, he returns the box to Alice and says, "I used the box several times, and I also think it says 'yes' 90% of the time."
Alice says, "It sounds like we agree, but just to make sure, we should compare our predictions. Suppose I press the button twice; what do you think is the probability it says 'yes' both times?"
Bob does some calculations and reports the predictive probability 81.56%.
Alice says, "That's interesting. I got a slightly different result, 81.79%. So maybe we don't agree after all."
Bob says, "Well let's see what happens if we combine our data. I can tell you how many times I pressed the button and how many times it said 'yes'."
Alice says, "That's ok, I don't actually need your data; it's enough if you tell me what prior distribution you used."
Bob tells her he used a Jeffreys prior.
Alice does some calculations and says, "Ok, I've updated my beliefs to take into account your data as well as mine. Now I think the probability of 'yes' is 91.67%."
Bob says, "That's interesting. Based on your data, you thought the probability was 90%, and based on my data, I thought it was 90%, but when we combine the data, we get a different result. Tell me what data you saw, and let me see what I get."
Alice tells him she pressed the button 8 times and it always said 'yes'.
"So," says Bob, "I guess you used a uniform prior."
Bob does some calculations and reports, "Taking into account all of the data, I think the probability of 'yes' is 93.45%."
Alice says, "So when we started, we had seen different data, but we came to the same conclusion."
"Sort of," says Bob, "we had the same posterior mean, but our posterior distributions were different; that's why we made different predictions for pressing the button twice."
Alice says, "And now we're using the same data, but we have different posterior means. Which makes sense, because we started with different priors."
"That's true," says Bob, "but if we collect enough data, eventually our posterior distributions will converge, at least approximately."
"Well that's good," says Alice. "Anyway, how did those decisions work out yesterday?"
"Mostly bad," says Bob. "It turns out that saying 'yes' 93% of the time is a terrible way to make decisions."
Explanation
Suppose Alice starts with a uniform prior, which is a beta distribution with parameters (1, 1). She presses the button 8 times and it says 'yes' every time. Her posterior distribution is a beta distribution with parameters (9, 1); the posterior mean is 90%.
For this example I'm using distributions we can compute analytically, but the entire conversation works for arbitrary distributions.
End of explanation
"""
yes = 13
no = 1
posterior_beta_bob = beta(yes+1/2, no+1/2)
posterior_beta_bob.mean()
"""
Explanation: Bob never said so, but he pressed the button 14 times and got 'yes' 13 times. He used a Jeffreys prior, which is a beta distribution with parameters (1/2, 1/2), so his posterior is a beta distribution with parameters (13.5, 1.5), and his posterior mean is 90%.
End of explanation
"""
def make_beta(dist, **options):
"""PMF to approximate a beta distribution.
dist: `beta` object
returns: Pmf
"""
qs = np.linspace(0.4, 1, 1000, endpoint=False)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
"""
Explanation: Now, if we press the button twice, what's the chance of getting 'yes' both times? We could compute it analytically, but it will be convenient to approximate these distributions with a discrete PMF.
The following function takes a beta object that represents a beta distribution and returns a Pmf object that represents a discrete approximation.
End of explanation
"""
posterior_pmf_alice = make_beta(posterior_beta_alice)
posterior_pmf_alice.plot(label='Alice')
posterior_pmf_bob = make_beta(posterior_beta_bob)
posterior_pmf_bob.plot(label='Bob')
plt.xlabel('Probablity of yes')
plt.ylabel('PDF')
plt.title("Posterior distributions for Alice and Bob")
plt.legend();
"""
Explanation: qs is the range of possible probabilities, from 0.4 to (almost) 1.0. I omitted the end point, 1.0, because the Jeffreys prior goes to infinity at 0 and 1.
Here's what the posterior distributions look like for Alice and Bob.
End of explanation
"""
mean_alice = posterior_pmf_alice.mean()
mean_alice
mean_bob = posterior_pmf_bob.mean()
mean_bob
"""
Explanation: Even though they have the same posterior mean, they are substantially different, especially for probabilities greater than 90%.
Just to check, here are the posterior means for the discrete PMFs.
End of explanation
"""
pred_bob = np.sum(posterior_pmf_bob.ps * posterior_pmf_bob.qs**2)
pred_bob
"""
Explanation: They are close to 90%.
Now, when Bob offers to share his data, Alice says it's not necessary as long as she knows what prior he used.
Since he used a beta prior, she knows his posterior is also a beta distribution. She knows the mean is 90%; she also knows that the predictive probability for getting 'yes' twice is 81.56%. And that's enough to figure out Bob's posterior distribution.
The key is to realize that the predictive probability is the second moment of the posterior distribution:
$ \sum p_i q_i^2 $
where each $q_i$ is a quantity in the posterior distribution (possible probabilities of 'yes') and each $p_i$ is the corresponding probability.
This summation is the expectation of $q^2$, written $E[q^2]$, which is related to the variance of the posterior distribution:
$ \mathrm{var}(q) = E[q^2] - E[q]^2 $
where $E[q]$ is the mean of the posterior distribution.
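This identity is easy to check numerically for any discrete distribution (a standalone sketch with made-up quantities and probabilities, independent of the `Pmf` objects used here):

```python
import numpy as np

qs = np.array([0.1, 0.5, 0.9])        # possible probabilities of 'yes'
ps = np.array([0.2, 0.3, 0.5])        # their probabilities (sum to 1)

mean = np.sum(ps * qs)                # E[q]
second_moment = np.sum(ps * qs ** 2)  # E[q^2], the two-press predictive probability
var = second_moment - mean ** 2       # var(q) = E[q^2] - E[q]^2
```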
To show that's true, here's Bob's predictive probability:
End of explanation
"""
pred_bob - mean_bob**2
"""
Explanation: Here's the variance Alice computes:
End of explanation
"""
posterior_pmf_bob.var()
"""
Explanation: And here's the actual variance of Bob's posterior distribution.
End of explanation
"""
prior_beta_bob = beta(0.5, 0.5)
prior_pmf_bob = make_beta(prior_beta_bob)
prior_pmf_bob.plot(color='C1')
plt.xlabel('Probability of yes')
plt.ylabel('PDF')
plt.title('Jeffreys prior');
"""
Explanation: Pretty close.
Now, knowing the mean and variance, Alice can do some algebra to figure out the parameters of Bob's distribution. So let's assume she knows posterior_pmf_bob.
Bob told her he used a Jeffrey's prior, so Alice can make a Pmf to approximate it.
End of explanation
"""
likelihood_bob = posterior_pmf_bob / prior_pmf_bob
"""
Explanation: At this point, Alice knows Bob's prior and posterior distributions, so she can use Bayes's Theorem to compute the likelihood of the data.
End of explanation
"""
posterior2_pmf_alice = posterior_pmf_alice * likelihood_bob
posterior2_pmf_alice.normalize()
"""
Explanation: Alice still hasn't seen Bob's data, but she doesn't have to; now that she knows the likelihood of Bob's data, she can use it to do a Bayesian update in the usual way:
End of explanation
"""
posterior2_pmf_alice.mean()
"""
Explanation: The result is a posterior that combines Alice's prior with data from both Alice and Bob. And here's the posterior mean.
End of explanation
"""
yes = 21
no = 1
posterior2_beta_alice = beta(yes+1, no+1)
posterior2_beta_alice.mean()
"""
Explanation: To check it, we can compute her posterior mean if she had used all of the data.
End of explanation
"""
from scipy.stats import binom
ps = posterior_pmf_alice.qs
likelihood_alice = binom.pmf(8, 8, ps)
plt.plot(ps, likelihood_alice)
plt.xlabel('Probablity of yes')
plt.ylabel('Likelihood')
plt.title("Likelihood based on Alice's data");
"""
Explanation: Pretty close.
Now, Alice tells Bob she pressed the button 8 times and got 'yes' every time.
That's enough for Bob to figure out what prior she used. He knows the mean and second moment of her posterior distribution, which is enough to approximate it with a beta distribution.
And since he knows what data she saw, he can figure out the likelihood she applied. Here it is:
End of explanation
"""
implied_prior = posterior_pmf_alice / likelihood_alice
implied_prior.normalize()
"""
Explanation: Now Bob can compute Alice's prior distribution.
End of explanation
"""
implied_prior.plot()
plt.ylim([0, 0.002])
plt.xlabel('Probablity of yes')
plt.ylabel('PDF')
plt.title("Prior implied by Alice's data and posterior distribution");
"""
Explanation: And it sure looks uniform.
End of explanation
"""
posterior2_pmf_bob = posterior_pmf_bob * likelihood_alice
posterior2_pmf_bob.normalize()
posterior2_pmf_bob.mean()
"""
Explanation: Finally, Bob can update his posterior distribution with the likelihood of Alice's data.
End of explanation
"""
yes = 21
no = 1
posterior2_beta_bob = beta(yes+1/2, no+1/2)
posterior2_beta_bob.mean()
"""
Explanation: The result takes into account Bob's prior, Bob's data, and Alice's data, which we can confirm by computing the posterior distribution with a Jeffreys prior and all of the data.
End of explanation
"""
|
drphilmarshall/LocalGroupHaloProps | Notebooks/gmm_pair_M33.ipynb | gpl-2.0 | %matplotlib inline
import localgroup
import triangle
import sklearn
from sklearn import mixture
import numpy as np
import pickle
import matplotlib.patches as mpatches
"""
Explanation: Local Group Halo Properties: Demo Inference
We approximate the local group distance, radial velocity and proper motion likelihood function by sampling from the posterior distributions for these variables reported in the literature and transforming to kinematic variables in the M31-centric coordinate system.
End of explanation
"""
L = localgroup.Likelihood(isPair=True)
L.generate(Nsamples=200000)
L.set_PDF(mixture.GMM(n_components=10, covariance_type='full'))
L.approximate()
figure_obs = L.plot_samples(10, color='b', overlay=False)
"""
Explanation: Inside the Likelihood object is a "triplet" object called T, which contains an array of sample local groups, each with kinematic parameters consistent with the observational data. Let's plot these kinematic parameters in a "triangle" figure, to show all their 1 and 2-D marginal distributions.
End of explanation
"""
figure_model = L.model_gof(L.T.Nsamples, color="r", fig=None)
L.model_gof(L.T.Nsamples, color="r", fig=figure_obs)
red_patch = mpatches.Patch(color='red')
blue_patch = mpatches.Patch(color='blue')
figure_obs.legend(handles=[red_patch, blue_patch], labels=["Model Generated", "Observation Generated"])
figure_obs
"""
Explanation: The above plot shows the Gaussians fitted by a Gaussian Mixture Model; the shaded regions show two standard deviations. The sample data have been preprocessed to zero mean and unit standard deviation. Since we are using the Gaussian Mixture Model to approximate the underlying PDF of the data, more components are always better.
How to evaluate goodness of fit:
In the absence of a standard goodness-of-fit test for GMMs, the best we can do is show graphically that the model reproduces the data well. We proceed by drawing a set of points from the fitted model, where each point is a local group with (MW_D, MW_vr, MW_vt, M33_D, M33_vr, M33_vt). We then plot the 1D and 2D marginalizations of the drawn point set and show that they match the marginalizations of the true data.
End of explanation
"""
#figure_obs.savefig("/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/model_gof.png")
"""
Explanation: The above plot shows that the points drawn from the model create a population that is very similar to the true data.
End of explanation
"""
path = '/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/data_files/complete_triplets.txt'
#path = '/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/data_files/MW_M31_pairs.txt'
npoints = 122000
halo_props = ['MW_Mvir', 'M31_Mvir', 'M33_Mvir']
Tr = localgroup.Triplet(isPair=True)
Tr.read_sim_points(path, npoints, halo_props, h=1.0, a=1.0)
Tr.transform_to_M31(sim=True)
Tr.mass_filter('sim')
Tr.dist_filter((Tr.sim_samples[:,0] < 10))
Tr.preprocess(L.samples_means, L.samples_stds, mode='sim')
sim_plot = Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='c', fig=None)
#sim_plot = Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='c', fig=None)
Tr.unprocess(L.samples_means, L.samples_stds, mode='sim')
data = np.transpose(np.vstack((np.transpose(Tr.sim_samples), np.log10(Tr.MW.Mvir), np.log10(Tr.M31.Mvir))))
labs=["mwd", "mwvr", "mwvt", "MWMvir", "M31Mvir"]
sim_plot = triangle.corner(data, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=None,\
plot_contours=True, show_titles=True, title_args={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='r', label_kwargs={"fontsize": 16})
red_patch = mpatches.Patch(color='r')
cyan_patch = mpatches.Patch(color='c')
sim_plot.legend(handles=[red_patch, cyan_patch], labels=["CONSUELO Prior", "GMM-fit CONSUELO Prior"])
Tr.preprocess(L.samples_means, L.samples_stds, mode='sim')
sim_plot
#name = 'gmm_CONSUELO_prior.png'
#sim_plot.savefig('/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/asurps/'+name)
dat = np.transpose(np.vstack((np.transpose(Tr.sim_samples), np.log10(Tr.M31.Mvir), np.log10(Tr.MW.Mvir))))
Tr.GMM(40, dat)
Tr.GMM_sample(12200000)
gmm_MW = np.copy(Tr.gmm_samples[:,4])
gmm_M31 = np.copy(Tr.gmm_samples[:,3])
gmm_LG = np.log10(np.power(10,gmm_MW) + np.power(10,gmm_M31))
cond = gmm_MW < gmm_M31
Tr.gmm_samples = Tr.gmm_samples[cond]
gmm_MW = gmm_MW[cond]
gmm_M31 = gmm_M31[cond]
gmm_LG = gmm_LG[cond]
Tr.gmm_samples = Tr.gmm_samples[:,0:3]
Tr.compute_model_weights(L, 'gmm')
Tr.calculate_N95()
Tr.unprocess(L.samples_means, L.samples_stds, 'gmm')
data2 = np.transpose(np.vstack((np.transpose(Tr.gmm_samples), gmm_MW, gmm_M31)))
labs=["mwd", "mwvr", "mwvt", "MWMvir", "M31Mvir"]
pl = triangle.corner(data2, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=None,\
plot_contours=True, show_titles=True, title_args={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='c')
Tr.preprocess(L.samples_means, L.samples_stds, mode='gmm')
labs = ["MWMvir", "M31Mvir", "MW+M31"]
all_mvir = np.transpose(np.vstack((gmm_MW, gmm_M31, gmm_LG)))
figure = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=Tr.weights,\
plot_contours=True, show_titles=True, title_kwargs={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='red')
#figure = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, show_titles=True, title_args={"fontsize": 12}, color='g')
"""
Explanation: Reading Simulation Points:
Below we read the preconfigured files containing the Consuelo (soon to be Dark Sky) Local Group analogs into a Triplet object. We plot the marginalizations of the simulation data, which allows us to compare with the LG prior.
End of explanation
"""
|
QuantStack/quantstack-talks | 2019-07-10-CICM/notebooks/wealth-of-nations.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
import os
from bqplot import (
LogScale, LinearScale, OrdinalColorScale, ColorAxis,
Axis, Scatter, Lines, CATEGORY10, Label, Figure, Tooltip
)
from ipywidgets import HBox, VBox, IntSlider, Play, jslink
initial_year = 1800
"""
Explanation: This is a bqplot recreation of Mike Bostock's Wealth of Nations. This was also done by Gapminder. It is originally based on a TED Talk by Hans Rosling.
End of explanation
"""
data = pd.read_json(os.path.abspath('./nations.json'))
def clean_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data = data.drop(data[data[column].apply(len) <= 4].index)
return data
def extrap_interp(data):
data = np.array(data)
x_range = np.arange(1800, 2009, 1.)
y_range = np.interp(x_range, data[:, 0], data[:, 1])
return y_range
def extrap_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data[column] = data[column].apply(extrap_interp)
return data
data = clean_data(data)
data = extrap_data(data)
income_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max))
life_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max))
pop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max))
def get_data(year):
year_index = year - 1800
income = data['income'].apply(lambda x: x[year_index])
life_exp = data['lifeExpectancy'].apply(lambda x: x[year_index])
pop = data['population'].apply(lambda x: x[year_index])
return income, life_exp, pop
"""
Explanation: Cleaning and Formatting JSON Data
End of explanation
"""
tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])
"""
Explanation: Creating the Tooltip to display the required fields
bqplot's native Tooltip allows us to simply display the data fields we require on a mouse-interaction.
End of explanation
"""
year_label = Label(x=[0.75], y=[0.10], font_size=52, font_weight='bolder', colors=['orange'],
text=[str(initial_year)], enable_move=True)
"""
Explanation: Creating the Label to display the year
Staying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.
End of explanation
"""
x_sc = LogScale(min=income_min, max=income_max)
y_sc = LinearScale(min=life_exp_min, max=life_exp_max)
c_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])
size_sc = LinearScale(min=pop_min, max=pop_max)
ax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='left', grid_lines='solid')
ax_x = Axis(label='Income per Capita', scale=x_sc, grid_lines='solid')
"""
Explanation: Defining Axes and Scales
The inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.
End of explanation
"""
# Start with the first year's data
cap_income, life_exp, pop = get_data(initial_year)
wealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,
names=data['name'], display_names=False,
scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},
default_size=4112, tooltip=tt, animate=True, stroke='Black',
unhovered_style={'opacity': 0.5})
nation_line = Lines(x=data['income'][0], y=data['lifeExpectancy'][0], colors=['Gray'],
scales={'x': x_sc, 'y': y_sc}, visible=False)
"""
Explanation: Creating the Scatter Mark with the appropriate size and color parameters passed
To generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.
End of explanation
"""
time_interval = 10
fig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],
title='Health and Wealth of Nations', animation_duration=time_interval)
"""
Explanation: Creating the Figure
End of explanation
"""
year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)
"""
Explanation: Using a Slider to allow the user to change the year and a button for animation
Here we see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.
End of explanation
"""
def hover_changed(change):
if change.new is not None:
nation_line.x = data['income'][change.new + 1]
nation_line.y = data['lifeExpectancy'][change.new + 1]
nation_line.visible = True
else:
nation_line.visible = False
wealth_scat.observe(hover_changed, 'hovered_point')
"""
Explanation: When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.
End of explanation
"""
def year_changed(change):
wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)
year_label.text = [str(year_slider.value)]
year_slider.observe(year_changed, 'value')
"""
Explanation: On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.
End of explanation
"""
play_button = Play(min=1800, max=2008, interval=time_interval)
jslink((play_button, 'value'), (year_slider, 'value'))
"""
Explanation: Add an animation button
End of explanation
"""
VBox([HBox([play_button, year_slider]), fig])
"""
Explanation: Displaying the GUI
End of explanation
"""
|
TUW-GEO/ascat | docs/read_eumetsat.ipynb | mit | import os
import cartopy
from datetime import datetime
import matplotlib.pyplot as plt
from ascat.eumetsat.level1 import AscatL1bFile
from ascat.eumetsat.level1 import AscatL1bBufrFile
from ascat.eumetsat.level1 import AscatL1bBufrFileList
from ascat.eumetsat.level1 import AscatL1bNcFile
from ascat.eumetsat.level1 import AscatL1bNcFileList
from ascat.eumetsat.level1 import AscatL1bEpsFile
from ascat.eumetsat.level1 import AscatL1bEpsFileList
from ascat.eumetsat.level1 import AscatL1bHdf5File
from ascat.eumetsat.level1 import AscatL1bHdf5FileList
path = os.path.join('..', 'tests','ascat_test_data', 'eumetsat',
'ASCAT_generic_reader_data')
filename = os.path.join(path, 'bufr',
'M02-ASCA-ASCSZR1B0200-NA-9.1-20100609013900.000000000Z-20130824233100-1280350.bfr')
bufr_file = AscatL1bFile(filename)
data = bufr_file.read()
data
plot_crs = cartopy.crs.PlateCarree()
data_crs = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(1, 1, 1, projection=plot_crs)
ax.set_title('Level 1 BUFR example - Sigma0 Fore Beam')
ax.add_feature(cartopy.feature.COASTLINE, linestyle='-')
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAND, facecolor='#aaaaaa')
float32_nan = -999999.
valid = data['sig'][:, 0] != float32_nan
sc = ax.scatter(data['lon'][valid], data['lat'][valid],
c=data['sig'][valid, 0], zorder=3, marker='s', s=2,
transform=data_crs)
cax = fig.add_axes([ax.get_position().x1+0.01, ax.get_position().y0,
0.02, ax.get_position().height])
cbar = fig.colorbar(sc, ax=ax, cax=cax)
cbar.set_label('Backscatter [dB]')
"""
Explanation: EUMETSAT
The following example show how to read ASCAT Level 1b and Level 2 data in various formats (BUFR, EPS Native, netCDF). The data has been ordered from the EUMETSAT Data Centre (https://eoportal.eumetsat.int).
ASCAT Level 1b
Read Level 1b BUFR file returning generic format
End of explanation
"""
bufr_file = AscatL1bBufrFile(filename)
data = bufr_file.read()
data
plot_crs = cartopy.crs.PlateCarree()
data_crs = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(1, 1, 1, projection=plot_crs)
ax.set_title('Level 1 BUFR example - Sigma0 Fore Beam')
ax.add_feature(cartopy.feature.COASTLINE, linestyle='-')
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAND, facecolor='#aaaaaa')
bufr_nan = 1.7e+38
valid = data['m_Backscatter'] != bufr_nan
sc = ax.scatter(data['lon'][valid], data['lat'][valid],
c=data['m_Backscatter'][valid], zorder=3, marker='s', s=2,
transform=data_crs)
cax = fig.add_axes([ax.get_position().x1+0.01, ax.get_position().y0,
0.02, ax.get_position().height])
cbar = fig.colorbar(sc, ax=ax, cax=cax)
cbar.set_label('Backscatter [dB]')
"""
Explanation: Read Level 1b BUFR file returning original format
End of explanation
"""
bufr_path = os.path.join(path, 'bufr')
bufr_file_list = AscatL1bBufrFileList(bufr_path)
data = bufr_file_list.read(datetime(2010, 6, 9, 1, 39, 0))
data
"""
Explanation: Read Level 1b BUFR file using a timestamp
End of explanation
"""
filename = os.path.join(path, 'eps_nat',
'ASCA_SZR_1B_M02_20100609013900Z_20100609032058Z_R_O_20130824233100Z.nat')
eps_file = AscatL1bEpsFile(filename)
data = eps_file.read()
data
"""
Explanation: Read Level 1b EPS Native file returning original format
End of explanation
"""
filename = os.path.join(path, 'nc',
'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOPA+ASCAT_C_EUMP_20100609013900_18872_eps_o_125_l1.nc')
nc_file = AscatL1bNcFile(filename)
data = nc_file.read()
data
"""
Explanation: Read Level 1b NetCDF file returning original format
End of explanation
"""
import os
import cartopy
from datetime import datetime
import matplotlib.pyplot as plt
from ascat.eumetsat.level2 import AscatL2File
from ascat.eumetsat.level2 import AscatL2BufrFile
from ascat.eumetsat.level2 import AscatL2BufrFileList
from ascat.eumetsat.level2 import AscatL2NcFile
from ascat.eumetsat.level2 import AscatL2NcFileList
from ascat.eumetsat.level2 import AscatL2EpsFile
from ascat.eumetsat.level2 import AscatL2EpsFileList
path = os.path.join('..', 'tests','ascat_test_data', 'eumetsat')
bufr_path = os.path.join(path, 'ASCAT_L2_SM_125', 'bufr', 'Metop_B')
filename = os.path.join(bufr_path, 'M01-ASCA-ASCSMR02-NA-5.0-20170220050900.000000000Z-20170220055833-1207110.bfr')
bufr_file = AscatL2File(filename)
data = bufr_file.read()
data
plot_crs = cartopy.crs.PlateCarree()
data_crs = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(1, 1, 1, projection=plot_crs)
ax.set_title('Level 2 BUFR example - Soil moisture')
ax.add_feature(cartopy.feature.COASTLINE, linestyle='-')
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAND, facecolor='#aaaaaa')
field = 'sm'
float32_nan = -999999.
valid = data[field] != float32_nan
sc = ax.scatter(data['lon'][valid], data['lat'][valid],
c=data[field][valid], zorder=3, marker='s', s=2,
transform=data_crs)
cax = fig.add_axes([ax.get_position().x1+0.01, ax.get_position().y0,
0.02, ax.get_position().height])
cbar = fig.colorbar(sc, ax=ax, cax=cax)
cbar.set_label('Soil moisture [%]')
"""
Explanation: ASCAT Level 2
Read Level 2 BUFR file returning generic format
End of explanation
"""
bufr_file = AscatL2BufrFile(filename)
data = bufr_file.read()
data
"""
Explanation: Read Level 2 BUFR file returning original format
End of explanation
"""
bufr_path = os.path.join(path, 'ASCAT_L2_SM_125', 'bufr', 'Metop_B')
bufr_file = AscatL2BufrFileList(bufr_path, sat='B')
data = bufr_file.read(datetime(2017, 2, 20, 5, 9, 0))
data
"""
Explanation: Read Level 2 BUFR file using a timestamp
End of explanation
"""
eps_path = os.path.join(path, 'ASCAT_generic_reader_data', 'eps_nat')
filename = os.path.join(eps_path, 'ASCA_SMO_02_M01_20180612035700Z_20180612053856Z_N_O_20180612044530Z.nat')
eps_file = AscatL2EpsFile(filename)
data = eps_file.read()
data
"""
Explanation: Read Level 2 EPS Native file returning original format
End of explanation
"""
nc_path = os.path.join(path, 'ASCAT_generic_reader_data', 'nc')
filename = os.path.join(nc_path, 'W_XX-EUMETSAT-Darmstadt,SURFACE+SATELLITE,METOPB+ASCAT_C_EUMP_20180612035700_29742_eps_o_250_ssm_l2.nc')
nc_file = AscatL2NcFile(filename)
data = nc_file.read()
data
"""
Explanation: Read Level 2 NetCDF file returning original format
End of explanation
"""
|
arsenovic/clifford | docs/tutorials/g2-quick-start.ipynb | bsd-3-clause | import clifford as cf
layout, blades = cf.Cl(2) # creates a 2-dimensional clifford algebra
"""
Explanation: This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.
Quick Start (G2)
This notebook gives a terse introduction to using the clifford module, using a two-dimensional geometric algebra as the context.
Setup
First, import clifford and instantiate a two-dimensional algebra (G2),
End of explanation
"""
blades
"""
Explanation: Inspect blades.
End of explanation
"""
e1 = blades['e1']
e2 = blades['e2']
e12 = blades['e12']
"""
Explanation: Assign blades to variables
End of explanation
"""
e1*e2 # geometric product
e1|e2 # inner product
e1^e2 # outer product
"""
Explanation: Basics
End of explanation
"""
a = e1+e2 # the vector
n = e1 # the reflector
-n*a*n.inv() # reflect `a` in hyperplane normal to `n`
"""
Explanation: Reflection
End of explanation
"""
from math import e, pi
R = e**(pi/4*e12) # enacts rotation by pi/2
R
R*e1*~R # rotate e1 by pi/2 in the e12-plane
"""
Explanation: Rotation
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2015-09-17-Estimating-the-underlying-distribution-of-Lyne-2010-correlations-in-nudot.ipynb | mit | nu = [1.229, 1.616, 1.543, 10.4, 1.3, 2.579, 2.622, 1.410, 2.631, 1.672,
3.952, 1.524, 0.983, 2.469, 3.256, 2.322, 5.996, 3.728]
nudot = [-21.25, -12.05, -11.76, -135.36, -88.31, -11.84, -7.5, -1.75, -1.18,
-17.7, -3.59, -22.75, -5.33, -365.68, -58.85, -73.96, -604.36, -58.64]
Deltanudot_over_nudot_percent = [44.9, 13.28, 6.8, 5.91, 3.28, 2.52, 2.0, 1.71,
1.69, 0.85, 0.84, 0.79, 0.75, 0.71, 0.68, 0.68, 0.66,
0.31]
"""
Explanation: Estimating the underlying distribution from correlations in $\Delta\dot{\nu}$ and $|\dot{\nu}|$
In the supplementary material of Lyne (2010) the authors noted a correlation between the $\log$ values of the fractional amplitude $\Delta\dot\nu$ (which they interpreted as the switching magnitude) and the pulsar's spin-down rate $|\dot{\nu}|$.
Reproduce the correlation
First, let's reproduce the result; here are the data from Table 1:
End of explanation
"""
Deltanudot_over_nudot = np.array(Deltanudot_over_nudot_percent) / 100.0
nudot = 1e-15 * np.array(nudot)
Deltanudot = Deltanudot_over_nudot * np.abs(nudot)
"""
Explanation: Now convert the data into S.I. units and calculate $\Delta\dot{\nu}$ from the fractional size
End of explanation
"""
xobs = np.log(np.abs(nudot))
yobs = np.log(np.abs(Deltanudot))
p = np.corrcoef(xobs, yobs)[0, 1]
print("Correlation coefficient of {:1.3f}".format(p))
fig, ax = plt.subplots()
ax.plot(xobs, yobs, "o")
ax.set_xlabel(r"$\log_{10}(|\dot{\nu}|)$")
ax.set_ylabel(r"$\log_{10}(\Delta\dot{\nu})$")
plt.show()
"""
Explanation: Plot the data
End of explanation
"""
def f(t, psidot, theta, chi):
psi = t *psidot
return 2*theta*np.cos(chi)/np.sin(chi)*np.sin(psi) - 0.5*theta**2*np.cos(2*psi)
times = np.linspace(0, 10, 1000)
psidot = 2
theta = 0.1
chi = 1.58
y = f(times, psidot, theta, chi)
plt.plot(times, y)
plt.show()
print(y.max() - y.min())
print(abs(2*theta*np.cos(chi)/np.sin(chi)) + theta**2 + (np.cos(chi)/np.sin(chi))**2)
"""
Explanation: This seems to agree with what was found by Lyne (2010), although they had an extra 51 data points from (S2).
Precession interpretation
If the variations are caused not by switching but by precession, then the time-dependent spin-down modulation is given by
$$ \Delta\dot{\nu}(t) = \frac{1}{2\pi\tau_A P}
\left(2\theta\cot\chi\sin\psi(t) - \frac{\theta^{2}}{2}\cos2\psi(t)\right) $$
The $\Delta\dot{\nu}$ considered in the previous plot are the peak-to-peak values. Solving for the extrema we calculate (see the end of this notebook) that the precession predicts the peak-to-peak amplitude to be
$$ |\Delta\dot{\nu}| = \frac{1}{2\pi\tau_A P}
\left(\left|2\theta\cot\chi\right| + \theta^{2} + \cot^{2}\chi\right) $$
Quick numerical check:
Here is a quick numerical experiment to ensure that this equation holds up
End of explanation
"""
import pymc3 as pm
with pm.Model() as model:
theta = pm.Uniform('theta', lower=0.0, upper=np.pi)
chi = pm.Uniform('chi', lower=0.0, upper=np.pi)
eps = pm.Uniform('eps', lower=0.0, upper=10)
mu = xobs + pm.log((pm.sqrt((2*theta*pm.cos(chi)/pm.sin(chi))**2)
+ theta**2 + (pm.cos(chi)/pm.sin(chi))**2)/(2*np.pi))/np.log(10)
# Likelihood (sampling distribution) of observations
Y_obs = pm.Normal('Y_obs', mu=mu, sd=eps, observed=yobs)
"""
Explanation: Predicting the underlying $\theta$ and $\chi$
Let's first rewrite the prediction for $\Delta\dot{\nu}$ as a function of $\nu$ as follows:
\begin{align}
|\Delta\dot{\nu}| & = \frac{1}{2\pi\tau_A P}
\left(\left|2\theta\cot\chi\right| + \theta^{2} + \cot^{2}\chi\right) \
& = \frac{1}{2\pi}\frac{|\dot{\nu}|}{\nu}\nu
\left(\left|2\theta\cot\chi\right| + \theta^{2} + \cot^{2}\chi\right) \
& = \frac{1}{2\pi}|\dot{\nu}|
\left(\left|2\theta\cot\chi\right| + \theta^{2} + \cot^{2}\chi\right) \
\end{align}
Then the $\log$ of these quantities should satisfy:
\begin{align}
\log|\Delta\dot{\nu}| = \log|\dot{\nu}|
+ \log\left(\frac{1}{2\pi}
\left(\left|2\theta\cot\chi\right| + \theta^{2} + \cot^{2}\chi\right)\right)
\end{align}
Now we can fit this model to the observed values to derive the underlying distribution of $\theta$ and $\chi$.
Fitting the distribution
To fit the distribution we use linear regression with the offset defined as above.
We will use uniform priors on $\theta$ and $\chi$ over $[0, \pi]$ and a uniform prior on the noise parameter $\epsilon$ over $[0, 10]$.
End of explanation
"""
nsamples = 100000
nburn = 10000
with model:
step = pm.Metropolis()
trace = pm.sample(nsamples, step, progressbar=True)
"""
Explanation: Run the MCMC simulations
End of explanation
"""
pm.traceplot(trace[nburn:])
plt.show()
"""
Explanation: Plot the estimated posterior distribution and traces:
End of explanation
"""
from triangle import corner
theta_chain = trace[nburn:]['theta']
chi_chain = trace[nburn:]['chi']
eps_chain = trace[nburn:]['eps']
data = np.vstack((theta_chain, chi_chain, eps_chain)).T
corner(data)
plt.show()
"""
Explanation: Corner plot
End of explanation
"""
npp = 100
repeats = 100
fig, ax = plt.subplots(figsize=(10, 10))
xfit = np.linspace(xobs.min(), xobs.max(), 100)
for i in np.arange(0, npp, 1):
s = trace[np.random.randint(nburn, nsamples)]
chi = s['chi']
theta = s['theta']
cotchi = np.cos(chi)/np.sin(chi)
eps = s['eps']
yfit = xfit + np.log10((np.abs(2*theta*cotchi) + theta**2 + cotchi**2)/(2*np.pi))
ax.plot(xfit, yfit, "-", lw=0.5, color='k')
ax.fill_between(xfit, yfit-eps, yfit+eps, alpha=0.01, color='k')
ax.plot(xobs, yobs, "o", color='r', markersize=10)
ax.set_xlabel(r"$\log_{10}(|\dot{\nu}|)$")
ax.set_ylabel(r"$\log_{10}(\Delta\dot{\nu})$")
plt.show()
"""
Explanation: Plot 100 realisations with the data
End of explanation
"""
def y(x, A, B):
return A*np.sin(x) - B*np.cos(2*x)
x = np.linspace(0, 2*np.pi, 1000)
A = 1
B = 0.8
fig, ax = plt.subplots()
yplot = y(x, A, B)
ax.plot(x, yplot)
ax.set_xticks([0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi])
ax.set_xticklabels(["0", "$\pi/2$", "$\pi$", "$3\pi/2$", "$2\pi$"])
actual_dy = yplot.max() - yplot.min()
predic_dy = abs(A) + 2*B + A**2 / (8*B)
print "Actual dy = {:1.3f}, predicted df = {:1.3f}".format(actual_dy, predic_dy)
plt.show()
"""
Explanation: Derivation of peak-to-peak amplitude
If we define
$$ y = A \sin(x) - B\cos(2x) $$
It can be shown that this has four extrema in $[0, 2\pi]$ which are
$$ x = \sin^{-1}\left(\frac{-A}{4B}\right), \; \pi - \sin^{-1}\left(\frac{-A}{4B}\right), \; \frac{\pi}{2}, \; \frac{3\pi}{2} $$
Let's check this (and help decide which is which):
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/snu/cmip6/models/sandbox-2/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
import pg8000
conn = pg8000.connect(database="homework2")
"""
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
"""
conn.rollback()
"""
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title from uitem where scifi = 1 and horror = 1 order by release_date desc;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
"""
cursor = conn.cursor()
statement = "select count(movie_id) from uitem where musical = 1 or childrens = 1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
"""
cursor = conn.cursor()
statement = "select occupation, count(occupation) from uuser group by occupation having count(occupation) > 50;"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
"""
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title from uitem join udata on uitem.movie_id = udata.item_id where documentary = 1 and release_date < '1992-01-01' and rating = 5 group by movie_title;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title, round(avg(rating), 2) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by movie_title order by avg(rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
"""
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title, round(avg(rating), 2) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by movie_title having count(rating) >= 10 order by avg(rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
"""
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation
"""
import pandas as pd # pandas for handling mixed data sets
import numpy as np # numpy for basic math and matrix operations
"""
Explanation: License
Copyright (C) 2017 J. Patrick Hall, jphall@gwu.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Simple discretization - Pandas and numpy
Imports
End of explanation
"""
scratch_df = pd.DataFrame({'x1': pd.Series(np.random.randn(20))})
scratch_df
"""
Explanation: Create sample data set
End of explanation
"""
scratch_df['x1_discrete'] = pd.DataFrame(pd.cut(scratch_df['x1'], 5))
scratch_df
"""
Explanation: Discretize
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
import matplotlib.pyplot as plt
from astroML.plotting.mcmc import convert_to_stdev


def gauss_logL(xbar, V, n, sigma, mu):
"""Equation 5.58: gaussian likelihood"""
return (-(n + 1) * np.log(sigma)
- 0.5 * n * ((xbar - mu) ** 2 + V) / sigma ** 2)
#------------------------------------------------------------
# Define the grid and compute logL
sigma = np.linspace(1, 5, 70)
mu = np.linspace(-3, 5, 70)
xbar = 1
V = 4
n = 10
logL = gauss_logL(xbar, V, n, sigma[:, np.newaxis], mu)
logL -= logL.max()
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(9, 7.75))
plt.imshow(logL, origin='lower',
extent=(mu[0], mu[-1], sigma[0], sigma[-1]),
cmap=plt.cm.binary,
aspect='auto')
plt.colorbar().set_label(r'$\log(L)$')
plt.clim(-5, 0)
plt.contour(mu, sigma, convert_to_stdev(logL),
levels=(0.683, 0.955, 0.997),
colors='k')
plt.text(0.5, 0.93, r'$L(\mu,\sigma)\ \mathrm{for}\ \bar{x}=%d,\ V=%d,\ n=%d$'%(xbar,V,n),
bbox=dict(ec='k', fc='w', alpha=0.9),
ha='center', va='center', transform=plt.gca().transAxes)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$\sigma$');
"""
Explanation: 5.6 Simple Examples of Bayesian Analysis: Parameter Estimation
MLE vs Bayesian Estimation
MLE:
We sought a point value for a parameter that maximized the likelihood
No priors
Probability statements about parameters were meaningless
Bayesian Estimation:
We maximize the posterior to get a probability density function for the parameter(s) rather than a single point
We can update data with prior assumptions to make probability statements about the parameters
Parameter Estimation for a Gaussian Distribution
Known errors
Suppose we have a set of $N$ values ${x_{i}}$ drawn from a Gaussian distribution with known errors $\sigma_{i}$. If ${x_{i}}$ are i.i.d., the likelihood of the entire data set is just the product of the likelihood for each particular value:
$$ p({x_i}\,|\,\mu,\,I) = {\displaystyle \prod_{i=1}^{N}} \, \frac{1}{\sqrt{2\pi}\sigma_i} \mbox{exp}\Big(\frac{-(x_i-\mu)^2}{2\sigma_i^2}\Big) $$
If we take our prior to be uniform such that in the range $\mu_{min}<\mu<\mu_{max}$
$$p(\mu|I)=C,\hspace{1cm}$$
where $$C=(\mu_{max}-\mu_{min})^{-1}$$
then the posterior pdf for the parameter $\mu$ is
$$ p(\mu|{x_i}, {\sigma_i},I) = p({x_i}|\mu,I)\,p(\mu|I)$$
and its logarithm is
$$ L_p\equiv\mbox{ln}[p(\mu|{x_i}, {\sigma_i},I)] = \mbox{constant} - {\displaystyle \sum_{i=1}^{N}} \frac{(x_i-\mu)^2}{2\sigma_i^2}$$
Maximizing $L_p$ with respect to the parameter $\mu$ via $dL_p/d\mu|_{\mu=\mu_0}=0$ gives
$$ \mu_0=\frac{\Sigma_{i=1}^{N}w_ix_i}{\Sigma_{i=1}^{N}w_i} $$
where the weights are $w_i = 1/\sigma_i^2$,
and the width of the posterior pdf for $\mu$ is
$$ \sigma_\mu=\Big(-\frac{d^2L_p}{d\mu^2}|_{\mu=\mu_0}\Big)^{-1/2}=\Big({\displaystyle \sum_i}w_i\Big)^{-1/2}$$
We see that using a flat prior gave us the same result as the maximum likelihood estimation. If we had assumed our errors were homoscedastic, this would just reduce to the standard result given by
$$ \mu_0=\frac{1}{N}{\displaystyle \sum_{i=1}^N}x_i$$
Conclusion: when errors are known, the posterior pdf for $\mu$ is Gaussian. What happens when $\sigma$ is unknown?
Unknown errors (two-dimentional posterior)
Now the $\sigma$ needs to be determined from the data. We can interpret $\sigma$ as the intrinsic spread of the measured quantity (assuming individual measurement errors are negligible) or we can equivalently say that all measurements have the same unknown precision.
When N is large, we can assume Gaussian uncertainty of $\mu$ with its width given by
$$ \sigma_{\bar{x}}=\frac{s}{\sqrt{N}} $$
For small N, however, the posterior pdf for $\mu$ follows a different distribution. To determine what this distribution is we first write likelihood
$$ p({x_i}\,|\,\mu,\sigma,\,I) = {\displaystyle \prod_{i=1}^{N}} \, \frac{1}{\sqrt{2\pi}\sigma} \mbox{exp}\Big(\frac{-(x_i-\mu)^2}{2\sigma^2}\Big) $$
where $\sigma$ is unknown and homoscedastic. Assuming a uniform prior for $\mu$ and a scale-invariant prior for $\sigma$,
$$ p(\mu, \sigma|I)\,\propto\,\frac{1}{\sigma} $$
we get (after some normalization)
$$ L_p = \mbox{constant} - (N+1)\,\mbox{ln}\sigma-{\displaystyle \sum_{i=1}^{N}} \frac{(x_i - \mu)^2}{2\sigma^2}$$
Making the substitution $V=N^{-1}\Sigma_{i=1}^{N}(x_i-\bar{x})^2$, the posterior becomes
$$ L_p = \mbox{constant} - (N+1)\,\mbox{ln}\sigma - \frac{N}{2\sigma^2}((\bar{x}-\mu)^2 + V)$$
$L_p$ is rewritten in terms of $(N, \bar{x}, V)$ because they fully capture the entire information content of the data set. They are called $\textit{sufficient}$ statistics for this reason.
Once again, we can find the position of the maximum ($\mu_0,\sigma_0$) by setting $dL_p/d\mu|{\mu=\mu_0} = 0$ and $dL_p/d\sigma|{\sigma=\sigma_0} = 0$ to get
$$ \mu_0=\bar{x} $$
$$ \sigma_0=\Big(\frac{VN}{N+1}\Big)^{1/2}$$
For large $N$, the latter approaches the sample standard deviation $s = \sqrt{\frac{1}{N-1}{\displaystyle \sum_{i=1}^{N}}(x_i-\bar{x})^2}$.
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
import matplotlib.pyplot as plt
from astroML.stats import mean_sigma
from astroML.resample import bootstrap

#------------------------------------------------------------
# Define functions for computations below
# These are expected analytic fits to the posterior distributions
def compute_pmu(mu, xbar, V, n):
return (1 + (xbar - mu) ** 2 / V) ** (-0.5 * n)
def compute_pmu_alt(mu, xbar, V, n):
return (1 + (xbar - mu) ** 2 / V) ** (-0.5 * (n - 1))
def compute_psig(sig, V, n):
return (sig ** -n) * np.exp(-0.5 * n * V / sig ** 2)
def compute_psig_alt(sig, V, n):
return (sig ** -(n - 1)) * np.exp(-0.5 * n * V / sig ** 2)
def gaussian(x, mu, sigma):
return np.exp(-0.5 * (x - mu) ** 2 / sigma ** 2)
#------------------------------------------------------------
# Draw a random sample from the distribution, and compute
# some quantities
n=10
#n = 77
xbar = 1
V = 4
sigma_x = np.sqrt(V)
np.random.seed(10)
xi = np.random.normal(xbar, sigma_x, size=n)
mu_mean, sig_mean = mean_sigma(xi, ddof=1)
# compute the analytically expected spread in measurements
mu_std = sig_mean / np.sqrt(n)
sig_std = sig_mean / np.sqrt(2 * (n - 1))
#------------------------------------------------------------
# bootstrap estimates
mu_bootstrap, sig_bootstrap = bootstrap(xi, 1E6, mean_sigma,
kwargs=dict(ddof=1, axis=1))
#------------------------------------------------------------
# Compute analytic posteriors
# distributions for the mean
mu = np.linspace(-3, 5, 1000)
dmu = mu[1] - mu[0]
pmu = compute_pmu(mu, xbar, V, n)
pmu /= (dmu * pmu.sum())
pmu2 = compute_pmu_alt(mu, xbar, V, n)
pmu2 /= (dmu * pmu2.sum())
pmu_norm = gaussian(mu, mu_mean, mu_std)
pmu_norm /= (dmu * pmu_norm.sum())
mu_hist, mu_bins = np.histogram(mu_bootstrap, 20)
mu_dbin = np.diff(mu_bins).astype(float)
mu_hist = mu_hist / mu_dbin / mu_hist.sum()
# distributions for the standard deviation
sig = np.linspace(1E-4, 5, 1000)
dsig = sig[1] - sig[0]
psig = compute_psig(sig, V, n)
psig /= (dsig * psig.sum())
psig2 = compute_psig_alt(sig, V, n)
psig2 /= (dsig * psig2.sum())
psig_norm = gaussian(sig, sig_mean, sig_std)
psig_norm /= (dsig * psig_norm.sum())
sig_hist, sig_bins = np.histogram(sig_bootstrap, 20)
sig_dbin = np.diff(sig_bins).astype(float)
sig_hist = sig_hist / sig_dbin / sig_hist.sum()
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(11, 11))
fig.subplots_adjust(wspace=0.35, right=0.95,
hspace=0.2, top=0.95)
# plot posteriors for mu
ax1 = plt.subplot(221, yscale='log')
ax1.plot(mu, pmu, '-b')
ax1.plot(mu, pmu2, ':m')
ax1.plot(mu, pmu_norm, '--r')
ax1.scatter(mu_bins[1:] - 0.5 * mu_dbin, mu_hist,
edgecolor='k', facecolor='none')
ax1.set_xlabel(r'$\mu$')
ax1.set_ylabel(r'$p(\mu|x,I)$')
ax2 = plt.subplot(223, sharex=ax1)
ax2.plot(mu, pmu.cumsum() * dmu, '-b')
ax2.plot(mu, pmu_norm.cumsum() * dmu, '--r')
ax2.scatter(mu_bins[1:] - 0.5 * mu_dbin, mu_hist.cumsum() * mu_dbin,
edgecolor='k', facecolor='none')
ax2.set_xlim(-3, 5)
ax2.set_xlabel(r'$\mu$')
ax2.set_ylabel(r'$P(<\mu|x,I)$')
# plot posteriors for sigma
ax3 = plt.subplot(222, sharey=ax1)
ax3.plot(sig, psig, '-b')
ax3.plot(sig, psig2, ':m')
ax3.plot(sig, psig_norm, '--r')
ax3.scatter(sig_bins[1:] - 0.5 * sig_dbin, sig_hist,
edgecolor='k', facecolor='none')
ax3.set_ylim(1E-4, 2)
ax3.set_xlabel(r'$\sigma$')
ax3.set_ylabel(r'$p(\sigma|x,I)$')
ax4 = plt.subplot(224, sharex=ax3, sharey=ax2)
ax4.plot(sig, psig.cumsum() * dsig, '-b')
ax4.plot(sig, psig_norm.cumsum() * dsig, '--r')
ax4.scatter(sig_bins[1:] - 0.5 * sig_dbin, sig_hist.cumsum() * sig_dbin,
edgecolor='k', facecolor='none')
ax4.set_ylim(0, 1.05)
ax4.set_xlim(0, 5)
ax4.set_xlabel(r'$\sigma$')
ax4.set_ylabel(r'$P(<\sigma|x,I)$');
"""
Explanation: If we knew the width of our distribution $\sigma$, then the posterior probability $p(\mu|\sigma)$ would be Gaussian. However we don't know $\sigma$, so we have to marginalize our posterior by integrating over all possible values of $\sigma$
$$ p(\mu|{x_i}, I)=\int_0^\infty p(\mu, \sigma|{x_i}, I)\,d\sigma$$
which gives a marginal pdf for $p(\mu)$ which is proportional to
$$ p(\mu|{x_i}, I) \propto \Big[1+\frac{(\bar{x}-\mu)^2}{V}\Big]^{-N/2} $$
Thus in cases when N is small and the errors are unknown and homoscedastic, the underlying distribution for the posterior follows a Student's $t$ distribution.
In the limit of large N, this essentially approaches a Gaussian. Moreover, marginalizing with respect to $\mu$ gives a posterior pdf for $\sigma$ that resembles a $\chi^2$ distribution
$$ p(\sigma|{x_i}, I) \propto \frac{1}{\sigma^N}\mbox{exp}\Big(\frac{-NV}{2\sigma^2}\Big) $$
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import anderson, norm
from astroML.stats import mean_sigma, median_sigmaG

#------------------------------------------------------------
# Create distributions
# draw underlying points
np.random.seed(0)
Npts = int(1E6)
x = np.random.normal(loc=0, scale=1, size=Npts)
# add error for each point
e = 3 * np.random.random(Npts)
x += np.random.normal(0, e)
# compute anderson-darling test
A2, sig, crit = anderson(x)
print("anderson-darling A^2 = %.1f" % A2)
# compute point statistics
mu_sample, sig_sample = mean_sigma(x, ddof=1)
med_sample, sigG_sample = median_sigmaG(x)
#------------------------------------------------------------
# plot the results
fig, ax = plt.subplots(figsize=(9, 7.75))
ax.hist(x, 100, histtype='stepfilled', alpha=0.2,
        color='k', density=True)
# plot the fitting normal curves
x_sample = np.linspace(-15, 15, 1000)
ax.plot(x_sample, norm(mu_sample, sig_sample).pdf(x_sample),
'-k', label='$\sigma$ fit')
ax.plot(x_sample, norm(med_sample, sigG_sample).pdf(x_sample),
'--k', label='$\sigma_G$ fit')
ax.legend()
ax.set_xlim(-7.5, 7.5)
ax.set_xlabel('$x$')
ax.set_ylabel('$p(x)$');
"""
Explanation: General Case
Data drawn from some distribution
Known heteroscedastic measurment errors
Unknown intrinsic spread
Following the same procedure as before, the posterior is given by
$$ p({x_i}|\mu, \sigma, I)={\displaystyle \prod_{i=1}^{N}} \frac{1}{\sqrt{2\pi}(\sigma^2+e_i^2)^{1/2}}\mbox{exp}\Big(\frac{-(x_i-\mu)^2}{2(\sigma^2+e_i^2)}\Big)$$
$$ L_p = \mbox{constant} - \frac{1}{2}{\displaystyle \sum_{i=1}^{N}} \Big(\mbox{ln}(\sigma^2+e_i^2)+\frac{(x_i-\mu)^2}{\sigma^2+e_i^2}\Big) $$
$\sigma$ is coupled to $e_i$
Underlying distribution of {$x_i$} is no longer Gaussian -- weighted sum of Gaussians with varying widths
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
import matplotlib.pyplot as plt
from astroML.plotting.mcmc import convert_to_stdev


def gaussgauss_logL(xi, ei, mu, sigma):
"""Equation 5.63: gaussian likelihood with gaussian errors"""
ndim = len(np.broadcast(sigma, mu).shape)
xi = xi.reshape(xi.shape + tuple(ndim * [1]))
ei = ei.reshape(ei.shape + tuple(ndim * [1]))
s2_e2 = sigma ** 2 + ei ** 2
return -0.5 * np.sum(np.log(s2_e2) + (xi - mu) ** 2 / s2_e2, 0)
#------------------------------------------------------------
# Define the grid and compute logL
np.random.seed(5)
mu_true = 1.
sigma_true = 1.
N = 10
ei = 3 * np.random.random(N)
xi = np.random.normal(mu_true, np.sqrt(sigma_true ** 2 + ei ** 2))
sigma = np.linspace(0.01, 5, 70)
mu = np.linspace(-3, 5, 70)
logL = gaussgauss_logL(xi, ei, mu, sigma[:, np.newaxis])
logL -= logL.max()
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(9, 7.75))
plt.imshow(logL, origin='lower',
extent=(mu[0], mu[-1], sigma[0], sigma[-1]),
cmap=plt.cm.binary,
aspect='auto')
plt.colorbar().set_label(r'$\log(L)$')
plt.clim(-5, 0)
plt.text(0.5, 0.93,
(r'$L(\mu,\sigma)\ \mathrm{for}\ \bar{x}=1,\ '
r'\sigma_{\rm true}=1,\ n=10$'),
bbox=dict(ec='k', fc='w', alpha=0.9),
ha='center', va='center', transform=plt.gca().transAxes)
plt.contour(mu, sigma, convert_to_stdev(logL),
levels=(0.683, 0.955, 0.997),
colors='k')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$\sigma$');
"""
Explanation: Taking the derivative of $L_p$ with respect to $\mu$ and setting it to zero gives a maximum value for $\mu=\mu_0$
$$ \mu_0=\frac{\Sigma_i w_i x_i}{\Sigma_i w_i} $$
but with different weights
$$ w_i = \frac{1}{\sigma_0^2 + e_i^2} $$
We can provide an additional constraint for $\sigma_0$ by setting the derivative of $L_p$ with respect to $\sigma$ equal to zero, but this does not lead to a closed form expression and requires simulatenously solving two complicated equations. Instead, the sample median is actually a pretty good estimator of $\mu_0$, and the maximum width can be estimated as
$$ \sigma_0^2=\zeta^2\sigma_G^2-e_{50}^2 $$
where $\sigma_G$ is the quartile-based estimator, $e_{50}$ is the median error, and $\zeta=\mbox{median}(\tilde{\sigma_i})/\mbox{mean}(\tilde{\sigma_i})$, where
$$ \tilde{\sigma_i} = (\sigma_G^2+e_i^2-e_{50}^2)^{1/2} $$
End of explanation
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import truncnorm, uniform
from astroML.plotting.mcmc import convert_to_stdev


def gausslin_logL(xi, A=0.5, sigma=1.0, mu=5.0, L=10.0):
"""Equation 5.80: gaussian likelihood with uniform background"""
xi = np.asarray(xi)
shape = np.broadcast(sigma, A, mu, L).shape
xi = xi.reshape(xi.shape + tuple([1 for s in shape]))
return np.sum(np.log(A * np.exp(-0.5 * ((xi - mu) / sigma) ** 2)
/ (sigma * np.sqrt(2 * np.pi))
+ (1. - A) / L), 0)
#------------------------------------------------------------
# Define the distribution
np.random.seed(0)
mu = 5.0
sigma = 1.0
L = 10.0
A = 0.5
N = 200
xi = np.random.random(N)
NA = np.sum(xi < A)
dist1 = truncnorm((0 - mu) / sigma, (L - mu) / sigma, mu, sigma)
dist2 = uniform(0, 10)
xi[:NA] = dist1.rvs(NA)
xi[NA:] = dist2.rvs(N - NA)
x = np.linspace(-1, 11, 1000)
fracA = NA * 1. / N
#------------------------------------------------------------
# define the (sigma, A) grid and compute logL
sigma = np.linspace(0.5, 2, 70)
A = np.linspace(0, 1, 70)
logL = gausslin_logL(xi, A[:, np.newaxis], sigma)
logL -= logL.max()
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(7, 10))
fig.subplots_adjust(bottom=0.07, left=0.11, hspace=0.15, top=0.95)
ax = fig.add_subplot(211)
plt.imshow(logL, origin='lower', aspect='auto',
extent=(sigma[0], sigma[-1], A[0], A[-1]),
cmap=plt.cm.binary)
plt.colorbar().set_label(r'$\log(L)$')
plt.clim(-5, 0)
ax.set_xlabel(r'$\sigma$')
ax.set_ylabel(r'$A$')
ax.text(0.5, 0.9, r'$L(\sigma,A)\ (\mathrm{Gauss + bkgd},\ n=200)$',
bbox=dict(ec='k', fc='w', alpha=0.9),
ha='center', va='center', transform=plt.gca().transAxes)
ax.contour(sigma, A, convert_to_stdev(logL),
levels=(0.683, 0.955, 0.997),
colors='k')
ax2 = plt.subplot(212)
ax2.yaxis.set_major_locator(plt.MultipleLocator(0.1))
ax2.plot(x, fracA * dist1.pdf(x) + (1. - fracA) * dist2.pdf(x), '-k')
ax2.hist(xi, 30, density=True, histtype='stepfilled', fc='gray', alpha=0.5)
ax2.set_ylim(0, 0.301)
ax2.set_xlim(-1, 11)
ax2.set_xlabel('$x$')
ax2.set_ylabel('$p(x)$');
"""
Explanation: Because the errors are heteroscedastic, the posterior pdf is no longer symmetric with respect to $\mu$.
Gaussian and a Uniform Background
This time we have a distribution that is a mixture of a Gaussian and a uniform distribution in some interval $W$. This could, for example, represent a profile of a source in an image with an unknown background. Assuming $\mu$ is known, we seek the posterior pdf for $A$ (the relative strength of the Gaussian component; $1-A$ is the background fraction) and $\sigma$. The likelihood of obtaining a measurement $x_i$ is
$$p(x_i|A, \mu, \sigma) = \frac{A}{\sqrt{2\pi}\sigma}\mbox{exp}\Big(\frac{-(x_i-\mu)^2}{2\sigma^2}\Big)+\frac{1-A}{W} $$
Taking the prior to be
$$ p(A, \sigma|I) = \mbox{constant} $$
and requiring $A$ and $\sigma$ to be non-negative, we get
$$ L_p={\displaystyle \sum_{i=1}^N}\mbox{ln}\Big[A\frac{\mbox{exp}\Big(\frac{-(x_i-\mu)^2}{2\sigma^2}\Big)}{\sqrt{2\pi}\sigma}+\frac{1-A}{W}\Big] $$
End of explanation
"""
|
mbeyeler/opencv-machine-learning | notebooks/05.01-Building-Our-First-Decision-Tree.ipynb | mit | data = [
{'age': 33, 'sex': 'F', 'BP': 'high', 'cholesterol': 'high', 'Na': 0.66, 'K': 0.06, 'drug': 'A'},
{'age': 77, 'sex': 'F', 'BP': 'high', 'cholesterol': 'normal', 'Na': 0.19, 'K': 0.03, 'drug': 'D'},
{'age': 88, 'sex': 'M', 'BP': 'normal', 'cholesterol': 'normal', 'Na': 0.80, 'K': 0.05, 'drug': 'B'},
{'age': 39, 'sex': 'F', 'BP': 'low', 'cholesterol': 'normal', 'Na': 0.19, 'K': 0.02, 'drug': 'C'},
{'age': 43, 'sex': 'M', 'BP': 'normal', 'cholesterol': 'high', 'Na': 0.36, 'K': 0.03, 'drug': 'D'},
{'age': 82, 'sex': 'F', 'BP': 'normal', 'cholesterol': 'normal', 'Na': 0.09, 'K': 0.09, 'drug': 'C'},
{'age': 40, 'sex': 'M', 'BP': 'high', 'cholesterol': 'normal', 'Na': 0.89, 'K': 0.02, 'drug': 'A'},
{'age': 88, 'sex': 'M', 'BP': 'normal', 'cholesterol': 'normal', 'Na': 0.80, 'K': 0.05, 'drug': 'B'},
{'age': 29, 'sex': 'F', 'BP': 'high', 'cholesterol': 'normal', 'Na': 0.35, 'K': 0.04, 'drug': 'D'},
{'age': 53, 'sex': 'F', 'BP': 'normal', 'cholesterol': 'normal', 'Na': 0.54, 'K': 0.06, 'drug': 'C'},
{'age': 36, 'sex': 'F', 'BP': 'high', 'cholesterol': 'high', 'Na': 0.53, 'K': 0.05, 'drug': 'A'},
{'age': 63, 'sex': 'M', 'BP': 'low', 'cholesterol': 'high', 'Na': 0.86, 'K': 0.09, 'drug': 'B'},
{'age': 60, 'sex': 'M', 'BP': 'low', 'cholesterol': 'normal', 'Na': 0.66, 'K': 0.04, 'drug': 'C'},
{'age': 55, 'sex': 'M', 'BP': 'high', 'cholesterol': 'high', 'Na': 0.82, 'K': 0.04, 'drug': 'B'},
{'age': 35, 'sex': 'F', 'BP': 'normal', 'cholesterol': 'high', 'Na': 0.27, 'K': 0.03, 'drug': 'D'},
{'age': 23, 'sex': 'F', 'BP': 'high', 'cholesterol': 'high', 'Na': 0.55, 'K': 0.08, 'drug': 'A'},
{'age': 49, 'sex': 'F', 'BP': 'low', 'cholesterol': 'normal', 'Na': 0.27, 'K': 0.05, 'drug': 'C'},
{'age': 27, 'sex': 'M', 'BP': 'normal', 'cholesterol': 'normal', 'Na': 0.77, 'K': 0.02, 'drug': 'B'},
{'age': 51, 'sex': 'F', 'BP': 'low', 'cholesterol': 'high', 'Na': 0.20, 'K': 0.02, 'drug': 'D'},
{'age': 38, 'sex': 'M', 'BP': 'high', 'cholesterol': 'normal', 'Na': 0.78, 'K': 0.05, 'drug': 'A'}
]
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Using Decision Trees to Make a Medical Diagnosis | Contents | Using Decision Trees to Diagnose Breast Cancer >
Building Our First Decision Tree
Let's consider the example where a number of patients have suffered from the same illness - such as a rare form of basorexia. Let's further assume that the true causes of the disease remain unknown to this day, and that all the information that is available to us consists of a bunch of physiological measurements. For example, we might have access to the following information:
a patient's blood pressure ('BP'),
a patient's cholesterol level ('cholesterol'),
a patient's gender ('sex'),
a patient's age ('age'),
a patient's blood sodium concentration ('Na'), and
a patient's blood potassium concentration ('K').
Based on all this information, let's suppose a doctor made recommendations for the patient to treat their disease using one of four possible drugs - drug A, B, C, or D.
We have data available for 20 different patients:
End of explanation
"""
target = [d['drug'] for d in data]
target
"""
Explanation: Understanding the task by understanding the data
What is always the first step in tackling a new machine learning problem?
You are absolutely right: to get a sense of the data. The better you understand the data, the better you understand the problem you are trying to solve.
The first thing to realize is that the 'drug' column is actually not a feature value like all the other columns.
Thus, it would be a good idea to remove all 'drug' entries from the dictionaries listed in the 'data' variable and store them in a separate variable. For this we need to go through the list and extract the 'drug' entry, which is easiest to do with a list comprehension:
End of explanation
"""
[d.pop('drug') for d in data];
"""
Explanation: Then remove the 'drug' entry from all the dictionaries:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
age = [d['age'] for d in data]
age
sodium = [d['Na'] for d in data]
potassium = [d['K'] for d in data]
plt.figure(figsize=(10, 6))
plt.scatter(sodium, potassium)
plt.xlabel('sodium')
plt.ylabel('potassium')
"""
Explanation: Sweet! Now let's look at the data:
End of explanation
"""
target = [ord(t) - 65 for t in target]
target
plt.figure(figsize=(14, 10))
plt.subplot(221)
plt.scatter([d['Na'] for d in data], [d['K'] for d in data],
c=target, s=100)
plt.xlabel('sodium (Na)')
plt.ylabel('potassium (K)')
plt.subplot(222)
plt.scatter([d['age'] for d in data], [d['K'] for d in data],
c=target, s=100)
plt.xlabel('age')
plt.ylabel('potassium (K)')
plt.subplot(223)
plt.scatter([d['age'] for d in data], [d['Na'] for d in data],
c=target, s=100)
plt.xlabel('age')
plt.ylabel('sodium (Na)')
"""
Explanation: But, what we really want is to color the data points according to their target labels:
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False)
data_pre = vec.fit_transform(data)
vec.get_feature_names()
data_pre[0]
"""
Explanation: Preprocessing the data
We need to convert all categorical features into numerical features:
End of explanation
"""
import numpy as np
data_pre = np.array(data_pre, dtype=np.float32)
target = np.array(target, dtype=np.float32).reshape((-1, 1))
data_pre.shape, target.shape
"""
Explanation: Convert to 32-bit floating point numbers in order to make OpenCV happy:
End of explanation
"""
import sklearn.model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(
data_pre, target, test_size=5, random_state=42
)
"""
Explanation: Then split data into training and test sets:
End of explanation
"""
import cv2
dtree = cv2.ml.DTrees_create()
"""
Explanation: Building the decision tree
Building the decision tree with OpenCV works in much the same way as in Chapter 3, First Steps in Supervised Learning. Recall that all machine learning functions reside in OpenCV 3.1's ml module. You can create an empty decision tree using the following code:
End of explanation
"""
# dtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
"""
Explanation: Then train the model:
Note: It appears the DTrees object in OpenCV 3.1 is broken (segmentation fault). As a result, calling the train method will lead to "The kernel has died unexpectedly." There's a bug report here.
End of explanation
"""
# y_pred = dtree.predict(X_test)
"""
Explanation: Predict some values:
End of explanation
"""
# from sklearn import metrics
# metrics.accuracy_score(y_test, dtree.predict(X_test))
# metrics.accuracy_score(y_train, dtree.predict(X_train))
"""
Explanation: Calculate the score on the training and test sets:
End of explanation
"""
from sklearn import tree
dtc = tree.DecisionTreeClassifier()
"""
Explanation: Visualizing a trained decision tree
OpenCV's implementation of decision trees is good enough if you are just starting out and don't care too much about what's going on under the hood. However, in the following sections we will switch to scikit-learn, whose implementation allows us to customize the algorithm and makes it a lot easier to investigate the inner workings of the tree. Its usage is also much better documented.
End of explanation
"""
dtc.fit(X_train, y_train)
dtc.score(X_train, y_train)
dtc.score(X_test, y_test)
"""
Explanation: The model is trained by calling fit:
End of explanation
"""
with open("tree.dot", 'w') as f:
f = tree.export_graphviz(dtc, out_file=f,
feature_names=vec.get_feature_names(),
class_names=['A', 'B', 'C', 'D'])
"""
Explanation: Now, here's the cool thing: If you want to know what the tree looks like, you can do so using GraphViz to create a PDF file (or any other supported file type) from the tree structure. For this to work, you need to install GraphViz first, which you can do from the command line using conda:
$ conda install graphviz
End of explanation
"""
dtc.feature_importances_
"""
Explanation: Then, back on the command line, you can use GraphViz to turn "tree.dot" into (for example) a PNG file:
$ dot -Tpng tree.dot -o tree.png
Rating the importance of features
Scikit-Learn provides a function to rate feature importance, which is a number between 0 and 1 for each feature, where 0 means "not used at all in any decisions made" and 1 means "perfectly predicts the target". The feature importances are normalized such that they all sum to 1:
End of explanation
"""
plt.figure(figsize=(12, 6))
plt.barh(range(10), dtc.feature_importances_, align='center', tick_label=vec.get_feature_names())
"""
Explanation: If we remind ourselves of the feature names, it will become clear which features seem to be the most important. A plot might be most informative:
End of explanation
"""
dtce = tree.DecisionTreeClassifier(criterion='entropy')
dtce.fit(X_train, y_train)
dtce.score(X_train, y_train)
dtce.score(X_test, y_test)
with open("tree.dot", 'w') as f:
f = tree.export_graphviz(dtce, out_file=f,
feature_names=vec.get_feature_names(),
class_names=['A', 'B', 'C', 'D'])
"""
Explanation: Understanding decision rules
Two of the most commonly used criteria for making decisions are the following:
criterion='gini': The Gini impurity is a measure of misclassification, with the aim of minimizing the probability of misclassification.
criterion='entropy': In information theory, entropy is a measure of the amount of uncertainty associated with a signal or distribution.
In Scikit-Learn, you can specify the split criterion in the constructor of the decision tree call. For example, if you want to use entropy, you would type the following:
End of explanation
"""
dtc0 = tree.DecisionTreeClassifier(criterion='entropy', max_leaf_nodes=6)
dtc0.fit(X_train, y_train)
dtc0.score(X_train, y_train)
dtc0.score(X_test, y_test)
"""
Explanation: Controlling the complexity of decision trees
There are two common ways to avoid overfitting:
pre-pruning: This is the process of stopping the creation of the tree early.
post-pruning (or just pruning): This is the process of first building the tree and then removing or collapsing nodes that contain little information.
There are a number of ways to pre-prune a tree, all of which can be achieved by passing optional arguments to the DecisionTreeClassifier constructor:
limiting the maximum depth of the tree via the max_depth parameter
limiting the maximum number of leaf nodes via max_leaf_nodes
requiring a minimum number of points in a node to keep splitting it via min_samples_split
End of explanation
"""
|
Karuntg/SDSS_SSC | Analysis_2020/recalibration_gray.ipynb | gpl-3.0 | %matplotlib inline
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.table import hstack
import matplotlib.pyplot as plt
import numpy as np
from astroML.plotting import hist
# for astroML installation see https://www.astroml.org/user_guide/installation.html
## automatically reload any modules read below that might have changed (e.g. plots)
%load_ext autoreload
%autoreload 2
# importing ZI and KT tools:
import ZItools as zit
import KTtools as ktt
"""
Explanation: Compare photometry in the new Stripe82 catalog
to Gaia DR2 photometry and derive corrections for
gray systematics using Gmag photometry
input: N2020_stripe82calibStars.dat
output: stripe82calibStars_v3.1.dat
files with RA/Dec corrections: ZPcorrectionsRA_v3.1_final.dat and ZPcorrectionsDec_v3.1_final.dat
makes paper plots:
GmagCorrection_RA_Hess.png GmagCorrection_Dec_Hess.png
GmagCorrectionTest_Gmag_Hess.png
End of explanation
"""
ZIdataDir = "/Users/ivezic/Work/Science/CalibrationV2/SDSS_SSC/Data"
# the original SDSS catalog from 2007
sdssOldCat = ZIdataDir + "/" + "stripe82calibStars_v2.6.dat"
# INPUT: Karun's new catalog from 2020
sdssNewCatIn = ZIdataDir + "/" + "N2020_stripe82calibStars.dat"
readFormat = 'csv'
# OUTPUT: with Gmag-based gray corrections
sdssNewCatOut = ZIdataDir + "/" + "stripe82calibStars_v3.1.dat"
# Gaia DR2
GaiaDR2Cat = ZIdataDir + "/" + "Stripe82_GaiaDR2.dat"
# Gaia DR2 with BP and RP data
GaiaDR2CatBR = ZIdataDir + "/" + "Stripe82_GaiaDR2_BPRP.dat"
# both new and old files use identical data structure
colnamesSDSS = ['calib_fla', 'ra', 'dec', 'raRMS', 'decRMS', 'nEpochs', 'AR_val',
'u_Nobs', 'u_mMed', 'u_mMean', 'u_mErr', 'u_rms_scatt', 'u_chi2',
'g_Nobs', 'g_mMed', 'g_mMean', 'g_mErr', 'g_rms_scatt', 'g_chi2',
'r_Nobs', 'r_mMed', 'r_mMean', 'r_mErr', 'r_rms_scatt', 'r_chi2',
'i_Nobs', 'i_mMed', 'i_mMean', 'i_mErr', 'i_rms_scatt', 'i_chi2',
'z_Nobs', 'z_mMed', 'z_mMean', 'z_mErr', 'z_rms_scatt', 'z_chi2']
%%time
# old
sdssOld = Table.read(sdssOldCat, format='ascii', names=colnamesSDSS)
np.size(sdssOld)
%%time
# new
sdssNew = Table.read(sdssNewCatIn, format=readFormat, names=colnamesSDSS)
np.size(sdssNew)
"""
Explanation: <a id='dataReading'></a>
Define paths and catalogs
End of explanation
"""
sdssOld_coords = SkyCoord(ra = sdssOld['ra']*u.degree, dec= sdssOld['dec']*u.degree)
sdssNew_coords = SkyCoord(ra = sdssNew['ra']*u.degree, dec= sdssNew['dec']*u.degree)
# this is matching sdssNew to sdssOld, so that indices are into sdssNew catalog
# makes sense in this case since the sdssOld catalog is (a little bit) bigger
# than sdssNew (1006849 vs 1005470)
idx, d2d, d3d = sdssNew_coords.match_to_catalog_sky(sdssOld_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
new_old = hstack([sdssNew, sdssOld[idx]], table_names = ['new', 'old'])
new_old['sep_2d_arcsec'] = d2d.arcsec
# good matches between the old and new catalogs
MAX_DISTANCE_ARCSEC = 0.5
sdss = new_old[(new_old['sep_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
print(np.size(sdss))
"""
Explanation: Simple positional match using ra/dec
End of explanation
"""
mOK3 = sdss[sdss['ra_new']<1]
mOK3 = zit.selectCatalog(sdss, mOK3)
print(996147/1006849)
print(993774/1006849)
print(991472/1006849)
"""
Explanation: apply standard cuts as in old catalog:
End of explanation
"""
colnamesGaia = ['ra', 'dec', 'nObs', 'Gmag', 'flux', 'fluxErr', 'pmra', 'pmdec']
colnamesGaia = colnamesGaia + ['BPmag', 'BPeI', 'RPmag', 'RPeI', 'BRef']
gaia = Table.read(GaiaDR2CatBR, format='ascii', names=colnamesGaia)
gaia['raG'] = gaia['ra']
gaia['decG'] = gaia['dec']
gaia['GmagErr'] = gaia['fluxErr'] / gaia['flux']
gaia['BR'] = gaia['BPmag'] - gaia['RPmag']
gaia['GBP'] = gaia['Gmag'] - gaia['BPmag']
gaia['GRP'] = gaia['Gmag'] - gaia['RPmag']
sdss_coords = SkyCoord(ra = sdss['ra_old']*u.degree, dec= sdss['dec_old']*u.degree)
gaia_coords = SkyCoord(ra = gaia['raG']*u.degree, dec= gaia['decG']*u.degree)
# this is matching gaia to sdss, so that indices are into sdss catalog
# makes sense in this case since the sdss catalog is bigger than gaia
idxG, d2dG, d3dG = gaia_coords.match_to_catalog_sky(sdss_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
gaia_sdss = hstack([gaia, sdss[idxG]], table_names = ['gaia', 'sdss'])
gaia_sdss['sepSG_2d_arcsec'] = d2dG.arcsec
### code for generating new quantities, such as dra, ddec, colors, differences in mags, etc
def derivedColumns(matches):
matches['dra'] = (matches['ra_new']-matches['ra_old'])*3600
matches['ddec'] = (matches['dec_new']-matches['dec_old'])*3600
matches['ra'] = matches['ra_old']
ra = matches['ra']
matches['raW'] = np.where(ra > 180, ra-360, ra)
matches['dec'] = matches['dec_old']
matches['u'] = matches['u_mMed_old']
matches['g'] = matches['g_mMed_old']
matches['r'] = matches['r_mMed_old']
matches['i'] = matches['i_mMed_old']
matches['z'] = matches['z_mMed_old']
matches['ug'] = matches['u_mMed_old'] - matches['g_mMed_old']
matches['gr'] = matches['g_mMed_old'] - matches['r_mMed_old']
matches['ri'] = matches['r_mMed_old'] - matches['i_mMed_old']
matches['gi'] = matches['g_mMed_old'] - matches['i_mMed_old']
matches['du'] = matches['u_mMed_old'] - matches['u_mMed_new']
matches['dg'] = matches['g_mMed_old'] - matches['g_mMed_new']
matches['dr'] = matches['r_mMed_old'] - matches['r_mMed_new']
matches['di'] = matches['i_mMed_old'] - matches['i_mMed_new']
matches['dz'] = matches['z_mMed_old'] - matches['z_mMed_new']
# Gaia
matches['draGold'] = -3600*(matches['ra_old'] - matches['raG'])
matches['draGnew'] = -3600*(matches['ra_new'] - matches['raG'])
matches['ddecGold'] = -3600*(matches['dec_old'] - matches['decG'])
matches['ddecGnew'] = -3600*(matches['dec_new'] - matches['decG'])
# photometric
matches['gGr_old'] = matches['Gmag'] - matches['r_mMed_old']
matches['gGr_new'] = matches['Gmag'] - matches['r_mMed_new']
matches['gRPr_new'] = matches['RPmag'] - matches['r_mMed_new']
return
derivedColumns(gaia_sdss)
"""
Explanation: Now match to Gaia DR2.
End of explanation
"""
# doGaiaAll(mOK)
def doGaiaGmagCorrection(d, Cstr, Gmax=20.0, yMax=0.03):
# Cstr = 'gGr_old' or 'gGr_new'
gi = d['gi']
Gr = d[Cstr]
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, Cstr, -2, 1.0, 56)
xBin, nPts, medianBin, sigGbin = zit.fitMedians(gi, Gr, -0.7, 4.0, 47, 0)
data = np.array([xBin, medianBin, sigGbin])
Ndata = xBin.size
### HERE WE ARE FITTING A 7TH-ORDER POLYNOMIAL TO Gmag-rSDSS vs. g-i ###
# get best-fit parameters
thetaCloc = zit.best_theta(data,7)
# generate best fit lines on a fine grid
xfit = np.linspace(-1.1, 4.3, 1000)
yfit = zit.polynomial_fit(thetaCloc, xfit)
## added "Poly" because switched to piecewise linear interpolation below
d['gGrFitPoly'] = zit.polynomial_fit(thetaCloc, gi)
d['dgGrPoly'] = d[Cstr] - d['gGrFitPoly']
### PIECEWISE LINEAR INTERPOLATION (AS FOR ALL OTHER COLORS AND SURVEYS)
d['gGrFit'] = np.interp(gi, xBin, medianBin)
d['dgGr'] = d[Cstr] - d['gGrFit']
# SELECT FOR RECALIBRATION wrt RA and Dec
giMin = 0.4
giMax = 3.0
Dc = d[(d['gi']>giMin)&(d['gi']<giMax)]
print('N before and after color cut:', np.size(d), np.size(Dc))
DcB = Dc[(Dc['Gmag']>14.5)&(Dc['Gmag']<Gmax)]
DcB['GrResid'] = DcB['dgGr'] - np.median(DcB['dgGr'])
zit.printStats(DcB['dgGr'])
DcBok = DcB[np.abs(DcB['dgGr'])<0.1]
print(np.size(DcB), np.size(DcBok))
zit.qpBM(DcBok, 'Gmag', 14.5, Gmax, 'GrResid', -1*yMax, yMax, 56)
zit.qpBM(DcBok, 'dec', -1.3, 1.3, 'GrResid', -1*yMax, yMax, 126)
zit.qpBM(DcBok, 'raW', -51.5, 60, 'GrResid', -1*yMax, yMax, 112)
return thetaCloc, DcBok
## first limit astrometric distance and
## require at least 4 epochs as in the old catalog
MAX_DISTANCE_ARCSEC = 0.5
m1 = gaia_sdss[(gaia_sdss['sepSG_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
a1 = m1['g_Nobs_new']
a2 = m1['r_Nobs_new']
a3 = m1['i_Nobs_new']
mOK = m1[(a1>3)&(a2>3)&(a3>3)]
print(len(new_old))
print(len(m1))
print(len(mOK))
def plotAstro2Ddiagrams(d):
### plots
plotNameRoot = 'astroVSpm_RA_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmra', "Xmin":-40, "Xmax":40, "Xlabel":'R.A. proper motion (mas/yr)', \
"Ystr":'draGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
# need to fit draGnew vs. pmra and correct for the mean trend, then plot vs. r mag
pmra = d['pmra']
draGnew = d['draGnew']
xBin, nPts, medianBin, sigGbin = zit.fitMedians(pmra, draGnew, -60, 60, 120, 0)
### PIECEWISE LINEAR INTERPOLATION
d['draGnewFit'] = np.interp(d['pmra'], xBin, medianBin)
draCorr = d['draGnew'] - d['draGnewFit']
draCorrOK = np.where(np.abs(draCorr) < 0.25, draCorr, 0)
d['draGnewCorr'] = draCorrOK
plotNameRoot = 'astroVSpm_RA_r'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'r_mMed_new', "Xmin":14, "Xmax":21, "Xlabel":'SDSS r magnitude', \
"Ystr":'draGnewCorr', "Ymin":-0.12, "Ymax":0.12, "Ylabel":'corr. SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":14, "XmaxBin":21, "nBin":30, \
"plotName":plotName, "Nsigma":0, "offset":0.050, "symbSize":0.05}
kw["nBinX"] = 30
kw["nBinY"] = 24
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
plotNameRoot = 'astroVSpm_Dec_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmdec', "Xmin":-40, "Xmax":40, "Xlabel":'Dec. proper motion (mas/yr)', \
"Ystr":'ddecGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS Dec. - Gaia Dec. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
### produce astrometric plots showing correlation with proper motions
plotAstro2Ddiagrams(mOK)
# print(np.std(mOK['draGnew']), np.std(mOK['ddecGnew']))
#mOK
x = mOK['draGnewCorr']
xOK = x[np.abs(x)<0.25]
print(np.std(xOK), zit.sigG(xOK))
zit.qpBM(mOK, 'pmra', -50, 50, 'draGnew', -0.6, 0.6, 50)
zit.qpBM(mOK, 'pmdec', -50, 50, 'ddecGnew', -0.6, 0.6, 50)
theta, mOKc = doGaiaGmagCorrection(mOK, 'gGr_new')
thetaLoc = theta
## for zero point calibration, in addition to color cut in doGaiaAll, take 16 < G < 19.5
mOKcB = mOKc[(mOKc['Gmag']>16)&(mOKc['Gmag']<19.5)]
mOKcB['GrResid'] = mOKcB['dgGr'] - np.median(mOKcB['dgGr'])
mOKcBok = mOKcB[np.abs(mOKcB['dgGr'])<0.1]
print(np.size(mOKc), np.size(mOKcB), np.size(mOKcBok))
print(np.std(mOKcBok['GrResid']), zit.sigG(mOKcBok['GrResid']))
zit.qpBM(mOKcBok, 'dec', -1.3, 1.3, 'GrResid', -0.03, 0.03, 260)
zit.qpBM(mOKcBok, 'raW', -51.5, 60, 'GrResid', -0.03, 0.03, 112)
"""
Explanation: Select good matches and compare both catalogs to Gaia DR2
End of explanation
"""
def plotGmag2Ddiagrams(d):
### plots
plotNameRoot = 'GrVSgi'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'gi', "Xmin":0.0, "Xmax":3.5, "Xlabel":'SDSS g-i', \
"Ystr":'gGr_new', "Ymin":-1.25, "Ymax":0.25, "Ylabel":'Gaia Gmag - SDSS r', \
"XminBin":-0.5, "XmaxBin":4.0, "nBin":90, \
"plotName":plotName, "Nsigma":3, "offset":0.0, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
def plotGmag2DdiagramsX(d, kw):
# Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
def plotGmagCorrections(d, kw):
### REDEFINE residuals to correspond to "SDSS - others", as in the other cases
d['redef'] = -1*d[kw['Ystr']]
kw['Ystr'] = 'redef'
goodC = d[np.abs(d['redef'])<0.1]
### plots
plotNameRoot = kw['plotNameRoot']
# RA
print(' stats for RA binning medians:')
plotName = plotNameRoot + '_RA.png'
Ylabel = 'residuals for (Gmag$_{SDSS}$ - Gmag$_{GaiaDR2}$) '
kwOC = {"Xstr":'raW', "Xmin":-52, "Xmax":60.5, "Xlabel":'R.A. (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-51.5, "XmaxBin":60, "nBin":112, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_RA_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 112
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Dec
print('-----------')
print(' stats for Dec binning medians:')
plotName = plotNameRoot + '_Dec.png'
kwOC = {"Xstr":'dec', "Xmin":-1.3, "Xmax":1.3, "Xlabel":'Declination (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-1.266, "XmaxBin":1.264, "nBin":252, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Dec_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 252
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
mOK['GrResid'] = mOK['dgGr'] - np.median(mOK['dgGr']) + 0.006
mOKok = mOK[np.abs(mOK['dgGr'])<0.1]
print(np.size(mOK), np.size(mOKok))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKok, keywords)
!cp GmagCorrection_Gmag_Hess.png GmagCorrectionTest_Gmag_Hess.png
mOKokX = mOKok[(mOKok['Gmag']>15)&(mOKok['Gmag']<15.5)]
print(np.median(mOKokX['GrResid']))
mOKokX = mOKok[(mOKok['Gmag']>16)&(mOKok['Gmag']<16.2)]
print(np.median(mOKokX['GrResid']))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKcBok, keywords)
# for calibration: giMin = 0.4 & giMax = 3.0
mOKB = mOK[(mOK['Gmag']>16)&(mOK['Gmag']<19.5)]
plotGmag2Ddiagrams(mOKB)
mOKB
"""
Explanation: Final Figures for the Paper
with Karun's 2D histogram implementation
End of explanation
"""
RAbin, RAnPts, RAmedianBin, RAsigGbin = zit.fitMedians(mOKcBok['raW'], mOKcBok['GrResid'], -51.5, 60.0, 112, 1)
"""
Explanation: Final Gmag-based recalibration
Recalibrate R.A. residuals
End of explanation
"""
decOK = mOKcBok['dec_new']
GrResid = mOKcBok['GrResid']
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decOK, GrResid, s=0.01, c='blue')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.06,0.06)
ax.set_ylim(-0.04,0.04)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Gaia G - SDSS G')
xBin, nPts, medianBin, sigGbin = zit.fitMedians(decOK, GrResid, -1.266, 1.264, 252, 0)
ax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9)
ax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5)
TwoSigP = medianBin + 2*sigGbin
TwoSigM = medianBin - 2*sigGbin
ax.plot(xBin, TwoSigP, c='yellow')
ax.plot(xBin, TwoSigM, c='yellow')
xL = np.linspace(-100,100)
ax.plot(xL, 0*xL+0.00, c='yellow')
ax.plot(xL, 0*xL+0.01, c='red')
ax.plot(xL, 0*xL-0.01, c='red')
dCleft = -1.3
ax.plot(0*xL+dCleft, xL, c='red')
alltheta = []
for i in range(0,12):
decCol = -1.2655 + (i+1)*0.2109
ax.plot(0*xL+decCol, xL, c='red')
xR = xBin[(xBin>dCleft)&(xBin<decCol)]
yR = medianBin[(xBin>dCleft)&(xBin<decCol)]
dyR = sigGbin[(xBin>dCleft)&(xBin<decCol)]
data = np.array([xR, yR, dyR])
theta2 = zit.best_theta(data,5)
alltheta.append(theta2)
yfit = zit.polynomial_fit(theta2, xR)
ax.plot(xR, yfit, c='cyan', lw=2)
dCleft = decCol
rrr = yR - yfit
# print(i, np.median(rrr), np.std(rrr)) # 2 milimag scatter
# print(i, theta2)
plt.savefig('GmagDecCorrections.png')
# let's now correct all mags with this correction
thetaRecalib = alltheta
decLeft = -1.3
for i in range(0,12):
decRight = -1.2655 + (i+1)*0.2109
decArr = np.linspace(decLeft, decRight, 100)
thetaBin = thetaRecalib[i]
ZPfit = zit.polynomial_fit(thetaBin, decArr)
if (i==0):
decCorrGrid = decArr
ZPcorr = ZPfit
else:
decCorrGrid = np.concatenate([decCorrGrid, decArr])
ZPcorr = np.concatenate([ZPcorr, ZPfit])
decLeft = decRight
mOKtest = mOK[mOK['r_Nobs_new']>3]
# Dec correction
decGrid2correct = mOKtest['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, decCorrGrid, ZPcorr)
# RA correction
raWGrid2correct = mOKtest['raW']
ZPcorrectionsRA = np.interp(raWGrid2correct, RAbin, RAmedianBin)
print(np.std(ZPcorrectionsDec), np.std(ZPcorrectionsRA))
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decGrid2correct, ZPcorrectionsDec, s=0.01, c='blue')
ax.plot(decCorrGrid, ZPcorr, c='red')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.02,0.02)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Correction')
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(raWGrid2correct, ZPcorrectionsRA, s=0.01, c='blue')
ax.plot(RAbin, RAmedianBin, c='red')
ax.set_xlim(-52,61)
ax.set_ylim(-0.05,0.05)
ax.set_xlabel('RA (deg)')
ax.set_ylabel('Correction')
np.min(ZPcorrectionsRA)
mOKtest['ZPcorrectionsRA'] = ZPcorrectionsRA
mOKtest['ZPcorrectionsDec'] = ZPcorrectionsDec
mOKtest['r_mMed_new'] = mOKtest['r_mMed_new'] + mOKtest['ZPcorrectionsRA'] + mOKtest['ZPcorrectionsDec']
mOKtest['gGr_new'] = mOKtest['Gmag'] - mOKtest['r_mMed_new']
mOKtest['gGrFit'] = zit.polynomial_fit(thetaCloc, mOKtest['gi'])
mOKtest['dgGr'] = mOKtest['gGr_new'] - mOKtest['gGrFit']
d = mOKtest
gi = d['gi']
Gr = d['gGr_new']
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, 'gGr_new', -2, 1.0, 56)
thetaCtest, DcBokTest_new = doGaiaGmagCorrection(mOKtest, 'gGr_new')
keywords = {"Ystr":'gGr_new', "plotNameRoot":'GmagCorrectionTest', "symbSize":0.05}
mOKtest2 = mOKtest[(mOKtest['gi']>0.4)&(mOKtest['gi']<3.0)]
x = mOKtest2[(mOKtest2['Gmag']>14.5)&(mOKtest2['Gmag']<15.5)]
mOKtest2['gGr_new'] = mOKtest2['gGr_new'] - np.median(x['gGr_new'])
plotGmagCorrections(mOKtest2, keywords)
"""
Explanation: Recalibrate Dec residuals
End of explanation
"""
# final refers to the July 2020 analysis, before the paper submission
#np.savetxt('ZPcorrectionsRA_v3.1_final.dat', (RAbin, RAmedianBin))
#np.savetxt('ZPcorrectionsDec_v3.1_final.dat', (decCorrGrid, ZPcorr))
sdssOut = sdss[sdss['ra_new']<1]
sdssOut = zit.selectCatalog(sdss, sdssOut)
sdssOut.sort('calib_fla_new')
# read back gray zero point recalibration files
zpRAgrid, zpRA = np.loadtxt('ZPcorrectionsRA_v3.1_final.dat')
zpDecgrid, zpDec = np.loadtxt('ZPcorrectionsDec_v3.1_final.dat')
sdssOut
# Dec correction
decGrid2correct = sdssOut['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, zpDecgrid, zpDec)
# RA correction
ra = sdssOut['ra_new']
raWGrid2correct = np.where(ra > 180, ra-360, ra)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpRAgrid, zpRA)
print('gray std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
for b in ('u', 'g', 'r', 'i', 'z'):
for mtype in ('_mMed_new', '_mMean_new'):
mstr = b + mtype
# applying here gray corrections
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
SSCindexRoot = 'CALIBSTARS_'
outFile = ZIdataDir + "/" + "stripe82calibStars_v3.1_noheader_final.dat"
newSSC = open(outFile,'w')
df = sdssOut
Ngood = 0
for i in range(0, np.size(df)):
Ngood += 1
NoldCat = df['calib_fla_new'][i]
strNo = f'{Ngood:07}'
SSCindex = SSCindexRoot + strNo
SSCrow = zit.getSSCentry(df, i)
zit.SSCentryToOutFileRow(SSCrow, SSCindex, newSSC)
newSSC.close()
print(Ngood, 'rows in file', outFile)
"""
Explanation: Now save correction arrays, then apply to original file, and then test
End of explanation
"""
np.size(zpDec)
"""
Explanation: paper plot showing the jump in Gaia Gmag
End of explanation
"""
### need to figure out where were ZPcorrections2_rz_Dec.dat etc produced ...
## color corrections
for mtype in ('_mMed', '_mMean'):
## u band from u-r color
color = 'ur'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('u-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'u' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## g band from g-r color
color = 'gr'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('g-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'g' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## i band from r-i color
color = 'ri'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'i' + mtype
print('r-i std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
## i band from r-z color
color = 'rz'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'z' + mtype
print('r-z std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
"""
Explanation: TEMP code for color corrections to go from 3.1 to 3.2 and 3.3
End of explanation
"""
|
catherinezucker/dustcurve | tutorial.ipynb | gpl-3.0 | import emcee
from dustcurve import model
import seaborn as sns
import numpy as np
from dustcurve import pixclass
import matplotlib.pyplot as plt
import pandas as pd
import warnings
from dustcurve import io
from dustcurve import hputils
from dustcurve import kdist
%matplotlib inline
#this code pulls snippets from the PHYS201 week 9 MCMC notebook written by Vinny Manohoran and the PHYS201 L9 solutions,
#written by Tom Dimiduk and Kevin Shain
#suppress obnoxious deprecation warning that doesn't affect output
warnings.filterwarnings("ignore", category=Warning, module="emcee")
#our pixel of choice
fnames='359992.h5'
#fetch the required likelihood and prior arguments for PTSampler
ldata,pdata=io.fetch_args(fnames,[4,19,0,100],0.06)
# the model has 24 parameters; we'll use 50 walkers, 20000 steps each, at 5 different temps
ndim=24
nslices=12
nwalkers = 50
nsteps = 20000
ntemps=5
#setting off the walkers at the kinematic distance given by the literature, assuming a flat rotation curve, theta=220 km/s, R=8.5 kpc
#Details on rotation curve given in Rosolowsky and Leroy 2006
vslices=np.linspace(-15.6,-1.3,12)
klong=np.ones(12)*hputils.pix2lb_scalar(256,int(fnames[:-3]))[0]
klat=np.ones(12)*hputils.pix2lb_scalar(256,int(fnames[:-3]))[1]
kdist=kdist.kdist(klong,klat,vslices)
kdistmod=5*np.log10(kdist)-5
#slightly perturb the starting positions for each walker, in a ball centered around result
result=kdistmod.tolist()
result.extend(1.0 for i in range (nslices))
starting_positions = [[result + np.random.randn(ndim) for i in range(nwalkers)] for j in range(ntemps)]
#set up the sampler object
sampler = emcee.PTSampler(ntemps, nwalkers, ndim, model.log_likelihood, model.log_prior, loglargs=(ldata), logpargs=[pdata])
#burn in, and save final positions for all parameters, which we'll then set off our walkers at for the "real" thing
post_burn_pos, prob, state = sampler.run_mcmc(starting_positions, 300)
sampler.reset()
print("Setup complete")
# run the sampler and time how long it takes
%time sampler.run_mcmc(post_burn_pos, nsteps) #, thin=thin_int)
print('Sampler Done')
"""
Explanation: Logistics
We are going to use parallel-tempering, implemented via the python emcee package, to explore our posterior, which consists of the set of distances and gas to dust conversion coefficients for the twelve velocity slices. Since we need to explore a 24-dimensional parameter space, we are going to use 50 walkers, 20000 steps each, at 5 different temperatures. If you would like to edit these parameters, simply edit "nwalkers", "ntemps", and "nsteps" in the cell below. However, we are only going to keep the lowest temperature chain ($\beta=1$) for analysis. Since the sampler.chain object from PTSampler returns an array of shape (Ntemps, Nwalkers, Nsteps, Ndim), returning the samples for all walkers, steps, and dimensions at $\beta=1$ would correspond to sampler.chain[0,:,:,:]. To decrease your value of $\beta$, simply increase the index for the first dimension. For more information on how PTSampler works, see http://dan.iel.fm/emcee/current/user/pt/. We will set off our walkers in a Gaussian ball around a) the kinematic distance estimates for the Cepheus molecular cloud given by a flat rotation curve from Leroy & Rosolowsky 2006 and b) the gas-to-dust coefficient given by the literature. We perturb the walkers in a Gaussian ball with mean 0 and variance 1. You can edit the starting positions of the walkers by editing the "result" variable below. We are going to discard the first half of every walker's chain as burn-in.
Setting up the positional arguments for PTSampler
We need to feed PTSampler the required positional arguments for the log_likelihood and log_prior function. We do this using the fetch_args function from the io module, which creates an instance of the pixclass object that holds our data and metadata. Fetch_args accepts three arguments:
1. A string specifying the h5 filename containing your data, in our case 359992.h5
2. The prior bounds you want to impose on a) distances (flat prior), in units of distance modulus, and b) gas-to-dust coefficients (also flat prior). In the script, we multiply our input literature value of choice by this prefactor. The bounds are given as a list in the format [lower_distance, upper_distance, lower_coefficient, upper_coefficient]. For distance, this must be between 4 and 19, because that's the distance modulus range of our stellar posterior array.
3. The gas-to-dust coefficient you'd like to use, given as a float; for this tutorial, we are pulling a value from the literature of 0.06 magnitudes/K. This value is then multiplied by the set of c coefficients we're determining as part of the parameter estimation problem.
Fetch_args will then return the correct arguments for the log_likelihood and log_prior functions within the model module.
Here we go!
End of explanation
"""
#Extract the coldest [beta=1] temperature chain from the sampler object; discard first half of samples as burnin
samples_cold = sampler.chain[0,:,int(nsteps/2):,:]
traces_cold = samples_cold.reshape(-1, ndim).T
#check out acceptance fraction:
print("Our mean acceptance fraction for the coldest chain is %.2f" % np.mean(sampler.acceptance_fraction[0]))
#find best fit values for each of the 24 parameters (12 d's and 12 c's)
theta=pd.DataFrame(traces_cold)
quantile_50=theta.quantile(.50, axis=1).values
quantile_84=theta.quantile(.84, axis=1).values
quantile_16=theta.quantile(.16, axis=1).values
upperlim=quantile_84-quantile_50
lowerlim=quantile_50-quantile_16
#print out distances
for i in range(0,int(len(quantile_50)/2)):
print('d%i: %.3f + %.3f - %.3f' % (i+1,quantile_50[i],upperlim[i], lowerlim[i]))
#print out coefficients
for i in range(int(len(quantile_50)/2), int(len(quantile_50))):
print('c%i: %.3f + %.3f - %.3f' % (i+1-int(len(quantile_50)/2),quantile_50[i],upperlim[i], lowerlim[i]))
"""
Explanation: The sampler is done running, so now let's check out the results. We are going to print out our mean acceptance fraction across all walkers for the coldest temperature chain.
We are going to discard the first half of each walker's chain as burn-in; to change the number of steps to burn off, simply edit the 3rd dimension of sampler.chain[0,:,n:,:] and input your desired value of n. Next we are going to plot what the chains look like, for all distances and conversion coefficients. Finally, we are going to compute and print out the 50th, 16th, and 84th percentiles of the chain for each distance parameter, using the "quantile" attribute of a pandas dataframe object. The 50th percentile measurement represents our best guess for each distance parameter, while the difference between the 16th and 50th percentiles gives us a lower limit and the difference between the 50th and 84th percentiles gives us an upper limit.
End of explanation
"""
#set up subplots for chain plotting
axes=['ax'+str(i) for i in range(ndim)]
fig, (axes) = plt.subplots(ndim, figsize=(10,60))
plt.tight_layout()
for i in range(0,ndim):
if i<int(ndim/2):
axes[i].set(ylabel='d%i' % (i+1))
else:
axes[i].set(ylabel='c%i' % (i+1))
#plot traces for each parameter
for i in range(0,ndim):
sns.tsplot(traces_cold[i],ax=axes[i])
"""
Explanation: Let's see what our chains look like by producing trace plots:
End of explanation
"""
from dustcurve import pixclass
pixObj=pixclass.PixStars('/n/fink1/czucker/Data/'+ fnames)
post_array=pixObj.get_p()
unique_co,indices,unique_post,ratio=ldata
from dustcurve import plot_posterior
#plot the reddening profile over the stacked, normalized stellar posterior surfaces
#normcol=True, normsurf=True
plot_posterior.plot_posterior(np.asarray(post_array),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,ratio,unique_co,y_range=[0,2],vmax=.03,normcol=True)
"""
Explanation: Now let's overplot the reddening profiles corresponding to our most probable parameters on top of the stacked stellar posterior surfaces. This plot has been normalized so that a) each individual stellar posterior array sums to one and b) each distance column in the stacked posterior array contains the same amount of "ink"
End of explanation
"""
from dustcurve import diagnostics
gr,chain_ensemble=diagnostics.run_chains(fnames,nwalkers=50,nsteps=20000,runs=5, bounds=[4,19,0,100])
print(gr)
"""
Explanation: Now that we know how our code works, let's run a Gelman-Rubin test. We are going to determine the Gelman-Rubin convergence diagnostic for every parameter. We'll use the same number of walkers (50) and steps (20000) as before, except now we are running 5 independent chains and checking whether these chains converge for all parameters, at the coldest temperature.
End of explanation
"""
#extract the independent runs
traces_cold_0=chain_ensemble[0,:,:]
traces_cold_1=chain_ensemble[1,:,:]
traces_cold_2=chain_ensemble[2,:,:]
traces_cold_3=chain_ensemble[3,:,:]
traces_cold_4=chain_ensemble[4,:,:]
#find best fit values for each of the 24 parameters (12 d's and 12 c's)
theta_0=pd.DataFrame(traces_cold_0)
quantile_50_0=theta_0.quantile(.50, axis=1).values
theta_1=pd.DataFrame(traces_cold_1)
quantile_50_1=theta_1.quantile(.50, axis=1).values
theta_2=pd.DataFrame(traces_cold_2)
quantile_50_2=theta_2.quantile(.50, axis=1).values
theta_3=pd.DataFrame(traces_cold_3)
quantile_50_3=theta_3.quantile(.50, axis=1).values
theta_4=pd.DataFrame(traces_cold_4)
quantile_50_4=theta_4.quantile(.50, axis=1).values
print("Values of each parameter in 5 independent runs with perturbed starting positions")
#print out distances
for i in range(0,int(len(quantile_50_0)/2)):
print('d%i: %.3f, %.3f ,%.3f, %.3f, %.3f' % (i+1,quantile_50_0[i],quantile_50_1[i], quantile_50_2[i], quantile_50_3[i], quantile_50_4[i]))
#print out coefficients
for i in range(int(len(quantile_50_0)/2), int(len(quantile_50_0))):
print('c%i: %.3f, %.3f ,%.3f, %.3f, %.3f' % (i+1-int(len(quantile_50)/2),quantile_50_0[i],quantile_50_1[i], quantile_50_2[i], quantile_50_3[i], quantile_50_4[i]))
print(result)
"""
Explanation: It looks like most of the parameters are converging (GR diagnostic < 1.1); let's see what they look like side by side for all 5 runs:
End of explanation
"""
import h5py
#Save the results of the sampler:
output=fnames
fwrite = h5py.File("/n/fink1/czucker/Output/"+str(output), "w")
chaindata = fwrite.create_dataset("/chains", sampler.chain.shape, dtype='f')
chaindata[:,:,:,:]=sampler.chain
probdata = fwrite.create_dataset("/probs", sampler.lnprobability.shape, dtype='f')
probdata[:,:,:]=sampler.lnprobability
gr=fwrite.create_dataset("/gr", chain_ensemble.shape, dtype='f')
gr[:,:,:]=chain_ensemble
fwrite.close()
"""
Explanation: Now let's save the results to a file!
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Marks/Pyplot/HeatMap.ipynb | apache-2.0 | import numpy as np
from ipywidgets import Layout
import bqplot.pyplot as plt
from bqplot import ColorScale
"""
Explanation: Heatmap
The HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance.
HeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points.
End of explanation
"""
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
color = np.cos(X**2 + Y**2)
"""
Explanation: Data Input
x is a 1d array, corresponding to the abscissas of the points (size N)
y is a 1d array, corresponding to the ordinates of the points (size M)
color is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M))
Scales must be defined for each attribute:
- a LinearScale, LogScale or OrdinalScale for x and y
- a ColorScale for color
End of explanation
"""
fig = plt.figure(title='Cosine',
layout=Layout(width='650px', height='650px'),
min_aspect_ratio=1, max_aspect_ratio=1, padding_y=0)
heatmap = plt.heatmap(color, x=x, y=y)
fig
"""
Explanation: Plotting a 2-dimensional function
This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$
End of explanation
"""
from scipy.misc import ascent
Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]
img = plt.figure(title='Ascent', layout=Layout(width='650px', height='650px'),
min_aspect_ratio=aspect_ratio,
max_aspect_ratio=aspect_ratio, padding_y=0)
plt.scales(scales={'color': ColorScale(scheme='Greys', reverse=True)})
axes_options = {'x': {'visible': False}, 'y': {'visible': False}, 'color': {'visible': False}}
ascent = plt.heatmap(Z, axes_options=axes_options)
img
"""
Explanation: Displaying an image
The HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute
End of explanation
"""
|
infilect/ml-course1 | keras-notebooks/Frameworks/2.2 Introduction - Tensorflow.ipynb | mit | # A simple calculation in Python
x = 1
y = x + 10
print(y)
import tensorflow as tf
# The ~same simple calculation in Tensorflow
x = tf.constant(1, name='x')
y = tf.Variable(x+10, name='y')
print(y)
"""
Explanation: <img src="../imgs/tensorflow_head.png" />
Tensorflow
TensorFlow (https://www.tensorflow.org/) is a software library, developed by the Google Brain Team within Google's Machine Intelligence research organization, for the purposes of conducting machine learning and deep neural network research.
TensorFlow combines computational algebra with compilation optimization techniques, making it easy to compute many mathematical expressions that would otherwise be costly to calculate.
Tensorflow Main Features
Defining, optimizing, and efficiently calculating mathematical expressions involving multi-dimensional arrays (tensors).
Programming support of deep neural networks and machine learning techniques.
Transparent use of GPU computing, automating management and optimization of the same memory and the data used. You can write the same code and run it either on CPUs or GPUs. More specifically, TensorFlow will figure out which parts of the computation should be moved to the GPU.
High scalability of computation across machines and huge data sets.
TensorFlow is available with Python and C++ support, but the Python API is better supported and much easier to learn.
Very Preliminary Example
End of explanation
"""
model = tf.global_variables_initializer() # model is used by convention
with tf.Session() as session:
session.run(model)
print(session.run(y))
"""
Explanation: Meaning: "When the variable y is computed, take the value of the constant x and add 10 to it"
Sessions and Models
To actually calculate the value of the y variable and to evaluate expressions, we need to initialise the variables, and then create a session where the actual computation happens
End of explanation
"""
a = tf.constant(5, name="a")
b = tf.constant(45, name="b")
y = tf.Variable(a+b*2, name='y')
model = tf.global_variables_initializer()
with tf.Session() as session:
# Merge all the summaries collected in the default graph.
merged = tf.summary.merge_all()
# Then we create `SummaryWriter`.
# It will write all the summaries (in this case the execution graph)
# obtained from the code's execution into the specified path”
writer = tf.summary.FileWriter("tmp/tf_logs_simple", session.graph)
session.run(model)
print(session.run(y))
"""
Explanation: Data Flow Graph
(IDEA)
A Machine Learning application is the result of the repeated computation of complex mathematical expressions; thus we can describe this computation by using a Data Flow Graph.
Data Flow Graph: a graph where:
- each Node represents the instance of a mathematical operation (multiply, add, divide, ...)
- each Edge is a multi-dimensional data set (tensor) on which the operations are performed.
Tensorflow Graph Model
Node: In TensorFlow, each node represents the instantion of an operation.
Each operation has inputs (>= 2) and outputs >= 0.
Edges: In TensorFlow, there are two types of edge:
Data Edges:
They are carriers of data structures (tensors), where an output of one operation (from one node) becomes the input for another operation.
Dependency Edges: These edges indicate a control dependency between two nodes (i.e. "happens before" relationship).
Let's suppose we have two nodes A and B and a dependency edge connecting A to B. This means that B will start its operation only when the operation in A ends.
Tensorflow Graph Model (cont.)
Operation: This represents an abstract computation, such as adding or multiplying matrices.
An operation manages tensors, and It can just be polymorphic: the same operation can manipulate different tensor element types.
For example, the addition of two int32 tensors, the addition of two float tensors, and so on.
Kernel: This represents the concrete implementation of that operation.
A kernel defines the implementation of the operation on a particular device.
For example, an add matrix operation can have a CPU implementation and a GPU one.
Tensorflow Graph Model Session
Session: When the client program has to establish communication with the TensorFlow runtime system, a session must be created.
As soon as the session is created for a client, an initial graph is created and is empty. It has two fundamental methods:
session.extend: To be used during a computation, requesting to add more operations (nodes) and edges (data). The execution graph is then extended accordingly.
session.run: The execution graphs are executed to get the outputs (sometimes, subgraphs are executed thousands/millions of times using run invocations).
Tensorboard
TensorBoard is a visualization tool devoted to analyzing the Data Flow Graph, and also to better understanding machine learning models.
It can graphically display different types of statistics about the parameters and the details of any part of the computation graph. This matters because a computation graph can become very complex.
Tensorboard Example
Run the TensorBoard Server:
shell
tensorboard --logdir=/tmp/tf_logs
Open TensorBoard
Example
End of explanation
"""
import numpy as np
tensor_1d = np.array([1, 2.5, 4.6, 5.75, 9.7])
tf_tensor=tf.convert_to_tensor(tensor_1d,dtype=tf.float64)
with tf.Session() as sess:
print(sess.run(tf_tensor))
print(sess.run(tf_tensor[0]))
print(sess.run(tf_tensor[2]))
"""
Explanation: Data Types (Tensors)
One Dimensional Tensor (Vector)
End of explanation
"""
tensor_2d = np.arange(16).reshape(4, 4)
print(tensor_2d)
tf_tensor = tf.placeholder(tf.float32, shape=(4, 4))
with tf.Session() as sess:
print(sess.run(tf_tensor, feed_dict={tf_tensor: tensor_2d}))
"""
Explanation: Two Dimensional Tensor (Matrix)
End of explanation
"""
matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)],dtype='float32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)],dtype='float32')
tf_mat1 = tf.constant(matrix1)
tf_mat2 = tf.constant(matrix2)
matrix_product = tf.matmul(tf_mat1, tf_mat2)
matrix_sum = tf.add(tf_mat1, tf_mat2)
matrix_det = tf.matrix_determinant(matrix2)
with tf.Session() as sess:
prod_res = sess.run(matrix_product)
sum_res = sess.run(matrix_sum)
det_res = sess.run(matrix_det)
print("matrix1*matrix2 : \n", prod_res)
print("matrix1+matrix2 : \n", sum_res)
print("det(matrix2) : \n", det_res)
"""
Explanation: Basic Operations (Examples)
End of explanation
"""
%matplotlib inline
import matplotlib.image as mp_image
filename = "../imgs/keras-logo-small.jpg"
input_image = mp_image.imread(filename)
#dimension
print('input dim = {}'.format(input_image.ndim))
#shape
print('input shape = {}'.format(input_image.shape))
import matplotlib.pyplot as plt
plt.imshow(input_image)
plt.show()
"""
Explanation: Handling Tensors
End of explanation
"""
my_image = tf.placeholder("uint8",[None,None,3])
slice = tf.slice(my_image,[10,0,0],[16,-1,-1])
with tf.Session() as session:
result = session.run(slice,feed_dict={my_image: input_image})
print(result.shape)
plt.imshow(result)
plt.show()
"""
Explanation: Slicing
End of explanation
"""
x = tf.Variable(input_image,name='x')
model = tf.global_variables_initializer()
with tf.Session() as session:
x = tf.transpose(x, perm=[1,0,2])
session.run(model)
result=session.run(x)
plt.imshow(result)
plt.show()
"""
Explanation: Transpose
End of explanation
"""
x = tf.placeholder(tf.float32)
y = tf.log(x)
var_grad = tf.gradients(y, x)
with tf.Session() as session:
var_grad_val = session.run(var_grad, feed_dict={x:2})
print(var_grad_val)
"""
Explanation: Computing the Gradient
Gradients are free!
End of explanation
"""
|
jarvis-fga/Projetos | Problema 2/jeferson/.ipynb_checkpoints/sentiment-analysis-checkpoint.ipynb | mit | import pandas
imdb = pandas.read_csv('data/imdb_labelled.txt', sep="\t", names=["sentences", "polarity"])
yelp = pandas.read_csv('data/yelp_labelled.txt', sep="\t", names=["sentences", "polarity"])
amazon = pandas.read_csv('data/amazon_cells_labelled.txt', sep="\t", names=["sentences", "polarity"])
big = pandas.DataFrame()
big = big.append([imdb, yelp, amazon])
big.to_csv('big.csv', index=False, encoding='utf-8')
"""
Explanation: 1. Sentiment Analysis
1. Objective
Determine the subjective value of a text document, that is, how positive or negative the content of the document is.
1. Problem
The company Amazon wants an intelligent system to process its customers'
comments about its products, classifying those comments into one of two
categories: positive or negative. For that purpose it provides three databases with
labeled sentences.
1. The Data
The data are organized as sentence and label, with 0 being negative and 1 positive.
The databases come from the following sites:
* imdb.com
* amazon.com
* yelp.com
1. Setup
```Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install scikit-learn pandas matplotlib scipy jupyter nltk
RUN chmod +x boot.sh
EXPOSE 8888
CMD ["/bin/sh", "./boot.sh"]
```
Boot.sh:
```
python << END
import sys
import nltk
nltk.download('punkt')
nltk.download('stopwords')
END
jupyter notebook --ip=0.0.0.0 --allow-root
```
Build:
docker build -t machine-learn .
To run, use the command:
docker run --name machine-learn-container -p 8888:8888 -v ~/amazon/jeferson:/code machine-learn:latest /bin/sh ./boot.sh
1. Preprocessing
The three databases of the problem must be used together. The pandas library can be used to simplify this process. The code above imports the databases and concatenates them into one larger dataset. At the end, a csv file is generated, which will be used for future training and analysis.
End of explanation
"""
import nltk
sentence = 'My test for nltk library!!'
tokens = nltk.word_tokenize(sentence)
print(tokens)
"""
Explanation: 1. Tokenization
To build a dictionary, the text must first be turned into tokens. A natural language processing library can help with this task; the example above uses nltk, an open-source Python library that performs this function.
End of explanation
"""
import nltk
sentences = big['sentences']
sentences_strings = sentences.str.lower()
sentences_tokens = [] # Has all sentences tokens
for sentence_string in sentences_strings:
token = nltk.word_tokenize(sentence_string)
sentences_tokens.extend(token)
dictionary = set()
dictionary.update(sentences_tokens)
"""
Explanation: 1. Dictionary
To create a dictionary, we only need the column of the file that contains the sentences, ignoring the polarity column. Again, nltk simplifies this process. Once the sentences are obtained, we use nltk to break them into tokens.
End of explanation
"""
import numpy
tuplas = zip(dictionary, range(len(dictionary)))
mapa = {sentence_tokens:indice for sentence_tokens, indice in tuplas}
def vetorizar_texto(texto, mapa):
vetor = [0] * len(mapa)
for sentence_tokens in texto:
if sentence_tokens in mapa:
posicao = mapa[sentence_tokens]
vetor[posicao] += 1
return vetor
vetoresdeTexto = [vetorizar_texto(nltk.word_tokenize(texto), mapa) for texto in sentences_strings]
X = numpy.array(vetoresdeTexto)
"""
Explanation: 1. Vectorization
Vectorization makes the text usable by classifiers. When vectorizing text, each word is treated as a feature.
End of explanation
"""
dictionary = set()
tokens = ['teste1', 'teste2', 'teste3', 'teste4']
for x in range(len(tokens)):
if x + 1 < len(tokens):
digram = [tokens[x]+' '+tokens[x+1]]
dictionary.update(digram)
print(dictionary)
"""
Explanation: 1. Word normalization
A common approach in sentiment analysis is the use of digrams or trigrams, which helps with sentence classification. Given a vector of tokens from which repetitions and stop words have already been removed, we can create digrams as follows:
End of explanation
"""
|
seg/2016-ml-contest | MandMs/03_Facies_classification-MandMs_SFS_v2-validation_set.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.metrics import f1_score, accuracy_score, make_scorer
filename = 'engineered_features.csv'
training_data = pd.read_csv(filename)
training_data.describe()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
y = training_data['Facies'].values
print y[25:40]
print np.shape(y)
X = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
print np.shape(X)
X.describe(percentiles=[.05, .25, .50, .75, .95])
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
"""
Explanation: Facies classification - Sequential Feature Selection
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas in this notebook,</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl,</span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
The mlxtend library used for the sequential feature selection is by Sebastian Raschka.
End of explanation
"""
Fscorer = make_scorer(f1_score, average = 'micro')
"""
Explanation: Make performance scorers
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
"""
Explanation: Sequential Feature Selection with mlxtend
http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/
End of explanation
"""
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
clf = RandomForestClassifier(random_state=49)
sfs = SFS(clf,
k_features=100,
forward=True,
floating=False,
scoring=Fscorer,
cv = 8,
n_jobs = -1)
sfs = sfs.fit(X, y)
np.save('sfs_RF_metric_dict.npy', sfs.get_metric_dict())
"""
Explanation: The next cell will take many hours to run, skip it
End of explanation
"""
# load previously saved dictionary
read_dictionary = np.load('sfs_RF_metric_dict.npy').item()
# plot results
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
# run this twice
fig = plt.figure()
ax = plot_sfs(read_dictionary, kind='std_err')
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 22
fig_size[1] = 18
plt.title('Sequential Forward Selection (w. StdDev)')
plt.grid()
plt.xticks( rotation='vertical')
locs, labels = plt.xticks()
plt.xticks( locs, labels)
plt.show()
"""
Explanation: Restart from here
End of explanation
"""
# save results to dataframe
selected_summary = pd.DataFrame.from_dict(read_dictionary).T
selected_summary['index'] = selected_summary.index
selected_summary.sort_values(by='avg_score', ascending=0)
# save dataframe
selected_summary.to_csv('SFS_RF_selected_features_summary.csv', sep=',', header=True, index = False)
# re load saved dataframe and sort by score
filename = 'SFS_RF_selected_features_summary.csv'
selected_summary = pd.read_csv(filename)
selected_summary = selected_summary.set_index(['index'])
selected_summary.sort_values(by='avg_score', ascending=0).head()
# feature selection with highest score
selected_summary.iloc[44]['feature_idx']
slct = np.array([257, 3, 4, 6, 7, 8, 10, 12, 16, 273, 146, 19, 26, 27, 284, 285, 30, 34, 163, 1, 42, 179, 155, 181, 184, 58, 315, 190, 320, 193, 194, 203, 290, 80, 210, 35, 84, 90, 97, 18, 241, 372, 119, 120, 126])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print(np.shape(Xs), list(Xs))
Xs.to_csv('SFS_top45_selected_engineered_features_validation_set.csv', sep=',', index=False)
# feature selection with highest score
selected_summary.iloc[74]['feature_idx']
slct = np.array([257, 3, 4, 5, 6, 7, 8, 265, 10, 12, 13, 16, 273, 18, 19, 26, 27, 284, 285, 30, 34, 35, 1, 42, 304, 309, 313, 58, 315, 319, 320, 75, 80, 338, 84, 341, 89, 90, 92, 97, 101, 102, 110, 372, 119, 120, 122, 124, 126, 127, 138, 139, 146, 155, 163, 165, 167, 171, 177, 179, 180, 181, 184, 190, 193, 194, 198, 203, 290, 210, 211, 225, 241, 249, 253])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print(np.shape(Xs), list(Xs))
Xs.to_csv('SFS_top75_selected_engineered_features_validation_set.csv', sep=',', index=False)
"""
Explanation: It looks like the score stabilizes after about 6 features, reaches a max at 16, then begins to taper off after about 70 features. We will save the top 45 and the top 75.
End of explanation
"""
|
jakevdp/sklearn_tutorial | notebooks/03.1-Classification-SVMs.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
"""
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Supervised Learning In-Depth: Support Vector Machines
Previously we introduced supervised machine learning.
There are many supervised learning algorithms available; here we'll go into brief detail one of the most powerful and interesting methods: Support Vector Machines (SVMs).
End of explanation
"""
# note: in scikit-learn >= 0.24 this moved to: from sklearn.datasets import make_blobs
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');
"""
Explanation: Motivating Support Vector Machines
Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for classification or for regression. SVMs are a discriminative classifier: that is, they draw a boundary between clusters of data.
Let's show a quick example of support vector classification. First we need to create a dataset:
End of explanation
"""
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
"""
Explanation: A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
End of explanation
"""
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
"""
Explanation: These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!
How can we improve on this?
Support Vector Machines: Maximizing the Margin
Support vector machines are one way to address this.
What support vector machines do is not only draw a line, but also consider a region about the line of some given width. Here's an example of what it might look like:
End of explanation
"""
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
"""
Explanation: Notice here that if we want to maximize this width, the middle fit is clearly the best.
This is the intuition of support vector machines, which optimize a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets.
Fitting a Support Vector Machine
Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
End of explanation
"""
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
"""
Explanation: To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
End of explanation
"""
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
"""
Explanation: Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the support vectors (giving the algorithm its name).
In scikit-learn, these are stored in the support_vectors_ attribute of the classifier:
End of explanation
"""
from ipywidgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200]);  # plot_svm takes no kernel argument
"""
Explanation: Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
(This is only available in IPython 2.0+, and will not work in a static view)
End of explanation
"""
# note: in scikit-learn >= 0.24 this moved to: from sklearn.datasets import make_circles
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
"""
Explanation: Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!
Going further: Kernel Methods
Where SVM gets incredibly exciting is when it is used in conjunction with kernels.
To motivate the need for kernels, let's look at some data which is not linearly separable:
End of explanation
"""
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
"""
Explanation: Clearly, no linear discrimination will ever separate these data.
One way we can adjust this is to apply a kernel, which is some functional transformation of the input data.
For example, one simple model we could use is a radial basis function
End of explanation
"""
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=(-90, 90), azim=(-180, 180));
"""
Explanation: If we plot this along with our data, we can see the effect of it:
End of explanation
"""
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
"""
Explanation: We can see that with this additional dimension, the data becomes trivially linearly separable!
This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using kernel='rbf', short for radial basis function:
End of explanation
"""
|
kimkipyo/dss_git_kkp | Python 복습/14일차.금_pandas + SQL_2/14일차_4T_Pandas로 배우는 SQL 시작하기 (4) - HAVING, SUB QUERY.ipynb | mit | import pymysql
import pandas as pd
import numpy as np
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset = "utf8",
)
customer_df = pd.read_sql("SELECT * FROM customer;", db)
rental_df = pd.read_sql("SELECT * FROM rental;", db)
df = rental_df.merge(customer_df, on="customer_id")
df.head(1)
rental_per_customer_groups = df.groupby("customer_id")
rental_per_customer_groups.get_group(1)
rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"]
# rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"] > 30
is_many_rentals_user = rental_per_customer_groups.size() > 30
is_many_rentals_user
# For some reason this doesn't work here; we'll come back to it later.
# Let's do it in SQL instead.
# 1. Sub Query - a query nested inside another query.
SQL_QUERY = """
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
WHERE rentals > 30
"""
pd.read_sql(SQL_QUERY, db)
# Fails: WHERE is evaluated before SELECT, so the `rentals` alias
# (the COUNT computed in SELECT) does not exist yet when WHERE runs.
# 1. Sub Query - a query nested inside another query.
SQL_QUERY = """
SELECT rentals_per_customer.customer_id "Customer ID", rentals_per_customer.rentals
FROM (
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
) AS rentals_per_customer
WHERE rentals > 30
;
"""
pd.read_sql(SQL_QUERY, db)
# The nested form is hard to read, so splitting the query into pieces is cleaner.
RENTALS_PER_CUSTOMER_SQL_QUERY = """
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
;
"""
SQL_QUERY = """
SELECT *
FROM (
{rentals_per_customer_sql_query}
) AS rentals_per_customer
WHERE rentals > 30
;
""".format(
rentals_per_customer_sql_query=RENTALS_PER_CUSTOMER_SQL_QUERY.replace(";", "")
)
pd.read_sql(SQL_QUERY, db)
print(SQL_QUERY)
# For the users with 30+ rentals => name, email
RESULT_SQL_QUERY = """
SELECT customer.last_name, customer.first_name, customer.email
FROM ({SQL_QUERY}) many_rental_user
JOIN customer ON many_rental_user.customer_id = customer.customer_id
;
""".format(
SQL_QUERY=SQL_QUERY.replace(";", "")
)
pd.read_sql(RESULT_SQL_QUERY, db)
# Temporary Table
SQL_QUERY = """
DROP TEMPORARY TABLE IF EXISTS rentals_per_customer;
CREATE TEMPORARY TABLE rentals_per_customer
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
;
"""
# pd.read_sql() raises an error when executing this, so run it with a cursor instead.
# (Note: executing two ;-separated statements in one execute() may require connecting
# with client_flag=pymysql.constants.CLIENT.MULTI_STATEMENTS.)
cursor = db.cursor()
cursor.execute(SQL_QUERY)
SQL_QUERY = """
SELECT rpc.customer_id, rpc.rentals
FROM rentals_per_customer rpc
WHERE rentals > 30
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: 4T_Getting Started with SQL through Pandas (4) - HAVING, SUB QUERY
SQL => filtering again on the result of an aggregate computation ( HAVING )
SUB QUERY + TEMPORARY TABLE
Exercise)
Users with 30 or more rentals "in May" / "so far"
User names and user emails => for marketing
customer, rental
Pandas
End of explanation
"""
SQL_QUERY = """
SELECT r.customer_id, COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
# WHERE rentals > 30  # WHERE cannot filter on the result of an aggregate
# HAVING is the feature that filters on the result of an aggregate
HAVING rentals > 30
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: HAVING
End of explanation
"""
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset="utf8"
)
film_df = pd.read_sql("SELECT * FROM film;", db)
rental_df = pd.read_sql("SELECT * FROM rental;", db)
payment_df = pd.read_sql("SELECT * FROM payment;", db)
inventory_df = pd.read_sql("SELECT * FROM inventory;", db)
"""
Explanation: Exercise)
The "film title" that generated the most revenue
payment, film, inventory, rental
Per film ( GROUP BY )
pull the revenue ( SUM )
and sort ( ORDER BY )
Where does the data live?
End of explanation
"""
SQL_QUERY = """
SELECT f.film_id, f.title, SUM(p.amount) "revenue"
FROM
film f,
rental r,
payment p,
inventory i
WHERE
f.film_id = i.film_id
AND i.inventory_id = r.inventory_id
AND r.rental_id = p.rental_id
GROUP BY f.film_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT f.film_id, f.title, SUM(p.amount) "revenue"
FROM payment p
JOIN rental r ON p.rental_id = r.rental_id
JOIN inventory i ON i.inventory_id = r.inventory_id
JOIN film f ON f.film_id = i.film_id
GROUP BY f.film_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: film_df => film_id, title
inventory_df => inventory_id, film_id
rental_df => rental_id, inventory_id
payment_df => rental_id, amount
End of explanation
"""
customer_df = pd.read_sql("SELECT * FROM customer;", db)
customer_df.head(1)
payment_df.head(1)
SQL_QUERY = """
SELECT c.first_name, c.last_name, SUM(p.amount) "revenue"
FROM customer c
JOIN payment p
ON c.customer_id = p.customer_id
GROUP BY c.customer_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: Additional exercise)
Top 10 users by cumulative payment amount ( customer, payment )
End of explanation
"""
SQL_QUERY = """
SELECT a.first_name, a.last_name, SUM(p.amount) "revenue"
FROM
actor a,
film_actor fa,
inventory i,
rental r,
payment p
WHERE
a.actor_id = fa.actor_id
AND fa.film_id = i.film_id
AND i.inventory_id = r.inventory_id
AND r.rental_id = p.rental_id
GROUP BY a.actor_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: 영화를 흥행시킨 ( 매출이 많이 발생한 ) 배우 상위 10명
rental, payment ...
actor_df => actor_id, first_name, last_name
film_actor_df => actor_id, film_id
inventory_df => inventory_id, film_id
rental_df => rental_id, inventory_id
payment_df => rental_id, amount
End of explanation
"""
|
luizfmoura/datascience | Luiz Fernando De Moura - 2021_2_Practice_2_Implementing_LENET_5_architectures_using_Keras.ipynb | gpl-2.0 | import tensorflow as tf
from tensorflow.keras import callbacks  # stick to tf.keras to avoid mixing standalone keras with tf.keras models
"""
Explanation: 1 - Hands-on TensorFlow + Keras + LENET-5
Implement and train several times using keras API your own LENET-5 implementation. Notice that you will be urged to derive an implementation somehow distinct to the original proposal of LeCun et al.
1.1 - Load tensor flow
End of explanation
"""
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print ("Training set info:",x_train.shape)
print ("Train target info:",y_train.shape)
print ("Test set info:",x_test.shape)
print ("Test target info:",y_test.shape)
x_train_norm, x_test_norm = x_train / 255.0, x_test / 255.0
"""
Explanation: 1.2 - Dataset preparation
Import and prepare dataset
Modified NIST (MNIST) is a database of handwritten digits. It encompasses a training set of 60,000 examples, and a test set of 10,000 examples. Images available in keras library are 28x28x1-sized.
The final part of the following code normalizes the training data to $[0,1]$.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 12))
for i in range(100,200):
ax = plt.subplot(10, 10, i-99)
plt.axis("off")
plt.imshow(x_train_norm[i].reshape(28,28),interpolation='nearest')
plt.winter()
"""
Explanation: Show sample images
End of explanation
"""
import tensorflow as tf
# Here it goes a functional implementation of a MLP
def mymlp_functional( hidd_neu = 128, hidd_act='relu',do_freq=0.2):
inputs = tf.keras.layers.Input(shape=(28,28))
x = tf.keras.layers.Flatten(input_shape=(28, 28))(inputs)
x = tf.keras.layers.Dense(hidd_neu, activation=hidd_act) (x)
x = tf.keras.layers.Dropout(do_freq) (x)
outputs = tf.keras.layers.Dense(10, activation='softmax') (x)
return tf.keras.models.Model(inputs, outputs, name='mymlp')
"""
Explanation: 1.3 - Create and Initialize LENET-5 Architecture
The code below implements a single-hidden-layer perceptron using the Keras functional API.
End of explanation
"""
import tensorflow as tf
def mylenet_functional( hidd_neu = 128, hidd_act='relu',do_freq=0.2):
inputs = tf.keras.layers.Input(shape=(28,28,1))
out = tf.keras.layers.ZeroPadding2D(padding=2)(inputs)
out = tf.keras.layers.Conv2D(filters=6,kernel_size=(5, 5),activation='tanh',input_shape=(32,32,1))(out)
out = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),strides=(2, 2), padding='valid')(out)
out = tf.keras.layers.Conv2D(filters=16,kernel_size=(5, 5),activation='tanh',input_shape=(14,14,6))(out)
out = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),strides=(2, 2), padding='valid')(out)
out = tf.keras.layers.Flatten()(out)
out = tf.keras.layers.Dense(120) (out)
out = tf.keras.layers.BatchNormalization()(out)
out = tf.keras.layers.Activation(hidd_act)(out)
out = tf.keras.layers.Dense(84) (out)
out = tf.keras.layers.BatchNormalization()(out)
out = tf.keras.layers.Activation(hidd_act)(out)
out = tf.keras.layers.Dropout(do_freq) (out)
outputs = tf.keras.layers.Dense(10, activation='softmax') (out)
return tf.keras.models.Model(inputs, outputs, name='mylenet_functional')
model = mylenet_functional()
"""
Explanation: Task 1 Implement using the Keras functional API the LENET-5 architecture
Notice that the LENET input layer size is 32 x 32 while MNIST image dimensions are 28 x 28. To resolve this mismatch, include, after the input layer, a zero-padding layer that inserts 2 zero-pixels at each border. Use the zero_padding2d layer function. Also, notice that the original LENET-5 has a Gaussian layer on its output; substitute it with a softmax layer. In addition, incorporate batch normalization and dropout into your implementation. For further information, consult Keras layers API
End of explanation
"""
#Use this space to answer Task 2
model.summary()
#References:
#https://towardsdatascience.com/how-to-calculate-the-number-of-parameters-in-keras-models-710683dae0ca
#https://towardsdatascience.com/batch-norm-explained-visually-how-it-works-and-why-neural-networks-need-it-b18919692739
#Input: 28*28*1
#The Flatten layer doesn’t learn anything, thus the number of parameters is 0.
#Bath normalization layer: For each input parameter, it generates 4 parameters (2 learnable(beta and gamma) and 2 non-learnable(mean, var))
#Convolutional layer: number of filters * (squared filter length * input channels + 1)
#Dense layer formula: param_number = output_channel_number * (input_channel_number + 1)
conv1_input_depth = 1
conv1_filter_size = 5 * 5
conv1_number_of_filters = 6
conv1_params = conv1_number_of_filters * ((conv1_filter_size * conv1_input_depth) + 1)
conv2_input_depth = 6
conv2_filter_size = 5 * 5
conv2_number_of_filters = 16
conv2_params = conv2_number_of_filters * ((conv2_filter_size * conv2_input_depth) + 1)
flatten = 5 * 5 * 16
dense_input = flatten
dense_neurons = 120
dense_params = dense_neurons * (dense_input + 1)
dense_batchNormLayer_params = dense_neurons * 4
dense1_input = dense_neurons
dense1_neurons = 84
dense1_params = dense1_neurons * (dense1_input + 1)
dense1_batchNormLayer_params = dense1_neurons * 4
dense2_input = 84 #previous layer output (dropout)
dense2_neurons = 10
dense2_params = dense2_neurons * (dense2_input + 1)
total_params = conv1_params + conv2_params + dense_params + dense_batchNormLayer_params + dense1_params + dense1_batchNormLayer_params + dense2_params
print("Conv1 count (computed): ", conv1_params)
print("Conv2 count (computed): ", conv2_params)
print("Dense count (computed): ", dense_params)
print("Batch count (computed): ", dense_batchNormLayer_params)
print("Dense1 count (computed): ", dense1_params)
print("Batch1 count (computed): ", dense1_batchNormLayer_params)
print("Dense2 count (computed): ", dense2_params)
print("Total parameters count (computed): ", total_params)
"""
Explanation: Task 2 Calculate the number of parameters of the above model. Compare the estimatad values to the one provided by
Model.summary()
End of explanation
"""
# Put you code here
model = mylenet_functional()
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
EPOCHS = 25
earlystopping = callbacks.EarlyStopping(monitor ="val_loss", mode ="min", patience = 5, restore_best_weights = True)
H = model.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test), callbacks =[earlystopping])
#H = model.fit(x_train_norm, y_train, epochs=EPOCHS, validation_data=(x_test_norm,y_test))
#plot_history(H)
for i,(val_accuracy,loss,val_loss) in enumerate(zip(H.history['val_accuracy'],H.history['loss'], H.history['val_loss'])):
print(i+1,round(val_loss,3),round(val_loss-loss,3),round(val_accuracy,4))
import matplotlib.pyplot as plt
def plot_history(H):
plt.figure(figsize=(10, 10))
plt.plot( H.history["loss"], label="train_loss")
plt.plot( H.history["accuracy"], label="train_acc")
plt.plot( H.history["val_loss"], label="validation_loss")
plt.plot( H.history["val_accuracy"], label="validation_acc")
plt.title("Loss / accuracy evolution")
plt.xlabel("Epoch #")
plt.ylabel("Loss / Accuracy")
plt.ylim([0, 1])
leg=plt.legend()
"""
Explanation: Use this space to answer Task 2
References:
https://towardsdatascience.com/how-to-calculate-the-number-of-parameters-in-keras-models-710683dae0ca
https://towardsdatascience.com/batch-norm-explained-visually-how-it-works-and-why-neural-networks-need-it-b18919692739
The Flatten layer doesn’t learn anything, thus the number of parameters is 0.
Batch normalization layer: For each input parameter, it generates 4 parameters (2 learnable(beta and gamma) and 2 non-learnable(mean, var))
Convolutional layer: number of filters * (squared filter length * input channels + 1)
Dense layer formula: param_number = output_channel_number * (input_channel_number + 1)
Summary:
Total params: 62,522
Trainable params: 62,114
Non-trainable params: 408
Conv1 count (computed): 156
Conv2 count (computed): 2416
Dense count (computed): 48120
Batch count (computed): 480
Dense1 count (computed): 10164
Batch1 count (computed): 336
Dense2 count (computed): 850
Total parameters count (computed): 62522
1.4 - Network training
Remember the training process workflow:
1. Architecture setup
2. Network instantiation
3. Parameters initialization.
4. Training graph setup
5. Paramenters optimization
6. Evaluation
Details can be found in the keras documentation.
Task 3: Evaluate the number of training epochs. Use that 'optimal' number of epochs in the remaining tasks.
End of explanation
"""
#Put your code for Task 4 here
EPOCHS = 8
model = mylenet_functional()
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H = model.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
"""
Explanation: Put your accuracy results for training and test data in a table inside the next cell. Record the 'best' number of epochs. This value will be used in the following experiments.
Epoch 8/25
1875/1875 [==============================] - 28s 15ms/step - loss: 0.0339 - accuracy: 0.9890 - val_loss: 0.0338 - val_accuracy: 0.9897
Considering validation_loss, training set loss and validation accuracy metrics, the model reaches its optimal results at epoch 8, when the difference between val_loss and loss is nearly zero and val_accuracy reaches a high score. After that, little improvement is seen on either metric.
Task 4: Train from the scratch a LENET-5 network through the previously determined number of epochs
End of explanation
"""
#Put your code for Task 5 here
import numpy as np
predictions = model.predict(x_test_norm)
def getConfusionMatrix(predictions):
predictions_vec = [(np.argmax(np.array(p))) for p in predictions]
return np.array(tf.math.confusion_matrix(labels=y_test,predictions=predictions_vec))
conf_matrix_arr = getConfusionMatrix(predictions)
print(conf_matrix_arr)
"""
Explanation: 1.5 - Rigorous evaluation
Tasks 5 to 10 will introduce yoursel to a more thorought evaluation of the network trained in task 4. It is a really important need for ensuring network quality.
Task 5: Obtain the test set confusion matrix for the above trained network.
End of explanation
"""
#Put your code for Task 6 here
def getOverallAcc(conf_matrix_arr):
return np.trace(conf_matrix_arr) / np.sum(conf_matrix_arr)
print("Overall accuracy: ",round(getOverallAcc(conf_matrix_arr),3))
"""
Explanation: Task 6: Present the overall accuracy
End of explanation
"""
#Put your code for Task 7 here
#Omission - Type I Error:
#(Note: tf.math.confusion_matrix puts true labels on rows and predictions on
# columns, so double-check that the axis=0 vs axis=1 choices below match the
# convention you intend for omission/commission and producer's/user's accuracy.)
def getOmission(conf_matrix_arr):
return (conf_matrix_arr.sum(axis=0) - np.diagonal(conf_matrix_arr)) / conf_matrix_arr.sum(axis=0)
#Comission - Type II Error:
def getComission(conf_matrix_arr):
return (conf_matrix_arr.sum(axis=1) - np.diagonal(conf_matrix_arr)) / conf_matrix_arr.sum(axis=1)
omission = getOmission(conf_matrix_arr)
print("Omission: ",omission)
for i,val in enumerate(omission):
print("Omission Class",i,": ","{:.2%}".format(val))
comission = getComission(conf_matrix_arr)
print("Comission: ",comission)
for i,val in enumerate(comission):
print("Comission Class",i,": ","{:.2%}".format(val))
"""
Explanation: Task 7: Present the errors of omission and commission
End of explanation
"""
#Put your code for Task 8 here
#Producer's accuracy:
def getProducersAcc(conf_matrix_arr):
return np.diagonal(conf_matrix_arr) / conf_matrix_arr.sum(axis=0)
#Users's accuracy:
def getUsersAcc(conf_matrix_arr):
return np.diagonal(conf_matrix_arr) / conf_matrix_arr.sum(axis=1)
producer = getProducersAcc(conf_matrix_arr)
print("Producer's accuracy: ",producer)
for i,val in enumerate(producer):
print("Producer's accuracy Class",i,": ","{:.2%}".format(val))
user = getUsersAcc(conf_matrix_arr)
print("User's accuracy: ",user)
for i,val in enumerate(user):
print("User's accuracy Class",i,": ","{:.2%}".format(val))
"""
Explanation: Task 8: Present user's and producer's accuracy
End of explanation
"""
#Put your code for Task 9 here
#Reference: https://towardsdatascience.com/cohens-kappa-9786ceceab58
#axis=0: columns
#axis=1: rows
def getKappa(conf_matrix_arr):
p0 = getOverallAcc(conf_matrix_arr)
pe = (conf_matrix_arr.sum(axis=1) / np.sum(conf_matrix_arr)) * (conf_matrix_arr.sum(axis=0) / np.sum(conf_matrix_arr))
total_pe = np.sum(pe)
return (p0 - total_pe) / (1 - total_pe)
print("Kappa coefficient: ",getKappa(conf_matrix_arr))
"""
Explanation: Task 9: Obtain the Kappa
End of explanation
"""
#Put your code for Task 10 here
EPOCHS = 8
results = []
for i in range(10):
modl = mylenet_functional()
modl.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
preds = modl.predict(x_test_norm)
conf_matrx = getConfusionMatrix(preds)
acc = getOverallAcc(conf_matrx)
omiss = getOmission(conf_matrx)
comiss = getComission(conf_matrx)
prodcs = getProducersAcc(conf_matrx)
usrs = getUsersAcc(conf_matrx)
kappa = getKappa(conf_matrx)
iteration = {'overall_accuracy':acc,'omission':omiss,'comission':comiss,'producers_acc':prodcs,'usrs_acc':usrs,'kappa':kappa}
results.append(iteration)
import pandas as pd
for i,r in enumerate(results):
print("\nRound ",i+1)
df = pd.DataFrame(r)
print(df)
"""
Explanation: Task 10: Initialize and train this model 10 times and take note of the values obtained at each round.
End of explanation
"""
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='SGD',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H1 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H1)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='RMSprop',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H2 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H2)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H3 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H3)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Adadelta',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H4 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H4)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Adagrad',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H5 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H5)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Adamax',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H6 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H6)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Nadam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H7 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H7)
EPOCHS = 8
modl = mylenet_functional()
modl.compile(
optimizer='Ftrl',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H8 = modl.fit(x_train_norm, y_train, epochs=EPOCHS,validation_data=(x_test_norm,y_test))
plot_history(H8)
from matplotlib import pyplot as plt
accuracies = {
'SGD':H1.history['accuracy'][-1],
'RMSprop':H2.history['accuracy'][-1],
'Adam':H3.history['accuracy'][-1],
'Adadelta':H4.history['accuracy'][-1],
'Adagrad':H5.history['accuracy'][-1],
'Adamax':H6.history['accuracy'][-1],
'Nadam':H7.history['accuracy'][-1],
'Ftrl':H8.history['accuracy'][-1],
}
losses = {
'SGD':H1.history['loss'][-1],
'RMSprop':H2.history['loss'][-1],
'Adam':H3.history['loss'][-1],
'Adadelta':H4.history['loss'][-1],
'Adagrad':H5.history['loss'][-1],
'Adamax':H6.history['loss'][-1],
'Nadam':H7.history['loss'][-1],
'Ftrl':H8.history['loss'][-1],
}
val_accuracies = {
'SGD':H1.history['val_accuracy'][-1],
'RMSprop':H2.history['val_accuracy'][-1],
'Adam':H3.history['val_accuracy'][-1],
'Adadelta':H4.history['val_accuracy'][-1],
'Adagrad':H5.history['val_accuracy'][-1],
'Adamax':H6.history['val_accuracy'][-1],
'Nadam':H7.history['val_accuracy'][-1],
'Ftrl':H8.history['val_accuracy'][-1],
}
val_losses = {
'SGD':H1.history['val_loss'][-1],
'RMSprop':H2.history['val_loss'][-1],
'Adam':H3.history['val_loss'][-1],
'Adadelta':H4.history['val_loss'][-1],
'Adagrad':H5.history['val_loss'][-1],
'Adamax':H6.history['val_loss'][-1],
'Nadam':H7.history['val_loss'][-1],
'Ftrl':H8.history['val_loss'][-1],
}
plt.figure(figsize=(8,4))
plt.title("Training Accuracies by Optimizer'")
plt.bar(range(len(accuracies)),list(accuracies.values()),0.6)
plt.xticks(range(len(accuracies)), list(accuracies.keys()))
plt.show()
plt.figure(figsize=(8,5))
plt.title("Training Losses by Optimizer'")
plt.bar(range(len(losses)),list(losses.values()),0.6)
plt.xticks(range(len(losses)), list(losses.keys()))
plt.show()
plt.figure(figsize=(8,4))
plt.title("Validation Accuracies by Optimizer'")
plt.bar(range(len(val_accuracies)),list(val_accuracies.values()),0.6)
plt.xticks(range(len(val_accuracies)), list(val_accuracies.keys()))
plt.show()
plt.figure(figsize=(8,5))
plt.title("Validation Losses by Optimizer'")
plt.bar(range(len(val_losses)),list(val_losses.values()),0.6)
plt.xticks(range(len(val_losses)), list(val_losses.keys()))
plt.show()
"""
Explanation: Put your outcomes here in the form of a table.
See the frame above. Each row (0-9) represents a class of digit. Overall accuracy and kappa coefficient metrics are aggregated over each whole training round, whereas the other measures have one value per class.
1.6 - Optimizers
Evaluate the impact of the choice of optimizer on the training process.
Task 11: While using the same number of training epochs as before, vary the optimizers. Save the training history for each training, then plot histories of both losses and accuracies in the training set. Also compare the training and test final accuracy values obtained using each optimizer.
Click here for tf.keras.optimizers documentation. There you may find the following variants:
SGD
RMSprop
Adam
Adadelta
Adagrad
Adamax
Nadam
Ftrl
End of explanation
"""
### Load in necessary libraries for data input and normalization
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
from my_answers import *
### load in and normalize the dataset
dataset = np.loadtxt('datasets/normalized_apple_prices.csv')
"""
Explanation: Artificial Intelligence Nanodegree
Recurrent Neural Network Projects
Welcome to the Recurrent Neural Network Project in the Artificial Intelligence Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
Implementation TODOs in this notebook
This notebook contains two problems, cut into a variety of TODOs. Make sure to complete each section containing a TODO marker throughout the notebook. For convenience we provide links to each of these sections below.
TODO #1: Implement a function to window time series
TODO #2: Create a simple RNN model using keras to perform regression
TODO #3: Finish cleaning a large text corpus
TODO #4: Implement a function to window a large text corpus
TODO #5: Create a simple RNN model using keras to perform multiclass classification
TODO #6: Generate text using a fully trained RNN model and a variety of input sequences
Problem 1: Perform time series prediction
In this project you will perform time series prediction using a Recurrent Neural Network regressor. In particular you will re-create the figure shown in the notes - where the stock price of Apple was forecasted (or predicted) 7 days in advance. In completing this exercise you will learn how to construct RNNs using Keras, which will also aid in completing the second project in this notebook.
The particular network architecture we will employ for our RNN is known as Long Term Short Memory (LSTM), which helps significantly avoid technical problems with optimization of RNNs.
1.1 Getting started
First we must load in our time series - a history of around 140 days of Apple's stock price. Then we need to perform a number of pre-processing steps to prepare it for use with an RNN model. First off, it is good practice to normalize a time series by scaling its range. This helps us avoid serious numerical issues associated with how common activation functions (like tanh) transform very large (positive or negative) numbers, as well as helping us to avoid related issues when computing derivatives.
Here we normalize the series to lie in the range [0,1] using this scikit function, but it is also commonplace to normalize by a series' standard deviation.
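As a concrete sketch, the same min-max scaling into [0,1] can be written directly with numpy (scikit-learn's MinMaxScaler produces the same result on a column vector):

```python
import numpy as np

def normalize_series(series):
    # min-max scale a 1-D numpy array into the range [0, 1]
    smin, smax = series.min(), series.max()
    return (series - smin) / (smax - smin)

normalized = normalize_series(np.array([3.0, 5.0, 7.0]))  # -> [0.0, 0.5, 1.0]
```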
End of explanation
"""
# lets take a look at our time series
plt.plot(dataset)
plt.xlabel('time period')
plt.ylabel('normalized series value')
"""
Explanation: Lets take a quick look at the (normalized) time series we'll be performing predictions on.
End of explanation
"""
odd_nums = np.array([1,3,5,7,9,11,13])
"""
Explanation: 1.2 Cutting our time series into sequences
Remember, our time series is a sequence of numbers that we can represent in general mathematically as
$$s_{0},s_{1},s_{2},...,s_{P}$$
where $s_{p}$ is the numerical value of the time series at time period $p$ and where $P$ is the total length of the series. In order to apply our RNN we treat the time series prediction problem as a regression problem, and so need to use a sliding window to construct a set of associated input/output pairs to regress on. This process is animated in the gif below.
<img src="images/timeseries_windowing_training.gif" width=600 height=600/>
For example - using a window of size T = 5 (as illustrated in the gif above) we produce a set of input/output pairs like the one shown in the table below
$$\begin{array}{c|c}
\text{Input} & \text{Output}\\
\hline \color{CornflowerBlue} {\langle s_{1},s_{2},s_{3},s_{4},s_{5}\rangle} & \color{Goldenrod}{ s_{6}} \\
\color{CornflowerBlue} {\langle s_{2},s_{3},s_{4},s_{5},s_{6} \rangle } & \color{Goldenrod} {s_{7} } \\
\color{CornflowerBlue} {\vdots} & \color{Goldenrod} {\vdots}\\
\color{CornflowerBlue} { \langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \rangle } & \color{Goldenrod} {s_{P}}
\end{array}$$
Notice here that each input is a sequence (or vector) of length 5 (and in general has length equal to the window size T) while each corresponding output is a scalar value. Notice also how given a time series of length P and window size T = 5 as shown above, we created P - 5 input/output pairs. More generally, for a window size T we create P - T such pairs.
Now its time for you to window the input time series as described above!
<a id='TODO_1'></a>
TODO: Implement the function called window_transform_series in my_answers.py so that it runs a sliding window along the input series and creates associated input/output pairs. Note that this function should input a) the series and b) the window length, and return the input/output subsequences. Make sure to format returned input/output as generally shown in table above (where window_size = 5), and make sure your returned input is a numpy array.
You can test your function on the list of odd numbers given below
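For reference, here is a minimal numpy sketch of such a windowing function - your own my_answers.py implementation may differ in detail:

```python
import numpy as np

def window_transform_series(series, window_size):
    # each length-T window is an input; the value immediately after it is the output
    n_pairs = len(series) - window_size
    X = np.asarray([series[i:i + window_size] for i in range(n_pairs)])
    y = np.asarray([series[i + window_size] for i in range(n_pairs)]).reshape(-1, 1)
    return X, y

# on the odd-number sequence with window_size = 2 this yields 5 pairs
X, y = window_transform_series(np.array([1, 3, 5, 7, 9, 11, 13]), window_size=2)
```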
End of explanation
"""
# run a window of size 2 over the odd number sequence and display the results
window_size = 2
X = []
X.append(odd_nums[0:2])
X.append(odd_nums[1:3])
X.append(odd_nums[2:4])
X.append(odd_nums[3:5])
X.append(odd_nums[4:6])
y = odd_nums[2:]
X = np.asarray(X)
y = np.asarray(y)
y = np.reshape(y, (len(y),1)) #optional
assert(type(X).__name__ == 'ndarray')
assert(type(y).__name__ == 'ndarray')
assert(X.shape == (5,2))
assert(y.shape in [(5,1), (5,)])
# print out input/output pairs --> here input = X, corresponding output = y
print ('--- the input X will look like ----')
print (X)
print ('--- the associated output y will look like ----')
print (y)
"""
Explanation: Here is a hard-coded solution for odd_nums. You can compare its results with what you get from your window_transform_series implementation.
End of explanation
"""
### TODO: implement the function window_transform_series in the file my_answers.py
from my_answers import window_transform_series
"""
Explanation: Again - you can check that your completed window_transform_series function works correctly by trying it on the odd_nums sequence - you should get the above output.
End of explanation
"""
# window the data using your windowing function
window_size = 7
X,y = window_transform_series(series = dataset,window_size = window_size)
"""
Explanation: With this function in place apply it to the series in the Python cell below. We use a window_size = 7 for these experiments.
End of explanation
"""
# split our dataset into training / testing sets
train_test_split = int(np.ceil(2*len(y)/float(3))) # set the split point
# partition the training set
X_train = X[:train_test_split,:]
y_train = y[:train_test_split]
# keep the last chunk for testing
X_test = X[train_test_split:,:]
y_test = y[train_test_split:]
# NOTE: to use keras's RNN LSTM module our input must be reshaped to [samples, window size, features] (here features = 1)
X_train = np.asarray(np.reshape(X_train, (X_train.shape[0], window_size, 1)))
X_test = np.asarray(np.reshape(X_test, (X_test.shape[0], window_size, 1)))
"""
Explanation: 1.3 Splitting into training and testing sets
In order to perform proper testing on our dataset we will lop off the last 1/3 of it for validation (or testing). This is so that once we train our model we have something to test it on (like any regression problem!). This splitting into training/testing sets is done in the cell below.
Note how here we are not splitting the dataset randomly as one typically would do when validating a regression model. This is because our input/output pairs are related temporally. We don't want to validate our model by training on a random subset of the series and then testing on another random subset, as this simulates the scenario that we receive new points within the timeframe of our training set.
We want to train on one solid chunk of the series (in our case, the first full 2/3 of it), and validate on a later chunk (the last 1/3) as this simulates how we would predict future values of a time series.
End of explanation
"""
### TODO: create required RNN model
# import keras network libraries
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
import keras
# given - fix random seed - so we can all reproduce the same results on our default time series
np.random.seed(0)
# TODO: implement build_part1_RNN in my_answers.py
from my_answers import build_part1_RNN
model = build_part1_RNN(window_size)
# build model using keras documentation recommended optimizer initialization
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# compile the model
model.compile(loss='mean_squared_error', optimizer=optimizer)
"""
Explanation: <a id='TODO_2'></a>
1.4 Build and run an RNN regression model
Having created input/output pairs out of our time series and cut this into training/testing sets, we can now begin setting up our RNN. We use Keras to quickly build a two hidden layer RNN of the following specifications
layer 1 uses an LSTM module with 5 hidden units (note here the input_shape = (window_size,1))
layer 2 uses a fully connected module with one unit
the 'mean_squared_error' loss should be used (remember: we are performing regression here)
This can be constructed using just a few lines - see e.g., the general Keras documentation and the LSTM documentation in particular for examples of how to quickly use Keras to build neural network models. Make sure you are initializing your optimizer given the keras-recommended approach for RNNs
(given in the cell below). (remember to copy your completed function into the script my_answers.py function titled build_part1_RNN before submitting your project)
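One possible sketch of build_part1_RNN meeting that two-layer specification (your my_answers.py version may differ; the call with window size 7 is just for illustration):

```python
from keras.models import Sequential
from keras.layers import Dense, LSTM

def build_part1_RNN(window_size):
    # layer 1: LSTM with 5 hidden units, expecting (window_size, 1) inputs
    # layer 2: a single fully connected output unit (linear, for regression)
    model = Sequential()
    model.add(LSTM(5, input_shape=(window_size, 1)))
    model.add(Dense(1))
    return model

sketch_model = build_part1_RNN(7)
```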
End of explanation
"""
# run your model!
model.fit(X_train, y_train, epochs=1000, batch_size=50, verbose=0)
"""
Explanation: With your model built you can now fit the model by activating the cell below! Note: the number of epochs (epochs) and batch_size are preset (so we can all produce the same results). You can choose to toggle the verbose parameter - which gives you regular updates on the progress of the algorithm - on and off by setting it to 1 or 0 respectively.
End of explanation
"""
# generate predictions for training
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)
"""
Explanation: 1.5 Checking model performance
With your model fit we can now make predictions on both our training and testing sets.
End of explanation
"""
# print out training and testing errors
training_error = model.evaluate(X_train, y_train, verbose=0)
print('training error = ' + str(training_error))
testing_error = model.evaluate(X_test, y_test, verbose=0)
print('testing error = ' + str(testing_error))
"""
Explanation: In the next cell we compute training and testing errors using our trained model - you should be able to achieve at least
training_error < 0.02
and
testing_error < 0.02
with your fully trained model.
If either or both of your errors are larger than 0.02, re-train your model - increase the number of epochs you take (a maximum of around 1,000 should do the job) and/or adjust your batch_size.
End of explanation
"""
### Plot everything - the original series as well as predictions on training and testing sets
import matplotlib.pyplot as plt
%matplotlib inline
# plot original series
plt.plot(dataset,color = 'k')
# plot training set prediction
split_pt = train_test_split + window_size
plt.plot(np.arange(window_size,split_pt,1),train_predict,color = 'b')
# plot testing set prediction
plt.plot(np.arange(split_pt,split_pt + len(test_predict),1),test_predict,color = 'r')
# pretty up graph
plt.xlabel('day')
plt.ylabel('(normalized) price of Apple stock')
plt.legend(['original series','training fit','testing fit'],loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
"""
Explanation: Activating the next cell plots the original data, as well as both predictions on the training and testing sets.
End of explanation
"""
# read in the text, transforming everything to lower case
text = open('datasets/holmes.txt').read().lower()
print('our original text has ' + str(len(text)) + ' characters')
"""
Explanation: Note: you can try out any time series for this exercise! If you would like to try another see e.g., this site containing thousands of time series and pick another one!
Problem 2: Create a sequence generator
2.1 Getting started
In this project you will implement a popular Recurrent Neural Network (RNN) architecture to create an English language sequence generator capable of producing semi-coherent English sentences from scratch by building them up character-by-character. This will require a substantial amount of parameter tuning on a large training corpus (at least 100,000 characters long). In particular for this project we will be using a complete version of Sir Arthur Conan Doyle's classic book The Adventures of Sherlock Holmes.
How can we train a machine learning model to generate text automatically, character-by-character? By showing the model many training examples so it can learn a pattern between input and output. With this type of text generation each input is a string of valid characters like this one
dogs are grea
while the corresponding output is the next character in the sentence - which here is 't' (since the complete sentence is 'dogs are great'). We need to show a model many such examples in order for it to make reasonable predictions.
Fun note: For those interested in how text generation is being used check out some of the following fun resources:
Generate wacky sentences with this academic RNN text generator
Various twitter bots that tweet automatically generated text like this one.
the NanoGenMo annual contest to automatically produce a 50,000+ word novel
Robot Shakespeare, a text generator that automatically produces Shakespeare-esque sentences
2.2 Preprocessing a text dataset
Our first task is to get a large text corpus for use in training, and on it we perform several light pre-processing tasks. The default corpus we will use is the classic book Sherlock Holmes, but you can use a variety of others as well - so long as they are fairly large (around 100,000 characters or more).
End of explanation
"""
### print out the first 2000 characters of the raw text to get a sense of what we need to throw out
text[:2000]
"""
Explanation: Next, lets examine a bit of the raw text. Because we are interested in creating sentences of English words automatically by building up each word character-by-character, we only want to train on valid English words. In other words - we need to remove all of the other characters that are not part of English words.
End of explanation
"""
### find and replace '\n' and '\r' symbols - replacing them
text = text[1302:]
text = text.replace('\n',' ') # replace each newline character with a space
text = text.replace('\r',' ')
"""
Explanation: Wow - there's a lot of junk here (i.e., weird uncommon character combinations - as this first character chunk contains the title and author page, as well as table of contents)! To keep things simple, we want to train our RNN on a large chunk of more typical English sentences - we don't want it to start thinking non-English words or strange characters are valid! - so lets clean up the data a bit.
First, since the dataset is so large and the first few hundred characters contain a lot of junk, lets cut it out. Lets also find-and-replace those newline tags with empty spaces.
End of explanation
"""
### print out the first 1000 characters of the raw text to get a sense of what we need to throw out
text[:1000]
"""
Explanation: Lets see how the first 1000 characters of our text looks now!
End of explanation
"""
### TODO: implement cleaned_text in my_answers.py
from my_answers import cleaned_text
text = cleaned_text(text)
# shorten any extra dead space created above
text = text.replace('  ', ' ')  # collapse double spaces into single spaces
"""
Explanation: <a id='TODO_3'></a>
TODO: finish cleaning the text
Lets make sure we haven't left any other atypical characters (commas, periods, etc., are ok) lurking around in the depths of the text. You can do this by enumerating all the text's unique characters, examining them, and then replacing any unwanted characters with empty spaces! Once we find all of the text's unique characters, we can remove all of the atypical ones in the next cell. Note: don't remove the punctuation marks given in my_answers.py.
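One possible sketch of cleaned_text - note the exact punctuation list kept here is an assumption, so match it to the one actually given in my_answers.py:

```python
import string

def cleaned_text(text):
    # keep lowercase letters, spaces, and a small set of punctuation marks
    # (assumed set shown here); everything else becomes a space
    punctuation = ['!', ',', '.', ':', ';', '?']
    allowed = set(string.ascii_lowercase) | set(punctuation) | {' '}
    return ''.join(c if c in allowed else ' ' for c in text)
```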
End of explanation
"""
### print out the first 2000 characters of the cleaned text to double check that everything looks good
text[:2000]
"""
Explanation: With your chosen characters removed print out the first few hundred lines again just to double check that everything looks good.
End of explanation
"""
# count the number of unique characters in the text
chars = sorted(list(set(text)))
# print some of the text, as well as statistics
print ("this corpus has " + str(len(text)) + " total number of characters")
print ("this corpus has " + str(len(chars)) + " unique characters")
"""
Explanation: Now that we have thrown out a good number of non-English characters/character sequences lets print out some statistics about the dataset - including number of total characters and number of unique characters.
End of explanation
"""
### TODO: implement window_transform_text in my_answers.py
from my_answers import window_transform_text
"""
Explanation: 2.3 Cutting data into input/output pairs
Now that we have our text all cleaned up, how can we use it to train a model to generate sentences automatically? First we need to train a machine learning model - and in order to do that we need a set of input/output pairs for a model to train on. How can we create a set of input/output pairs from our text to train on?
Remember in part 1 of this notebook how we used a sliding window to extract input/output pairs from a time series? We do the same thing here! We slide a window of length $T$ along our giant text corpus - everything in the window becomes one input while the character following becomes its corresponding output. This process of extracting input/output pairs is illustrated in the gif below on a small example text using a window size of T = 5.
<img src="images/text_windowing_training.gif" width=400 height=400/>
Notice one aspect of the sliding window in this gif that does not mirror the analogous gif for time series shown in part 1 of the notebook - we do not need to slide the window along one character at a time but can move by a fixed step size $M$ greater than 1 (in the gif above $M = 1$). This is done with large input texts (like ours, which has over 500,000 characters!) because sliding the window along one character at a time would create far too many input/output pairs to be able to reasonably compute with.
More formally lets denote our text corpus - which is one long string of characters - as follows
$$s_{0},s_{1},s_{2},...,s_{P}$$
where $P$ is the length of the text (again for our text $P \approx 500,000!$). Sliding a window of size T = 5 with a step length of M = 1 (these are the parameters shown in the gif above) over this sequence produces the following list of input/output pairs
$$\begin{array}{c|c}
\text{Input} & \text{Output}\\
\hline \color{CornflowerBlue} {\langle s_{1},s_{2},s_{3},s_{4},s_{5}\rangle} & \color{Goldenrod}{ s_{6}} \\
\color{CornflowerBlue} {\langle s_{2},s_{3},s_{4},s_{5},s_{6} \rangle } & \color{Goldenrod} {s_{7} } \\
\color{CornflowerBlue} {\vdots} & \color{Goldenrod} {\vdots}\\
\color{CornflowerBlue} { \langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \rangle } & \color{Goldenrod} {s_{P}}
\end{array}$$
Notice here that each input is a sequence (or vector) of 5 characters (and in general has length equal to the window size T) while each corresponding output is a single character. With a step size of M = 1 we create around P total input/output pairs (for a general step size M we create around ceil(P/M) pairs).
<a id='TODO_4'></a>
Now its time for you to window the input time series as described above!
TODO: Create a function that runs a sliding window along the input text and creates associated input/output pairs. A skeleton function has been provided for you. Note that this function should input a) the text b) the window size and c) the step size, and return the input/output sequences. Note: the return items should be lists - not numpy arrays.
(remember to copy your completed function into the script my_answers.py function titled window_transform_text before submitting your project)
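A minimal sketch of such a text-windowing function, shown here on a toy string:

```python
def window_transform_text(text, window_size, step_size):
    # slide a window of `window_size` characters along the text with stride
    # `step_size`; the character immediately after each window is its output
    inputs, outputs = [], []
    for i in range(0, len(text) - window_size, step_size):
        inputs.append(text[i:i + window_size])
        outputs.append(text[i + window_size])
    return inputs, outputs

# toy example: windows of 3 characters, moving 2 characters at a time
inputs, outputs = window_transform_text('abcdefg', window_size=3, step_size=2)
# inputs -> ['abc', 'cde'], outputs -> ['d', 'f']
```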
End of explanation
"""
# run your text window-ing function
window_size = 100
step_size = 5
inputs, outputs = window_transform_text(text,window_size,step_size)
"""
Explanation: With our function complete we can now use it to produce input/output pairs! We employ the function in the next cell, where the window_size = 100 and step_size = 5.
End of explanation
"""
# print out a few of the input/output pairs to verify that we've made the right kind of stuff to learn from
print('input = ' + inputs[2])
print('output = ' + outputs[2])
print('--------------')
print('input = ' + inputs[100])
print('output = ' + outputs[100])
"""
Explanation: Lets print out a few input/output pairs to verify that we have made the right sort of stuff!
End of explanation
"""
# print out the number of unique characters in the dataset
chars = sorted(list(set(text)))
print ("this corpus has " + str(len(chars)) + " unique characters")
print ('and these characters are ')
print (chars)
"""
Explanation: Looks good!
2.4 Wait, what kind of problem is text generation again?
In part 1 of this notebook we used the same pre-processing technique - the sliding window - to produce a set of training input/output pairs to tackle the problem of time series prediction by treating the problem as one of regression. So what sort of problem do we have here now, with text generation? Well, the time series prediction was a regression problem because the output (one value of the time series) was a continuous value. Here - for character-by-character text generation - each output is a single character. This isn't a continuous value - but a distinct class - therefore character-by-character text generation is a classification problem.
How many classes are there in the data? Well, the number of classes is equal to the number of unique characters we have to predict! How many of those were there in our dataset again? Lets print out the value again.
End of explanation
"""
# this dictionary is a function mapping each unique character to a unique integer
chars_to_indices = dict((c, i) for i, c in enumerate(chars)) # map each unique character to unique integer
# this dictionary is a function mapping each unique integer back to a unique character
indices_to_chars = dict((i, c) for i, c in enumerate(chars)) # map each unique integer back to unique character
"""
Explanation: Rockin' - so we have a multiclass classification problem on our hands!
2.5 One-hot encoding characters
The last issue we have to deal with is representing our text data as numerical data so that we can use it as an input to a neural network. One of the conceptually simplest ways of doing this is via a 'one-hot encoding' scheme. Here's how it works.
We transform each character in our inputs/outputs into a vector with length equal to the number of unique characters in our text. This vector is all zeros except one location where we place a 1 - and this location is unique to each character type. e.g., we transform 'a', 'b', and 'c' as follows
$$a\longleftarrow\left[\begin{array}{c}
1\\
0\\
0\\
\vdots\\
0\\
0
\end{array}\right]\,\,\,\,\,\,\,b\longleftarrow\left[\begin{array}{c}
0\\
1\\
0\\
\vdots\\
0\\
0
\end{array}\right]\,\,\,\,\,c\longleftarrow\left[\begin{array}{c}
0\\
0\\
1\\
\vdots\\
0\\
0
\end{array}\right]\cdots$$
where each vector has 32 entries (or in general: number of entries = number of unique characters in text).
The first practical step towards doing this one-hot encoding is to form a dictionary mapping each unique character to a unique integer, and one dictionary to do the reverse mapping. We can then use these dictionaries to quickly make our one-hot encodings, as well as re-translate (from integers to characters) the results of our trained RNN classification model.
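On a tiny example text, those two dictionaries - and a single one-hot vector built from them - look like this:

```python
import numpy as np

tiny_text = 'hello world'
chars = sorted(set(tiny_text))  # unique characters, in sorted order
chars_to_indices = dict((c, i) for i, c in enumerate(chars))
indices_to_chars = dict((i, c) for i, c in enumerate(chars))

# one-hot encode the single character 'h': all zeros except at its index
vec = np.zeros(len(chars))
vec[chars_to_indices['h']] = 1
```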
End of explanation
"""
# transform character-based input/output into equivalent numerical versions
def encode_io_pairs(text, window_size, step_size):
    # number of unique chars
    chars = sorted(list(set(text)))
    num_chars = len(chars)
    # cut up text into character input/output pairs
    inputs, outputs = window_transform_text(text, window_size, step_size)
    # create empty vessels for one-hot encoded input/output
    # (note: the np.bool alias was removed from recent NumPy - plain bool works the same)
    X = np.zeros((len(inputs), window_size, num_chars), dtype=bool)
    y = np.zeros((len(inputs), num_chars), dtype=bool)
    # loop over inputs/outputs and transform and store in X/y
    for i, sentence in enumerate(inputs):
        for t, char in enumerate(sentence):
            X[i, t, chars_to_indices[char]] = 1
        y[i, chars_to_indices[outputs[i]]] = 1
    return X, y
"""
Explanation: Now we can transform our input/output pairs - consisting of characters - to equivalent input/output pairs made up of one-hot encoded vectors. In the next cell we provide a function for doing just this: it takes in the raw character input/outputs and returns their numerical versions. In particular the numerical input is given as $\bf{X}$, and numerical output is given as the $\bf{y}$
End of explanation
"""
# use your function
window_size = 100
step_size = 5
X,y = encode_io_pairs(text,window_size,step_size)
"""
Explanation: Now run the one-hot encoding function by activating the cell below and transform our input/output pairs!
End of explanation
"""
### necessary functions from the keras library
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import keras
import random
# TODO implement build_part2_RNN in my_answers.py
from my_answers import build_part2_RNN
model = build_part2_RNN(window_size, len(chars))
# initialize optimizer
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# compile model --> make sure initialized optimizer and callbacks - as defined above - are used
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
"""
Explanation: <a id='TODO_5'></a>
2.6 Setting up our RNN
With our dataset loaded and the input/output pairs extracted / transformed we can now begin setting up our RNN for training. Again we will use Keras to quickly build a single hidden layer RNN - where our hidden layer consists of LSTM modules.
Time to get to work: build a 3 layer RNN model of the following specification
layer 1 should be an LSTM module with 200 hidden units --> note this should have input_shape = (window_size,len(chars)) where len(chars) = number of unique characters in your cleaned text
layer 2 should be a linear module, fully connected, with len(chars) hidden units --> where len(chars) = number of unique characters in your cleaned text
layer 3 should be a softmax activation (since we are solving a multiclass classification problem)
Use the categorical_crossentropy loss
This network can be constructed using just a few lines - as with the RNN network you made in part 1 of this notebook. See e.g., the general Keras documentation and the LSTM documentation in particular for examples of how to quickly use Keras to build neural network models.
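One way this specification could be sketched (your my_answers.py version may differ; the example call uses 100 and 33 as stand-ins for window_size and len(chars)):

```python
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM

def build_part2_RNN(window_size, num_chars):
    # LSTM with 200 hidden units -> fully connected over the character
    # classes -> softmax for multiclass classification
    model = Sequential()
    model.add(LSTM(200, input_shape=(window_size, num_chars)))
    model.add(Dense(num_chars))
    model.add(Activation('softmax'))
    return model

sketch_model = build_part2_RNN(100, 33)
```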
End of explanation
"""
# a small subset of our input/output pairs
Xsmall = X[:10000,:,:]
ysmall = y[:10000,:]
"""
Explanation: 2.7 Training our RNN model for text generation
With our RNN setup we can now train it! Lets begin by trying it out on a small subset of the full dataset. In the next cell we take the first 10,000 input/output pairs from our training database to learn on.
End of explanation
"""
# train the model
model.fit(Xsmall, ysmall, batch_size=500, epochs=40,verbose = 1)
# save weights
model.save_weights('model_weights/best_RNN_small_textdata_weights.hdf5')
"""
Explanation: Now lets fit our model!
End of explanation
"""
# function that uses trained model to predict a desired number of future characters
def predict_next_chars(model, input_chars, num_to_predict):
    # create output
    predicted_chars = ''
    for i in range(num_to_predict):
        # one-hot encode this round's input characters
        x_test = np.zeros((1, window_size, len(chars)))
        for t, char in enumerate(input_chars):
            x_test[0, t, chars_to_indices[char]] = 1.
        # make this round's prediction
        test_predict = model.predict(x_test, verbose=0)[0]
        # translate numerical prediction back to a character
        r = np.argmax(test_predict)  # index of the most probable next character
        d = indices_to_chars[r]
        # append the prediction and slide the input window forward one character
        predicted_chars += d
        input_chars += d
        input_chars = input_chars[1:]
    return predicted_chars
"""
Explanation: How do we make a given number of predictions (characters) based on this fitted model?
First we predict the next character after following any chunk of characters in the text of length equal to our chosen window size. Then we remove the first character in our input sequence and tack our prediction onto the end. This gives us a slightly changed sequence of inputs that still has length equal to the size of our window. We then feed this updated input sequence into the model to predict another character. Together then we have two predicted characters following our original input sequence. Repeating this process N times gives us N predicted characters.
In the next Python cell we provide you with a completed function that does just this - it makes predictions when given a) a trained RNN model, b) a subset of (window_size) characters from the text, and c) a number of characters to predict (to follow our input subset).
End of explanation
"""
# TODO: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it
# get an appropriately sized chunk of characters from the text
start_inds = [10, 100, 1000, 10000]
# load in weights
model.load_weights('model_weights/best_RNN_small_textdata_weights.hdf5')
for s in start_inds:
start_index = s
input_chars = text[start_index: start_index + window_size]
# use the prediction function
predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)
# print out input characters
print('------------------')
input_line = 'input chars = ' + '\n' + input_chars + '"' + '\n'
print(input_line)
# print out predicted characters
line = 'predicted chars = ' + '\n' + predict_input + '"' + '\n'
print(line)
"""
Explanation: <a id='TODO_6'></a>
With your trained model try a few subsets of the complete text as input - note the length of each must be exactly equal to the window size. For each subset use the function above to predict the next 100 characters that follow each input.
End of explanation
"""
### A simple way to write output to file
f = open('my_test_output.txt', 'w') # create an output file to write to
f.write('this is only a test ' + '\n') # print some output text
x = 2
f.write('the value of x is ' + str(x) + '\n') # record a variable value
f.close()
# print out the contents of my_test_output.txt
f = open('my_test_output.txt', 'r') # reopen the output file to read back its contents
f.read()
"""
Explanation: This looks OK, but not great. Now let's try the same experiment with a larger chunk of the data - the first 100,000 input/output pairs.
Tuning RNNs on a typical character dataset like this one is computationally intensive, and thus time-consuming on a typical CPU. Using a reasonably sized cloud-based GPU can speed up training by a factor of 10. Because of the long training time it is also highly recommended that you write the output of each step of your process to file. That way all of your results are saved even if you close the web browser you're working in: the processes keep running in the background, but variables/output in the notebook will not update when you reopen it.
In the next cell we show you how to create a text file in Python and record data to it. This sort of setup can be used to record your final predictions.
End of explanation
"""
# a small subset of our input/output pairs
Xlarge = X[:100000,:,:]
ylarge = y[:100000,:]
# TODO: fit to our larger dataset
model.fit(Xlarge, ylarge, batch_size=500, epochs=30, verbose=1)
# save weights
model.save_weights('model_weights/best_RNN_large_textdata_weights.hdf5')
# TODO: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it
# get an appropriately sized chunk of characters from the text
start_inds = [10, 100, 1000, 10000]
# save output
f = open('text_gen_output/RNN_large_textdata_output.txt', 'w') # create an output file to write to
# load weights
model.load_weights('model_weights/best_RNN_large_textdata_weights.hdf5')
for s in start_inds:
start_index = s
input_chars = text[start_index: start_index + window_size]
# use the prediction function
predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)
# print out input characters
line = '-------------------' + '\n'
print(line)
f.write(line)
input_line = 'input chars = ' + '\n' + input_chars + '"' + '\n'
print(input_line)
f.write(input_line)
# print out predicted characters
predict_line = 'predicted chars = ' + '\n' + predict_input + '"' + '\n'
print(predict_line)
f.write(predict_line)
f.close()
"""
Explanation: With this recording device in place we can now more safely perform experiments on larger portions of the text. In the next cell we will use the first 100,000 input/output pairs to train our RNN model.
First we fit our model to the dataset, then generate text using the trained model in precisely the same generation method applied before on the small dataset.
Note: your generated words should be - by and large - more realistic than with the small dataset, but you won't be able to generate perfect English sentences even with this amount of data. A rule of thumb: your model is working well if you generate sentences that largely contain real English words.
End of explanation
"""
|
erccarls/vectorsearch | notebooks/data_challenge/Data Summaries.ipynb | apache-2.0 | import pandas as pd
# 1. Sample code number: id number
# 2. Clump Thickness: 1 - 10
# 3. Uniformity of Cell Size: 1 - 10
# 4. Uniformity of Cell Shape: 1 - 10
# 5. Marginal Adhesion: 1 - 10
# 6. Single Epithelial Cell Size: 1 - 10
# 7. Bare Nuclei: 1 - 10
# 8. Bland Chromatin: 1 - 10
# 9. Normal Nucleoli: 1 - 10
# 10. Mitoses: 1 - 10
# 11. Class: (2 for benign, 4 for malignant)
names = ['sampleid', 'clumpthickness', 'sizeuniformity', 'shapeunformity',
'adhesion', 'epithelialsize', 'barenuclei', 'blandchromatin', 'normalnucleoli',
'mitoses', 'cellclass']
df = pd.read_csv('./breast-cancer-wisconsin.data', names=names)
# df.drop('sampleid')
df.drop('sampleid', axis=1, inplace=True)
df.head(10)
df.cellclass = (df.cellclass == 4).astype(int)
# It turns out one column is a string, but should be an int...
df.barenuclei = df.barenuclei.values.astype(int)
df.describe()
# Check the class balance. Turns out to be pretty good so we should have a relatively unbiased view
print 'Num Benign', (df.cellclass==0).sum(), 'Num Malignant', (df.cellclass==1).sum()
"""
Explanation: Data exploration
To start with, let us load the dataframe, summarize the columns, and plot a scatter matrix of the data to check for e.g. missing values, non-linear scaling, etc.
End of explanation
"""
from pandas.tools.plotting import scatter_matrix
_ = scatter_matrix(df, figsize=(14,14), alpha=.4)
"""
Explanation: Scatter matrix.
None of the features appear to require rescaling transformations, e.g. onto a log scale.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn import cross_validation
from sklearn import svm
LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=1,
fit_intercept=True, intercept_scaling=1,
class_weight=None, random_state=None,
solver='liblinear', max_iter=100,
multi_class='ovr', verbose=1,
warm_start=False, n_jobs=1)
X, Y = df.astype(np.float32).get_values()[:,:-1], df.get_values()[:,-1]
X2 = np.append(X,X**2, axis=1)
print X2.shape
LR.fit(X, Y)
print LR.score(X,Y)
C_list = np.logspace(-1, 2, 15)
CV_scores = []
CV_scores2 = []
for c in C_list:
LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=c,
fit_intercept=True, intercept_scaling=1,
class_weight=None, random_state=None,
solver='liblinear', max_iter=100,
multi_class='ovr', verbose=1,
warm_start=False, n_jobs=1)
CV_scores.append(np.average(cross_validation.cross_val_score(LR, X, Y, cv=6, n_jobs=12)))
svm_class = svm.SVC(C=c, kernel='linear', gamma='auto', coef0=0.0,
shrinking=True, probability=False, tol=0.001, cache_size=200,
class_weight=None, verbose=False,
max_iter=-1, decision_function_shape=None, random_state=None)
CV_scores2.append(np.average(cross_validation.cross_val_score(svm_class, X, Y, cv=6, n_jobs=12)))
plt.plot(C_list, CV_scores, marker='o', label='Logistic Regression L1 loss')
plt.plot(C_list, CV_scores2, marker='o', label='SVM-Linear')
plt.xscale('log')
plt.xlabel(r'C = 1/$\lambda$')
plt.legend(loc=4)
from sklearn.metrics import confusion_matrix
LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=1e10,
fit_intercept=True, intercept_scaling=1,
class_weight=None, random_state=None,
solver='liblinear', max_iter=100,
multi_class='ovr', verbose=1,
warm_start=False, n_jobs=1)
LR.fit(X[:300],Y[:300])
svm_class = svm.SVC(C=10., kernel='linear', gamma='auto', coef0=0.0,
shrinking=True, probability=True, tol=0.001, cache_size=200,
class_weight=None, verbose=False,
max_iter=-1, decision_function_shape=None, random_state=None)
svm_class.fit(X[:300],Y[:300])
# Confusion matrix
print
print 'Confusion Matrix - LASSO Regression'
print confusion_matrix(y_true=Y[300:], y_pred=LR.predict(X[300:]))
print 'Confusion Matrix - SVM-Linear'
print confusion_matrix(y_true=Y[300:], y_pred=svm_class.predict(X[300:]))
"""
Explanation: Constructing a logistic regression classifier
Intriguingly, the logistic regression (with L1 penalty) and the linear SVM perform very similarly here across the range of regularization strengths tested.
End of explanation
"""
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve
plt.figure(figsize=(7,2))
plt.subplot(121)
prec, rec, thresh = precision_recall_curve(y_true=Y[300:], probas_pred=LR.predict_proba(X[300:])[:,1])
plt.plot(rec, prec,)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim(0,1)
plt.ylim(0,1)
plt.subplot(122)
fp, tp, thresh = roc_curve(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1])
AUC = roc_auc_score(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1])
roc_curve(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1])
plt.text(.05, .05, 'AUC=%1.3f'%AUC)
plt.plot(fp, tp, linewidth=2)
plt.xlabel('False Positives')
plt.ylabel('True Positives')
"""
Explanation: Measuring precision/recall and ROC curves
End of explanation
"""
|
antoniomezzacapo/qiskit-tutorial | community/terra/qis_adv/fourier_transform.ipynb | apache-2.0 | import math
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute
from qiskit.backends.ibmq import least_busy
# useful additional packages
from qiskit.wrapper.jupyter import *
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
"""
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Quantum Fourier Transform
In this tutorial, we introduce the quantum fourier transform (QFT), derive the circuit, QASM and QISKit code, before implementing it using the simulator and five qubit device.
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.
Contributors
Anna Phan
Introduction <a id='introduction'></a>
The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation.
The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.
Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.
This can also be expressed as the map:
$$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$
Or the unitary matrix:
$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$
Circuit and Code <a id='circuit'></a>
We've actually already seen the quantum Fourier transform for when $N = 2$, it is the Hadamard operator ($H$):
$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \ 1 & -1 \end{bmatrix}$$
Suppose we have the single qubit state $\alpha \vert 0 \rangle + \beta \vert 1 \rangle$, if we apply the $H$ operator to this state, we obtain the new state:
$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle
\equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$
Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state.
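This equivalence is easy to check numerically; a quick sketch building the $N=2$ DFT matrix entry by entry and comparing it to $H$:

```python
import numpy as np

N = 2
omega = np.exp(2j * np.pi / N)          # for N = 2, omega = -1
F = np.array([[omega**(x * y) for x in range(N)]
              for y in range(N)]) / np.sqrt(N)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.allclose(F, H))                # True
```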
So what does the quantum Fourier transform look like for larger N? Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1...x_n \rangle$ where $x_1$ is the most significant bit.
\begin{aligned}
QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle \:\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 ... y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1...y_k, y/2^n = \sum_{k=1}^n y_k/2^k \
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=0}^n e^{2 \pi i x y_k/2^k } \vert y_1 ... y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \
& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \
& = \frac{1}{\sqrt{N}} \left(\vert0\rangle + e^{2 \pi i[0.x_n]} \vert1\rangle\right) \otimes...\otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2...x_{n-1}.x_n]} \vert1\rangle\right) \:\text{as}\: e^{2 \pi i x/2^k} = e^{2 \pi i[0.x_k...x_n]}
\end{aligned}
This is a very useful form of the QFT for $N=2^n$ as only the last qubit depends on the
values of all the other input qubits, and each further bit depends less and less on the input qubits. Furthermore, note that $e^{2 \pi i.0.x_n}$ is either $+1$ or $-1$, which resembles the Hadamard transform.
For the QFT circuit, together with the Hadamard gate, we will also need the controlled phase rotation gate, as defined in OpenQASM, to implement the dependencies between the bits:
$$CU_1(\theta) =
\begin{bmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & e^{i\theta}\end{bmatrix}$$
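As a sanity check, the gate can be written out numerically; it is diagonal, so it is unitary for any $\theta$:

```python
import numpy as np

def cu1(theta):
    # controlled-phase gate: identity except for a phase on |11>
    m = np.eye(4, dtype=complex)
    m[3, 3] = np.exp(1j * theta)
    return m

g = cu1(np.pi / 2)
print(np.allclose(g.conj().T @ g, np.eye(4)))  # True: unitary
```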
Before we create the circuit code for general $N=2^n$, let's look at $N=8,n=3$:
$$QFT_8\vert x_1x_2x_3\rangle = \frac{1}{\sqrt{8}} \left(\vert0\rangle + e^{2 \pi i[0.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2.x_3]} \vert1\rangle\right) $$
The steps to create the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$ would be:
1. Apply a Hadamard to $\vert x_3 \rangle$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i.0.x_3} \vert1\rangle\right) = \frac{1}{\sqrt{2}}\left(\vert0\rangle + (-1)^{x_3} \vert1\rangle\right)$
2. Apply a Hadamard to $\vert x_2 \rangle$, then depending on $x_3$ (before the Hadamard gate) a $CU_1(\frac{\pi}{2})$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right)$
3. Apply a Hadamard to $\vert x_1 \rangle$, then $CU_1(\frac{\pi}{2})$ depending on $x_2$, and $CU_1(\frac{\pi}{4})$ depending on $x_3$.
4. Read out the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$.
In the Quantum Experience composer (if controlled phase rotation gates were available) this circuit would look like:
<img src="../images/qft3.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="400 px" align="center">
In QASM, it is:
qreg q[3];
creg c[3];
h q[0];
cu1(pi/2) q[1],q[0];
h q[1];
cu1(pi/4) q[2],q[0];
cu1(pi/2) q[2],q[1];
h q[2];
In QISKit, it is:
```
q = QuantumRegister(3)
c = ClassicalRegister(3)
qft3 = QuantumCircuit(q, c)
qft3.h(q[0])
qft3.cu1(math.pi/2.0, q[1], q[0])
qft3.h(q[1])
qft3.cu1(math.pi/4.0, q[2], q[0])
qft3.cu1(math.pi/2.0, q[2], q[1])
qft3.h(q[2])
```
For $N=2^n$, this can be generalised, as in the qft function in tools.qi:
def qft(circ, q, n):
"""n-qubit QFT on q in circ."""
for j in range(n):
for k in range(j):
circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
circ.h(q[j])
Implementation <a id='implementation'></a>
End of explanation
"""
def input_state(circ, q, n):
"""n-qubit input state for QFT that produces output 1."""
for j in range(n):
circ.h(q[j])
circ.u1(math.pi/float(2**(j)), q[j]).inverse()
def qft(circ, q, n):
"""n-qubit QFT on q in circ."""
for j in range(n):
for k in range(j):
circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
circ.h(q[j])
"""
Explanation: First let's define the QFT function, as well as a function that creates a state from which a QFT will return 1:
End of explanation
"""
q = QuantumRegister(3)
c = ClassicalRegister(3)
qft3 = QuantumCircuit(q, c)
input_state(qft3, q, 3)
qft(qft3, q, 3)
for i in range(3):
qft3.measure(q[i], c[i])
print(qft3.qasm())
# run on local simulator
backend = Aer.get_backend("qasm_simulator")
simulate = execute(qft3, backend=backend, shots=1024).result()
simulate.get_counts()
"""
Explanation: Let's now implement a QFT on a prepared three qubit input state that should return $001$:
End of explanation
"""
%%qiskit_job_status
# Use the IBM Quantum Experience
backend = least_busy(IBMQ.backends(simulator=False))
shots = 1024
job_exp = execute(qft3, backend=backend, shots=shots)
results = job_exp.result()
plot_histogram(results.get_counts())
"""
Explanation: We indeed see that the outcome is always $001$ when we execute the code on the simulator.
We then see how the same circuit can be executed on real-device backends.
End of explanation
"""
|
Misteir/Machine_Learning | linear_regression/linear_regression1.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Librairies
End of explanation
"""
data = pd.read_csv('ex1data1.txt', header=None, names=['population', 'profit'])
data.head()
data.plot.scatter('population', 'profit')
"""
Explanation: read file content
End of explanation
"""
X = np.array(data["population"])
y = np.array(data["profit"])
"""
Explanation: Dots seem to follow a line, we could have done a correlation test to check if the two variabes are linked. Now we transform the data matrix into two numpy arrays.
End of explanation
"""
def predict(X, theta):
return (X * theta[1] + theta[0])
def gradient_descent1(X, y, theta, alpha, num_iters):
m = X.shape[0]
for i in range(0, num_iters):
theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)
theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))
theta = [theta0, theta1]
return theta
theta = np.zeros(2, dtype=float)
theta = gradient_descent1(X, y, theta, 0.01, 1500)
theta
"""
Explanation: now we will developp the two functions predict (apply theta to the X matrix) and gradient_descent1 (update theta)
End of explanation
"""
def visualize(theta):
fig = plt.figure()
ax = plt.axes()
ax.set_xlim([4.5,22.5])
ax.set_ylim([-5, 25])
ax.scatter(X, y)
line_x = np.linspace(0,22.5, 20)
line_y = theta[0] + line_x * theta[1]
ax.plot(line_x, line_y)
plt.show()
visualize(theta)
"""
Explanation: Expected output (for alpha 0.01 and 1500 iterations):[-3.6302914394043597, 1.166362350335582]
The visualize plot our dataset with the regression line corresponding to theta
End of explanation
"""
def cost(X, y, theta):
loss = predict(X, theta) - y
cost = (1 / (2 * X.shape[0])) * np.dot(loss, np.transpose(loss))
return(cost)
cost(X, y, [0, 0])
"""
Explanation: the cost function will allow us to record the evolution of the cost during the gradient descent
End of explanation
"""
def gradient_descent(X, y, theta, alpha, num_iters):
m = X.shape[0]
J_history = []
for i in range(0, num_iters):
theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)
theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))
theta = [theta0, theta1]
J_history.append(cost(X, y, theta))
return theta, J_history
theta = np.zeros(2, dtype=float)
theta, J_history = gradient_descent(X, y, theta, 0.01, 1500)
theta
"""
Explanation: expected output for [0, 0]: 32.072733877455676
the full version of gradient descent now records the cost history
End of explanation
"""
fig = plt.figure()
ax = plt.axes()
ax.plot(J_history)
"""
Explanation: Expected output for alpha 0.01 and 1500 iterations: [-3.6302914394043597, 1.166362350335582]
End of explanation
"""
|
leriomaggio/code-coherence-analysis | Lexical Analysis.ipynb | bsd-3-clause | %load preamble_directives.py
"""
Explanation: Lexical Information Overlap
This notebook contains some code to process and normalize the lexical information appearing in CodeMethod comments and implementations
(i.e., CodeMethod.comment and CodeMethod.code, respectively).
The overall processing encompasses the following steps:
(Tokens Extraction)
The textual data are chunked into tokens (thanks to nltk)
(Tokens Normalization)
Most common (english) stopwords are removed, as well as Java language reserved keywords;
Each non-english token is processed by the LINSEN algorithm;
Each remaining token (a.k.a, lexeme) is turned into lowercase letters;
Resulting tokens are finally stemmed.
Once those processing steps are completed, the jaccard_coefficient is computed between the code and the comments of each method, and all the analysis information is then stored in a CodeLexiconInfo model instance.
Python Version
This notebook requires Python 3
Preliminaries
End of explanation
"""
from source_code_analysis.models import CodeLexiconInfo
"""
Explanation: Import Django Model for Code Lexicon Information
End of explanation
"""
from lexical_analysis import LINSENnormalizer
"""
Explanation: DATA FETCHING CODE
<a href="#data_analysis">SKIP</a> this part if the Database already contains CodeLexiconInfo data
Analysis Code
LINSEN Normalizer
End of explanation
"""
from lexical_analysis import LexicalAnalyzer
from source_code_analysis.models import SoftwareProject
target_sw_project = SoftwareProject.objects.get(name__iexact='CoffeeMaker')
# Use RelatedManager to get all the code methods associated to the target project
code_methods = target_sw_project.code_methods.all()
total_methods = code_methods.count()
coefficients = list()
for i, method in enumerate(code_methods):
print('Analyzing Method {0} out of {1}: {2}'.format(i+1, total_methods, method.method_name))
analyzer = LexicalAnalyzer(method)
analyzer.analyse_textual_information()
coefficients.append(analyzer.code_lexical_info.jaccard_coeff)
"""
Explanation: Lexical Analyzer
End of explanation
"""
from scipy import median
from scipy import mean
from scipy import var, std
import numpy as np
from source_code_analysis.models import SoftwareProject
projects = list()
projects.append(SoftwareProject.objects.get(name__iexact='CoffeeMaker', version__exact='1.0'))
projects.append(SoftwareProject.objects.get(name__iexact='Jfreechart', version__exact='0.6.0'))
projects.append(SoftwareProject.objects.get(name__iexact='Jfreechart', version__exact='0.7.1'))
projects.append(SoftwareProject.objects.get(name__iexact='JHotDraw', version__exact='7.4.1'))
print(projects)
"""
Explanation: <a name="data_analysis"></a>
DATA ANALYSIS CODE (Statistics)
End of explanation
"""
for project in projects:
code_methods = project.code_methods.all()
coefficients = list()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
coefficients.append(clexicon_info.jaccard_coeff)
coeff = np.array(coefficients)
print('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(), ver=project.version,
total=coeff.size, min=coeff.min(), max=coeff.max(),
median=median(coeff), mean=coeff.mean(),
variance=var(coeff), devstd=std(coeff)))
"""
Explanation: Jaccard Coefficient Statistics
End of explanation
"""
# Import Scikit-Learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
for project in projects:
# Populate the Doc Collection
document_collection = list()
# Get Methods
code_methods = project.code_methods.all()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
#cosine_sim_vals = list()
#rows, cols = tfidf_values.shape
#for i in range(0, rows, 2):
# cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
#cosine_sim_vals = np.array(cosine_sim_vals)
comments, code = tfidf_values[::2], tfidf_values[1::2]
kernel_matrix = linear_kernel(comments, code) # arrays are still L2 (length) normalized
cosine_sim_vals = np.diag(kernel_matrix)
print('{proj} ({ver}) & {tot} & {min:.3} & {max:.3} & {med:.3} & {mu:.3} & {var:.3} & {sigma:.3} \\\\'.format(
proj=project.name.title(), ver=project.version, tot=cosine_sim_vals.size, min=cosine_sim_vals.min(),
max=cosine_sim_vals.max(), med=median(cosine_sim_vals), mu=cosine_sim_vals.mean(),
var=var(cosine_sim_vals), sigma=std(cosine_sim_vals)))
"""
Explanation: TFIDF Statistics
End of explanation
"""
coff_maker = projects[0]
methods = coff_maker.code_methods.all()
methods = methods[0:2]
docs = list()
for method in methods:
lex_info = method.lexical_info
docs.append(lex_info.normalized_comment)
docs.append(lex_info.normalized_code)
print('Methods: ', len(methods))
print('Docs: ', len(docs))
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
X = vectorizer.fit_transform(docs)
vectorizer.get_feature_names()
x = X[0].toarray()
from scipy.sparse import issparse
print(issparse(x))
x = x.ravel()
np.where(x>0)
np.take(x, np.where(x>0))
x[np.where(x>0)]
print(vectorizer.get_feature_names())
docs[0]
"""
Explanation: (TEST) Spare Analysis on the resulting structures (IR Index) --> SKIP to <a href="#tf_stats">Tf Statistics</a>
Coffee Maker
End of explanation
"""
jhotdraw = projects[-1]
methods = jhotdraw.code_methods.all()
methods = methods[0:2]
docs = list()
for method in methods:
lex_info = method.lexical_info
docs.append(lex_info.normalized_comment)
docs.append(lex_info.normalized_code)
print('Methods: ', len(methods))
print('Docs: ', len(docs))
docs[0], docs[1]
methods[0].lexical_info.normalized_comment
methods[0].lexical_info.normalized_code
methods[0].example.target
"""
Explanation: JHotDraw
End of explanation
"""
# Import Scikit-Learn
from sklearn.feature_extraction.text import TfidfVectorizer
## TODO: See the following "Optimization" subsections to see tests
from sklearn.metrics.pairwise import linear_kernel # array are still L2 normalized
for project in projects:
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=False, lowercase=False, use_idf=False)
tf_values = vectorizer.fit_transform(document_collection)
#cosine_sim_vals = list()
#rows, cols = tf_values.shape
#for i in range(0, rows, 2):
# cosine_sim_vals.append(tf_values[i].dot(tf_values[i+1].T)[0,0])
#cosine_sim_vals = np.array(cosine_sim_vals)
comments, code = tf_values[::2], tf_values[1::2]
kernel_matrix = linear_kernel(comments, code)
cosine_sim_vals = np.diag(kernel_matrix)
print('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(), ver=project.version,
total=cosine_sim_vals.size,
min=cosine_sim_vals.min(),
max=cosine_sim_vals.max(),
median=median(cosine_sim_vals),
mean=cosine_sim_vals.mean(),
variance=var(cosine_sim_vals),
devstd=std(cosine_sim_vals)))
"""
Explanation: <a name="tf_stats"></a>
TF Statistics
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
# Target Project (as this is just an example)
project = projects[0]
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
for method in code_methods:
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
rows, cols = tfidf_values.shape
cosine_sim_vals = tfidf_values[0].dot(tfidf_values[1].T)[0,0]
for i in range(2, rows, 2):
cosine_sim_vals = np.vstack((cosine_sim_vals, tfidf_values[i].dot(tfidf_values[i+1].T)[0,0]))
cosine_sim_vals.ravel()
alt_method = np.einsum('ij,ij->i', tfidf_values[::2,].toarray(), tfidf_values[1::2,].toarray())
alt_method
alt_method.shape
cosine_sim_vals.ravel().shape
np.testing.assert_allclose(cosine_sim_vals.ravel(), alt_method)
from sklearn.metrics.pairwise import cosine_similarity, linear_kernel
comments, code = tfidf_values[::2], tfidf_values[1::2]
print(comments.shape, code.shape)
kernel = linear_kernel(comments, code)
np.diag(kernel)
from numpy.testing import assert_array_almost_equal
assert_array_almost_equal(alt_method, np.diag(kernel))
alt_method
cossim = cosine_similarity(comments, code)
np.diag(cossim)
assert_array_almost_equal(alt_method, np.diag(cossim))
assert_array_almost_equal(np.diag(cossim), np.diag(kernel))
"""
Explanation: Optimization
Trying to optimize the cosine_similarity computation by replacing the cosine_sim_vals list
(try using np.vstack)
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from evaluations import Judge
judges_combinations = (('leonardo.nole', 'rossella.linsalata'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'rossella.linsalata'),)
CODES_Labels = ('NC', 'DK', 'CO')
from collections import defaultdict
stats_results = defaultdict(list)
for pno, project in enumerate(projects):
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
method_ids_map = dict() # Map (dict) to store the association method.pk --> Row index in Tfidf Matrix
for mno, method in enumerate(code_methods):
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
method_ids_map[method.id] = mno*2
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
j1_usrname, j2_usrname = judges_combinations[pno]
j1 = Judge(j1_usrname, project.name, project.version)
j2 = Judge(j2_usrname, project.name, project.version)
j1_evals = j1.three_codes_evaluations
j2_evals = j2.three_codes_evaluations
project_stats = list()
for code in range(3):
j1_evals_code = j1_evals[code]
j2_evals_code = j2_evals[code]
method_ids = j1_evals_code.intersection(j2_evals_code)
cosine_sim_vals = list()
for mid in method_ids:
i = method_ids_map[mid]
cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
cosine_sim_vals = np.array(cosine_sim_vals)
project_stats.append(cosine_sim_vals)
for code in range(3):
vals = project_stats[code]
label = CODES_Labels[code]
if vals.size > 0:
stats_results[label].append('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(),
ver=project.version,
total=vals.size,
min=vals.min(),
max=vals.max(),
                median=np.median(vals),
                mean=vals.mean(),
                variance=np.var(vals),
                devstd=np.std(vals)))
else:
stats_results[label].append('{proj} ({ver}) & \multicolumn{{7}}{{c|}}{{N.A.}} \\\\'.format(proj = project.name.title(),
ver=project.version))
for label in stats_results:
print('\n{0}\n'.format(label))
for value in stats_results[label]:
print(value)
"""
Explanation: TFIDF Statistics separated by Agreement Rates
End of explanation
"""
judges_combinations = (('leonardo.nole', 'rossella.linsalata'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'rossella.linsalata'),)
CODES_Labels = ('NC', 'DK', 'CO')
from collections import defaultdict
stats_results_paths = defaultdict(list)
import os  # needed below for the path handling
pwd_out = !pwd
current_dir = pwd_out[0]
folder_path = os.path.join(current_dir, 'distributions_per_rate_tfidf')
if not os.path.exists(folder_path):
os.makedirs(folder_path)
for pno, project in enumerate(projects):
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
method_ids_map = dict() # Map (dict) to store the association method.pk --> Row index in Tfidf Matrix
for mno, method in enumerate(code_methods):
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
method_ids_map[method.id] = mno*2
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
j1_usrname, j2_usrname = judges_combinations[pno]
j1 = Judge(j1_usrname, project.name, project.version)
j2 = Judge(j2_usrname, project.name, project.version)
j1_evals = j1.three_codes_evaluations
j2_evals = j2.three_codes_evaluations
project_stats = list()
for code in range(3):
j1_evals_code = j1_evals[code]
j2_evals_code = j2_evals[code]
method_ids = j1_evals_code.intersection(j2_evals_code)
cosine_sim_vals = list()
for mid in method_ids:
i = method_ids_map[mid]
cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
cosine_sim_vals = np.array(cosine_sim_vals)
project_stats.append(cosine_sim_vals)
for code in range(3):
vals = project_stats[code]
label = CODES_Labels[code]
if vals.size > 0:
filename = '{label}_{proj}_({ver})_{total}.txt'.format(label=label,
proj=project.name.title(),
ver=project.version,
total=vals.size)
filepath = os.path.join(folder_path, filename)
np.savetxt(filepath, vals)
stats_results_paths[label].append(filepath)
for label in stats_results_paths:
print('\n{0}\n'.format(label))
for path in stats_results_paths[label]:
print('Saved Filepath:', path)
"""
Explanation: TFIDF Values Distribution (Separated by Agreement Rates)
End of explanation
"""
|
Rauf-Kurbanov/au_dl_course | seminar_1/homework_task2.ipynb | gpl-3.0 | mnist = input_data.read_data_sets('/data/mnist', one_hot=True)
"""
Explanation: Step 1: Read in data<br>
using TF Learn's built in function to load MNIST data to the folder data/mnist
End of explanation
"""
with tf.Session() as sess:
start_time = time.time()
sess.run(tf.global_variables_initializer())
n_batches = int(mnist.train.num_examples/batch_size)
for i in range(n_epochs): # train the model n_epochs times
total_loss = 0
for _ in range(n_batches):
X_batch, Y_batch = mnist.train.next_batch(batch_size)
            # TO-DO: run optimizer + fetch loss_batch, e.g. (assuming the ops
            # built in steps 2-6 below are named X, Y, optimizer and loss):
            # _, loss_batch = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch})
total_loss += loss_batch
print('Average loss epoch {0}: {1}'.format(i, total_loss/n_batches))
print('Total time: {0} seconds'.format(time.time() - start_time))
print('Optimization Finished!') # should be around 0.35 after 25 epochs
# test the model
preds = tf.nn.softmax(logits)
correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32)) # need numpy.count_nonzero(boolarr) :(
n_batches = int(mnist.test.num_examples/batch_size)
total_correct_preds = 0
for i in range(n_batches):
X_batch, Y_batch = mnist.test.next_batch(batch_size)
        accuracy_batch = sess.run(accuracy, feed_dict={X: X_batch, Y: Y_batch})  # run the op directly so a scalar (not a list) is returned
total_correct_preds += accuracy_batch
print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))
"""
Explanation: Step 2: create placeholders for features and labels<br>
each image in the MNIST data is of shape 28*28 = 784<br>
therefore, each image is represented with a 1x784 tensor<br>
there are 10 classes for each image, corresponding to digits 0 - 9.<br>
Features are of the type float, and labels are of the type int<br>
Step 3: create weights and bias<br>
weights and biases are initialized to 0<br>
shape of w depends on the dimension of X and Y so that Y = X * w + b<br>
shape of b depends on Y<br>
Step 4: build model<br>
the model that returns the logits.<br>
this logits will be later passed through softmax layer<br>
to get the probability distribution of possible label of the image<br>
DO NOT DO SOFTMAX HERE<br>
Step 5: define loss function<br>
use cross entropy loss of the real labels with the softmax of logits<br>
use the method:<br>
tf.nn.softmax_cross_entropy_with_logits(logits, Y)<br>
then use tf.reduce_mean to get the mean loss of the batch<br>
Step 6: define training op<br>
using gradient descent to minimize loss
End of explanation
"""
|
CrowdTruth/CrowdTruth-core | tutorial/notebooks/.ipynb_checkpoints/Sparse Multiple Choice Task - Relation Extraction-checkpoint.ipynb | apache-2.0 | import pandas as pd
test_data = pd.read_csv("../data/relex-sparse-multiple-choice.csv")
test_data.head()
"""
Explanation: CrowdTruth for Sparse Multiple Choice Tasks: Relation Extraction
In this tutorial, we will apply CrowdTruth metrics to a sparse multiple choice crowdsourcing task for Relation Extraction from sentences. The workers were asked to read a sentence with 2 highlighted terms, then pick from a multiple choice list what are the relations expressed between the 2 terms in the sentence. The options available in the multiple choice list change with the input sentence. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
"""
import crowdtruth
from crowdtruth.configuration import DefaultConfig
"""
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
"""
class TestConfig(DefaultConfig):
inputColumns = ["sent_id", "term1", "b1", "e1", "term2", "b2", "e2", "sentence", "input_relations"]
outputColumns = ["output_relations"]
annotation_separator = "\n"
# processing of a closed task
open_ended_task = False
annotation_vector = [
"title", "founded_org", "place_of_birth", "children", "cause_of_death",
"top_member_employee_of_org", "employee_or_member_of", "spouse",
"alternate_names", "subsidiaries", "place_of_death", "schools_attended",
"place_of_headquarters", "charges", "origin", "places_of_residence",
"none"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
"""
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of all relations that were given as input to the crowd in at least one sentence
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
"""
data, config = crowdtruth.load(
file = "../data/relex-sparse-multiple-choice.csv",
config = TestConfig()
)
data['judgments'].head()
"""
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
"""
results = crowdtruth.run(data, config)
"""
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
"""
results["units"].head()
"""
Explanation: results is a dict object that contains the quality metrics for sentences, relations and crowd workers.
The sentence metrics are stored in results["units"]:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Sentence Quality Score")
plt.ylabel("Sentences")
"""
Explanation: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram:
End of explanation
"""
results["units"]["unit_annotation_score"].head(10)
"""
Explanation: The unit_annotation_score column in results["units"] contains the sentence-relation scores, capturing the likelihood that a relation is expressed in a sentence. For each sentence, we store a dictionary mapping each relation to its sentence-relation score.
End of explanation
"""
results["workers"].head()
"""
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
"""
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
"""
Explanation: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
"""
results["annotations"]
"""
Explanation: The relation metrics are stored in results["annotations"]. The aqs column contains the relation quality scores, capturing the overall worker agreement over one relation.
End of explanation
"""
|
mdeff/ntds_2016 | project/reports/youtube_fame/Create_videos_database.ipynb | mit | VIDEOS_REQUEST_ID_LIMIT = 50
CHANNEL_REQUEST_ID_LIMIT = 50
key1 = "KEY"
key2 = "KEY"
DEVELOPER_KEY = key2
import requests
import json
import pandas as pd
from math import *
import numpy as np
import tensorflow as tf
import time
import collections
import os
import timeit
from IPython.display import display
#where the database will be stored
folder = os.path.join('sql_database')
#how many videos per search we want to get (max=500, multiple of 50)
video_per_search = 150
"""
Explanation: Create video database
In this notebook we create our video database based on search terms.
The search terms have to be written in the "searches.txt" file, one search per line. We can have as many searches as we want.
The number of videos we get for each search line is set by video_per_search (150 here, up to a maximum of 500), used in the "Get a list of videos IDs" section.
End of explanation
"""
#----------------------------------------------------------------------------------------
# VIDEO REQUEST
# get the informations of the videos
# id_list : list of the ids of the videos we want to get the informations from
# db : the video dataframe where we will add the videos informations
#----------------------------------------------------------------------------------------
def videos_request(id_list,db):
nbr_videos = len(id_list)
#limit of request per "get"
nbr_requests = int(np.ceil(nbr_videos/VIDEOS_REQUEST_ID_LIMIT))
print('videos request(): number of videos:' , nbr_videos)
print('videos request(): number of requests:' , nbr_requests)
for i in range(nbr_requests):
print('videos request(): request:' , i+1, '/',nbr_requests)
limit = VIDEOS_REQUEST_ID_LIMIT
if limit > nbr_videos:
limit = nbr_videos
nbr_videos -= VIDEOS_REQUEST_ID_LIMIT
#concatenate the videos ids
req_id = id_list[i*VIDEOS_REQUEST_ID_LIMIT]
for j in range(limit-1):
req_id += ','+ id_list[i*VIDEOS_REQUEST_ID_LIMIT+j+1]
#we get "snippet" and "statistics"
url = 'https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics&id={}&key={}'.format(req_id, DEVELOPER_KEY)
r = requests.get(url);
#print(r.text) #if we want to print the json response
data = json.loads(r.text)
# if we don't have any error we should have items
if 'items' in data:
for item in data['items']:
serie = dict()
serie['id'] = item['id']
serie['channelId'] = item['snippet']['channelId']
serie['title'] = item['snippet']['title']
serie['thumbnailUrl'] = item['snippet']['thumbnails']['default']['url']
serie['viewCount'] = item['statistics']['viewCount']
#sometimes those are not send for some reason, we check if there are in the response
if 'likeCount' in item['statistics']:
serie['likeCount'] = item['statistics']['likeCount']
else:
serie['likeCount'] = '0'
if 'dislikeCount' in item['statistics']:
serie['dislikeCount'] = item['statistics']['dislikeCount']
else:
serie['dislikeCount'] = '0'
if 'commentCount' in item['statistics']:
serie['commentCount'] = item['statistics']['commentCount']
else:
serie['commentCount'] = '0'
serie['subsCount'] = 'na'
db = db.append(serie, ignore_index=True)
else:
print('videos request(): error: no items in data')
#return the new dataframe
return db;
#----------------------------------------------------------------------------------------
# VIDEO ID LIST FROM CHANNEL ID
# get a list of videos from the channel IDs
# id channel : list of channel IDs
#----------------------------------------------------------------------------------------
def videos_id_list_from_channel_id(id_channel):
#prepare the videos list to return
vid_list = [];
print('videos_id_list_from_channel_id(): number of channels:', len(id_channel))
for n in range (len(id_channel)):
channel_id = id_channel[n]
r=requests.get('https://www.googleapis.com/youtube/v3/search?part=id&publishedBefore=2016-07-07T00:00:00Z&maxResults=20&type=video&channelId={}&key={}'.format(channel_id, DEVELOPER_KEY))
data = json.loads(r.text)
#print(r.text)
print('videos_id_list_from_channel_id(): channel:', n,'/', len(id_channel))
totalVideo = data['pageInfo']['totalResults']
nbIter = ceil(totalVideo/20)
print('videos_id_list_from_channel_id(): totalvideo=',totalVideo,'nbiter=',nbIter)
print('videos_id_list_from_channel_id(): page: 1 /', nbIter)
for items in data['items']:
vid_list += [items['id']['videoId']]
if 'nextPageToken' in data:
nextPage = data['nextPageToken']
for i in range(nbIter-1):
print('videos_id_list_from_channel_id(): page:', i+1 ,'/', nbIter)
r=requests.get('https://www.googleapis.com/youtube/v3/search?part=id&maxResults=20&type=video&channelId={}&pageToken={}&key={}'.format(channel_id, nextPage, DEVELOPER_KEY))
data = json.loads(r.text)
if 'nextPageToken' in data:
nextPage = data['nextPageToken']
if len(data['items'])<1:
print('ERROR: no items')
for items in data['items']:
vid_list += [items['id']['videoId']]
return vid_list
#----------------------------------------------------------------------------------------
# CHANNEL REQUEST
# get the informations of the channel (only number of subscribers)
# id_list : list of the ids of the channels we want to get the informations from
# db : the channel dataframe where we will add the videos informations
#----------------------------------------------------------------------------------------
def channel_request(id_list,db):
nbr_channels = len(id_list)
nbr_requests = int(np.ceil(nbr_channels/CHANNEL_REQUEST_ID_LIMIT))
print('channel_request(): number of channels:' , nbr_channels)
print('channel_request(): number of requests:' , nbr_requests)
for i in range(nbr_requests):
print('channel_request(): request:' , i+1, '/',nbr_requests)
limit = CHANNEL_REQUEST_ID_LIMIT
if limit > nbr_channels:
limit = nbr_channels
nbr_channels -= CHANNEL_REQUEST_ID_LIMIT
#concatenate the videos ids
req_id = id_list[i*CHANNEL_REQUEST_ID_LIMIT]
for j in range(limit-1):
req_id += ','+ id_list[i*CHANNEL_REQUEST_ID_LIMIT+j+1]
url = 'https://www.googleapis.com/youtube/v3/channels?part=statistics&id={}&key={}'.format(req_id, DEVELOPER_KEY)
r = requests.get(url);
#print(r.text) #if we want to print the json response
data = json.loads(r.text)
for item in data['items']:
serie = dict()
serie['id'] = item['id']
serie['subscriberCount'] = item['statistics']['subscriberCount']
db = db.append(serie, ignore_index=True)
return db;
#----------------------------------------------------------------------------------------
# SEARCH VIDEOS
# Return a list of videos IDs based on a list of search terms
# q : list of search terms
# limit : number of videos to retreive for each search term
#----------------------------------------------------------------------------------------
def search_videos(q,limit=500):
vid_list = [];
print('search_videos(): number of videos:',len(q))
for n in range (len(q)):
print('search_videos(): request: ',n,'/',len(q))
        channel_id = q[n]  # note: despite its name, this variable holds the search query term
r=requests.get('https://www.googleapis.com/youtube/v3/search?part=id&order=relevance&relevanceLanguage=FR&relevanceLanguage=FR&safeSearch=none&maxResults=50&type=video&q={}&key={}'.format(channel_id, DEVELOPER_KEY))
data = json.loads(r.text)
nbIter = ceil(limit/50)
print('search_videos(): page: 1/', nbIter)
if 'items' in data:
for items in data['items']:
vidId =items['id']['videoId']
vid_list += [vidId]
else:
print(r.text)
if 'nextPageToken' in data:
nextPage = data['nextPageToken']
for i in range(nbIter-1):
print('search_videos(): page:', i+1 ,'/', nbIter)
r=requests.get('https://www.googleapis.com/youtube/v3/search?part=id&order=relevance&relevanceLanguage=FR&relevanceLanguage=FR&safeSearch=none&maxResults=50&type=video&q={}&pageToken={}&key={}'.format(channel_id, nextPage, DEVELOPER_KEY))
data = json.loads(r.text)
#print(r.text)
if 'nextPageToken' in data:
nextPage = data['nextPageToken']
if len(data['items'])<1:
print('ERROR: no items')
for items in data['items']:
vid_list += [items['id']['videoId']]
return vid_list
"""
Explanation: Functions
The functions used to get the channels and videos informations
End of explanation
"""
videos_list_database = pd.DataFrame(columns=['id'])
filename = os.path.join(folder, 'videos_list.sqlite')
videos_list_database.to_sql('videos_list', 'sqlite:///' + filename, if_exists='replace')
"""
Explanation: Database creation
Warning! To use only if we want to create new sql database ! Do not use if you want to add the information to existing dataset !
Create a new empty dataset for: VIDEOS ID LIST
End of explanation
"""
videos_database = pd.DataFrame(columns=['id', 'channelId', 'title', 'thumbnailUrl', 'viewCount', 'likeCount','dislikeCount','commentCount','subsCount'])
filename = os.path.join(folder, 'videos.sqlite')
videos_database.to_sql('videos', 'sqlite:///' + filename, if_exists='replace')
"""
Explanation: Create a new empty dataset for: VIDEOS
End of explanation
"""
start_time = timeit.default_timer()
#get the existing database
videos_list_database = pd.read_sql('videos_list', 'sqlite:///' + os.path.join(folder, 'videos_list.sqlite'), index_col='index')
#convert to a list of str
vid_list = videos_list_database['id'].tolist()
#get the searches requests from the file
searches_list = [line.rstrip('\n') for line in open('searches.txt')]
print('Number of search terms: ', len(searches_list))
print('Number of video per search: ', video_per_search)
print('Total number of videos expected: ', len(searches_list)*video_per_search)
#get a list of videos IDs based on the search terms
searched_videos_list = search_videos(searches_list, video_per_search)
#add the new list of IDs to the existing one
vid_list += searched_videos_list
print('Number of video IDs we received: ', len(searched_videos_list))
print('Number of video IDs in the database: ', len(vid_list))
#delete the duplicates
vid_list=list(set(vid_list))
print('Number of video IDs in the database without duplicates: ', len(vid_list))
#store the videos IDs into a dataframe
videos_list_database = pd.DataFrame(vid_list,columns=['id'])
#store the dataframe in a sqlite database
filename = os.path.join(folder, 'videos_list.sqlite')
videos_list_database.to_sql('videos_list', 'sqlite:///' + filename, if_exists='replace')
stop_time = timeit.default_timer()
print('Time = ', np.ceil(stop_time - start_time), 'sec')
"""
Explanation: Get a list of videos IDs
Get a list of videos IDs based on the search terms stored in the searches.txt file.
Add the retrieved IDs to the videos_id_list database.
The duplicates are deleted.
End of explanation
"""
start_time = timeit.default_timer()
#get the "videos_id_list" and "videos" database
videos_list_database = pd.read_sql('videos_list', 'sqlite:///' + os.path.join(folder, 'videos_list.sqlite'), index_col='index')
videos_database = pd.read_sql('videos', 'sqlite:///' + os.path.join(folder, 'videos.sqlite'), index_col='index')
#create a list of videos IDs from the database
video_list = videos_list_database['id'].tolist()
print('Number of video IDs: ', len(video_list))
#delete the duplicates
video_list = list(set(video_list))
print('Number of video IDs without duplicates: ', len(video_list))
#get the information of the videos
videos_database = videos_request(video_list, videos_database)
print('Number of videos in the database: ', len(videos_database))
#We delete the duplicates from the dataframe and reset the index
videos_database = videos_database.drop_duplicates('title')
videos_database = videos_database.reset_index(drop=True)
print('Number of videos in the database without duplicates: ', len(videos_database))
#store the information into the database
filename = os.path.join(folder, 'videos.sqlite')
videos_database.to_sql('videos', 'sqlite:///' + filename, if_exists='replace')
stop_time = timeit.default_timer()
print('Time = ', np.ceil(stop_time - start_time), 'sec')
"""
Explanation: Get the video information
Get the information for the videos listed in the video_id_list database, except the number of subscribers. We cannot get the number of subscribers directly; we will get it in the next step.
Information retrieved:
+ Channel ID
+ Video title
+ Thumbnail URL (120x90)
+ View count
+ Like count
+ Dislike count
+ Comment count
End of explanation
"""
start_time = timeit.default_timer()
#get the videos database
videos_database = pd.read_sql('videos', 'sqlite:///' + os.path.join(folder, 'videos.sqlite'), index_col='index')
#create a dataframe to store the number of subs
channels_database = pd.DataFrame(columns=['id','subscriberCount'])
#create a list of channels IDs
channels_list = videos_database['channelId'].tolist()
#delete the duplicates
channels_list=list(set(channels_list))
print('Number of channels:',len(channels_list))
#get the number of subscribers for each channel
channels_database= channel_request(channels_list,channels_database);
#just to be sure we delete the duplicates
channels_database = channels_database.drop_duplicates('id')
channels_database = channels_database.reset_index(drop=True)
channelsNotFound = 0
#for each video the get the subs count
for i in range(len(videos_database)):
channelId = videos_database['channelId'][i]
ans = channels_database.query("id == @channelId")
if not i%100:
print('Sub counter update: ',i,'/',len(videos_database))
#if we found the channel
    if len(ans) == 1:
        # use .loc to avoid pandas chained-assignment warnings
        videos_database.loc[i, 'subsCount'] = int(ans['subscriberCount'].iloc[0])
    else:
        videos_database.loc[i, 'subsCount'] = 0
        channelsNotFound += 1
print('Videos without correct subs count :',channelsNotFound)
#store the updated database
filename = os.path.join(folder, 'videos.sqlite')
videos_database.to_sql('videos', 'sqlite:///' + filename, if_exists='replace')
stop_time = timeit.default_timer()
print('Time = ', np.ceil(stop_time - start_time), 'sec')
#display(channels_database)
#display the videos database
videos_database = pd.read_sql('videos', 'sqlite:///' + os.path.join(folder, 'videos.sqlite'), index_col='index')
display(videos_database)
"""
Explanation: Get the number of subscribers
Get the number of subscribers of the channel hosting the video. This step has to be done separately because we need to get the channel information to retrieve the number of subscribers.
End of explanation
"""
from PIL import Image
for i in range(len(videos_database['thumbnailUrl'])): #len(videos_database['thumbnailUrl'])
f = open('Image3/'+videos_database['id'][i]+'.png','wb')
f.write(requests.get(videos_database['thumbnailUrl'][i]).content)
f.close()
imag = pd.DataFrame(columns=['imag'])
for i in range(len(videos_database['thumbnailUrl'])):
    serie = dict()  # the 'imag' key is filled in below
img = Image.open('Image3/'+videos_database['id'][i]+'.png')
p = np.array(img,'float32')
serie['imag'] = p.flatten()
imag=imag.append(serie,ignore_index=True)
filename = os.path.join(folder, 'imag.sqlite')
imag.to_sql('imag', 'sqlite:///' + filename, if_exists='replace')
print('done')
"""
Explanation: Get the images database
From the url of the thumbnails, we load the images and then we save them in a database for the CNN algorithm. It uses PIL, which is not installed on our dockers. But we can run this cell on Jupyter.
End of explanation
"""
|
jazracherif/algorithms | tsp/tsp.ipynb | mit | import numpy as np
file = "tsp.txt"
# file = "test2.txt"
data = open(file, 'r').readlines()
n = int(data[0])
graph = {}
for i,v in enumerate(data[1:]):
graph[i] = tuple(map(float, v.strip().split(" ")))
dist_val = np.zeros([n,n])
for i in range(n):
for k in range(n):
dist_val[i,k] = dist_val[k,i] = np.sqrt((graph[k][0]-graph[i][0])**2 + (graph[k][1]-graph[i][1])**2)
print (graph)
"""
Explanation: Traveling Salesman Problem
In this assignment you will implement one or more algorithms for the traveling salesman problem, such as the dynamic programming algorithm covered in the video lectures.
The file tsp.txt describes a TSP instance. The first line indicates the number of cities. Each city is a point in the plane, and each subsequent line indicates the x- and y-coordinates of a single city.
The distance between two cities is defined as the Euclidean distance --- that is, two cities at locations (x,y) and (z,w) have distance √((x−z)^2+(y−w)^2 between them.
In the box below, type in the minimum cost of a traveling salesman tour for this instance, rounded down to the nearest integer.
OPTIONAL: If you want bigger data sets to play with, check out the TSP instances from around the world https://www.tsp.gatech.edu/world/countries.html. The smallest data set (Western Sahara) has 29 cities, and most of the data sets are much bigger than that. What's the largest of these data sets that you're able to solve --- using dynamic programming or, if you like, a completely different method?
HINT: You might experiment with ways to reduce the data set size. For example, trying plotting the points. Can you infer any structure of the optimal solution? Can you use that structure to speed up your algorithm?
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
values = list(graph.values())
y = [values[i][0] for i in range(len(values))]
x = [values[i][1] for i in range(len(values))]
plt.scatter(y,x)
plt.show()
import collections
def to_key(a):
my_str = ""
for i in a:
my_str += str(int(i))
return my_str
def to_subset(v, n):
a = np.zeros(n)
a[v] = 1
return a
def create_all_subset(n):
A = collections.defaultdict(dict)
for m in range(1,n):
for a in (itertools.combinations(range(n), m)):
key = a + tuple([0 for i in range(n-m)])
print (a, tuple([0 for i in range(n-m)]), key, m, n)
for j in range(n):
A[to_key(key)][j] = np.inf
A[to_key(to_subset(0,n))][0] = 0
return A
# res= to_subset([2,3],5)
# print (res)
# print (to_key(res))
# A = create_all_subset(3)
# print (A)
# print (index_to_set(10,'25'))
# print(set_to_index([1,3]))
import itertools
def powerset(iterable):
"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(1,len(s)+1))
def index_to_set(index, n='8'):
    # convert a subset index (bitmask) back to the sorted list of member indices
    fmt = '{0:0'+n+'b}'
    bits = list(fmt.format(index))
    bits.reverse()  # bit 0 is the lowest-order (rightmost) character
    bits = np.asarray(bits, dtype=int)
    return list(np.where(bits == 1)[0])
def set_to_index(my_set):
# i = [1, 5, 7]
ret = 0
for i in my_set:
ret += 2**i
return ret
print ("~~ Test")
# print (set_to_index([1]))
# print (index_to_set(set_to_index([1])))
ex_all_sets = powerset(range(5))
for s in ex_all_sets:
print ("~~ Original set:", s)
print ("index:", set_to_index(s))
print ("recovered set:", index_to_set(set_to_index(s),'5'))
"""
Explanation: Draw points
End of explanation
"""
A = np.full([2**n, n], np.inf)
A[set_to_index([0]),0]=0
for i in range(0, n):
A[set_to_index([i]),i] = dist_val[i,0]
print (set_to_index([i]), dist_val[i,0])
"""
Explanation: Initialize the 2-D Array
End of explanation
"""
from tqdm import tqdm
def _dist(k, j):
return np.sqrt((graph[k][0]-graph[j][0])**2 + (graph[k][1]-graph[j][1])**2)
FULL = range(n)
for m in range(1,n):
# all_sets = powerset(range(1,m))
all_sets = itertools.combinations(FULL, m+1)
print ("Subset Size:",m)
for _set in all_sets:
if not _set:
continue
_set = list(_set)
# print ("Len Set", len(_set))
set2_idx = set_to_index(_set)
for j in _set:
_set2 = _set.copy()
_set2.remove(j)
if j==0 or not _set2:
continue
# print ("_set2", _set2)
_set2_idx = set_to_index(_set2)
# print ("handle Set", _set2, "idx",_set2_idx, "j:", j)
minval = np.inf
for k in _set2:
# print ("idxSet:", _set2_idx, "k:", k, "dist", A[_set2_idx,k])
val = A[_set2_idx,k] + dist_val[k,j]
if val < minval:
minval = val
# print ("minval",minval)
A[set2_idx,j] = minval
# print (A)
my_set = [i for i in range(n)]
print ("Full Set", my_set, set_to_index(my_set))
minval = np.inf
for j in range(1,n):
val = A[set_to_index(my_set),j] + dist_val[j,0]
if val < minval:
minval = val
print ("minval", minval)
# print (A[set_to_index(my_set),:])
"""
Explanation: Run the Dynamic Programming algorithm
End of explanation
"""
|
IndicoDataSolutions/SuperCell | plotlines/plotlines.ipynb | mit | import sys
import os
import pandas as pd # dataframes to store text samples + scores
# Plotting
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn # for more appealing plots
seaborn.set_style("darkgrid")
# Pretty printing
import pprint
pp = pprint.PrettyPrinter(indent=4)
# indico API
import indicoio # https://pypi.python.org/pypi/IndicoIo; install using `pip install -U indicoio`
# Is an indico API key defined?
if not indicoio.config.api_key:
sys.exit("Unable to find an indico API key, please add one to your python environment. Aborting.")
"""
Explanation: <p style="text-align: right"><i>D.Kuster<br>July 2015</i></p>
Exploring the shapes of stories with Python and machine learning APIs
A few months ago, a great video of Kurt Vonnegut circulated the web. He describes an idea for plotting the simple shapes of stories as good vs. ill fortune across the length of the story. He says:
"There's no reason why these simple shapes of stories can't be fed into computers"
by which he probably meant:
"Once you have a sequence of numerical scores, it is easy to draw these lines using computers"
<p style="text-align: center" font color="red"><span style="color:gray">click the image -> video on YouTube.com</span></p>
...but how to get the scores? When I watched the video, I just happened to be developing natural language processing tools for analyzing text. Swimming in that context, it was very natural to wonder:
"if we don't have Kurt Vonnegut on hand to interpret stories through his own experience, how can we plot the shapes of stories directly from the text itself?"
The problem (of plotting the shapes of stories) is similar to sentiment analysis on the narrator's viewpoint. But it was not clear if existing sentiment models were good enough to pick up on this signal across the length of a long story. Specifically, language is full of subtle contextual cues that can interact with each other. Sentiment models can struggle to balance the interactions amongst many positive vs. negative terms and return stable predictions over long-range context windows. And even once you have the algorithmic bits worked out, validating the result can be tricky when human interpretations differ.
This notebook implements a series of experiments using indico's machine learning APIs to quickly test hypotheses. If you simply want to consume the story, keep on scrolling. If you want to tinker, maybe do some experiments of your own...this post was written as a Jupyter notebook for exactly that reason. Grab the notebook here on Github. And when you discover something interesting, please let us know; we love this stuff.
<img src="images/vonnegut-overlay.png">
TL;DR
Using two hacks and a multinomial logistic regression model of n-grams with TF-IDF features, a pre-trained sentiment model can score the long-range sentiment of text of stories, books, and movies. The models do a reasonable job of summarizing the "shapes of stories" directly from text. This method can be easily extended to search across databases to find stories with similar plot shape as the query story.
CONTENTS
Background
Define the problem (outcomes, technical requirements)
Simple experiment to learn what makes this problem hard
Add functionality to satisfy requirements
Identify a good use case (+data) for testing against human intepretation
Hack: sliding window to sample context
Smoothing methods
Hack: Dynamic time warping to compare plotlines across stories
Compare many stories to find similar plotlines
The Disney movie plot formula (or, "How would Vonnegut draw the shapes of Disney stories?")
1. Background
If you haven't watched the Vonnegut video yet, "listen and learn" is always a good first step. Few lectures are as short and sweet as this one!
Vonnegut gave us a method to describe the shapes of stories by plotting the protagonist's current fortune (good vs. ill) from the beginning to the end of the story. His method requires a human to read each story and interpret meaning through their own personal context and understanding of stories in general. We're going to automate the scoring method using python and machine learning models for sentiment analysis.
To have any hope of automating a solution, we need clear, specific, solvable technical requirements. Then we'll write code to implement the functionality that satisfies each requirement.
2. Define the problem
The first step to solving a problem is to define it clearly in terms of desired outcomes and technical requirements. Let's do that next. We'll periodically update our progress by incrementing the numbers inside the brackets to indicate percent completion of each requirement.
To draw the shape of a story, we need an automated method that emits scores as it scans a text from beginning to end. Scores represent the positivity of the narrator.
Technical requirements
The method must be:
[ 0 ] Accurate. Any (human) reader should be able to validate accuracy for stories they have read by simply inspecting and interpreting the text; scores should be high (~1.0) when good stuff is happening, low (~0.0) when bad stuff is happening, and neutral (~0.5) otherwise.
[ 0 ] Sensitive to rapid (real) changes in fortune. Interesting stories tend to have abrupt changes of fortune, and this should be reflected in the plot line. "Sensitivity" is used here in the general engineering sense, not in the machine learning definition of recall (i.e., true positive rate)
[ 0 ] Interpretable. Simplicity is what makes Vonnegut's plotlines so compelling. We don't necessarily need such simple shapes, but our plots must enable meaningful conclusions about what is happening across the story.
[ 0 ] Visual to enable quick evaluation and comparison against Vonnegut's "shapes of stories".
[ 0 ] Automated. Requires no human judgment, just input text.
[ 0 ] Easy to use across a variety of texts.
We can define a few extra "nice to have" features too:
[ 0 ] Unequal lengths can be compared, because it is rare for two texts of interest to have exactly the same length.
[ 0 ] Enables experimentation so folks can try it on their own texts, build new visualizations, etc.
3. Start with a simple experiment (MVP)
We start with a very simple experiment. Then we'll evaluate our technical requirements and prioritize how we want to extend and add features. There are (at least) three good reasons to start with a simple experiment:
It forces us to get everything connected and working.
The process of implementing even the simplest working version will reveal real friction points in our implementation---we'll learn what are the hard and unsolved parts and start developing the intuition for how focused effort will yield results. Testing usually beats assumptions, and profiling hedges against premature optimization.
It also yields a product to which other people can react. Feedback from multiple sources is less likely to suffer the biases of any given individual.
Import modules
End of explanation
"""
# Sample text from the beginning of `The Lion King` fan-transcribed script
input_text = """Rafiki puts the juice and sand he collects on
Simba's brow---a ceremonial crown. He then picks
Simba up and ascends to the point of Pride Rock.
Mufasa and Sarabi follow. With a crescendo in the
music and a restatement of the refrain, Rafiki
holds Simba up for the crowd to view."""
"""
Explanation: Note: If this is your first time using an indico API, register for a free API key at indico.io. There are several ways to define your key for python; a good option is to add an environment variable to define your personal API key. On Linux or OSX, you can simply add a line to ~/.bash_profile to define your personal API key. For example: export INDICO_API_KEY=<your api key>
Simple example input
Link to full transcription of The Lion King movie, transcribed by fans
End of explanation
"""
score = indicoio.batch_sentiment([input_text]) # make an API call to get sentiment score for the input text
print(score)
"""
Explanation: Test ourselves (as human sentiment interpreters) by scoring the input text.
As a father of young children and an older brother who frequently pulled babysitting duty, I've watched The Lion King a gazillion times. So I can say that input_text is reservedly positive, and it occurs at the beginning of The Lion King movie, when other positive things are happening in the movie (in general). As an English speaker, I don't read anything in the phrase as contradictory or particularly polarizing. The phrase "crescendo in the music" is a cue that something important (positive?) is happening, and life experience suggests the presentation of a new prince is a reservedly positive thing. Polite, modest smiles all around. If a sentiment score of 0.0 is totally negative, 1.0 is very positive, and 0.5 is neutral, I'd expect a sentiment score for the input text to be: 0.6 < score < 0.9.
You should make your own estimate too...do it now, before you see the predictions!
End of explanation
"""
def sample_window(seq, window_size = 10, stride = 1):
    """
    Generator slides a window across the input sequence
    and returns samples; window size and stride define
    the context window
    """
    for pos in range(0, len(seq), stride): # range (not xrange) for Python 3
        yield seq[pos : pos + window_size]
def merge(seq, stride = 4):
    """
    Generator strides across the input sequence,
    combining the elements between each stride.
    """
    for pos in range(0, len(seq), stride):
        yield seq[pos : pos + stride]
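Since these two generators do all the sampling work below, here is a quick demonstration on a toy sentence. The helpers are redefined so the snippet runs on its own, and the `" | "` separator is purely for display:

```python
def sample_window(seq, window_size=10, stride=1):
    # slide an overlapping window across seq
    for pos in range(0, len(seq), stride):
        yield seq[pos : pos + window_size]

def merge(seq, stride=4):
    # group seq into non-overlapping chunks
    for pos in range(0, len(seq), stride):
        yield seq[pos : pos + stride]

words = "the quick brown fox jumps over the lazy dog".split()

# merge: non-overlapping chunks of 3 words each
chunks = [" ".join(w) for w in merge(words, 3)]
print(chunks)  # ['the quick brown', 'fox jumps over', 'the lazy dog']

# sample_window: overlapping windows of 2 chunks, sliding 1 chunk at a time;
# the final window is a partial tail as the window runs off the end
samples = [" | ".join(s) for s in sample_window(chunks, 2, 1)]
print(samples)
```

Note how each interior chunk appears in more than one sample: that overlap is what preserves context across chunk boundaries.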
"""
Explanation: Validate the result
Is the score reasonable? Yes, everything agrees with my self-test.
OK, that was pretty easy! Now that we have a sentiment model connected and working on a toy example, it's time to try something more realistic.
We only ran a single test (n=1), so we cannot tell the difference between a reproducible method and a happy accident. Let's apply what we learned with this simple experiment to the technical requirements:
How the indico sentiment analysis API works
The sentiment analysis API abstracts the machine learning process into an API call, so you can use a pre-trained model on any input text. Send a list of strings, get back a list of scores. It is optimized to be fast, robust, and "accurate enough" (90% accuracy on IMDB), so it is well suited to the task at hand, where we want to score many samples across a story.
Behind the scenes, huge corpora of labeled text data were used to train a multinomial logistic regression model to differentiate between text labeled as positive vs. text labeled as negative. When you send a list of strings to the API, each string is tokenized into n-grams to compute TF-IDF (term frequency-inverse document frequency) features. Then the model uses the pre-trained features to predict the positivity of the input text, and return the score. The models are deployed on robust, load-balanced, distributed architecture, so multiple users can spray multiple requests and reliably get results.
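For intuition, the core recipe described above (TF-IDF n-gram features feeding a logistic regression) can be sketched in a few lines of scikit-learn. This is a rough stand-in trained on four toy documents, not indico's actual model, features, or training data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Four toy documents; a real model is trained on huge labeled corpora
texts = ["what a wonderful, delightful movie",
         "terrible plot and awful acting",
         "I loved every minute of it",
         "boring, I hated every minute"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram TF-IDF features
    LogisticRegression(),
)
model.fit(texts, labels)

# predict_proba(...)[0, 1] plays the role of the API's 0-to-1 sentiment score
print(model.predict_proba(["a wonderful, delightful story"])[0, 1])
```

With only four training examples the probabilities are not meaningful beyond "above or below 0.5", but the pipeline shape (tokenize, weight by TF-IDF, classify) is the same idea at toy scale.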
Why use an API instead of implementing from scratch (using scikit-learn, etc)?
Because it abstracts away a whole mess of data collection, cleaning, feature engineering, pre-training, regularization, validation, deployment, and performance/scaling. Using the API above, you can use one line to get sentiment scores. For cases where you need to tweak things to get the best accuracy possible, or need to train on specific data for a specific purpose or domain, you should probably build your own custom model. But even then, pre-trained models can be great for running quick experiments to explore your data and quickly discover if it is worth investing in a custom model.
Evaluate requirements: simplest experiment
[ 5 ] Accurate. The score was reasonable for the one-line input text, but this toy example doesn't give us enough to evaluate performance.
[ 0 ] Sensitive. With only a single score, we have no way to evalute plotlines, let alone sensitivity. We need more than a single score, we need to emit a sequence of scores over the length of the input story.
[ 25 ] Interpretable. Easy to interpret a single score, but it doesn't help us understand the shape of the story.
[ 0 ] Visual. Nothing to visualize yet.
[ 50 ] Automated. All plumbed up and working, but we have to manually specify the input text. Need to add the functionality to automatically sample from an input text.
[ 100 ] Easy to use. Easy one-liner to get sentiment scores.
Nice to have
[ 0 ] Unequal lengths. Only tested one small snippet from one story.
[ 50 ] Enables experimentation. Redefine the input string, test anything you want (manually). Need an automated solution here, ideally something that operates at the granularity of a whole story.
Also, we haven't yet satisfied the main goal: draw the shape of a story using an automated method that emits scores as it scans a text from beginning to end. Technically, we could cut a story into chunks and paste each chunk in as input_text. But that would be silly, the whole point is to automate the method so we can compare to Vonnegut's shapes of stories. Let's add functionality to automatically generate samples from a text, retrieve sentiment scores for each sample, and plot the results for comparison to Vonnegut's shapes of stories.
Next step: Implement auto-sampling functionality
4. Add functionality: auto-sampling
To extend functionality to scan across the text and emit a sequence of scores, we'll need to upgrade our code to use data structures that can contain sequences (rather than just single values).
We also need a strategy for generating samples from a chunk of input text. A naive strategy is to slice the input text at regularly spaced intervals. Although it might seem appealing for the simplicity, slices introduce discontinuities in the context of the story, and this will cause problems with sentiment analysis. In other words, how should we choose where we make cuts so that we don't destroy information at the boundary between chunks? Sentiment analysis can be tricky, and we want to make sure we are preserving all the information we can.
One useful solution to eliminate the slice boundaries is to sample from a sliding context window. This will require us to evaluate a greater number of samples, but the sentiment analysis model is fast, so no worries!
Hack 1: sliding window to sample context
Helper functions
End of explanation
"""
d = {} # dictionary to store results (regardless of story lengths)
# Parse text
words = input_text.split() # simplest tokenization method
# Merge words into chunks
### Note: this defines the granularity of context for sentiment analysis,
### might be worth experimenting with other sampling strategies!
merged_words = [" ".join(w) for w in merge(words, 5)]
# Sample a sliding window of context
delim = " "
samples = [delim.join(s) for s in sample_window(merged_words, 3, 1)]
pp.pprint(samples) # comment this line out for big input!
d['samples'] = samples
# Score sentiment using indico API
print("\n Submitting %i samples..." % (len(samples)))
scores = indicoio.batch_sentiment(samples)
d['scores'] = scores
print(" ...done!")
"""
Explanation: Core logic/script
End of explanation
"""
# Load the dict into a dataframe; overkill for now but useful below
df = pd.DataFrame()
for k, v in d.items(): # Python 3: items() replaces iteritems()
    df[k] = pd.Series(v) # keys -> columns, values -> rows
df.plot(figsize = (16,8))
df # display the table of values
"""
Explanation: Visualize the results
End of explanation
"""
# define your corpus here as a list of text files
corpus = ["aladdin.txt",
"lionking.txt",
"mulan.txt",
"hunchback.txt",
"rescuersdownunder.txt",
"sleepingbeauty.txt",
"littlemermaid.txt"]
# New dict to hold data
d = {}
# Map names to input files on filesystem
root_fp = os.getcwd()
corpus_fp = os.path.join(root_fp, "texts") # put your text files in ./texts
# print("Looking for input text files: '%s'" % corpus_fp)
for t in corpus:
fp = os.path.join(corpus_fp, t)
print(" Reading '%s'" % t)
    with open(fp, 'r') as f: # text mode, so the string transformations below work
text_name = t.split(".")[0] # strip .txt file extensions
sample_col = text_name + "_sample"
score_col = text_name + "_sentiment"
lines = [] # list to receive cleaned lines of text
# Quick text cleaning and transformations
for line in f:
            if not line.strip(): # there are many blank lines in movie scripts, ignore them
continue
else:
line = line.replace("\n", " ").lower().strip().strip('*') # chain any other text transformations here
lines.append(line)
print(" %i lines read from '%s' with size: %5.2f kb" % (len(lines), t, sys.getsizeof(lines)/1024.))
# Construct a big string of clean text
text = " ".join(line for line in lines)
# split on sentences (period + space)
delim = ". "
sentences = [_ + delim for _ in text.split(delim)] # regexes are the more robust (but less readable) way to do this...
merged_sentences = [delim.join(s) for s in merge(sentences, 10)] # merge sentences into chunks
# split on words (whitespace)
delim = " "
words = [_ for _ in text.split()]
merged_words = [" ".join(w) for w in merge(words, 120)] # merge words into chunks
# Generate samples by sliding context window
delim = " "
samples = [delim.join(s) for s in sample_window(merged_words, 10, 1)]
d[sample_col] = samples
print(" submitting %i samples for '%s'" % (len(samples), text_name))
# API to get scores
scores = indicoio.batch_sentiment(samples)
d[score_col] = scores
print("\n...complete!")
df = pd.DataFrame()
# for k, v in d.items():
for k, v in sorted(d.items()): # sort to ensure dataframe is defined by longest sequence, which happens to be Aladdin
    df[k] = pd.Series(v) # keys -> columns, values -> rows
print(len(df))
df.head(5) # inspect the first 5 rows...looks OK?
# inspect the last 5 rows;
# since sequences are of unequal length, there should be a bunch of NaN's
# at the end for all but the longest sequence
df.tail(5)
"""
Explanation: What just happened?
Check out the visual explanation below!
Read some text and split it into words. This defines the granularity of context for sentiment analysis, so if you want to use a different sampling strategy, this would be the place to change it up by splitting on different delimiters. You can also use a fancier tokenizer or lemmatizer instead of the "split on whitespace" strategy here.
Merge the list of words into chunks of text. We employ a "split and merge" strategy for two reasons: it works reliably for most any text composed of words, and it gives us independent control over the length and stride when creating samples for sentiment analysis---makes it easy to experiment with different sizes of context windows, different granularities (i.e., chunk sizes), different sampling resolution (i.e., stride).
As humans, we tend to interpret context without even thinking about it...but machines need explicit samples across contexts to approximate that kind of behavior. To make sure we're sampling a reasonably stable context, we slide a window over the list of merged words to generate a bunch of overlapping samples.
Send the list of samples to the indico API for scoring, get back a list of scores.
<img src="images/animated.gif">
Evaluate requirements: added sampling via sliding context window
[ 5 ] Accurate. Unclear, hard to evaluate objectively. Given the small chunks of text being passed in here, the model is probably too sensitive. We need a better test that can be easily validated against human interpretation.
[ 25 ] Sensitive. On the super-simple input, we see plenty of dynamic range, but the response could be driven by either signal or noise. Real signals in sentiment models come from a few strong words, or from many weak words. We fed the model only short phrases and minimal context, so it has no choice but to respond to the few words that are present in the input. Because the sample input didn't happen to contain any particularly strong words, noise has a big impact on the output.
[ 100 ] Interpretable. Table view + line graph makes this easy enough for a human to scan and evaluate.
[ 100 ] Visual. It's just a line graph.
[ 100 ] Automated. All plumbed up and working!
[ 100 ] Easy to use. Pretty easy so far. But to evaluate more than the simple example, we'll need to abstract some things. We'll be careful to balance flexibility vs. usability.
Nice to have
[ 0 ] Unequal lengths. Only tested one sample.
[ 100 ] Enables experimentation. Redefine the input string, test anything you want.
Importantly, we satisfied the main goal of drawing the shape of a story using a fully-automated method that emits scores as it scans a text from beginning to end. We can compare these to Vonnegut's shapes of stories! Now let's improve the accuracy. We'll do it by finding more robust data, making our tests more rigorous, and validating the results in an experiment where human interpretation is relatively easy and transparent.
Next step: better data --> better tests
5. Assemble a corpus of data to validate subjective (human) interpretation
Now that we have a pipeline for drawing the shapes of stories, we need a better test. The first step is to find data.
To validate Vonnegut's hypothesis, I initially wanted to score the same stories he described. But I've only read Hamlet once, and that was more than enough. Vonnegut's stories may be archetypal fiction plots, but for me it was very hard to validate performance when I couldn't remember the context and sequence of events from those stories. Wait a second...he mentioned the Cinderella story, everyone knows that one, right?
I searched the web for a canonical version of Cinderella, but quickly discovered that the myth has dozens of variations. Having been exposed to many versions since childhood, it was impossible to attribute my interpretation of the Cinderella story to any single context or version. Turns out academics hypothesize how each version of Cinderella reflects the culture from whence it came...what a mess! This is the opposite of a good test for validating the performance of our plotlines. We want authoritative versions.
Finally, thinking "what is the most popular version of Cinderella?"...I definitely remember watching Disney's version of Cinderella! Were movie scripts a better test than written stories?
It turns out that movies have a number of useful constraints for the task at hand. Written stories are typically consumed in many sittings, many contexts, over many hours, but movies are:
time-boxed and consumed all in a single sitting/context
sequences of events are more memorable when they occur on-screen as audio-visual media vs. written as text (for me, at least)
have similar lengths
every movie has a script, either the original or transcriptions produced by fans
Unfortunately, I couldn't find a good script of Disney's 1950 version of Cinderella freely available on the web. However, fans have transcribed many other movies, including The Lion King, Aladdin, The Little Mermaid, Sleeping Beauty, and more:
The web's largest movie script resource
Top Google hit for "Disney Movie Scripts"
Extend code to iterate over each story in a corpus
Now that we have multiple texts, we need to abstract the simple code above to iterate across a corpus of text files. Dataframes are a good data structure for storing and manipulating the result here. We'll also need to add some cleaning/munging code since movie scripts from the internet can be messy.
End of explanation
"""
ax = df['lionking_sentiment'].plot(colormap = 'jet', figsize=(16,8))
ax.set_xlabel("sample")
ax.set_ylabel("sentiment_score")
ax.set_title("Lion King")
"""
Explanation: Results!
Let's look at The Lion King since many people are familiar with the story/movie and will therefore be able to validate the results.
End of explanation
"""
# Pick out a few stories to compare visually
combo = pd.DataFrame()
combo['lionking_sentiment'] = df['lionking_sentiment']
combo['aladdin_sentiment'] = df['aladdin_sentiment']
# combo['littlemermaid_sentiment'] = df['littlemermaid_sentiment']
ax2 = combo.plot(colormap='jet', figsize = (16,8)) # ignore mismatched sequence length
ax2.set_xlabel("sample")
ax2.set_ylabel("sentiment_score")
"""
Explanation: <img src="images/kv-sm.jpg">
<img src="images/overlay-LK.gif">
End of explanation
"""
# Pull out a single story here to test smoothing methods
df_sentiment = df["lionking_sentiment"].dropna()
df_roll = df_sentiment.rolling(10).mean() # pd.rolling_mean was removed from pandas; use the rolling window API
ax = df_roll.plot(colormap = 'jet', figsize = (16, 8))
ax.set_xlabel("sample")
ax.set_ylabel("sentiment_score")
ax.set_title("Lion King, smoothed with rolling mean")
ax.set_xlim((0, 110))
"""
Explanation: Looks like The Lion King and Aladdin have very similar plot lines, from the sequencing (song, then something happens) to the big hump of hero happiness followed by the valley of gloom. Are we detecting some kind of Disney formula for movies/stories? It's hard to compare these visually when the plotlines are happening across different lengths. We'll need better way to compare, some kind of similarity metric that is robust to unequal lengths.
Note: due to the way we're sampling context with a sliding window, there is a tendency for sentiment scores to return towards neutral at the end of the story as the window runs off the end and starts losing context.
Evaluate progress
[ 100 ] Accurate. Bigger/better test data and resampling with a sliding window seems to have helped quite a lot! Feel free to tweak parameters (chunk size, window width, stride) to see how it affects sentiment scores. The values above seem to work reasonably well for movie scripts and fiction novels. Accuracy is rarely perfect, but for the task of validating plotlines of stories, at least in my own personal assessment, we're getting a more objective and informative result than Vonnegut's plotlines, so call the accuracy requirement satisfied.
[ 65 ] Sensitive. The sliding window helped a lot here too, where we can see the sentiment change quickly between positive and negative. But the plots are probably too complex and noisy compared to Vonnegut's simple shapes of stories...then again, Vonnegut's shapes might be too smooth, sacrificing relevant details in an attempt at generality. Too bad we can't ask him to do an apples-to-apples comparison here!
[ 90 ] Interpretable. We lost some power to interpret the plotlines when we started comparing stories. We need a way to map stories of different lengths into some common space where comparisons are more meaningful.
[ 100 ] Visual.
[ 100 ] Automated.
[ 100 ] Easy to use.
Nice to have
[ 10 ] Unequal lengths. We're handling sequences of different lengths using dictionaries and dataframes with NaN values, but this doesn't help us visually compare stories. Let's fix that.
[ 100 ] Enables experimentation. Redefine the input string, test anything you want.
Next steps: Explore smoothing methods
Smoothing kernels
Smoothing kernels are an obvious thing to try for filtering out some high-frequency noise, and several reasonable methods are available within pandas. For unequal length sequences, for now we'll simply ignore the NaN's and plot as much sequence as available for each story.
End of explanation
"""
import scipy.stats as stats
import statsmodels.api as sm
lowess = sm.nonparametric.lowess(df_sentiment, df_sentiment.index, frac = 0.05)
fig = plt.gcf()
plt.plot(df_sentiment.index, df_sentiment, '.') # plot the values as dots
plt.plot(lowess[:, 0], lowess[:, 1]) # plot the smoothed output as solid line
fig.set_size_inches(16,8)
plt.xlabel('sample')
plt.ylabel('sentiment_score')
plt.title('Lion King, smoothed with Lowess')
plt.xlim((0, 110))
"""
Explanation: Alternative smoothing method: Lowess
The moving average smoothing isn't bad, and we're getting pretty close to Vonnegut's simple shapes! But since we used a sliding window to resample the text for sentiment analysis, another sliding window method is probably not the best choice here, as it could falsely convey a stronger confidence or stability of prediction than is justified by scores. We've also traded sensitivity for smoothness. Sentiment tends to be sensitive to the balance of positive vs. negative weights, so noise is probably a useful quantity to track, especially since we don't yet know how it varies across stories. Also, the bigger kernels take a while to accumulate samples, eating into the beginning of a story where interesting stuff can happen. Another method might be a better choice---one that doesn't consume data to build up statistical moments. Let's give Lowess smoothing a try and compare to the raw scores.
End of explanation
"""
import dtw # `pip install dtw`
lionking = df['lionking_sentiment'].dropna()
aladdin = df['aladdin_sentiment'].dropna()
print(len(lionking), len(aladdin))
dist, cost, path = dtw.dtw(lionking, aladdin) # compute the best DTW mapping
print("Minimum distance found: %-8.4f" % dist)
from matplotlib import cm # custom colormaps
from matplotlib.pyplot import imshow
imshow(cost.T, origin = 'lower', cmap = cm.hot, interpolation = 'nearest')
plt.plot(path[0], path[1], 'w') # white line shows the best path
plt.xlim((-0.5, cost.shape[0]-0.5))
plt.ylim((-0.5, cost.shape[1]-0.5))
plt.xlabel("lion king")
plt.ylabel("aladdin")
"""
Explanation: Evaluate progress: smoothing
[ 100 ] Accurate.
[ 100 ] Sensitive. There's usually a tradeoff of sensitivity vs. noise when evaluating smoothing methods, and that seems true here. Lowess smoothing gives a better balance of sensitivity vs. visual simplicity to compare against Vonnegut's shapes of stories. The moving window methods sacrifice too much and obfuscate the beginnings of stories. We'll use Lowess.
[ 100 ] Interpretable.
[ 100 ] Visual.
[ 100 ] Automated.
[ 100 ] Easy to use.
Nice to have
[ 10 ] Unequal lengths. Ignoring or truncating NaN's is not particularly helpful, and interpolation would cause data transparency issues (e.g., "was this data point interpolated, or a real score?"). We need a better way to compare stories and score similarity. Time to find a suitable distance metric.
[ 100 ] Enables experimentation.
Next steps: distance metric & solve the problem of comparing unequal lengths
Distance metric
Hack 2: Dynamic Time Warping
The method of dynamic time warping is great for comparing sequences that have arbitrary insertions between otherwise similar data. It can also solve our problem of comparing sequences of unequal lengths. Intuitively, DTW seems like a good fit for our problem. Let's test it.
<img src="images/dtw-md.png">
<p style="text-align: center" font color="gray"><span style"color:gray">Image from: Rakthanmanon et al. "Searching and Mining Trillions of Time Series Subsequences under Dynamic Time Warping", Figure 3.</span></p>
How it works
Read the linked papers for details, but the gist is that dynamic time warping gives us a way to map one sequence onto another sequence, using dynamic programming and distance measures between elements in each sequence. The best mapping from one sequence to another is the path that minimizes the pairwise distances. Lower distances indicate sequences with high similarity.
Interpreting the path graph
If we put one sequence (e.g., "lionking_sentiment") on the x-axis and another sequence (e.g., "aladdin_sentiment") on the y-axis, then the diagonal path from lower-left to upper right illustrates the transformation that best maps the x sequence onto the y sequence. For two identical sequences, the path would be a perfect diagonal. For differing sequences, the path reveals where each sequence is "warped" to accommodate the other sequence.
For the knowledge hounds, here's a link to the original paper that introduced DTW and Rakthanmanon et al.
End of explanation
"""
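The recurrence described above is compact enough to write out directly. This is a minimal O(n·m) sketch with an absolute-difference cost, not the optimized implementation inside the `dtw` package used in this notebook:

```python
def dtw_distance(x, y):
    """Minimal dynamic time warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    inf = float("inf")
    # acc[i][j]: cheapest cost of aligning x[:i] with y[:j]
    acc = [[inf] * (m + 1) for _ in range(n + 1)]
    acc[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])  # pointwise distance
            # extend the cheapest neighbouring cell: match, insertion, deletion
            acc[i][j] = cost + min(acc[i - 1][j - 1],
                                   acc[i - 1][j],
                                   acc[i][j - 1])
    return acc[n][m]

a = [0.5, 0.6, 0.8, 0.4, 0.5]
b = [0.5, 0.5, 0.6, 0.8, 0.8, 0.4, 0.5]  # the same shape, time-stretched
print(dtw_distance(a, a))  # 0.0: identical sequences align perfectly
print(dtw_distance(a, b))  # also 0.0: warping absorbs the stretch
```

The second example is the key property for our problem: two plotlines with the same shape but different lengths still score as highly similar.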
mermaid = df['littlemermaid_sentiment'].dropna()
print(len(lionking), len(mermaid))
dist, cost, path = dtw.dtw(lionking, mermaid)
print("Minimum distance found: %-8.4f" % dist)
from matplotlib import cm # import custom colormaps
from matplotlib.pyplot import imshow
imshow(cost.T, origin = 'lower', cmap = cm.hot, interpolation = 'nearest')
plt.plot(path[0], path[1], 'w') # white line for the best path
plt.xlim((-0.5, cost.shape[0]-0.5))
plt.ylim((-0.5, cost.shape[1]-0.5))
plt.xlabel("lion king")
plt.ylabel("little mermaid")
"""
Explanation: The Lion King and Aladdin have very similar plots! Hardly any warping is required to map one story sequence onto the other.
What about another pair of stories?
End of explanation
"""
for i in corpus:
for j in corpus:
(dist, cost, path) = dtw.dtw(df[i.split(".")[0] + "_sentiment"].dropna(),
df[j.split(".")[0] + "_sentiment"].dropna())
print("DTW distance from %s to %s: '%-6.3f'" % (i.split(".")[0], j.split(".")[0], dist))
"""
Explanation: The Lion King and The Little Mermaid appear to have similar plotlines, but there are gaps where things happen in Lion King but no corresponding features in Little Mermaid. This different story pacing could be because The Lion King's characters are thoroughly anthropomorphized and speak many lines throughout the movie whereas the characters in The Little Mermaid tend to tell the story through action and visuals---the protagonist loses her voice! Or it could be a difference in transcript length or quality showing through...something to investigate more deeply. We can see from the DTW path that the plotlines are different for the first part of the movie, but the last half is very similar.
5. Compare many stories to find similar plotlines
Since we have a distance metric, can we find plotlines based on a query story?
Using the DTW distance metric, it is straightforward to compare all the pairs of stories in our corpus. Using these distances to sort (or search) for similar (or different) stories is left as an exercise for the reader :) There is probably a neat visualization to be made here, but it is beyond scope for now!
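As a sketch of that exercise, ranking a corpus by distance to a query series might look like this. The toy curves and the plain Euclidean distance below are stand-ins; in the real corpus, `distance` would be the DTW distance from `dtw.dtw` and the series would be the per-movie sentiment columns.

```python
import numpy as np

def rank_by_similarity(query_name, series_by_name, distance):
    """Return (name, distance-to-query) pairs sorted most-similar first."""
    query = series_by_name[query_name]
    scores = [(name, distance(query, s))
              for name, s in series_by_name.items() if name != query_name]
    return sorted(scores, key=lambda pair: pair[1])

# Toy curves standing in for the per-movie sentiment series (hypothetical data).
curves = {
    "lionking": np.array([0.0, 1.0, -1.0, 1.0]),
    "aladdin":  np.array([0.1, 0.9, -0.9, 1.1]),
    "mermaid":  np.array([-1.0, -1.0, 0.0, 1.0]),
}
euclidean = lambda a, b: float(np.linalg.norm(a - b))
print(rank_by_similarity("lionking", curves, euclidean))
```

The most similar story comes first, so a "find movies like X" search is just the head of this list.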
End of explanation
"""
lowess_frac = 0.05 # same smoothing as above, balances detail and smoothness
lionking_lowess = sm.nonparametric.lowess(df['lionking_sentiment'], df['lionking_sentiment'].index, frac = lowess_frac)
aladdin_lowess = sm.nonparametric.lowess(df['aladdin_sentiment'], df['aladdin_sentiment'].index, frac = lowess_frac)
rescuers_lowess = sm.nonparametric.lowess(df['rescuersdownunder_sentiment'], df['rescuersdownunder_sentiment'].index, frac = lowess_frac)
hunchback_lowess = sm.nonparametric.lowess(df['hunchback_sentiment'], df['hunchback_sentiment'].index, frac = lowess_frac)
fig = plt.gcf()
plt.plot()
plt.plot(lionking_lowess[:, 0], lionking_lowess[:, 1])
plt.plot(aladdin_lowess[:, 0], aladdin_lowess[:, 1])
plt.plot(rescuers_lowess[:, 0], rescuers_lowess[:, 1])
plt.plot(hunchback_lowess[:, 0], hunchback_lowess[:, 1])
plt.xlabel('sample')
plt.ylabel('sentiment_score')
plt.title('4 similar Disney movies: [The Lion King, Aladdin, Rescuers Down Under, Hunchback of Notre Dame]')
fig.set_size_inches(16,8)
"""
Explanation: Evaluate progress: distance metric and solving unequal lengths problem
[ 100 ] Accurate.
[ 100 ] Sensitive.
[ 100 ] Interpretable.
[ 100 ] Visual.
[ 100 ] Automated.
[ 100 ] Easy to use.
Nice to have
[ 100 ] Unequal lengths. Dynamic time warping gives us a distance metric for evaluating the similarity between stories, regardless of length.
[ 100 ] Enables experimentation.
Next step: Summary & ideas for new features!
6. The Disney movie script formula, by the plotlines
Or, "How would Vonnegut draw the shape of a Disney movie script?"
End of explanation
"""
lowess_frac = 0.25 # heavy smoothing here to compare to Vonnegut
lionking_lowess = sm.nonparametric.lowess(df['lionking_sentiment'], df['lionking_sentiment'].index, frac = lowess_frac)
aladdin_lowess = sm.nonparametric.lowess(df['aladdin_sentiment'], df['aladdin_sentiment'].index, frac = lowess_frac)
rescuers_lowess = sm.nonparametric.lowess(df['rescuersdownunder_sentiment'], df['rescuersdownunder_sentiment'].index, frac = lowess_frac)
hunchback_lowess = sm.nonparametric.lowess(df['hunchback_sentiment'], df['hunchback_sentiment'].index, frac = lowess_frac)
fig = plt.gcf()
plt.plot()
plt.plot(lionking_lowess[:, 0], lionking_lowess[:, 1])
plt.plot(aladdin_lowess[:, 0], aladdin_lowess[:, 1])
plt.plot(rescuers_lowess[:, 0], rescuers_lowess[:, 1])
plt.plot(hunchback_lowess[:, 0], hunchback_lowess[:, 1])
plt.xlabel('sample')
plt.ylabel('sentiment_score')
plt.title('4 similar Disney movies: [The Lion King, Aladdin, Rescuers Down Under, Hunchback of Notre Dame]')
fig.set_size_inches(16,8)
"""
Explanation: What if we dial up the smoothing to compare vs. Vonnegut's shapes of stories?
End of explanation
"""
|
ljwolf/spvcm | notebooks/spatially-varying-coefficients.ipynb | mit | side = np.arange(0,10,1)
grid = np.tile(side, 10)
beta1 = grid.reshape(10,10)
beta2 = np.fliplr(beta1).T
fig, ax = plt.subplots(1,2, figsize=(12*1.6, 6))
sns.heatmap(beta1, ax=ax[0])
sns.heatmap(beta2, ax=ax[1])
plt.show()
"""
Explanation: Today, we'll sample a spatially-varying coefficient model, like that discussed in Gelfand (2003). These models are of the form:
$$ y_i \sim \mathcal{N}(\mathbf{x}_i'\beta_{i.}, \tau^2)$$
where $\beta_{i.}$ reflects the vector of $p$ coefficient estimates local to site $i$.
This is a hierarchical model, where a prior on the $\beta$ effects is assigned as a function of a spatial kernel $\mathbf{H}(\phi)$, relating all $N$ sites to one another as a function of distance and attenuation parameter $\phi$, and an intrinsic covariance among the $\beta$ unrelated to spatial correlation, $\mathbf{T}$. This prior is often stated for a tiling of $\beta$ with $j$ process index changing faster than $i$ site index as:
$$ \vec{\beta} \sim \mathcal{N}(1_N \otimes \alpha, \mathbf{T} \otimes \mathbf{H}(\phi))$$
with $\alpha$ being the $j$-length process mean vector, and $1_N$ being the $N$-length vector of ones.
Then, $\phi$ is often assigned a gamma-distributed prior contingent on the scale of distances reflected in the form of the $\mathbf{H}(.)$ kernel, and $\mathbf{T}$ is assigned an inverse Wishart prior.
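As a concrete sketch, the prior covariance $\mathbf{T} \otimes \mathbf{H}(\phi)$ can be built directly with NumPy. The values of $\phi$ and $\mathbf{T}$ below are illustrative, not estimates from any model, and the kernel is a simple exponential-decay choice.

```python
import numpy as np

# Sketch of the prior covariance T kron H(phi); phi and T are illustrative.
coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])   # N = 3 sites
phi = 0.5                                                 # attenuation parameter
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
H = np.exp(-d / phi)                                      # N x N spatial kernel
T = np.array([[1.0, 0.3],
              [0.3, 1.0]])                                # p = 2 process covariance
cov = np.kron(T, H)                                       # (N*p) x (N*p) prior covariance
print(cov.shape)
```

The resulting matrix is symmetric and couples every coefficient at every site, which is exactly what makes naive sampling expensive and a tailored Gibbs sampler attractive.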
This model is amenable to Gibbs sampling, and a Gibbs sampler has been written in spvcm that can efficiently sample these models.
For starters, let's state a simple parameter surface we are interested in fitting:
End of explanation
"""
x1, x2 = np.random.normal(0,1,size=(100,2)).T
x1 = x1.reshape(100,1)
x2 = x2.reshape(100,1)
flat_beta1 = beta1.flatten().reshape(100,1)
flat_beta2 = beta2.flatten().reshape(100,1)
y = x1 * flat_beta1 + x2 * flat_beta2 + np.random.normal(0,1,size=(100,1))
"""
Explanation: This reflects a gradient from left to right, and from bottom to top of a perfectly square grid. While this is highly idealized, we can see how well the model recovers these estimates in the exercise below.
First, though, let's draw some random normal data for the exercise and construct our $y$ vector, letting $\tau^2=1$.
End of explanation
"""
f,ax = plt.subplots(1,2, figsize=(12,6))
sns.heatmap(y.reshape(10,10), ax=ax[1])
sns.regplot(beta1.flatten(), beta2.flatten(), ax=ax[0])
"""
Explanation: The aspatial distribution of our data does not betray any specific trending, since we've ensured that our $\beta_1$ and $\beta_2$ surfaces are perfectly independent of one another:
End of explanation
"""
positions = np.array(list(zip(flat_beta1.flatten(),
flat_beta2.flatten())))
X = np.hstack((x1, x2))
"""
Explanation: So, before we sample, let's assemble our data matrix and our coordinates. The coordinates are used to compute the spatial kernel function, $\mathbf{H}(\phi)$, which models the spatial similarity in the random component of $\beta$ in space.
End of explanation
"""
import time as t
m = SVC(y, X, positions, n_samples=0,
starting_values=dict(Phi=.5),
configs=dict(jump=.2))
start = t.time()
m.sample(2000, n_jobs=4)
end = t.time() - start
print('{} seconds elapsed'.format(end))
"""
Explanation: We can sample multiple traces in parallel using spvcm, so below, we will see progressbars for each of the chains independently:
End of explanation
"""
m.trace.plot(burn=1000)
plt.tight_layout()
plt.show()
"""
Explanation: We can see the structure of the model below, with our traceplots showing the sample paths, and the Kernel density estimates on right:
End of explanation
"""
a,b,c,d = np.squeeze(m.trace['Betas'])
est_b0s = np.squeeze(m.trace['Betas'])[:,:,::3].mean(axis=1)
est_b1s = np.squeeze(m.trace['Betas'])[:,:,1::3].mean(axis=1)
est_b2s = np.squeeze(m.trace['Betas'])[:,:,2::3].mean(axis=1)
"""
Explanation: Further, we can extract our estimates from the trace:
End of explanation
"""
f,ax = plt.subplots(4,2, figsize=(16,20),
subplot_kw=dict(aspect='equal'))
cfgs = dict(xticklabels='', yticklabels='',
vmin=0, vmax=9, cmap='viridis')
for i, (b1,b2) in enumerate(zip(est_b1s, est_b2s)):
sns.heatmap(b1.reshape(10,10),ax=ax[i,0], cbar=True, **cfgs)
sns.heatmap(b2.reshape(10,10), ax=ax[i,1], cbar=True, **cfgs)
ax[i,0].set_title('Chain {} $\\beta_1$'.format(i))
ax[i,1].set_title('Chain {} $\\beta_2$'.format(i))
plt.tight_layout()
plt.show()
"""
Explanation: And verify that the estimates from all of our chains, though slightly different, look like our target surfaces:
End of explanation
"""
f,ax = plt.subplots(3,2, figsize=(16,20))
cfgs = dict(xticklabels='', yticklabels='',
vmin=None, vmax=None, cmap='viridis')
b1ref = est_b1s[0].reshape(10,10)
b2ref = est_b2s[0].reshape(10,10)
for i, (b1,b2) in enumerate(zip(est_b1s[1:], est_b2s[1:])):
sns.heatmap(b1ref - b1.reshape(10,10),ax=ax[i,0], cbar=True, **cfgs)
sns.heatmap(b2ref - b2.reshape(10,10), ax=ax[i,1], cbar=True, **cfgs)
ax[i,0].set_title('Chain 1 - Chain {}: $\\beta_1$'.format(i))
ax[i,1].set_title('Chain 1 - Chain {}: $\\beta_2$'.format(i))
plt.tight_layout()
plt.show()
"""
Explanation: Finally, it is important that our prediction errors in the $\hat{\beta_i}$ estimates are uncorrelated. Below, we can see that, in the maps, the difference surfaces are indeed spatially random:
End of explanation
"""
f,ax = plt.subplots(2,4, figsize=(20,10), sharex=True, sharey='row')
[sns.regplot(beta1.flatten(), est_b1.flatten(), color='k',
line_kws=dict(color='orangered'), ax=subax)
for est_b1,subax in zip(est_b1s, ax[0])]
[sns.regplot(beta2.flatten(), est_b2.flatten(), color='k',
line_kws=dict(color='orangered'), ax=subax)
for est_b2,subax in zip(est_b2s, ax[1])]
[subax.set_title("Chain {}".format(i)) for i,subax in enumerate(ax[0])]
"""
Explanation: Finally, we can see that the true and estimated values are strongly correlated:
End of explanation
"""
|
rrbb014/data_science | fastcampus_dss/2016_05_23/0523_08__SciPy 시작하기.ipynb | mit | rv = sp.stats.norm(loc=10, scale=10) # 정규분포는 노말이고, loc, scale은 선택이다. location = 평균, scale 은 표준편차?
rv.rvs(size=(3,10), random_state=1) # rvs = 실제 샘플 생성. (3x10) , random_state => seed값임.
sns.distplot(rv.rvs(size=10000, random_state=1))
xx = np.linspace(-40, 60, 1000)
pdf = rv.pdf(xx)
plt.plot(xx, pdf) # 확률밀도함수를 그렸다!
cdf = rv.cdf(xx)
plt.plot(xx, cdf)
"""
Explanation: Numerical differentiation and integration are all available in SciPy!
Numerical integration -> scipy.integrate
Getting Started with SciPy
What is SciPy?
Provides functions and algorithms for scientific and engineering computation
Home
http://www.scipy.org/
Documentation
http://docs.scipy.org/doc/
Tutorial
http://docs.scipy.org/doc/scipy/reference/tutorial/index.html
http://www.scipy-lectures.org/intro/scipy.html
SciPy Subpackages
scipy.stats
Statistics
scipy.constants
Physical and mathematical constants
scipy.special
Special mathematical functions
scipy.linalg
Linear algebra routines
scipy.interpolate
Interpolation
scipy.optimize
Optimization
scipy.fftpack
Fast Fourier transforms
scipy.stats (statistics)
Random Variable
Probability density functions, cumulative distribution functions
Sample generation
Parameter Estimation (fitting)
Test
Probability models provided by scipy.stats
http://docs.scipy.org/doc/scipy/reference/stats.html
Continuous
http://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous.html#continuous-distributions-in-scipy-stats
uniform: A uniform continuous random variable.
norm: A normal continuous random variable.
beta: A beta continuous random variable.
gamma: A gamma continuous random variable.
t: A Student’s T continuous random variable.
chi2: A chi-squared continuous random variable.
f: An F continuous random variable.
multivariate_normal: A multivariate normal random variable.
dirichlet: A Dirichlet random variable.
wishart: A Wishart random variable.
Discrete
http://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete.html#discrete-distributions-in-scipy-stats
bernoulli: A Bernoulli discrete random variable.
binom: A binomial discrete random variable.
boltzmann: A Boltzmann (Truncated Discrete Exponential) random variable.
How to use a random variable
Create a random variable object with the desired parameters
Call its methods
Common Methods
rvs: sample generation (random value sampling) => used constantly in practice; passing 100 draws 100 samples
pdf or pmf: Probability Density Function
cdf: Cumulative Distribution Function
stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
moment: non-central moments of the distribution
fit: parameter estimation => given a sample (e.g. from a normal distribution), estimates the parameters such as the mean and standard deviation
Common Parameters
Parameters differ from model to model
random_state: seed
size: shape of the samples to generate
loc: usually the mean
scale: usually the standard deviation
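For example, `fit` recovers the parameters of a normal distribution from samples (a quick, self-contained sketch):

```python
import scipy.stats

# Draw samples from a known normal and recover loc/scale with .fit()
samples = scipy.stats.norm(loc=10, scale=10).rvs(size=10000, random_state=1)
loc_hat, scale_hat = scipy.stats.norm.fit(samples)
print(round(loc_hat, 1), round(scale_hat, 1))  # close to 10.0 and 10.0
```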
End of explanation
"""
sp.pi
import scipy.constants
sp.constants.c # speed of light
"""
Explanation: scipy.constants (constants)
Special constants
scipy.pi
Other constants
scipy.constants.XXXX
Units
yotta, zetta, exa, peta, tera, giga, mega, kilo, hecto, deka
deci, centi, milli, micro, nano, pico, femto, atto, zepto
lb, oz, degree
inch, foot, yard, mile, au, light_year, parsec
hectare, acre, gallon
mph, mach, knot
End of explanation
"""
x = np.linspace(-3, 3, 1000)
y1 = sp.special.erf(x)
a = plt.subplot(211)
plt.plot(x, y1)
plt.title("erf")
a.xaxis.set_ticklabels([])
y2 = sp.special.expit(x)
plt.subplot(212)
plt.plot(x, y2)
plt.title("logistic")
"""
Explanation: scipy.special 수학 함수
Gamma, Beta, Erf, Logit
Bessel, Legendre
End of explanation
"""
A = np.array([[1, 2],
[3, 4]])
sp.linalg.inv(A)
sp.linalg.det(A)
"""
Explanation: scipy.linalg (linear algebra)
inv, pinv, det
End of explanation
"""
from scipy.interpolate import interp1d
x = np.linspace(0, 10, num=11, endpoint=True)
y = np.cos(-x**2/9.0)
f = interp1d(x, y)
f2 = interp1d(x, y, kind='cubic')
xnew = np.linspace(0, 10, num=41)
plt.plot(x, y, 'o', xnew, f(xnew), '-', xnew, f2(xnew), '--')
plt.legend(['data', 'linear', 'cubic'])
x, y = np.mgrid[-1:1:20j, -1:1:20j]
z = (x+y) * np.exp(-6.0*(x*x+y*y))
plt.pcolormesh(x, y, z)
xnew, ynew = np.mgrid[-1:1:100j, -1:1:100j]
tck = sp.interpolate.bisplrep(x, y, z, s=0)
znew = sp.interpolate.bisplev(xnew[:,0], ynew[0,:], tck)
plt.pcolormesh(xnew, ynew, znew)
"""
Explanation: scipy.interpolate (interpolation)
Infer missing values between data points
1-D interpolation
2-D interpolation
End of explanation
"""
from scipy import optimize
def f(x):
return x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
result = optimize.minimize(f, 4)
print(result)
x0 = result['x']
x0
plt.plot(x, f(x));
plt.hold(True)
plt.scatter(x0, f(x0), s=200)
# The six-hump camel function (a standard 2-D test function with multiple
# local minima), defined here since it is needed below.
def sixhump(z):
    x, y = z
    return (4 - 2.1*x**2 + x**4/3.0)*x**2 + x*y + (-4 + 4*y**2)*y**2

x1 = optimize.minimize(sixhump, (1, 1))['x']
x2 = optimize.minimize(sixhump, (-1, -1))['x']
print(x1, x2)
"""
Explanation: scipy.optimize (optimization)
Finding the minimum of a function
End of explanation
"""
time_step = 0.02
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = np.sin(2 * np.pi / period * time_vec) + 0.5 * np.random.randn(time_vec.size)
plt.plot(sig)
import scipy.fftpack
sample_freq = sp.fftpack.fftfreq(sig.size, d=time_step)
sig_fft = sp.fftpack.fft(sig)
pidxs = np.where(sample_freq > 0)
freqs, power = sample_freq[pidxs], np.abs(sig_fft)[pidxs]
freq = freqs[power.argmax()]
plt.stem(freqs[:50], power[:50])
plt.xlabel('Frequency [Hz]')
plt.ylabel('power')
rv = sp.stats.norm(10,10)
x = rv.rvs(100)
x
x_avg, x_std = sp.stats.norm.fit(x)
x_std
x_graph = rv.pdf(x)
sns.distplot(x)
y = rv.rvs(30, 100)
y
y.mean()
y_avg, y_std = sp.stats.norm.fit(y)
y_graph = sp.stats.norm(y_avg, y_std)
y_graph
aaa_list = []
for _ in range(30):
aaa = rv.rvs(100)
aaa_list.append(aaa.mean())
aaa_list
sns.distplot(aaa_list)
sns.distplot(sp.stats.norm(10,10).rvs((50,100)).mean(axis=1));
"""
Explanation: scipy.fftpack (Fast Fourier transforms)
Transform a signal into the frequency domain
Spectrum
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/0bf55d4c93021947144bdb72823131e5/read_neo_format.ipynb | bsd-3-clause | import neo
import mne
"""
Explanation: How to use data in neural ensemble (NEO) format
This example shows how to create an MNE-Python ~mne.io.Raw object from data
in the neural ensemble_ format. For general
information on creating MNE-Python's data objects from NumPy arrays, see
tut-creating-data-structures.
End of explanation
"""
reader = neo.io.ExampleIO('fakedata.nof')
block = reader.read(lazy=False)[0] # get the first block
segment = block.segments[0] # get data from first (and only) segment
signals = segment.analogsignals[0] # get first (multichannel) signal
data = signals.rescale('V').magnitude.T
sfreq = signals.sampling_rate.magnitude
ch_names = [f'Neo {(idx + 1):02}' for idx in range(signals.shape[1])]
ch_types = ['eeg'] * len(ch_names) # if not specified, type 'misc' is assumed
info = mne.create_info(ch_names=ch_names, ch_types=ch_types, sfreq=sfreq)
raw = mne.io.RawArray(data, info)
raw.plot(show_scrollbars=False)
"""
Explanation: This example uses NEO's ExampleIO object for creating fake data. The data
will be all zeros, so the plot won't be very interesting, but it should
demonstrate the steps to using NEO data. For actual data and different file
formats, consult the NEO documentation.
End of explanation
"""
|
drericstrong/Blog | 20170526_FastFourierTransformInPython.ipynb | agpl-3.0 | import numpy as np
from scipy import pi
import matplotlib.pyplot as plt
%matplotlib inline
# Sampling rate and time vector
start_time = 0 # seconds
end_time = 2 # seconds
sampling_rate = 1000 # Hz
N = (end_time - start_time)*sampling_rate # array size
# Frequency domain peaks
peak1_hz = 60 # Hz where the peak occurs
peak1_mag = 25 # magnitude of the peak
peak2_hz = 270 # Hz where the peak occurs
peak2_mag = 2 # magnitude of the peak
# Noise control
noise_loc = 0 # the Gaussian noise is mean-centered
noise_mag = 0.5 # magnitude of added noise
# Vibration data generation
time = np.linspace(start_time, end_time, N)
vib_data = peak1_mag*np.sin(2*pi*peak1_hz*time) + \
peak2_mag*np.sin(2*pi*peak2_hz*time) + \
np.random.normal(0, noise_mag, N)
# Data plotting
plt.plot(time[0:100], vib_data[0:100])
plt.xlabel('Time')
plt.ylabel('Vibration (g)')
plt.title('Time Domain (Healthy Machinery)');
"""
Explanation: This blog post will demonstrate how to use a Fourier Transform (FT) in Python to analyze data in the frequency-domain. An FT is designed to convert a time-domain signal into the frequency-domain. The Fast Fourier Transform (FFT), developed by Cooley and Tukey in the 1960s, is the most commonly used algorithm to accomplish a Fourier Transform in practice.
Features which may be hidden or invisible in the time-domain may be easier to assess in the frequency-domain. Converting time-domain data into the frequency-domain is also a commonly-used exploratory technique to reveal any repeating patterns.
One of the most promising applications of an FFT for equipment condition-monitoring is analysis of the vibration from accelerometers placed on rotating machinery. If the rotating machinery contains ball bearings, the balls within the bearings will pass the accelerometer at a specified frequency (depending on the number of balls and the bearing geometry) which will show up as a peak in the frequency spectrum. The magnitude of the peak is often used to diagnose faults within the bearings, with high magnitudes indicating an impending failure.
First, we will simulate some time-domain data with a strong frequency component at 60 Hz and a weak frequency component at 270 Hz, along with random normally-distributed process noise. Let's imagine that the equipment is rotating at 60 Hz (which explains why this frequency component is "strong"), and the bearing balls will pass the accelerometer at 270 Hz (with a weaker magnitude than the "fundamental" rotating frequency). Furthermore, assume that 2 seconds of data is available, sampled at 1000 Hz.
End of explanation
"""
from scipy.fftpack import fft
# Nyquist Sampling Criteria
T = 1/sampling_rate # inverse of the sampling rate
x = np.linspace(0.0, 1.0/(2.0*T), int(N/2))
# FFT algorithm
yr = fft(vib_data) # "raw" FFT with both + and - frequencies
y = 2/N * np.abs(yr[0:N//2]) # positive freqs only
# Plotting the results
plt.plot(x, y)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Vibration (g)')
plt.title('Frequency Domain (Healthy Machinery)');
"""
Explanation: Notice that the sinusoidal shape is fairly recognizable; this effect occurs because the 60 Hz peak has a high magnitude compared to the 270 Hz peak, and the amount of added Gaussian noise is minimal. The peak-to-peak measurement in the figure is greater than 0.015 seconds but less than 0.02 seconds; in fact, it's equal to 1/60 Hz, or about 0.017 seconds.
Both numpy and scipy contain functions for evaluating a Discrete Fourier Transform. We will be using the scipy.fftpack.fft function in this example. This function will use the time-domain vibration data we created above to generate frequency-domain data.
According to the Nyquist-Shannon Sampling Theorem, beyond the scope of this blog post, we can only examine frequency-spectrum components up to the sampling rate divided by two. Increasing the sampling rate will widen the available frequency spectrum (we can look at higher frequencies); increasing the number of data samples will increase the resolution of the FFT (generally, the frequency spectrum is more accurate). Our sampling rate was 1000 Hz, so we can safely examine frequencies up to 500 Hz, which is greater than the largest frequency of interest (270 Hz).
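With the sampling parameters used in this post, those two quantities work out as follows (a quick check):

```python
# Frequency-domain limits implied by the sampling parameters above.
sampling_rate = 1000            # Hz
start_time, end_time = 0, 2     # seconds
N = (end_time - start_time) * sampling_rate

nyquist = sampling_rate / 2     # highest observable frequency (Hz)
resolution = sampling_rate / N  # spacing between FFT bins = 1 / duration (Hz)
print(nyquist, resolution)      # 500.0 0.5
```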
The following code will generate the frequency spectrum of the vibration data using the scipy.fftpack.fft function. The fft function will generate both the "positive-frequency" and "negative-frequency" parts of the spectrum (see here for more information), so we keep only the positive-frequency half below.
End of explanation
"""
# Time-Domain Plot
peak2_mag2 = 8 # magnitude of the peak
vib_data2 = peak1_mag*np.sin(2*pi*peak1_hz*time) + \
peak2_mag2*np.sin(2*pi*peak2_hz*time) + \
np.random.normal(0, noise_mag, N)
plt.figure()
plt.plot(time[0:100], vib_data2[0:100])
plt.xlabel('Time')
plt.ylabel('Vibration (g)')
plt.title('Time Domain (Faulted Bearing)')
# Frequency-Domain Plot
yr2 = fft(vib_data2) # "raw" FFT with both + and - frequencies
y2 = 2/N * np.abs(yr2[0:N//2]) # positive freqs only
plt.figure()
plt.plot(x, y2)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Vibration (g)')
plt.title('Frequency Domain (Faulted Bearing)');
"""
Explanation: In the figure above, the primary peak can be seen at 60 Hz (rotating frequency of the equipment), while the secondary peak is at 270 Hz, as expected (ball pass frequency).
What if the bearing had a fault? If so, the magnitude of the peak at 270 Hz would be expected to increase, as in the following figure:
End of explanation
"""
# Reduced sampling frequency
sampling_rate3 = 500 # Hz
N3 = (end_time - start_time)*sampling_rate3 # array size
time3 = np.linspace(start_time, end_time, N3)
vib_data3 = peak1_mag*np.sin(2*pi*peak1_hz*time3) + \
peak2_mag*np.sin(2*pi*peak2_hz*time3) + \
np.random.normal(0, noise_mag, N3)
T3 = 1/sampling_rate3 # inverse of the sampling rate
x3 = np.linspace(0.0, 1.0/(2.0*T3), int(N3/2))
# FFT algorithm
yr3 = fft(vib_data3) # "raw" FFT with both + and - frequencies
y3 = 2/N3 * np.abs(yr3[0:N3//2]) # positive freqs only
# Plotting the results
plt.plot(x3, y3)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Vibration (g)')
plt.title('Frequency Domain (Aliasing)');
"""
Explanation: Notice that the 60 Hz component in the time-domain signal is slightly less recognizable, and the magnitude of the 270 Hz peak in the frequency-domain has increased significantly.
For monitoring the bearing condition, the maximum "narrowband" ball pass frequency (e.g. the band of frequencies between 260 Hz and 280 Hz) will be trended over time. If the magnitude of the peak deviates significantly from the magnitude under healthy machinery operation (e.g. increasing beyond the 1% confidence interval of the distribution of the peak magnitudes under normal operation), a bearing failure is likely imminent (diagnostics). If these vibration measurements are fairly noise-free and trendable over time, the failure of the bearing can be predicted in advance (prognostics) by determining the amount of time until the bearing reaches a predefined fault threshold.
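A minimal sketch of that narrowband trending idea follows; the helper name and band edges are illustrative, not part of any library.

```python
import numpy as np
from scipy.fftpack import fft

def narrowband_peak(signal, sampling_rate, f_lo, f_hi):
    """Largest single-sided FFT magnitude inside the [f_lo, f_hi] Hz band."""
    n = len(signal)
    freqs = np.linspace(0.0, sampling_rate / 2.0, n // 2)
    mags = 2.0 / n * np.abs(fft(signal)[: n // 2])
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return mags[band].max()

t = np.linspace(0, 2, 2000)
healthy = 2 * np.sin(2 * np.pi * 270 * t)
faulted = 8 * np.sin(2 * np.pi * 270 * t)
print(narrowband_peak(healthy, 1000, 260, 280) <
      narrowband_peak(faulted, 1000, 260, 280))  # True
```

Logging this band maximum on every new measurement gives the trendable health indicator described above.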
Finally, what would happen if our sampling rate had been lower than required by the Nyquist-Shannon Sampling Theorem? Let's reduce the sampling rate to 500 Hz and find out:
End of explanation
"""
|
wheeler-microfluidics/teensy-minimal-rpc | teensy_minimal_rpc/notebooks/dma-examples/Example - Scatter array of 4 to 4 separate arrays.ipynb | gpl-3.0 | import numpy as np
num_sources = 4
src_array = np.arange(1, num_sources + 1)
samples_per_source = 5
print(src_array)
print(np.repeat(src_array, samples_per_source))
"""
Explanation: This example demonstrates how to scatter values from a source array to
implement the equivalent of the numpy.repeat function.
TODO
The method here uses one DMA channel per "source". However, this should be achievable using the DMA scatter/gather functionality with at most 2 DMA channels.
End of explanation
"""
import numpy as np
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as dma
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
"""
Explanation: Connect to device
End of explanation
"""
N = 512
proxy.free_all()
# Allocate source array
src_addr = proxy.mem_alloc(N)
# Allocate destination array
dst_addr = proxy.mem_alloc(N)
# Fill first 16 bytes of source array with the numbers 1-16
proxy.mem_cpy_host_to_device(src_addr, np.arange(1, 17, dtype='uint8'))
# Fill the destination array with all zeros
proxy.mem_fill_uint32(dst_addr, 0, N // 4)
print('SOURCE: ', proxy.mem_cpy_device_to_host(src_addr, 16))
print('TARGET: ', proxy.mem_cpy_device_to_host(dst_addr, 16))
"""
Explanation: Allocate two arrays: source and destination
End of explanation
"""
# Create Transfer Control Descriptor configuration to match the settings
# shown in the example from the manual.
num_sources = 4
samples_per_source = 5
tcds = [dma.TCD(CITER_ELINKNO=dma.R_TCD_ITER_ELINKNO(ITER=1),
BITER_ELINKNO=dma.R_TCD_ITER_ELINKNO(ITER=1),
ATTR=dma.R_TCD_ATTR(SSIZE=dma.R_TCD_ATTR._8_BIT,
DSIZE=dma.R_TCD_ATTR._8_BIT),
NBYTES_MLNO=num_sources,
SADDR=int(src_addr),
SOFF=1,
SLAST=-num_sources,
DADDR=int(dst_addr + i),
DOFF=samples_per_source,
DLASTSGA=-samples_per_source * num_sources,
CSR=dma.R_TCD_CSR(START=0, DONE=False,
MAJORELINK=(True
if i < samples_per_source - 1
else False),
MAJORLINKCH=i + 1))
for i in range(samples_per_source)]
for i, tcd in enumerate(tcds):
proxy.update_dma_TCD(i, tcd)
"""
Explanation: Create Transfer Control Descriptor (TCD) configurations
End of explanation
"""
print('SOURCE: ', proxy.mem_cpy_device_to_host(src_addr, 16))
# Fill the destination array with all zeros
proxy.mem_fill_uint32(dst_addr, 0, N // 4)
print('TARGET:')
print('  Before:', proxy.mem_cpy_device_to_host(dst_addr, 16))
# Apply TCD configuration to DMA channel 0 to conduct transfer.
proxy.update_dma_registers(dma.Registers(SSRT=0))
print('  After:', proxy.mem_cpy_device_to_host(dst_addr, num_sources * samples_per_source))
# Confirm that output array matches expected values.
assert((proxy.mem_cpy_device_to_host(dst_addr, num_sources * samples_per_source)
== np.repeat(src_array, samples_per_source)).all())
"""
Explanation: Apply Transfer Control Descriptor configuration to start transfer
End of explanation
"""
|
PyPSA/PyPSA | examples/notebooks/unit-commitment.ipynb | mit | import pypsa
import pandas as pd
"""
Explanation: Unit commitment
This tutorial runs through examples of unit commitment for generators at a single bus. Examples of minimum part-load, minimum up time, minimum down time, start up costs, shut down costs and ramp rate restrictions are shown.
To enable unit commitment on a generator, set its attribute committable = True.
End of explanation
"""
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=[4000, 6000, 5000, 800])
nu.lopf()
nu.generators_t.status
nu.generators_t.p
"""
Explanation: Minimum part load demonstration
In the final hour, the load goes below the part-load limit of the coal generator (30%), forcing gas to commit.
End of explanation
"""
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
up_time_before=0,
min_up_time=3,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=[4000, 800, 5000, 3000])
nu.lopf()
nu.generators_t.status
nu.generators_t.p
"""
Explanation: Minimum up time demonstration
Gas has a minimum up time, forcing it to stay online longer.
End of explanation
"""
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
down_time_before=1,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
p_nom=4000,
)
nu.add("Load", "load", bus="bus", p_set=[3000, 800, 3000, 8000])
nu.lopf()
nu.objective
nu.generators_t.status
nu.generators_t.p
"""
Explanation: Minimum down time demonstration
Coal has a minimum down time, forcing it to stay offline longer.
End of explanation
"""
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
start_up_cost=5000,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
shut_down_cost=25,
p_nom=4000,
)
nu.add("Load", "load", bus="bus", p_set=[3000, 800, 3000, 8000])
nu.lopf(nu.snapshots)
nu.objective
nu.generators_t.status
nu.generators_t.p
"""
Explanation: Start up and shut down costs
Now there are costs associated with starting up and shutting down the generators.
End of explanation
"""
nu = pypsa.Network(snapshots=range(6))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
ramp_limit_up=0.1,
ramp_limit_down=0.2,
p_nom=10000,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=4000)
nu.add("Load", "load", bus="bus", p_set=[4000, 7000, 7000, 7000, 7000, 3000])
nu.lopf()
nu.generators_t.p
nu = pypsa.Network(snapshots=range(6))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
ramp_limit_up=0.1,
ramp_limit_down=0.2,
p_nom_extendable=True,
capital_cost=1e2,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=4000)
nu.add("Load", "load", bus="bus", p_set=[4000, 7000, 7000, 7000, 7000, 3000])
nu.lopf(nu.snapshots)
nu.generators.p_nom_opt
nu.generators_t.p
nu = pypsa.Network(snapshots=range(7))
nu.add("Bus", "bus")
# Can get bad interactions if SU > RU and p_min_pu; similarly if SD > RD
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
committable=True,
p_min_pu=0.05,
initial_status=0,
ramp_limit_start_up=0.1,
ramp_limit_up=0.2,
ramp_limit_down=0.25,
ramp_limit_shut_down=0.15,
p_nom=10000.0,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=10000)
nu.add("Load", "load", bus="bus", p_set=[0.0, 200.0, 7000, 7000, 7000, 2000, 0])
nu.lopf()
nu.generators_t.p
nu.generators_t.status
nu.generators.loc["coal"]
"""
Explanation: Ramp rate limits
End of explanation
"""
sets_of_snapshots = 6
p_set = [4000, 5000, 700, 800, 4000]
nu = pypsa.Network(snapshots=range(len(p_set) * sets_of_snapshots))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
min_up_time=3,
up_time_before=1,
ramp_limit_up=1,
ramp_limit_down=1,
ramp_limit_start_up=1,
ramp_limit_shut_down=1,
shut_down_cost=150,
start_up_cost=200,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
up_time_before=2,
min_up_time=3,
shut_down_cost=20,
start_up_cost=50,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=p_set * sets_of_snapshots)
overlap = 2
for i in range(sets_of_snapshots):
nu.lopf(nu.snapshots[i * len(p_set) : (i + 1) * len(p_set) + overlap], pyomo=False)
pd.concat(
{"Active": nu.generators_t.status.astype(bool), "Output": nu.generators_t.p}, axis=1
)
"""
Explanation: Rolling horizon example
This example solves the optimization sequentially in overlapping batches of snapshots.
End of explanation
"""
|
zshujon/USDC_Project_02_Traffic_Sign_Classification | 00_TSC_NN_Keras.ipynb | mit | import matplotlib.pyplot as plt
import random as rn
import numpy as np
from sklearn.model_selection import train_test_split
import pickle
from keras.models import Sequential
from keras.layers import Dense, Input, Activation
from keras.utils import np_utils
%matplotlib inline
"""
Explanation: <h1>Traffic Signs Classification</h1>
<p>Using German Traffic Sign Dataset (http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset). You can download the from <a href="https://d17h27t6h515a5.cloudfront.net/topher/2016/November/581faac4_traffic-signs-data/traffic-signs-data.zip"> here</a></p>
<h1>Imports</h1>
End of explanation
"""
train_data = 'data/train.p'
test_data = 'data/test.p'
with open(train_data, 'rb') as f:
train = pickle.load(f)
with open(test_data, 'rb') as f:
test = pickle.load(f)
"""
Explanation: <h1>Loading the Data</h1>
End of explanation
"""
X_train, X_val, Y_train, Y_val = train_test_split(train['features'], train['labels'], test_size=0.3, random_state=0)
X_test, Y_test = test['features'], test['labels']
n_train = X_train.shape[0]
n_val = X_val.shape[0]
n_test = X_test.shape[0]
image_shape = X_train.shape[1], X_train.shape[2]
n_channels = X_train.shape[3]
n_classes = np.unique(train['labels']).size
print('Train data size:\t\t\t', n_train)
print('Validation data size:\t\t\t', n_val)
print('Test data size:\t\t\t\t', n_test)
print('Image shape:\t\t\t\t', image_shape)
print('Number of color channels in image:\t', n_channels)
print('Number of classes:\t\t\t', n_classes)
"""
Explanation: <h1>Data Info</h1>
<p>Splitting the training data into train and validation sets</p>
End of explanation
"""
def reshape(arr):
return arr.reshape(-1, image_shape[0]*image_shape[1]*n_channels)
X_train_flat = reshape(X_train)
X_val_flat = reshape(X_val)
X_test_flat = reshape(X_test)
def print_info(st, arr_1, arr_2):
print('{} data shape before reshape: {}, and after reshape: {}'.format(st, arr_1.shape, arr_2.shape))
print_info('Train', X_train, X_train_flat)
print_info('Validation', X_val, X_val_flat)
print_info('Test', X_test, X_test_flat)
"""
Explanation: <h2>Reshape All the Data</h2>
End of explanation
"""
def normalize(arr):
arr = arr.astype('float32')
return (arr - np.mean(arr))/np.std(arr)
X_train_norm = normalize(X_train_flat)
X_val_norm = normalize(X_val_flat)
X_test_norm = normalize(X_test_flat)
def print_info(st, arr_1, arr_2):
print('{} Data: Before normalization : type: {}, mean: {}, std: {}. After processing, type: {}, mean: {}, std: {}'. format(st, arr_1.dtype, round(np.mean(arr_1),2), round(np.std(arr_1),2), arr_2.dtype, round(np.mean(arr_2),2), round(np.std(arr_2),2)))
print_info('Train', X_train_flat, X_train_norm)
print_info('Validation', X_val_flat, X_val_norm)
print_info('Test', X_test_flat, X_test_norm)
"""
Explanation: <h2>Data Normalization</h2>
<p>Standardize all the data so that each set has mean close to 0.0 and standard deviation close to 1.0.</p>
End of explanation
"""
def make_categorical(arr):
return np_utils.to_categorical(arr, n_classes)
Y_train_cat = make_categorical(Y_train)
Y_val_cat = make_categorical(Y_val)
Y_test_cat = make_categorical(Y_test)
"""
Explanation: <p>Convert all the class labels to one-hot encoded vectors.</p>
End of explanation
"""
trc = rn.sample(range(min(n_train, n_test)), 16)  # indices valid for both train and test arrays
def plot_images(arr_1, arr_2, pred=None):
fig, axes = plt.subplots(4, 4, figsize=(10,10))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
if len(arr_1.shape) == 2:
ax.imshow(arr_1[trc[i]].reshape(32,32,3))
ax.set_xlabel('true:{}'.format(arr_2[trc[i]]))
else:
ax.imshow(arr_1[trc[i]])
ax.set_xlabel('true:{}, pred:{}'.format(arr_2[trc[i]], pred[trc[i]]))
ax.set_xticks([])
ax.set_yticks([])
plot_images(X_train_flat, Y_train)
"""
Explanation: <h2>Some Random Images</h2>
<p>Before normalization</p>
End of explanation
"""
plot_images(X_train_norm, Y_train)
"""
Explanation: <p>After normalization</p>
End of explanation
"""
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(32*32*3,)))
model.add(Dense(43, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train_norm, Y_train_cat, batch_size=64, epochs=20, verbose=1, validation_data=(X_val_norm, Y_val_cat))
history.history['val_acc'][-1]
"""
Explanation: <h2>Build the Model with Keras</h2>
End of explanation
"""
score, acc = model.evaluate(X_test_norm, Y_test_cat, batch_size=64, verbose=0)
print('Score:\t', score)
print('Acc:\t{}%'.format(round(acc*100)))
"""
Explanation: <h2>Model Evaluation</h2>
End of explanation
"""
Y_pred = model.predict_classes(X_test_norm, batch_size=64, verbose=0)
plot_images(X_test, Y_test, Y_pred)
"""
Explanation: <h2>Predicted Classes</h2>
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/cmcc-cm2-vhr4/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-VHR4
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
CoderDojoTC/python-minecraft | classroom-code/exercises/Exercise 3 -- Basic Python Syntax.ipynb | mit
1 + 1
2 * 4
(2 * 4) - 2
4 ** 2 # Raise a number to a power
16 / 4
15 / 4
2.5 * 2.0
15.0 / 4
"""
Explanation: Basic Python Syntax
In this exercise, you will work through some simple blocks of code so you learn the essentials of the Python language syntax.
For each of the code blocks below, read the code before running it. Try to imagine what it will do. Then run it, and check to see if you were right. If it did something other than what you expected, play with it a little bit. Can you make it do what you were expecting? Try changing it, and run it again, and see what happens.
Pressing Shift+Enter will run the currently highlighted code block.
Feel free to create a new, empty code block and use it as a place to experiment with code of your own.
Simple Expressions
Let's start off with some simple math.
End of explanation
"""
'a' + 'b'
'Python ' + 'is' + ' fun'
"Python isn't afraid of single quotes"
'It is not "afraid" of double quotes either'
"The value is " + 17
"The value is " + str(17)
'The value is {0}'.format(17)
'Is {0} smaller than {1}?'.format(5.0, 12)
'Yes {1} is bigger than {0}.'.format(5.0, 12)
"""
Explanation: Next, let's move on to strings.
End of explanation
"""
True
True or False
True and False
1 < 2
'a' > 'z'
'a' = 'a'
'a' == 'a'
"""
Explanation: Let's look at logical values.
End of explanation
"""
[1, 2, 3]
range(1, 7)
[1, 2] + [3]
['a', 'b', 'c'] + ['c', 'd', 'e']
1 in [1, 2, 3]
7 in [1, 2, 3]
len([1, 2, 3, 4, 5, 10, 20])
max([1, 5, 2, 100, 75, 3])
"""
Explanation: Next, let's look at some lists.
End of explanation
"""
a = 3
print a
a = 2
a
a = 7
a + 1
b = 2
a * b
c = 'Python'
c
d = c + ' is cool'
d
a = [1, 2, 3]
a.extend([8, 10, 12])
a
a[0]
a[1]
a[-1]
a[-2]
a[2:5]
"""
Explanation: Variables
Variables will hold onto values you give them.
End of explanation
"""
a = 5
b = 10
if a < b:
print "Smaller"
else:
print "Larger"
a = 500
b = 100
if a < b:
print "Smaller"
else:
print "Larger"
a = 10
b = 100
if a == b:
print "Same"
else:
print "Different"
a = 10
b = 100
if a != b:
print "Not equal"
else:
print "Same"
"""
Explanation: Making Decisions
End of explanation
"""
a = [1, 2, 3, 4, 5, 6]
for i in a:
print i
for i in range(0, 20, 2):
print i
a = 5
while a > 0:
print a
a = a - 1
"""
Explanation: Going Around in Circles
End of explanation
"""
computational-class/cjc2016 | code/09.09-machine-learning-summary.ipynb | mit
%matplotlib inline
import sklearn
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale
# boston data
boston = datasets.load_boston()
y = boston.target
X = boston.data
boston['feature_names']
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('boston.target ~ boston.data', data=boston).fit()
print(results.summary())
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
lm.intercept_, lm.coef_, lm.score(boston.data, y)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
"""
Explanation: Computational Communication and Machine Learning
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
1. Supervised learning
How it works:
- The algorithm involves a target or outcome variable (the dependent variable).
- This variable is predicted from a known set of predictor variables (the independent variables).
- Using these variables, we generate a function that maps input values to the desired output values.
- Training continues until the model reaches the desired accuracy on the training data.
- Examples of supervised learning: regression, decision trees, random forests, k-nearest neighbors, logistic regression, etc.
2. Unsupervised learning
How it works:
- There is no target or outcome variable to predict or estimate.
- The algorithm is used to cluster the population into different groups.
- This kind of analysis is widely used to segment customers into groups that receive different interventions.
- Examples of unsupervised learning: association rules and the k-means algorithm.
3. Reinforcement learning
How it works:
- The algorithm trains a machine to make decisions.
- It works like this: the machine is placed in an environment where it can train itself through repeated trial and error.
- The machine learns from past experience and tries to apply its most thorough knowledge to make accurate decisions.
- An example of reinforcement learning is the Markov decision process; AlphaGo is a well-known application.
Chess. Here, the agent decides upon a series of moves depending on the state of the board (the environment), and the
reward can be defined as win or lose at the end of the game:
<img src = './img/mlprocess.png' width = 800>
Commonly used algorithms:
- Linear regression
- Logistic regression
- Decision trees
- SVM
- Naive Bayes
- k-nearest neighbors
- k-means
- Random forests
- Dimensionality reduction
- Gradient Boosting and AdaBoost
Linear regression with scikit-learn
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Linear regression
Typically used to estimate real-valued continuous quantities (house prices, number of calls, total sales, etc.).
It establishes the relationship between the independent variable X and the dependent variable Y by fitting a best-fit straight line.
This best-fit line is called the regression line and is represented by the linear equation $Y = \beta * X + C$.
The coefficients $\beta$ and $C$ can be obtained by the method of least squares.
End of explanation
"""
boston.data
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.2,
random_state=42)
regr = linear_model.LinearRegression()
lm = regr.fit(Xs_train, y_train)
lm.intercept_, lm.coef_, lm.score(Xs_train, y_train)
predicted = regr.predict(Xs_test)
fig, ax = plt.subplots()
ax.scatter(y_test, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
"""
Explanation: Training and test sets
End of explanation
"""
from sklearn.model_selection import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, boston.data , boston.target, cv = 3)
scores.mean()
scores = [cross_val_score(regr, boston.data,\
boston.target,\
cv = int(i)).mean() \
for i in range(3, 50)]
plt.plot(range(3, 50), scores,'r-o')
plt.show()
data_X_scale = scale(boston.data)
scores = cross_val_score(regr,data_X_scale, boston.target,\
cv = 7)
scores.mean()
"""
Explanation: Cross-validation
In k-fold CV, the training set is split into k smaller sets. The following procedure is followed for each of the k "folds":
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
End of explanation
"""
import pandas as pd
df = pd.read_csv('../data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
# This function is defined to show the reader that
# drawing different random samples can yield completely different results.
def randomSplit(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append([dataX[k]])
dataY_test.append(dataY[k])
else:
dataX_train.append([dataX[k]])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
import numpy as np
# Use only one feature
data_X = df.reply
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1),
np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(data_X_test, data_y_test))
data_X_train[:3]
y_true, y_pred = data_y_test, regr.predict(data_X_test)
plt.scatter(y_pred, y_true, color='black')
plt.show()
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
'Coefficients: \n', regr.coef_
# The mean square error
"Residual sum of squares: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
df.click_log = [[np.log(df.click[i]+1)] for i in range(len(df))]
df.reply_log = [[np.log(df.reply[i]+1)] for i in range(len(df))]
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(df.click_log, df.reply_log,test_size=0.2, random_state=0)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % regr.score(Xs_test, y_test)
# Plot outputs
plt.scatter(Xs_test, y_test, color='black')
plt.plot(Xs_test, regr.predict(Xs_test), color='blue', linewidth=3)
plt.show()
from sklearn.model_selection import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log, \
df.reply_log, cv = 3)
scores.mean()
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log,
df.reply_log, cv =5)
scores.mean()
"""
Explanation: Using the Tianya BBS data
End of explanation
"""
repost = []
for i in df.title:
if u'转载' in i:
repost.append(1)
else:
repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X,df.repost)
model.score(data_X,df.repost)
def randomSplitLogistic(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append(dataX[k])
dataY_test.append(dataY[k])
else:
dataX_train.append(dataX[k])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % log_regr.score(data_X_test, data_y_test)
y_true, y_pred = data_y_test, log_regr.predict(data_X_test)
y_true, y_pred
print(classification_report(y_true, y_pred))
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(data_X, df.repost, test_size=0.2, random_state=42)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % log_regr.score(Xs_test, y_test)
print('Logistic score for test set: %f' % log_regr.score(Xs_test, y_test))
print('Logistic score for training set: %f' % log_regr.score(Xs_train, y_train))
y_true, y_pred = y_test, log_regr.predict(Xs_test)
print(classification_report(y_true, y_pred))
logre = LogisticRegression()
scores = cross_val_score(logre, data_X, df.repost, cv = 3)
scores.mean()
logre = LogisticRegression()
data_X_scale = scale(data_X)
# The importance of preprocessing in data science and the machine learning pipeline I:
scores = cross_val_score(logre, data_X_scale, df.repost, cv = 3)
scores.mean()
"""
Explanation: Logistic regression with scikit-learn
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Logistic regression is a classification algorithm, not a regression algorithm.
It estimates discrete values (e.g., binary values 0 or 1, yes/no, true/false) from a known set of independent variables.
Simply put, it estimates the probability that an event occurs by fitting the data to a logistic function,
which is why it is called logistic regression. Since it estimates a probability, its output always lies between 0 and 1 (as expected).
$$odds = \frac{p}{1-p} = \frac{\text{probability of event occurrence}}{\text{probability of no event occurrence}}$$
$$\ln(odds) = \ln\left(\frac{p}{1-p}\right)$$
$$\mathrm{logit}(x) = \ln\left(\frac{p}{1-p}\right) = b_0 + b_1X_1 + b_2X_2 + \cdots + b_kX_k$$
End of explanation
"""
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
"""
Explanation: Naive Bayes prediction with scikit-learn
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Naive Bayes algorithm
It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Why is it known as 'Naive'? For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this fruit is an apple.
Bayes' theorem provides a way to compute the posterior probability $P(c|x)$ from $P(c)$, $P(x)$, and $P(x|c)$:
$$
p(c|x) = \frac{p(x|c) p(c)}{p(x)}
$$
P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
P(c) is the prior probability of class.
P(x|c) is the likelihood which is the probability of predictor given class.
P(x) is the prior probability of predictor.
Step 1: Convert the data set into a frequency table
Step 2: Create Likelihood table by finding the probabilities like:
- P(Overcast) = 0.29, P(Rainy) = 0.36, P(Sunny) = 0.36
- P(Playing) = 0.64, P(Rest) = 0.36
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using the method of posterior probability discussed above.
$P(Yes | Sunny) = \frac{P( Sunny | Yes) * P(Yes) } {P (Sunny)}$
Here we have P (Sunny |Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P( Yes)= 9/14 = 0.64
Now, $P (Yes | Sunny) = \frac{0.33 * 0.64}{0.36} = 0.60$, which has the higher probability.
$P(No | Sunny) = \frac{P( Sunny | No) * P(No) } {P (Sunny)}$
Here we have P (Sunny |No) = 2/5 = 0.4, P(Sunny) = 5/14 = 0.36, P( No)= 5/14 = 0.36
Now, $P (No | Sunny) = \frac{0.4 * 0.36}{0.36} = 0.4$, which has the lower probability.
End of explanation
"""
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
predicted
"""
Explanation: The naive_bayes module provides three classifiers:
- naive_bayes.GaussianNB: Gaussian Naive Bayes
- naive_bayes.MultinomialNB([alpha, ...]): Naive Bayes classifier for multinomial models
- naive_bayes.BernoulliNB([alpha, binarize, ...]): Naive Bayes classifier for multivariate Bernoulli models
End of explanation
"""
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
predicted
model.score(data_X_test, data_y_test)
from sklearn.model_selection import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click],\
df.reply, cv = 7)
scores.mean()
"""
Explanation: Cross-validation
In k-fold CV, the training set is split into k smaller sets. The following procedure is followed for each of the k "folds":
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
End of explanation
"""
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
"""
Explanation: Decision trees with scikit-learn
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Decision trees
This supervised learning algorithm is mostly used for classification problems.
It works for both categorical and continuous dependent variables.
In this algorithm, we split the population into two or more homogeneous sets.
The splits are made on the most significant attributes (independent variables) so that the resulting groups are as distinct as possible.
In the figure above, you can see that the population is split into four different groups based on multiple attributes, in order to judge "whether they will play".
To split the population into different groups, a number of techniques are used, such as Gini, information gain, chi-square, and entropy.
End of explanation
"""
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
score = cross_val_score(model, data_X, df.repost,
cv = i)
scores.append(score.mean() ) # Try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
"""
Explanation: Support vector machines (SVM) with scikit-learn
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
Plot each data point in an N-dimensional space (where N is the total number of features), with the value of each feature being the value of one coordinate.
For example, if we only had two features, height and hair length, we would plot these two variables in a two-dimensional space, where each point has two coordinates (these coordinates are known as support vectors).
Now, we find a line that separates the two groups of data.
The line is chosen so that the distances from it to the closest point in each of the two groups are jointly maximized.
The black line in the example above optimally splits the data into two groups:
the closest points in the two groups (points A and B in the figure) are at the optimal distance from the black line.
This line is our separator. Depending on which side of the line a test point falls, we assign it to the corresponding class.
End of explanation
"""
import numpy as np
from sklearn import tree
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
train = pd.read_csv('../data/tatanic_train.csv',
sep = ",")
train.head()
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Fare"] = train["Fare"].fillna(train["Fare"].median())
#Convert the male and female groups to integer form
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
#Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
#Create the target and features numpy arrays: target, features_one
target = train['Survived'].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
#Fit your first decision tree: my_tree_one
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
#Look at the importance of the included features and print the score
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
test = pd.read_csv('../data/tatanic_test.csv', sep = ",")
# Impute the missing value with the median
test.Fare[152] = test.Fare.median()
test["Age"] = test["Age"].fillna(test["Age"].median())
#Convert the male and female groups to integer form
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
#Impute the Embarked variable
test["Embarked"] = test["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
# Extract the features from the test set: Pclass, Sex, Age, and Fare.
test_features = test[["Pclass","Sex", "Age", "Fare"]].values
# Make your prediction using the test set
my_prediction = my_tree_one.predict(test_features)
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
PassengerId =np.array(test['PassengerId']).astype(int)
my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"])
my_solution[:3]
# Check that your data frame has 418 entries
my_solution.shape
# Write your solution to a csv file with the name my_solution.csv
my_solution.to_csv("../data/tatanic_solution_one.csv",
index_label = ["PassengerId"])
# Create a new array with the added features: features_two
features_two = train[["Pclass","Age","Sex","Fare",\
"SibSp", "Parch", "Embarked"]].values
#Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth,
min_samples_split = min_samples_split,
random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
#Print the score of the new decison tree
print(my_tree_two.score(features_two, target))
# create a new train set with the new variable
train_two = train
train_two['family_size'] = train.SibSp + train.Parch + 1
# Create a new decision tree my_tree_three
features_three = train[["Pclass", "Sex", "Age", \
"Fare", "SibSp", "Parch", "family_size"]].values
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
#Import the `RandomForestClassifier`
from sklearn.ensemble import RandomForestClassifier
#We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
#Building the Forest: my_forest
n_estimators = 100
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2,
n_estimators = n_estimators, random_state = 1)
my_forest = forest.fit(features_forest, target)
#Print the score of the random forest
print(my_forest.score(features_forest, target))
#Compute predictions and print the length of the prediction vector:test_features, pred_forest
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(test_features))
print(pred_forest[:3])
#Request and print the `.feature_importances_` attribute
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
#Compute and print the mean accuracy score for both models
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_two, target))
"""
Explanation: Titanic data analysis
王成军
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
End of explanation
"""
tensorflow/cloud | g3doc/tutorials/google_cloud_project_setup_instructions.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Cloud Authors.
End of explanation
"""
GCP_PROJECT_ID = 'YOUR_PROJECT_ID'
PROJECT_NUMBER = 'YOUR_PROJECT_NUMBER'
"""
Explanation: Setting Up and Connecting To Your Google Cloud Account
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/cloud/tutorials/google_cloud_project_setup_instructions.ipynb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/cloud/blob/master/g3doc/tutorials/google_cloud_project_setup_instructions.ipynb""><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/cloud/blob/master/g3doc/tutorials/google_cloud_project_setup_instructions.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/cloud/tutorials/google_cloud_project_setup_instructions.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://kaggle.com/kernels/welcome?src=https://github.com/tensorflow/cloud/blob/master/g3doc/tutorials/google_cloud_project_setup_instructions.ipynb" target="blank">
<img width="90" src="https://www.kaggle.com/static/images/site-logo.png" alt="Kaggle logo">Run in Kaggle
</a>
</td>
</table>
This guide helps first-time users set up a Google Cloud Platform account, specifically with the intention of using tensorflow_cloud to easily run training at scale on Google Cloud AI Platform. TensorFlow Cloud provides APIs that allow users to easily go from debugging, training, and tuning Keras and TensorFlow code in a local or Kaggle environment to distributed training and tuning on Cloud.
1. Sign up for Google Cloud Platform
To start, go to https://cloud.google.com/ and click on "Get Started For Free". This is a two-step sign-up process where you will need to provide your name, address, and a credit card. The starter account is free, and it comes with $300 of credit that you can use. For this step you will need to provide a Google Account (i.e., your Gmail account) to sign in.
After completing the sign-up process you will be redirected to the Google Cloud Platform welcome page. Click on the "Home" tab and make a note of your Project ID and Project number (see Identifying projects).
End of explanation
"""
import sys
if "kaggle_secrets" in sys.modules:
from kaggle_secrets import UserSecretsClient
UserSecretsClient().set_gcloud_credentials(project=GCP_PROJECT_ID)
"""
Explanation: 2.Enable Google Cloud SDK and Authenticate your notebook
Now that you have set up your project, we can continue with the rest of the configuration steps directly from this notebook. There are three types of notebook; this step (step #2) is slightly different for each, while the rest of the steps (steps #3 to #6) are the same for all notebooks.
2.1. Auth for Kaggle notebooks
2.2. Auth for Colab notebook
2.3. Auth for Cloud AI Notebooks - Not supported.
2.1 Setup Auth for a Kaggle Notebook
If you are using a Kaggle notebook, you will need to repeat this step for any new notebook that you use with this project. In your notebook, click on Add-ons -> Google Cloud SDK and follow the instructions in the prompt. Then run the following command to add your auth credentials to this notebook.
End of explanation
"""
import sys
if "google.colab" in sys.modules:
from google.colab import auth
auth.authenticate_user()
"""
Explanation: 2.2 Setup Auth for a Colab Notebook
If you are using a Colab notebook, you will need to repeat this step for any new notebook that you use with this project. Run the following command to add your auth credentials to this notebook.
End of explanation
"""
!gcloud beta billing accounts list
"""
Explanation: 3. Link your billing account to your project
The next step is to set up the billing account for this project. Google Cloud creates a project for you by default, called "My First Project". We will use this default project. Use your Project ID (from step 1) to run the following commands. This will show you your Billing Account_ID; make a note of it for the next step.
End of explanation
"""
BILLING_ACCOUNT_ID = 'YOUR_BILLING_ACCOUNT_ID'
!gcloud beta billing projects link $GCP_PROJECT_ID --billing-account $BILLING_ACCOUNT_ID
"""
Explanation: Use your Billing Account_ID from above and run the following to link your billing account with your project.
Note: if you use an existing project, you may not see an Account_ID; this means you do not have the proper permissions to run the following commands. Contact your admin or create a new project.
End of explanation
"""
!gcloud services --project $GCP_PROJECT_ID enable ml.googleapis.com cloudbuild.googleapis.com
"""
Explanation: 4. Enable Required APIs for tensorflow-cloud in your project
For tensorflow_cloud we use two specific APIs: the AI Platform Training Jobs API and the Cloud Build API. Note that this is a one-time setup for this project; you do not need to rerun this command for every notebook.
End of explanation
"""
BUCKET_NAME = 'YOUR_BUCKET_NAME'
GCS_BUCKET = f'gs://{BUCKET_NAME}'
!gsutil mb -p $GCP_PROJECT_ID $GCS_BUCKET
"""
Explanation: 5. Create a Google Cloud Storage bucket
We will use this storage bucket for temporary assets as well as to save the model checkpoints. Make a note of the name of the bucket for future reference. Note that bucket names are globally unique.
End of explanation
"""
SERVICE_ACCOUNT_NAME ='YOUR_SERVICE_ACCOUNT_NAME'
SERVICE_ACCOUNT_EMAIL = f'{SERVICE_ACCOUNT_NAME}@{GCP_PROJECT_ID}.iam.gserviceaccount.com'
!gcloud iam --project $GCP_PROJECT_ID service-accounts create $SERVICE_ACCOUNT_NAME
!gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role=roles/editor
"""
Explanation: Create a service account for HP Tuning jobs
This step is required to use HP Tuning on Google Cloud using CloudTuner.
To create a service account and give it project editor access, run the following commands and make a note of your service account name.
End of explanation
"""
DEFAULT_AI_PLATFORM_SERVICE_ACCOUNT = f'service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com'
!gcloud iam --project $GCP_PROJECT_ID service-accounts add-iam-policy-binding \
--role=roles/iam.serviceAccountAdmin \
--member=serviceAccount:$DEFAULT_AI_PLATFORM_SERVICE_ACCOUNT \
$SERVICE_ACCOUNT_EMAIL
"""
Explanation: The default AI Platform service account is identified by an email address with the format service-PROJECT_NUMBER@cloud-ml.google.com.iam.gserviceaccount.com. Using your Project number from step one, we construct the service account email and grant the default AI Platform service account admin role (roles/iam.serviceAccountAdmin) on your new service account.
End of explanation
"""
print(f"Your GCP_PROJECT_ID is: {GCP_PROJECT_ID}")
print(f"Your SERVICE_ACCOUNT_NAME is: {SERVICE_ACCOUNT_NAME}")
print(f"Your BUCKET_NAME is: {BUCKET_NAME}")
"""
Explanation: Congratulations !
You are now ready to run tensorflow-cloud. Note that these steps only need to be run one time. Once you have your project set up, you can reuse the same project and bucket configuration for future runs. For any new notebooks, you will need to repeat step two to add your Google Cloud auth credentials.
Make a note of the following values as they are needed to run tensorflow-cloud.
End of explanation
"""
|
google/eng-edu | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | apache-2.0 | #@title Run on TensorFlow 2.x
%tensorflow_version 2.x
"""
Explanation: Linear Regression with a Real Dataset
This Colab uses a real dataset to predict the prices of houses in California.
Learning Objectives:
After doing this Colab, you'll know how to do the following:
Read a .csv file into a pandas DataFrame.
Examine a dataset.
Experiment with different features in building a model.
Tune the model's hyperparameters.
The Dataset
The dataset for this exercise is based on 1990 census data from California. The dataset is old but still provides a great opportunity to learn about machine learning programming.
Use the right version of TensorFlow
The following hidden code cell ensures that the Colab will run on TensorFlow 2.X.
End of explanation
"""
#@title Import relevant modules
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format
"""
Explanation: Import relevant modules
The following hidden code cell imports the necessary code to run the code in the rest of this Colaboratory.
End of explanation
"""
# Import the dataset.
training_df = pd.read_csv(filepath_or_buffer="https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
# Scale the label.
training_df["median_house_value"] /= 1000.0
# Print the first rows of the pandas DataFrame.
training_df.head()
"""
Explanation: The dataset
Datasets are often stored on disk or at a URL in .csv format.
A well-formed .csv file contains column names in the first row, followed by many rows of data. A comma divides each value in each row. For example, here are the first five rows of the .csv file holding the California Housing Dataset:
"longitude","latitude","housing_median_age","total_rooms","total_bedrooms","population","households","median_income","median_house_value"
-114.310000,34.190000,15.000000,5612.000000,1283.000000,1015.000000,472.000000,1.493600,66900.000000
-114.470000,34.400000,19.000000,7650.000000,1901.000000,1129.000000,463.000000,1.820000,80100.000000
-114.560000,33.690000,17.000000,720.000000,174.000000,333.000000,117.000000,1.650900,85700.000000
-114.570000,33.640000,14.000000,1501.000000,337.000000,515.000000,226.000000,3.191700,73400.000000
Load the .csv file into a pandas DataFrame
This Colab, like many machine learning programs, gathers the .csv file and stores the data in memory as a pandas Dataframe. pandas is an open source Python library. The primary datatype in pandas is a DataFrame. You can imagine a pandas DataFrame as a spreadsheet in which each row is identified by a number and each column by a name. pandas is itself built on another open source Python library called NumPy. If you aren't familiar with these technologies, please view these two quick tutorials:
NumPy
Pandas DataFrames
The following code cell imports the .csv file into a pandas DataFrame and scales the values in the label (median_house_value):
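As a self-contained warm-up, here is a toy version of the same load-and-scale step on two made-up rows (not the real housing file), using the same column format:

```python
import io
import pandas as pd

# Two made-up rows in the same column format as the housing .csv.
csv_text = (
    "longitude,latitude,median_house_value\n"
    "-114.31,34.19,66900.0\n"
    "-114.47,34.40,80100.0\n"
)
df = pd.read_csv(io.StringIO(csv_text))
df["median_house_value"] /= 1000.0   # same label scaling as the cell above
print(df["median_house_value"].tolist())
```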
End of explanation
"""
# Get statistics on the dataset.
training_df.describe()
"""
Explanation: Scaling median_house_value puts the value of each house in units of thousands. Scaling will keep loss values and learning rates in a friendlier range.
Although scaling a label is usually not essential, scaling features in a multi-feature model usually is essential.
Examine the dataset
A large part of most machine learning projects is getting to know your data. The pandas API provides a describe function that outputs the following statistics about every column in the DataFrame:
count, which is the number of rows in that column. Ideally, count contains the same value for every column.
mean and std, which contain the mean and standard deviation of the values in each column.
min and max, which contain the lowest and highest values in each column.
25%, 50%, 75%, which contain various quantiles.
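As a tiny illustration (five synthetic values, not the housing data), the same statistics can be checked by hand on a single column:

```python
import pandas as pd

# Five synthetic values; the large max relative to the 75% quantile is the
# kind of anomaly worth noticing in describe() output.
s = pd.Series([1.0, 2.0, 3.0, 4.0, 100.0])
stats = s.describe()
print(stats["count"])  # 5.0
print(stats["mean"])   # 22.0
print(stats["50%"])    # 3.0 (the median)
print(stats["max"])    # 100.0
```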
End of explanation
"""
#@title Double-click to view a possible answer.
# The maximum value (max) of several columns seems very
# high compared to the other quantiles. For example,
# examine the total_rooms column. Given the quantile
# values (25%, 50%, and 75%), you might expect the
# max value of total_rooms to be approximately
# 5,000 or possibly 10,000. However, the max value
# is actually 37,937.
# When you see anomalies in a column, become more careful
# about using that column as a feature. That said,
# anomalies in potential features sometimes mirror
# anomalies in the label, which could make the column
# be (or seem to be) a powerful feature.
# Also, as you will see later in the course, you
# might be able to represent (pre-process) raw data
# in order to make columns into useful features.
"""
Explanation: Task 1: Identify anomalies in the dataset
Do you see any anomalies (strange values) in the data?
End of explanation
"""
#@title Define the functions that build and train a model
def build_model(my_learning_rate):
"""Create and compile a simple linear regression model."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Describe the topography of the model.
# The topography of a simple linear regression model
# is a single node in a single layer.
model.add(tf.keras.layers.Dense(units=1,
input_shape=(1,)))
# Compile the model topography into code that TensorFlow can efficiently
# execute. Configure training to minimize the model's mean squared error.
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.RootMeanSquaredError()])
return model
def train_model(model, df, feature, label, epochs, batch_size):
"""Train the model by feeding it data."""
# Feed the model the feature and the label.
# The model will train for the specified number of epochs.
history = model.fit(x=df[feature],
y=df[label],
batch_size=batch_size,
epochs=epochs)
# Gather the trained model's weight and bias.
trained_weight = model.get_weights()[0]
trained_bias = model.get_weights()[1]
# The list of epochs is stored separately from the rest of history.
epochs = history.epoch
# Isolate the error for each epoch.
hist = pd.DataFrame(history.history)
# To track the progression of training, we're going to take a snapshot
# of the model's root mean squared error at each epoch.
rmse = hist["root_mean_squared_error"]
return trained_weight, trained_bias, epochs, rmse
print("Defined the build_model and train_model functions.")
"""
Explanation: Define functions that build and train a model
The following code defines two functions:
build_model(my_learning_rate), which builds a randomly-initialized model.
train_model(model, feature, label, epochs), which trains the model from the examples (feature and label) you pass.
Since you don't need to understand model building code right now, we've hidden this code cell. You may optionally double-click the following headline to see the code that builds and trains a model.
End of explanation
"""
#@title Define the plotting functions
def plot_the_model(trained_weight, trained_bias, feature, label):
"""Plot the trained model against 200 random training examples."""
# Label the axes.
plt.xlabel(feature)
plt.ylabel(label)
# Create a scatter plot from 200 random points of the dataset.
random_examples = training_df.sample(n=200)
plt.scatter(random_examples[feature], random_examples[label])
# Create a red line representing the model. The red line starts
# at coordinates (x0, y0) and ends at coordinates (x1, y1).
x0 = 0
y0 = trained_bias
x1 = 10000
y1 = trained_bias + (trained_weight * x1)
plt.plot([x0, x1], [y0, y1], c='r')
# Render the scatter plot and the red line.
plt.show()
def plot_the_loss_curve(epochs, rmse):
"""Plot a curve of loss vs. epoch."""
plt.figure()
plt.xlabel("Epoch")
plt.ylabel("Root Mean Squared Error")
plt.plot(epochs, rmse, label="Loss")
plt.legend()
plt.ylim([rmse.min()*0.97, rmse.max()])
plt.show()
print("Defined the plot_the_model and plot_the_loss_curve functions.")
"""
Explanation: Define plotting functions
The following matplotlib functions create the following plots:
a scatter plot of the feature vs. the label, and a line showing the output of the trained model
a loss curve
You may optionally double-click the headline to see the matplotlib code, but note that writing matplotlib code is not an important part of learning ML programming.
End of explanation
"""
# The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 30
batch_size = 30
# Specify the feature and the label.
my_feature = "total_rooms" # the total number of rooms on a specific city block.
my_label="median_house_value" # the median value of a house on a specific city block.
# That is, you're going to create a model that predicts house value based
# solely on total_rooms.
# Discard any pre-existing version of the model.
my_model = None
# Invoke the functions.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
print("\nThe learned weight for your model is %.4f" % weight)
print("The learned bias for your model is %.4f\n" % bias )
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse)
"""
Explanation: Call the model functions
An important part of machine learning is determining which features correlate with the label. For example, real-life home-value prediction models typically rely on hundreds of features and synthetic features. However, this model relies on only one feature. For now, you'll arbitrarily use total_rooms as that feature.
End of explanation
"""
def predict_house_values(n, feature, label):
"""Predict house values based on a feature."""
batch = training_df[feature][10000:10000 + n]
predicted_values = my_model.predict_on_batch(x=batch)
print("feature label predicted")
print(" value value value")
print(" in thousand$ in thousand$")
print("--------------------------------------")
for i in range(n):
print ("%5.0f %6.0f %15.0f" % (training_df[feature][10000 + i],
training_df[label][10000 + i],
predicted_values[i][0] ))
"""
Explanation: A certain amount of randomness plays into training a model. Consequently, you'll get different results each time you train the model. That said, given the dataset and the hyperparameters, the trained model will generally do a poor job describing the feature's relation to the label.
Use the model to make predictions
You can use the trained model to make predictions. In practice, you should make predictions on examples that are not used in training. However, for this exercise, you'll just work with a subset of the same training dataset. A later Colab exercise will explore ways to make predictions on examples not used in training.
First, run the following code to define the house prediction function:
End of explanation
"""
predict_house_values(10, my_feature, my_label)
"""
Explanation: Now, invoke the house prediction function on 10 examples:
End of explanation
"""
#@title Double-click to view the answer.
# Most of the predicted values differ significantly
# from the label value, so the trained model probably
# doesn't have much predictive power. However, the
# first 10 examples might not be representative of
# the rest of the examples.
"""
Explanation: Task 2: Judge the predictive power of the model
Look at the preceding table. How close is the predicted value to the label value? In other words, does your model accurately predict house values?
End of explanation
"""
my_feature = "?" # Replace the ? with population or possibly
# a different column name.
# Experiment with the hyperparameters.
learning_rate = 2
epochs = 3
batch_size = 120
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse)
predict_house_values(15, my_feature, my_label)
#@title Double-click to view a possible solution.
my_feature = "population" # Pick a feature other than "total_rooms"
# Possibly, experiment with the hyperparameters.
learning_rate = 0.05
epochs = 18
batch_size = 3
# Don't change anything below.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse)
predict_house_values(10, my_feature, my_label)
"""
Explanation: Task 3: Try a different feature
The total_rooms feature had only a little predictive power. Would a different feature have greater predictive power? Try using population as the feature instead of total_rooms.
Note: When you change features, you might also need to change the hyperparameters.
End of explanation
"""
#@title Double-click to view the answer.
# Training is not entirely deterministic, but population
# typically converges at a slightly higher RMSE than
# total_rooms. So, population appears to be about
# the same or slightly worse at making predictions
# than total_rooms.
"""
Explanation: Did population produce better predictions than total_rooms?
End of explanation
"""
# Define a synthetic feature named rooms_per_person
training_df["rooms_per_person"] = ? # write your code here.
# Don't change the next line.
my_feature = "rooms_per_person"
# Assign values to these three hyperparameters.
learning_rate = ?
epochs = ?
batch_size = ?
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_loss_curve(epochs, rmse)
predict_house_values(15, my_feature, my_label)
#@title Double-click to view a possible solution to Task 4.
# Define a synthetic feature
training_df["rooms_per_person"] = training_df["total_rooms"] / training_df["population"]
my_feature = "rooms_per_person"
# Tune the hyperparameters.
learning_rate = 0.06
epochs = 24
batch_size = 30
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, mae = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_loss_curve(epochs, mae)
predict_house_values(15, my_feature, my_label)
"""
Explanation: Task 4: Define a synthetic feature
You have determined that total_rooms and population were not useful features. That is, neither the total number of rooms in a neighborhood nor the neighborhood's population successfully predicted the median house price of that neighborhood. Perhaps though, the ratio of total_rooms to population might have some predictive power. That is, perhaps block density relates to median house value.
To explore this hypothesis, do the following:
Create a synthetic feature that's a ratio of total_rooms to population. (If you are new to pandas DataFrames, please study the Pandas DataFrame Ultraquick Tutorial.)
Tune the three hyperparameters.
Determine whether this synthetic feature produces
a lower loss value than any of the single features you
tried earlier in this exercise.
End of explanation
"""
# Generate a correlation matrix.
training_df.corr()
"""
Explanation: Based on the loss values, this synthetic feature produces a better model than the individual features you tried in Task 2 and Task 3. However, the model still isn't creating great predictions.
Task 5. Find feature(s) whose raw values correlate with the label
So far, we've relied on trial-and-error to identify possible features for the model. Let's rely on statistics instead.
A correlation matrix indicates how each attribute's raw values relate to the other attributes' raw values. Correlation values have the following meanings:
1.0: perfect positive correlation; that is, when one attribute rises, the other attribute rises.
-1.0: perfect negative correlation; that is, when one attribute rises, the other attribute falls.
0.0: no correlation; the two columns are not linearly related.
In general, the higher the absolute value of a correlation value, the greater its predictive power. For example, a correlation value of -0.8 implies far more predictive power than a correlation of -0.2.
The following code cell generates the correlation matrix for attributes of the California Housing Dataset:
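Before looking at the real matrix, the meaning of these values can be checked on a tiny synthetic DataFrame (the numbers below are made up for illustration, not housing data):

```python
import pandas as pd

# A feature that rises linearly with the label correlates at +1;
# an unrelated, noisy one correlates near 0.
df = pd.DataFrame({
    "label":   [1.0, 2.0, 3.0, 4.0, 5.0],
    "good":    [2.0, 4.0, 6.0, 8.0, 10.0],   # perfectly linear in the label
    "useless": [5.0, 1.0, 4.0, 2.0, 3.0],
})
corr = df.corr()
print(corr.loc["good", "label"])     # 1.0
print(corr.loc["useless", "label"])  # about -0.3, weak predictive power
```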
End of explanation
"""
#@title Double-click here for the solution to Task 5
# The `median_income` correlates 0.7 with the label
# (median_house_value), so median_income` might be a
# good feature. The other seven potential features
# all have a correlation relatively close to 0.
# If time permits, try median_income as the feature
# and see whether the model improves.
"""
Explanation: The correlation matrix shows nine potential features (including a synthetic
feature) and one label (median_house_value). A strong negative correlation or strong positive correlation with the label suggests a potentially good feature.
Your Task: Determine which of the nine potential features appears to be the best candidate for a feature?
End of explanation
"""
|
plissonf/DeepPlay | notebooks/web_scraping.ipynb | mit | from bs4 import BeautifulSoup
from lxml import html
import requests as rq
import pandas as pd
import re
import logging
"""
Explanation: AIDA Freediving Records
The project DeepPlay aims to explore and display the world of competitive freediving using web scraping, machine learning and data visualizations (e.g. D3.js). The main source of information is the official website of AIDA, the International Association for the Development of Apnea. The present work was created within 10 days, including exploratory data analysis.
1- Scraping the data from the website
2- Data preparation / cleaning / extension (separate name / country, get GPS locations, get gender...)
3- Early data exploration (see exploratory_data_analysis.html)
-
Load modules
End of explanation
"""
def get_discipline_value(key):
disc = {'STA': 8 ,
'DYN': 6,
'DNF': 7,
'CWT': 3,
'CNF': 4,
'FIM': 5
}
if key in disc.keys():
value = disc[key]
discipline_url = '{}{}'.format('&disciplineId=', value)
return discipline_url
else:
logging.warning('Check your spelling. ' + key + ' is not a freediving discipline')
get_discipline_value('NFT')
"""
Explanation: -
The method get_discipline_value(key) selects one of the 6 disciplines (dictionary keys: STA, DYN, DNF, CWT, CNF, FIM) and appends its corresponding value (id) to a new url fragment, discipline_url.
If the discipline is misspelled or nonexistent, get_discipline_value logs the warning "Check your spelling. ... is not a freediving discipline" and returns None.
The method is called within the scraper( ) method below to build the html page urls for each discipline.
End of explanation
"""
def cleanser(a_list):
df = pd.DataFrame(a_list)
df.columns = ['Ranking', 'Name', 'Results', 'Announced', 'Points', 'Penalties', 'Date', 'Place']
df['Ranking'] = df['Ranking'].str.replace('.', '', regex=False)  # literal dot, not a regex wildcard
df['Country'] = df['Name'].str.extract('.*\((.*)\).*', expand=True)
df['Name'] = df['Name'].str.replace(r"\(.*\)", "", regex=True)
df['Results'] = df['Results'].str.replace('m', '')
df['Date'] = pd.to_datetime(df['Date'])
df = df.drop_duplicates(['Name', 'Results', 'Announced', 'Points', 'Penalties', 'Date', 'Place', 'Country'])
return df
"""
Explanation: -
The method cleanser( ) converts the list of lists named 'data', which collects the rows scraped from all html pages of a discipline, into a cleaned and labelled dataframe df. The method uses regular expressions. It will also be called within the scraper( ) method.
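A minimal sketch of the two regex steps applied to the Name column; the regexes mirror those in cleanser( ), but the names below are made up, not real AIDA entries:

```python
import pandas as pd

# Made-up "Name (COUNTRY)" strings in the format the rankings table uses.
names = pd.Series(["Jane Doe (FRA)", "John Smith (NZL)"])

country = names.str.extract(r'.*\((.*)\).*', expand=True)[0]     # pull out the country code
clean = names.str.replace(r"\(.*\)", "", regex=True).str.strip() # drop "(XXX)" from the name

print(country.tolist())  # ['FRA', 'NZL']
print(clean.tolist())    # ['Jane Doe', 'John Smith']
```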
End of explanation
"""
def scraper(key):
#Obtain html code for url and Parse the page
base_url = 'https://www.aidainternational.org/Ranking/Rankings?page='
url = '{}1{}'.format(base_url, get_discipline_value(key))
page = rq.get(url)
soup = BeautifulSoup(page.content, "lxml")
#Use regex to identify the maximum number of pages for the discipline of interest
page_count = soup.findAll(text=re.compile(r"Page .+ of .+"))
max_pages = str(page_count).split(' ')[3].split('\\')[0]
total_obs = int(max_pages)*20
data = []
for p in range(1, int(max_pages)+1):
#For each page, create corresponding url, request the library, obtain html code and parse the page
url = '{}{}{}'.format(base_url, p, get_discipline_value(key))
#The break plays the role of safety guard if dictionary key is wrong (not spelled properly or non-existent) then the request
#for library is not executed (and not going through the for loop to generate the data), an empty dataframe is saved
if url == '{}{}None'.format(base_url, p):
break
else:
new_page = rq.get(url)
new_soup = BeautifulSoup(new_page.content, "lxml")
#For each page, each parsed page is saved into the list named "data"
rows = new_soup.table.tbody.findAll('tr')
for row in rows:
cols = row.find_all('td')
cols = [ele.text.strip() for ele in cols]
data.append([ele for ele in cols if ele])
#Results from list "data" are cleaned using "cleanser" method and saved in a dataframe clean_df
clean_df = cleanser(data)
pd.set_option('max_rows', int(total_obs))
pd.set_option('expand_frame_repr', True)
#Dataframe df is saved in file results_key.csv to access results offline
filename = '/Users/fabienplisson/Desktop/Github_shares/DeepPlay/deepplay/data/cleaned/results_{}.csv'.format(key)
clean_df.to_csv(filename, encoding ='utf-8')
logging.warning('Finished!')
#with open(filename,'a') as f:
#f.write(clean_df.encode('uft-8'))
#f.closed
scraper('DYN')
"""
Explanation: -
The method scraper( ) crawls through an entire freediving discipline, identifies how many pages it consists of (max_pages), obtains the html code from every url and saves it into a list of lists (data). The latter is saved into a cleaned dataframe using cleanser( ), ready for data analysis.
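The page-count parsing inside scraper( ) can be illustrated offline with a made-up footer string (no network request involved):

```python
import re

# The ranking pages show a footer like "Page 1 of N"; scraper() reads N
# to know how many pages to crawl. This footer string is made up.
footer = "Page 1 of 57"
max_pages = int(re.search(r"Page \d+ of (\d+)", footer).group(1))
total_obs = max_pages * 20   # 20 rows per page, as scraper() assumes
print(max_pages, total_obs)  # 57 1140
```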
End of explanation
"""
|
keras-team/keras-io | examples/vision/ipynb/video_classification.ipynb | apache-2.0 | !pip install -q git+https://github.com/tensorflow/docs
"""
Explanation: Video Classification with a CNN-RNN Architecture
Author: Sayak Paul<br>
Date created: 2021/05/28<br>
Last modified: 2021/06/05<br>
Description: Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset.
This example demonstrates video classification, an important use-case with
applications in recommendations, security, and so on.
We will be using the UCF101 dataset
to build our video classifier. The dataset consists of videos categorized into different
actions, like cricket shot, punching, biking, etc. This dataset is commonly used to
build action recognizers, which are an application of video classification.
A video consists of an ordered sequence of frames. Each frame contains spatial
information, and the sequence of those frames contains temporal information. To model
both of these aspects, we use a hybrid architecture that consists of convolutions
(for spatial processing) as well as recurrent layers (for temporal processing).
Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural
Network (RNN) consisting of GRU layers.
This kind of hybrid architecture is popularly known as a CNN-RNN.
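The temporal half of the idea can be sketched without TensorFlow: a recurrent layer consumes one per-frame feature vector at a time while carrying a hidden state. The toy below uses a plain (vanilla) RNN cell rather than a GRU, with random stand-in features and tiny dimensions chosen only for readability:

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, H = 4, 8, 3                    # frames, feature dim, hidden units (toy sizes)
x = rng.normal(size=(T, F))          # stand-in for per-frame CNN features
Wx = rng.normal(size=(F, H)) * 0.1   # input-to-hidden weights
Wh = rng.normal(size=(H, H)) * 0.1   # hidden-to-hidden weights

h = np.zeros(H)
for t in range(T):                   # temporal processing: one step per frame
    h = np.tanh(x[t] @ Wx + h @ Wh)

print(h.shape)  # (3,) -- a fixed-size summary of the whole frame sequence
```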
This example requires TensorFlow 2.5 or higher, as well as TensorFlow Docs, which can be
installed using the following command:
End of explanation
"""
!wget -q https://git.io/JGc31 -O ucf101_top5.tar.gz
!tar xf ucf101_top5.tar.gz
"""
Explanation: Data collection
In order to keep the runtime of this example relatively short, we will be using a
subsampled version of the original UCF101 dataset. You can refer to
this notebook
to know how the subsampling was done.
End of explanation
"""
from tensorflow_docs.vis import embed
from tensorflow import keras
from imutils import paths
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
import numpy as np
import imageio
import cv2
import os
"""
Explanation: Setup
End of explanation
"""
IMG_SIZE = 224
BATCH_SIZE = 64
EPOCHS = 10
MAX_SEQ_LENGTH = 20
NUM_FEATURES = 2048
"""
Explanation: Define hyperparameters
End of explanation
"""
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
print(f"Total videos for training: {len(train_df)}")
print(f"Total videos for testing: {len(test_df)}")
train_df.sample(10)
"""
Explanation: Data preparation
End of explanation
"""
# The following two methods are taken from this tutorial:
# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y : start_y + min_dim, start_x : start_x + min_dim]
def load_video(path, max_frames=0, resize=(IMG_SIZE, IMG_SIZE)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames)
"""
Explanation: One of the many challenges of training video classifiers is figuring out a way to feed
the videos to a network. This blog post
discusses five such methods. Since a video is an ordered sequence of frames, we could
just extract the frames and put them in a 3D tensor. But the number of frames may differ
from video to video, which would prevent us from stacking them into batches
(unless we use padding). As an alternative, we can save video frames at a fixed
interval until a maximum frame count is reached. In this example we will do
the following:
Capture the frames of a video.
Extract frames from the videos until a maximum frame count is reached.
In cases where a video's frame count is less than the maximum frame count, we
will pad the video with zeros.
Note that this workflow is identical to problems involving text sequences. Videos in the UCF101 dataset are known
to not contain extreme variations in objects and actions across frames. Because of this,
it may be okay to only consider a few frames for the learning task. But this approach may
not generalize well to other video classification problems. We will be using
OpenCV's VideoCapture() method
to read frames from videos.
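The padding-with-zeros and masking idea described above can be sketched with plain NumPy. The "video" below is a toy 3-frame array, and the real 2048-dimensional features are shrunk to 2 for readability:

```python
import numpy as np

max_seq_length = 5                 # 20 in this example's hyperparameters; 5 for brevity
frames = np.ones((3, 2))           # a short "video": 3 frames, 2 features each

padded = np.zeros((max_seq_length, 2))
padded[: len(frames)] = frames     # copy real frames, leave zeros as padding
mask = np.arange(max_seq_length) < len(frames)   # True = real frame, False = padding

print(padded[:, 0])  # [1. 1. 1. 0. 0.]
print(mask)          # [ True  True  True False False]
```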
End of explanation
"""
def build_feature_extractor():
feature_extractor = keras.applications.InceptionV3(
weights="imagenet",
include_top=False,
pooling="avg",
input_shape=(IMG_SIZE, IMG_SIZE, 3),
)
preprocess_input = keras.applications.inception_v3.preprocess_input
inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3))
preprocessed = preprocess_input(inputs)
outputs = feature_extractor(preprocessed)
return keras.Model(inputs, outputs, name="feature_extractor")
feature_extractor = build_feature_extractor()
"""
Explanation: We can use a pre-trained network to extract meaningful features from the extracted
frames. The Keras Applications module provides
a number of state-of-the-art models pre-trained on the ImageNet-1k dataset.
We will be using the InceptionV3 model for this purpose.
End of explanation
"""
label_processor = keras.layers.StringLookup(
num_oov_indices=0, vocabulary=np.unique(train_df["tag"])
)
print(label_processor.get_vocabulary())
"""
Explanation: The labels of the videos are strings. Neural networks do not understand string values,
so they must be converted to some numerical form before they are fed to the model. Here
we will use the StringLookup
layer to encode the class labels as integers.
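With num_oov_indices=0 and a vocabulary built from np.unique(...), this encoding behaves roughly like the simple NumPy mapping below (toy labels, not the full UCF101 vocabulary):

```python
import numpy as np

labels = np.array(["Punch", "CricketShot", "Punch", "TennisSwing"])
vocab = np.unique(labels)                          # sorted unique class names
lookup = {name: i for i, name in enumerate(vocab)} # string -> integer index
encoded = np.array([lookup[name] for name in labels])

print(vocab.tolist())    # ['CricketShot', 'Punch', 'TennisSwing']
print(encoded.tolist())  # [1, 0, 1, 2]
```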
End of explanation
"""
def prepare_all_videos(df, root_dir):
num_samples = len(df)
video_paths = df["video_name"].values.tolist()
labels = df["tag"].values
labels = label_processor(labels[..., None]).numpy()
# `frame_masks` and `frame_features` are what we will feed to our sequence model.
# `frame_masks` will contain a bunch of booleans denoting if a timestep is
# masked with padding or not.
frame_masks = np.zeros(shape=(num_samples, MAX_SEQ_LENGTH), dtype="bool")
frame_features = np.zeros(
shape=(num_samples, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32"
)
# For each video.
for idx, path in enumerate(video_paths):
# Gather all its frames and add a batch dimension.
frames = load_video(os.path.join(root_dir, path))
frames = frames[None, ...]
# Initialize placeholders to store the masks and features of the current video.
temp_frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype="bool")
temp_frame_features = np.zeros(
shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32"
)
# Extract features from the frames of the current video.
for i, batch in enumerate(frames):
video_length = batch.shape[0]
length = min(MAX_SEQ_LENGTH, video_length)
for j in range(length):
temp_frame_features[i, j, :] = feature_extractor.predict(
batch[None, j, :]
)
temp_frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked
frame_features[idx,] = temp_frame_features.squeeze()
frame_masks[idx,] = temp_frame_mask.squeeze()
return (frame_features, frame_masks), labels
train_data, train_labels = prepare_all_videos(train_df, "train")
test_data, test_labels = prepare_all_videos(test_df, "test")
print(f"Frame features in train set: {train_data[0].shape}")
print(f"Frame masks in train set: {train_data[1].shape}")
"""
Explanation: Finally, we can put all the pieces together to create our data processing utility.
End of explanation
"""
# Utility for our sequence model.
def get_sequence_model():
class_vocab = label_processor.get_vocabulary()
frame_features_input = keras.Input((MAX_SEQ_LENGTH, NUM_FEATURES))
mask_input = keras.Input((MAX_SEQ_LENGTH,), dtype="bool")
# Refer to the following tutorial to understand the significance of using `mask`:
# https://keras.io/api/layers/recurrent_layers/gru/
x = keras.layers.GRU(16, return_sequences=True)(
frame_features_input, mask=mask_input
)
x = keras.layers.GRU(8)(x)
x = keras.layers.Dropout(0.4)(x)
x = keras.layers.Dense(8, activation="relu")(x)
output = keras.layers.Dense(len(class_vocab), activation="softmax")(x)
rnn_model = keras.Model([frame_features_input, mask_input], output)
rnn_model.compile(
loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"]
)
return rnn_model
# Utility for running experiments.
def run_experiment():
filepath = "/tmp/video_classifier"
checkpoint = keras.callbacks.ModelCheckpoint(
filepath, save_weights_only=True, save_best_only=True, verbose=1
)
seq_model = get_sequence_model()
history = seq_model.fit(
[train_data[0], train_data[1]],
train_labels,
validation_split=0.3,
epochs=EPOCHS,
callbacks=[checkpoint],
)
seq_model.load_weights(filepath)
_, accuracy = seq_model.evaluate([test_data[0], test_data[1]], test_labels)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
return history, seq_model
_, sequence_model = run_experiment()
"""
Explanation: The above code block will take ~20 minutes to execute, depending on the machine it's run on.
The sequence model
Now, we can feed this data to a sequence model consisting of recurrent layers like GRU.
End of explanation
"""
def prepare_single_video(frames):
frames = frames[None, ...]
frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype="bool")
frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32")
for i, batch in enumerate(frames):
video_length = batch.shape[0]
length = min(MAX_SEQ_LENGTH, video_length)
for j in range(length):
frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :])
frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked
return frame_features, frame_mask
def sequence_prediction(path):
class_vocab = label_processor.get_vocabulary()
frames = load_video(os.path.join("test", path))
frame_features, frame_mask = prepare_single_video(frames)
probabilities = sequence_model.predict([frame_features, frame_mask])[0]
for i in np.argsort(probabilities)[::-1]:
print(f" {class_vocab[i]}: {probabilities[i] * 100:5.2f}%")
return frames
# This utility is for visualization.
# Referenced from:
# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub
def to_gif(images):
converted_images = images.astype(np.uint8)
imageio.mimsave("animation.gif", converted_images, fps=10)
return embed.embed_file("animation.gif")
test_video = np.random.choice(test_df["video_name"].values.tolist())
print(f"Test video path: {test_video}")
test_frames = sequence_prediction(test_video)
to_gif(test_frames[:MAX_SEQ_LENGTH])
"""
Explanation: Note: To keep the runtime of this example relatively short, we just used a few
training examples. This number of training examples is low relative to the sequence
model being used, which has 99,909 trainable parameters. You are encouraged to sample more
data from the UCF101 dataset using the notebook mentioned above and train the same model.
Inference
End of explanation
"""
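The ranking step inside `sequence_prediction` is ordinary NumPy: `np.argsort` orders the softmax probabilities ascending, and `[::-1]` flips that to most-probable-first. A minimal sketch with an invented class vocabulary and a made-up probability vector (neither comes from the UCF101 data):

```python
import numpy as np

# Hypothetical class vocabulary and softmax output for one video
class_vocab = ["CricketShot", "PlayingCello", "Punch", "ShavingBeard", "TennisSwing"]
probabilities = np.array([0.05, 0.10, 0.60, 0.05, 0.20])

# argsort gives ascending order; [::-1] flips it to descending
ranked = np.argsort(probabilities)[::-1]
top_classes = [class_vocab[i] for i in ranked]
```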
|
rishuatgithub/MLPy | torch/PYTORCH_NOTEBOOKS/02-ANN-Artificial-Neural-Networks/05-Neural-Network-Exercises.ipynb | apache-2.0 | import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
%matplotlib inline
df = pd.read_csv('../Data/income.csv')
print(len(df))
df.head()
df['label'].value_counts()
"""
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Neural Network Exercises
For these exercises we'll perform a binary classification on the Census Income dataset available from the <a href = 'http://archive.ics.uci.edu/ml/datasets/Adult'>UC Irvine Machine Learning Repository</a><br>
The goal is to determine if an individual earns more than $50K based on a set of continuous and categorical variables.
<div class="alert alert-danger" style="margin: 10px"><strong>IMPORTANT NOTE!</strong> Make sure you don't run the cells directly above the example output shown, <br>otherwise you will end up writing over the example output!</div>
Census Income Dataset
For these exercises we're using the Census Income dataset available from the <a href='http://archive.ics.uci.edu/ml/datasets/Adult'>UC Irvine Machine Learning Repository</a>.
The full dataset has 48,842 entries. For this exercise we have reduced the number of records, fields and field entries, and have removed entries with null values. The file <strong>income.csv</strong> has 30,000 entries.
Each entry contains the following information about an individual:
* <strong>age</strong>: the age of an individual as an integer from 18 to 90 (continuous)
* <strong>sex</strong>: Male or Female (categorical)
* <strong>education</strong>: represents the highest level of education achieved by an individual (categorical)
* <strong>education_num</strong>: represents education as an integer from 3 to 16 (categorical)
<div><table style="display: inline-block">
<tr><td>3</td><td>5th-6th</td><td>8</td><td>12th</td><td>13</td><td>Bachelors</td></tr>
<tr><td>4</td><td>7th-8th</td><td>9</td><td>HS-grad</td><td>14</td><td>Masters</td></tr>
<tr><td>5</td><td>9th</td><td>10</td><td>Some-college</td><td>15</td><td>Prof-school</td></tr>
<tr><td>6</td><td>10th</td><td>11</td><td>Assoc-voc</td><td>16</td><td>Doctorate</td></tr>
<tr><td>7</td><td>11th</td><td>12</td><td>Assoc-acdm</td></tr>
</table></div>
<strong>marital-status</strong>: marital status of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Married</td><td>Divorced</td><td>Married-spouse-absent</td></tr>
<tr><td>Separated</td><td>Widowed</td><td>Never-married</td></tr>
</table></div>
<strong>workclass</strong>: a general term to represent the employment status of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Local-gov</td><td>Private</td></tr>
<tr><td>State-gov</td><td>Self-emp</td></tr>
<tr><td>Federal-gov</td></tr>
</table></div>
<strong>occupation</strong>: the general type of occupation of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Adm-clerical</td><td>Handlers-cleaners</td><td>Protective-serv</td></tr>
<tr><td>Craft-repair</td><td>Machine-op-inspct</td><td>Sales</td></tr>
<tr><td>Exec-managerial</td><td>Other-service</td><td>Tech-support</td></tr>
<tr><td>Farming-fishing</td><td>Prof-specialty</td><td>Transport-moving</td></tr>
</table></div>
<strong>hours-per-week</strong>: the hours an individual has reported to work per week as an integer from 20 to 90 (continuous)
<strong>income</strong>: whether or not an individual makes more than \$50,000 annually (label)
<strong>label</strong>: income represented as an integer (0: <=\$50K, 1: >\$50K) (optional label)
Perform standard imports
Run the cell below to load the libraries needed for this exercise and the Census Income dataset.
End of explanation
"""
df.columns
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS:
print(f'cat_cols has {len(cat_cols)} columns')
print(f'cont_cols has {len(cont_cols)} columns')
print(f'y_col has {len(y_col)} column')
# DON'T WRITE HERE
"""
Explanation: 1. Separate continuous, categorical and label column names
You should find that there are 5 categorical columns, 2 continuous columns and 1 label.<br>
In the case of <em>education</em> and <em>education-num</em> it doesn't matter which column you use. For the label column, be sure to use <em>label</em> and not <em>income</em>.<br>
Assign the variable names "cat_cols", "cont_cols" and "y_col" to the lists of names.
End of explanation
"""
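To see the shape the answer should take, here is a sketch on a toy frame; the column lists mirror the dataset description above, but the two-row DataFrame is invented for illustration, not the real income.csv:

```python
import pandas as pd

# Toy frame mirroring the income.csv layout (values are illustrative)
df = pd.DataFrame({
    "age": [25, 40], "sex": ["Male", "Female"], "education": ["HS-grad", "Masters"],
    "education_num": [9, 14], "marital_status": ["Married", "Divorced"],
    "workclass": ["Private", "State-gov"], "occupation": ["Sales", "Tech-support"],
    "hours_per_week": [40, 50], "income": ["<=50K", ">50K"], "label": [0, 1],
})

# 5 categorical, 2 continuous, 1 label column
cat_cols = ["sex", "education", "marital_status", "workclass", "occupation"]
cont_cols = ["age", "hours_per_week"]
y_col = ["label"]
```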
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 2. Convert categorical columns to category dtypes
End of explanation
"""
# THIS CELL IS OPTIONAL
df = shuffle(df, random_state=101)
df.reset_index(drop=True, inplace=True)
df.head()
"""
Explanation: Optional: Shuffle the dataset
The <strong>income.csv</strong> dataset is already shuffled. However, if you would like to try different configurations after completing the exercises, this is where you would want to shuffle the entire set.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 3. Set the embedding sizes
Create a variable "cat_szs" to hold the number of categories in each variable.<br>
Then create a variable "emb_szs" to hold the list of (category size, embedding size) tuples.
End of explanation
"""
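A common heuristic sets each embedding size to half the category count (rounded up), capped at 50. A sketch with illustrative category counts — the real counts come from your own `cat_cols`:

```python
# Illustrative category counts, e.g. sex, education, marital-status, workclass, occupation
cat_szs = [2, 14, 6, 5, 12]

# Heuristic: embedding size = min(50, (categories + 1) // 2)
emb_szs = [(size, min(50, (size + 1) // 2)) for size in cat_szs]
```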
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
cats[:5]
# DON'T WRITE HERE
"""
Explanation: 4. Create an array of categorical values
Create a NumPy array called "cats" that contains a stack of each categorical column <tt>.cat.codes.values</tt><br>
Note: your output may contain different values. Ours came after performing the shuffle step shown above.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 5. Convert "cats" to a tensor
Convert the "cats" NumPy array to a tensor of dtype <tt>int64</tt>
End of explanation
"""
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts[:5]
# DON'T WRITE HERE
"""
Explanation: 6. Create an array of continuous values
Create a NumPy array called "conts" that contains a stack of each continuous column.<br>
Note: your output may contain different values. Ours came after performing the shuffle step shown above.
End of explanation
"""
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts.dtype
# DON'T WRITE HERE
"""
Explanation: 7. Convert "conts" to a tensor
Convert the "conts" NumPy array to a tensor of dtype <tt>float32</tt>
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 8. Create a label tensor
Create a tensor called "y" from the values in the label column. Be sure to flatten the tensor so that it can be passed into the CE Loss function.
End of explanation
"""
# CODE HERE
b = 30000 # suggested batch size
t = 5000 # suggested test size
# DON'T WRITE HERE
"""
Explanation: 9. Create train and test sets from <tt>cats</tt>, <tt>conts</tt>, and <tt>y</tt>
We use the entire batch of 30,000 records, but a smaller batch size will save time during training.<br>
We used a test size of 5,000 records, but you can choose another fixed value or a percentage of the batch size.<br>
Make sure that your test records remain separate from your training records, without overlap.<br>
To make coding slices easier, we recommend assigning batch and test sizes to simple variables like "b" and "t".
End of explanation
"""
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
# Call the parent __init__
super().__init__()
# Set up the embedding, dropout, and batch normalization layer attributes
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
# Assign a variable to hold a list of layers
layerlist = []
# Assign a variable to store the number of embedding and continuous layers
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
# Iterate through the passed-in "layers" parameter (ie, [200,100]) to build a list of layers
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
# Convert the list of layers into an attribute
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
# Extract embedding values from the incoming categorical data
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
# Perform an initial dropout on the embeddings
x = self.emb_drop(x)
# Normalize the incoming continuous data
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
# Set up model layers
x = self.layers(x)
return x
"""
Explanation: Define the model class
Run the cell below to define the TabularModel model class we've used before.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 10. Set the random seed
To obtain results that can be recreated, set a torch manual_seed (we used 33).
End of explanation
"""
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
model
# DON'T WRITE HERE
"""
Explanation: 11. Create a TabularModel instance
Create an instance called "model" with one hidden layer containing 50 neurons and a dropout layer p-value of 0.4
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 12. Define the loss and optimization functions
Create a loss function called "criterion" using CrossEntropyLoss<br>
Create an optimization function called "optimizer" using Adam, with a learning rate of 0.001
End of explanation
"""
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = criterion(y_pred, y_train)
losses.append(loss)
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
"""
Explanation: Train the model
Run the cell below to train the model through 300 epochs. Remember, results may vary!<br>
After completing the exercises, feel free to come back to this section and experiment with different parameters.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 13. Plot the Cross Entropy Loss against epochs
Results may vary. The shape of the plot is what matters.
End of explanation
"""
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
print(f'CE Loss: {loss:.8f}')
# TO EVALUATE THE TEST SET
"""
Explanation: 14. Evaluate the test set
With torch set to <tt>no_grad</tt>, pass <tt>cat_test</tt> and <tt>con_test</tt> through the trained model. Create a validation set called "y_val". Compare the output to <tt>y_test</tt> using the loss function defined above. Results may vary.
End of explanation
"""
# CODE HERE
# DON'T WRITE HERE
"""
Explanation: 15. Calculate the overall percent accuracy
Using a for loop, compare the argmax values of the <tt>y_val</tt> validation set to the <tt>y_test</tt> set.
End of explanation
"""
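The comparison the exercise asks for boils down to argmax-per-row versus the true labels; a toy sketch with invented logits (not your model's actual outputs):

```python
import numpy as np

# Illustrative raw model outputs (logits) for 5 test rows, 2 classes
y_val = np.array([[2.1, -0.3], [0.2, 1.5], [1.0, 0.9], [-1.2, 0.4], [0.3, 2.2]])
y_test = np.array([0, 1, 1, 1, 1])

# Count rows where the argmax prediction matches the true label
correct = 0
for row, truth in zip(y_val, y_test):
    if row.argmax() == truth:
        correct += 1
accuracy = 100 * correct / len(y_test)
```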
# WRITE YOUR CODE HERE:
# RUN YOUR CODE HERE:
# DON'T WRITE HERE
"""
Explanation: BONUS: Feed new data through the trained model
See if you can write a function that allows a user to input their own values, and generates a prediction.<br>
<strong>HINT</strong>:<br>There's no need to build a DataFrame. You can use inputs to populate column variables, convert them to embeddings with a context dictionary, and pass the embedded values directly into the tensor constructors:<br>
<pre>mar = input("What is the person's marital status? ")
mar_d = {'Divorced':0, 'Married':1, 'Married-spouse-absent':2, 'Never-married':3, 'Separated':4, 'Widowed':5}
mar = mar_d[mar]
cats = torch.tensor([..., ..., mar, ..., ...], dtype=torch.int64).reshape(1,-1)</pre>
Make sure that names are put in alphabetical order before assigning numbers.
Also, be sure to run <tt>model.eval()</tt> before passing new data through. Good luck!
End of explanation
"""
|
sthuggins/phys202-2015-work | days/day06/Matplotlib.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization with Matplotlib
Learning Objectives: Learn how to make basic plots using Matplotlib's pylab API and how to use the Matplotlib documentation.
This notebook focuses only on the Matplotlib API, rather that the broader question of how you can use this API to make effective and beautiful visualizations.
Imports
The following imports should be used in all of your notebooks where Matplotlib in used:
End of explanation
"""
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.xlabel('Time')
plt.ylabel('Signal')
plt.title('My Plot'); # suppress text output
"""
Explanation: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
:-----------------|:----------------------------------------------------------
figure | Creates a new Figure
gca | Get the current Axes instance
savefig | Save the current Figure to a file
sca | Set the current Axes instance
subplot | Create a new subplot Axes for the current Figure
subplots | Create a new Figure and a grid of subplots Axes
Plotting Functions
Once you have created a Figure and one or more Axes objects, you can use the following function to put data onto that Axes.
Function | Description
:-----------------|:--------------------------------------------
bar | Make a bar plot
barh | Make a horizontal bar plot
boxplot | Make a box and whisker plot
contour | Plot contours
contourf | Plot filled contours
hist | Plot a histogram
hist2d | Make a 2D histogram plot
imshow | Display an image on the axes
matshow | Display an array as a matrix
pcolor | Create a pseudocolor plot of a 2-D array
pcolormesh | Plot a quadrilateral mesh
plot | Plot lines and/or markers
plot_date | Plot with data with dates
polar | Make a polar plot
scatter | Make a scatter plot of x vs y
Plot modifiers
You can then use the following functions to modify your visualization.
Function | Description
:-----------------|:---------------------------------------------------------------------
annotate | Create an annotation: a piece of text referring to a data point
box | Turn the Axes box on or off
clabel | Label a contour plot
colorbar | Add a colorbar to a plot
grid | Turn the Axes grids on or off
legend | Place a legend on the current Axes
loglog | Make a plot with log scaling on both the x and y axis
semilogx | Make a plot with log scaling on the x axis
semilogy | Make a plot with log scaling on the y axis
subplots_adjust | Tune the subplot layout
tick_params | Change the appearance of ticks and tick labels
ticklabel_format| Change the ScalarFormatter used by default for linear axes
tight_layout | Automatically adjust subplot parameters to give specified padding
text | Add text to the axes
title | Set a title of the current axes
xkcd | Turns on XKCD sketch-style drawing mode
xlabel | Set the x axis label of the current axis
xlim | Get or set the x limits of the current axes
xticks | Get or set the x-limits of the current tick locations and labels
ylabel | Set the y axis label of the current axis
ylim | Get or set the y-limits of the current axes
yticks | Get or set the y-limits of the current tick locations and labels
Basic plotting
For now, we will work with basic line plots (plt.plot) to show how the Matplotlib pylab plotting API works. In this case, we don't create a Figure so Matplotlib does that automatically.
End of explanation
"""
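As a concrete illustration of "Matplotlib will automatically create one for you": plotting without creating a Figure first still leaves a current Axes you can query via `gca`. A headless sketch — the Agg backend is chosen here only so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 10.0, 100)
line, = plt.plot(t, np.sin(t))  # no Figure created explicitly
plt.xlabel('Time')
plt.ylabel('Signal')
ax = plt.gca()  # the Axes Matplotlib created for us automatically
```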
f = plt.figure(figsize=(9,6)) # 9" x 6", default is 8" x 5.5"
plt.plot(t, np.sin(t), 'r.');
plt.xlabel('x')
plt.ylabel('y')
"""
Explanation: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
End of explanation
"""
from matplotlib import lines
lines.lineStyles.keys()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
"""
Explanation: Here is a list of the single character color strings:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
The following will show all of the line and marker styles:
End of explanation
"""
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(-1.0, 11.0)
plt.ylim(-1.0, 1.0)
"""
Explanation: To change the plot's limits, use xlim and ylim:
End of explanation
"""
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(0.0, 10.0)
plt.ylim(-1.0, 1.0)
plt.xticks([0,5,10], ['zero','five','10'])
plt.tick_params(axis='y', direction='inout', length=10)
"""
Explanation: You can change the ticks along a given axis by using xticks, yticks and tick_params:
End of explanation
"""
plt.plot(np.random.rand(100), 'b-')
plt.grid(True)
plt.box(False)
"""
Explanation: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
End of explanation
"""
plt.plot(t, np.sin(t), label='sin(t)')
plt.plot(t, np.cos(t), label='cos(t)')
plt.xlabel('t')
plt.ylabel('Signal(t)')
plt.ylim(-1.5, 1.5)
plt.xlim(right=12.0)
plt.legend()
"""
Explanation: Multiple series
Multiple calls to a plotting function will all target the current Axes:
End of explanation
"""
plt.subplot(2,1,1) # 2 rows x 1 col, plot 1
plt.plot(t, np.exp(0.1*t))
plt.ylabel('Exponential')
plt.subplot(2,1,2) # 2 rows x 1 col, plot 2
plt.plot(t, t**2)
plt.ylabel('Quadratic')
plt.xlabel('x')
plt.tight_layout()
"""
Explanation: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure:
End of explanation
"""
f, ax = plt.subplots(2, 2)
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
"""
Explanation: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:
End of explanation
"""
f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
if i==1:
plt.xlabel('x')
if j==0:
plt.ylabel('y')
plt.tight_layout()
"""
Explanation: The subplots function also makes it easy to pass arguments to Figure and to share axes:
End of explanation
"""
plt.plot(t, np.sin(t), marker='o', color='darkblue',
linestyle='--', alpha=0.3, markersize=10)
"""
Explanation: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. Fro more information see:
Controlling line properties
Specifying colors
End of explanation
"""
|
p0licat/university | Experiments/Crawling/Jupyter Notebooks/Analysis.ipynb | mit | import mariadb
import json
with open('../credentials.json', 'r') as crd_json_fd:
json_text = crd_json_fd.read()
json_obj = json.loads(json_text)
credentials = json_obj["Credentials"]
username = credentials["username"]
password = credentials["password"]
table_name = "publications"
db_name = "ubbcluj"
mariadb_connection = mariadb.connect(user=username, password=password, database=db_name)
mariadb_cursor = mariadb_connection.cursor()
"""
Explanation: DB Analysis
With a few hundred papers in the DB, some experiments can be done on the data.
1. Connecting to the DB
End of explanation
"""
queryString = "SELECT COUNT(*) FROM "
queryString += table_name
try:
mariadb_cursor.execute(queryString)
except Exception as ex:
print(ex)
for item in mariadb_cursor:
count = item[0]
print("Number of publications: {0}".format(count))
"""
Explanation: 2. Counting publications.
End of explanation
"""
queryString = "SELECT Affiliations, COUNT(*) AS c FROM publications GROUP BY Affiliations ORDER BY c DESC"
try:
mariadb_cursor.execute(queryString)
except Exception as ex:
print(ex)
affiliations = []
for item in mariadb_cursor:
Affiliation = item[0]
affiliations.append(item[0])
c = item[1]
print("{0} : {1} occurences".format(Affiliation, c))
for i in affiliations:
if "conference" in [k.lower() for k in i.split()]:
print(i)
for i in affiliations:
if "journal" in [k.lower() for k in i.split()]:
print(i)
tokens = []
for i in affiliations:
words = i.split()
for word in words:
tokens.append(word)
tokens
from nltk.corpus import stopwords
import nltk
nltk.download('stopwords')
sr= stopwords.words('english')
clean_tokens = tokens[:]
for token in tokens:
if token in stopwords.words('english'):
clean_tokens.remove(token)
freq = nltk.FreqDist(clean_tokens)
for key,val in freq.items():
#print(str(key) + ':' + str(val))
pass
freq.plot(20, cumulative=False)
# Histogram of professors publication number
queryString = "SELECT (Select FullName from humanoid_entities where id = ProfessorId), ProfessorId, COUNT(Title) FROM publications GROUP BY ProfessorId ORDER BY COUNT(Title) desc"
try:
mariadb_cursor.execute(queryString)
except Exception as ex:
print(ex)
name_dict = {}
tup_list = []
max_id = 0
for item in mariadb_cursor:
ProfName = item[0]
ProfId = item[1]
max_id = max(max_id, ProfId)
Count = item[2]
tup_list.append((ProfName, ProfId, Count))
name_dict[ProfId] = ProfName
import pandas as pd
print(tup_list)
final_list = []
for i in range(max_id):
found_id = False
found_value = 0
for k in tup_list:
if i == k[1]:
found_id = True
found_value = k[2]
break
if not found_id:
name_dict[i] = "NONE"
final_list.append((i, found_value))
fa = pd.DataFrame([k[1] for k in final_list])
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.bar([name_dict[k[0]] for k in final_list], [k[1] for k in final_list])
loc = plticker.MultipleLocator(base=1.0)
ax.xaxis.set_major_locator(loc)
plt.xticks(rotation=90)
plt.show()
"""
Explanation: 3. Distinct Affiliations
End of explanation
"""
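The `GROUP BY ... ORDER BY COUNT` tally above can be reproduced in pure Python with `collections.Counter`, which is handy for sanity-checking the SQL results; the affiliation strings here are invented:

```python
from collections import Counter

affiliations = [
    "Studia Informatica", "Studia Informatica",
    "Intl. Conference on Computing", "Journal of Applied Math",
    "Studia Informatica",
]

counts = Counter(affiliations)
ranked = counts.most_common()  # descending by count, like ORDER BY c DESC
```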
queryString = "SELECT Title FROM publications"
try:
mariadb_cursor.execute(queryString)
except Exception as ex:
print(ex)
titles = []
for item in mariadb_cursor:
Title = item[0]
titles.append(item[0])
from sklearn.feature_extraction.text import CountVectorizer
corpus = titles[:]
corpus
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names())
print(X.shape)
print(X.toarray())
for k in X.toarray():
for j in k:
if j > 1:
print(j)
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer(use_idf=False, norm='l1')
matrix = vec.fit_transform(corpus)
pd.DataFrame(matrix.toarray(), columns=vec.get_feature_names())
from textblob import TextBlob
import nltk
nltk.download('punkt')
def textblob_tokenizer(str_input):
blob = TextBlob(str_input.lower())
tokens = blob.words
words = [token.stem() for token in tokens]
return words
vec = CountVectorizer(tokenizer=textblob_tokenizer)
matrix = vec.fit_transform(corpus)
pd.DataFrame(matrix.toarray(), columns=vec.get_feature_names())
vec = TfidfVectorizer(tokenizer=textblob_tokenizer,
stop_words='english',
use_idf=True)
matrix = vec.fit_transform(corpus)
df = pd.DataFrame(matrix.toarray(), columns=vec.get_feature_names())
for word in df.columns:
for row in df[word]:
if row != 0.0:
print(word, row)
from sklearn.cluster import KMeans
number_of_clusters = 10
km = KMeans(n_clusters=number_of_clusters)
km.fit(matrix)
km.fit
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vec.get_feature_names()
for i in range(number_of_clusters):
top_words = [terms[ind] for ind in order_centroids[i, :5]]
print("Cluster {}: {}".format(i, ' '.join(top_words)))
results = pd.DataFrame({
'corpus': corpus,
'category': km.labels_
})
results.sort_values('category')
for k in results.sort_values('category').values:
print(k[1], " --- ", k[0])
### GENSIM
from gensim.models import word2vec
from gensim.test.utils import common_texts, get_tmpfile
tokenized_sentences = [[j.lower() for j in st.split() if j not in stopwords.words('english')] for st in corpus]
model = word2vec.Word2Vec(tokenized_sentences, min_count=1)
model.save("word2vec.model")
#model = word2vec.load("word2vec.model")
model
model.wv["study"]
words = list(model.wv.vocab)
print(words)
X = model[model.wv.vocab]
df = pd.DataFrame(X)
df.shape
df.head()
import numpy as np
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word in model.wv.vocab:
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in model[word]]) + "\n")
out_v.close()
out_m.close()
"""
Explanation: 3. TF-IDF and K-Means?
3. Distinct Affiliations
End of explanation
"""
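The TF-IDF weighting used in the title-clustering section can be sketched by hand on a two-title corpus: a term's weight grows with its in-title frequency and vanishes when the term appears in every title. This uses the textbook `log(N/df)` form, not scikit-learn's smoothed variant:

```python
import math

corpus = ["fuzzy logic study", "fuzzy clustering methods"]
docs = [title.split() for title in corpus]
N = len(docs)

def tf_idf(term, doc):
    tf = doc.count(term) / len(doc)                 # term frequency in this title
    df = sum(1 for d in docs if term in d)          # number of titles containing the term
    return tf * math.log(N / df)

# "fuzzy" appears in both titles, so its idf (and weight) is zero
w_fuzzy = tf_idf("fuzzy", docs[0])
w_study = tf_idf("study", docs[0])
```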
|
Ttl/scikit-rf | doc/source/tutorials/NetworkSet.ipynb | bsd-3-clause | ls data/ro*
"""
Explanation: NetworkSet
Introduction
The NetworkSet object represents an unordered set of networks. It
provides methods iterating and slicing the set, sorting by datetime, calculating statistical quantities, and displaying uncertainty bounds on plots.
Creating a NetworkSet
Lets take a look in the data/ folder, there are some redundant measurements of a network called ro, which is a radiating open waveguide.
End of explanation
"""
import skrf as rf
rf.read_all(rf.data.pwd, contains='ro')
"""
Explanation: The files ro,1.s1p , ro,2.s1p, ... are redundant measurements on
which we would like to calculate statistics using the NetworkSet
class.
A NetworkSet is created from a list or dict of
Network's. So first we need to load all of the
touchstone files into Networks. This can be done quickly with
rf.read_all, The argument contains is used to load only files
which match a given substring.
End of explanation
"""
from skrf import NetworkSet
ro_dict = rf.read_all(rf.data.pwd, contains='ro')
ro_ns = NetworkSet(ro_dict, name='ro set')
ro_ns
"""
Explanation: This can be passed directly to the NetworkSet constructor,
End of explanation
"""
ro_ns[0]
"""
Explanation: A NetworkSet can also be constructed directly from a dir with NetworkSet.from_dir() or from a zipfile of touchstones through the class method NetworkSet.from_zip().
Accessing Network Methods
The Network elements in a NetworkSet can be accessed like the elements of a list,
End of explanation
"""
%matplotlib inline
from pylab import *
import skrf as rf
rf.stylely()
ro_ns.plot_s_db()
"""
Explanation: Most Network methods are also methods of
NetworkSet. These methods are called on each
Network element individually. For example to
plot the log-magnitude of the s-parameters of each Network.
End of explanation
"""
ro_ns.mean_s
"""
Explanation: Statistical Properties
Statistical quantities can be calculated by accessing
properties of the NetworkSet. To calculate the complex
average of the set, access the mean_s property
End of explanation
"""
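Under the hood `mean_s` is just an element-wise average of complex s-parameters across the set; the operation can be sketched with NumPy on toy values (not the measured `ro` data):

```python
import numpy as np

# Three redundant "measurements" of a 1-port s-parameter at 2 frequency points
s_set = np.array([
    [0.9 + 0.1j, 0.5 - 0.4j],
    [0.8 + 0.2j, 0.4 - 0.5j],
    [1.0 + 0.0j, 0.6 - 0.3j],
])

mean_s = s_set.mean(axis=0)  # complex mean, element by element
std_s = s_set.std(axis=0)    # spread of the complex values
```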
ro_ns.mean_s.plot_s_db(label='ro')
"""
Explanation: The naming convention of the statistical operator properties are NetworkSet.{function}_{parameter}, where function is the name of the
statistical function, and parameter is the Network parameter to operate
on. These methods return a Network object, so they can be
saved or plotted in the same way as you would with a Network.
To plot the log-magnitude of the complex mean response
End of explanation
"""
ro_ns.std_s.plot_s_re(y_label='Standard Deviations')
"""
Explanation: Or to plot the standard deviation of the complex s-parameters,
End of explanation
"""
ro_ns.mean_s_deg.plot_s_re()
"""
Explanation: Using these properties it is possible to calculate statistical quantities on the scalar
components of the complex network parameters. To calculate the
mean of the phase component,
End of explanation
"""
ro_ns.plot_uncertainty_bounds_s_db()
ro_ns.plot_uncertainty_bounds_s_deg()
"""
Explanation: Plotting Uncertainty Bounds
Uncertainty bounds can be plotted through the methods
End of explanation
"""
ro_ns.write_touchstone(dir='data/')
"""
Explanation: Reading and Writing
To write all Networks of a NetworkSet out to individual touchstones,
End of explanation
"""
rf.write('ro set.ns', ro_ns)
ro_ns = rf.read('ro set.ns')
ro_ns
"""
Explanation: For temporary data storage, NetworkSets can be saved and read from disk
using the functions rf.read and rf.write
End of explanation
"""
ro_ns.write_spreadsheet('data/ro_spreadsheet.xls', form='db')
"""
Explanation: Export to Excel, csv, or html
NetworkSets can also be exported to other filetypes. The format of the output; real/imag, mag/phase is adjustable, as is the output type; csv, excel, html. For example to export mag/phase for each network into an Excel spreadsheet for your boss[s]
End of explanation
"""
|
kbennion/foundations-hw | 6.16Notes.ipynb | mit | #subject lines that have dates, e.g. 12/01/99
[line for line in subjects if re.search("\d\d/\d\d/\d\d", line)]
"""
Explanation: metachars
. any char
\w any alphanumeric (a-z, A-Z, 0-9, _)
\s any whitespace char (" _, \t, \n)
\S any nonwhitespace
\d any digit (0-9)
\. searches for an actual period
End of explanation
"""
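A quick self-contained check of those character classes (the sample subject lines here are invented, not from the Enron corpus):

```python
import re

subjects = ["meeting 12/01/99", "FW: totals", "call me\ttoday"]

has_date = [s for s in subjects if re.search(r"\d\d/\d\d/\d\d", s)]  # \d = digit
has_ws   = [s for s in subjects if re.search(r"\s", s)]              # \s = whitespace (space, tab, newline)
has_word = [s for s in subjects if re.search(r"\w", s)]              # \w = alphanumeric or _
```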
[line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]", line)]
[line for line in subjects if re.search("F[wW]:", line)]
"""
Explanation: define your own character classes
inside your regular expression, write [aeiou]
End of explanation
"""
[line for line in subjects if re.search("^[Nn]ew [Yy]ork", line)]
[line for line in subjects if re.search(r"\boil\b", line)]
"""
Explanation: metacharacters
^ beginning of string
$ end of string
\b word boundary
End of explanation
"""
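The anchors can be exercised the same way; note how `\b` keeps "oil" from matching inside "boiling" (sample subjects invented):

```python
import re

subjects = ["new york trip", "meetings in new york", "oil prices", "boiling point"]

starts_ny = [s for s in subjects if re.search(r"^[Nn]ew [Yy]ork", s)]  # ^ anchors to start
whole_oil = [s for s in subjects if re.search(r"\boil\b", s)]          # \b = word boundary
```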
x = "this is \na test"
print(x)
x = "this is\t\t\tanother test"
print(x)
normal = "hello\nthere"
raw = r"hello\nthere"
print("normal:", normal)
print("raw:", raw)
"""
Explanation: aside: metacharacters and escape characters
\n new line
\t tab
\\ single backslash
(python interprets these)
End of explanation
"""
[line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)]
"""
Explanation: metacharacters 3: quantifiers
* match zero or more times
{n} matches exactly n times
{n,m} matches at least n times, but no more than m times
{n,} matches at least n times, but maybe infinite times
+ match at least once ({1,})
? match one time or zero times
[line for line in subjects if re.search(r"^R
a string matches the regular expression if, at the start of the string, you encounter .......
End of explanation
"""
all_subjects = open("enronsubjects.txt").read()
all_subjects[:1000]
#looking for domain names
[line for line in subjects if re.search(r"\b\w+\.(?:com|net|org)\b", line)]
#re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects)
#"will you pass teh pepper?" re.search "yes"
#"will you pass the pepper?" re.findall "yes, here it is" *passes pepper*
"""
Explanation: more metacharacters: alternation
.......
capturing
read the whole corpus in as one big string
End of explanation
"""
|
gregnordin/ECEn360_W15 | plane_waves/dev_notes.ipynb | mit | %%javascript
IPython.load_extensions('calico-document-tools');
!date
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl
import pyqtgraph as pg
import numpy as np
"""
Explanation: Table of Contents
Objective: propagating plane wave visualization
How to get docstrings for a class definition
Figure out what makeARGB is doing
Make a semi-transparent rectangle (image)
What is np.vstack.transpose() doing?
Make vertical lines
Simple coordinate transformation
End of explanation
"""
help(pg.opengl.GLLinePlotItem)
help(pg.opengl.GLGridItem)
help(pg.QtGui.QGraphicsRectItem)
"""
Explanation: Objective: propagating plane wave visualization
To do
- DONE 2/27/15 - Propagating e-field
- TRIED 2/27/15, ARROWS DON'T SEEM TO WORK IN 3D, JUST 2D. include some propagating arrows
- DONE 2/27/15 - propagating h-field
- TRIED 2/27/15, DOESN'T LOOK GOOD - semi-transparent plane in x-y through which plane wave propagates
- DONE 2/27/15 - Add more visible axes
- DONE 2/27/15 - What is np.vstack doing?
- TRIED 2/25/15, NO FACILITY TO DO THIS - Fill to zero?
- DONE 2/28/15 - If not, make my own vertical lines to zero?
- TRIED 2/28/15, NO DOCUMENTATION INDICATING HOW TO DO THIS - Add labels (E, H, z)
- DONE 2/28/15, DOESN'T LOOK GREAT IF NOT LINEAR POLARIZATION - Change efield function to set arbitrary polarization state
- Add ability to change propagation velocity
How to get docstrings for a class definition
End of explanation
"""
image_shape = (4,4)
uniform_values = np.ones(image_shape) * 255
uniform_image = pg.makeARGB(uniform_values)
print uniform_values
print uniform_image
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl
import pyqtgraph as pg
import numpy as np
app = QtGui.QApplication([])
w = gl.GLViewWidget()
w.opts['distance'] = 200
w.show()
w.setWindowTitle('pyqtgraph example: GLImageItem')
## create volume data set to slice three images from
shape = (100,100,70)
data = np.random.normal(size=shape)
#data += pg.gaussianFilter(np.random.normal(size=shape), (15,15,15))*15
## slice out three planes, convert to RGBA for OpenGL texture
levels = (-0.08, 0.08)
tex1 = pg.makeRGBA(data[shape[0]/2], levels=levels)[0] # yz plane
tex2 = pg.makeRGBA(data[:,shape[1]/2], levels=levels)[0] # xz plane
tex3 = pg.makeRGBA(data[:,:,shape[2]/2], levels=levels)[0] # xy plane
#tex1[:,:,3] = 128
tex2[:,:,3] = 128
#tex3[:,:,3] = 128
## Create three image items from textures, add to view
v1 = gl.GLImageItem(tex1)
v1.translate(-shape[1]/2, -shape[2]/2, 0)
v1.rotate(90, 0,0,1)
v1.rotate(-90, 0,1,0)
#w.addItem(v1)
v2 = gl.GLImageItem(tex1)
v2.translate(-shape[0]/2, -shape[2]/2, 0)
v2.rotate(-90, 1,0,0)
w.addItem(v2)
v3 = gl.GLImageItem(tex3)
v3.translate(-shape[0]/2, -shape[1]/2, 0)
#w.addItem(v3)
ax = gl.GLAxisItem()
w.addItem(ax)
## Start Qt event loop unless running in interactive mode.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
print shape[0], shape[1], shape[2]
print len(data[shape[0]/2]), len(data[:,shape[1]/2])
shape = (5,4,3)
data = np.random.normal(size=shape)
print data
print data[shape[0]/2]
print data[:,shape[1]/2]
print data[:,:,shape[2]/2]
tex = pg.makeRGBA(data[shape[2]/2])[0]
print tex
image_shape = (3,5)
uniform_values = np.ones(image_shape) * 255
uniform_image = pg.makeARGB(uniform_values)[0]
uniform_image[:,:,3] = 128
print uniform_image
"""
Explanation: Figure out what makeARGB is doing
End of explanation
"""
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl
import pyqtgraph as pg
import numpy as np
app = QtGui.QApplication([])
w = gl.GLViewWidget()
w.opts['distance'] = 20
w.show()
w.setWindowTitle('pyqtgraph example: GLImageItem')
## create volume data set to slice three images from
shape = (100,100,70)
data = np.random.normal(size=shape)
#data += pg.gaussianFilter(np.random.normal(size=shape), (15,15,15))*15
## make images
image_shape = (6,6)
uniform_values = np.ones(image_shape) * 255
uniform_image = pg.makeARGB(uniform_values)[0]
uniform_image[:,:,1] = 128
uniform_image_transparent = pg.makeARGB(uniform_values)[0]
uniform_image_transparent[:,:,3] = 128
## Create image items from textures, add to view
v2 = gl.GLImageItem(uniform_image)
v2.translate(-image_shape[0]/2, -image_shape[1]/2, 0)
v2.rotate(90, 1,0,0)
v2.translate(0, -2, 0)
w.addItem(v2)
v1 = gl.GLImageItem(uniform_image_transparent)
v1.translate(-image_shape[0]/2, -image_shape[1]/2, 0)
v1.rotate(90, 1,0,0)
w.addItem(v1)
ax = gl.GLAxisItem()
w.addItem(ax)
## Start Qt event loop unless running in interactive mode.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
"""
Explanation: Make a semi-transparent rectangle (image)
End of explanation
"""
x = np.linspace(0,2,3)
y = np.linspace(10,12,3)
z = np.linspace(20,22,3)
print x, '\n', y, '\n', z, '\n'
pts = np.vstack([x,y,z])
print pts, '\n'
pts = pts.transpose()
print pts
"""
Explanation: What is np.vstack.transpose() doing?
End of explanation
"""
x = np.linspace(0,3,4)
y = np.linspace(10,13,4)
z = np.linspace(20,23,4)
#print x, '\n', y, '\n', z, '\n'
pts = np.vstack([x,y,z])
#print pts, '\n'
pts = pts.transpose()
print pts
print pts.shape
pts2 = np.zeros(shape=(2*pts.shape[0], pts.shape[1]))
print pts2
print pts2.shape
for i in range(pts.shape[0]):
pts2[2*i,2] = pts[i,2]
pts2[2*i + 1,:] = pts[i,:]
print pts2
# Function to create new array from old
# where the new array is formatted in preparation for
# drawing lines perpendicular from the z-axis to the
# curve defined by the input array
def preptomakelines(pts):
pts2 = np.zeros(shape=(2*pts.shape[0], pts.shape[1]))
for i in range(pts.shape[0]):
pts2[2*i,2] = pts[i,2]
pts2[2*i + 1,:] = pts[i,:]
return pts2
pts2 = preptomakelines(pts)
print pts, '\n\n', pts2
"""
Explanation: Answer: take row vectors x, y, & z and concatenate them as column vectors in a 2D matrix
Make vertical lines
End of explanation
"""
x = np.linspace(0,3,4)
y = np.linspace(10,13,4)
z = np.linspace(20,23,4)
pts = np.vstack([x,y,z])
pts = pts.transpose()
print pts
temp2Darray = [[0, 0, 1],
[1, 0, 0],
[0, 1, 0]]
rot_efield_coord = np.array(temp2Darray)
print rot_efield_coord
pts_efield_coord = np.dot(pts, rot_efield_coord)
print pts_efield_coord
temp2Darray = [[1, 0, 0],
[0, 0, 1],
[0, 1, 0]]
rot_hfield_coord = np.array(temp2Darray)
print rot_hfield_coord
pts_hfield_coord = np.dot(pts, rot_hfield_coord)
print pts_hfield_coord
print pts
pts = np.dot(pts, rot_efield_coord)
print pts
"""
Explanation: Simple coordinate transformation
We want to go from calculation coordinates (x,y,z) to pyqtgraph coordinates (xx,yy,zz). For the electric field the transformation is:
x -> zz
y -> xx
z -> yy
This is the same as rotate -90 degrees about the y axis, then rotate 90 degrees about the z-axis. For the magnetic field the transformation is:
x -> xx
y -> zz
z -> yy
End of explanation
"""
|
lmoresi/UoM-VIEPS-Intro-to-Python | Notebooks/Mapping/2 - Images and GeoTIFFs.ipynb | mit | %pylab inline
import cartopy
import gdal
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
globalmarble = gdal.Open("../../Data/Resources/BlueMarbleNG-TB_2004-06-01_rgb_3600x1800.TIFF")
globalmarble_img = globalmarble.ReadAsArray().transpose(1,2,0)
# Note that we convert the gdal object into an image array - and also have to re-organise the data
# This is a numpy call that you can look up to see what it does and you can also look
# at the original array data to see what is there.
fig = plt.figure(figsize=(12, 12), facecolor="none")
plt.imshow(globalmarble_img)
"""
Explanation: Handling images
We will work with images which are in geotiff format. These are standard image files (i.e. they are .tif files) but they also contain information on the coordinates and the base projection of the image. There are various tools for making and converting these images but one important thing to know is that they are the becoming a standard that you are very likely to encounter as a way to store data and deliver it on the web. If you download tiles of data served up as google earth layers, you may well find that you have a geotiff.
Here is an example - it uses the gdal library for manipulating geospatial data. The image is a freely available download of the NASA blue marble image set which I retrieved for you.
End of explanation
"""
print "1 - ", globalmarble.GetProjection(), "\n"
print "2 - ", globalmarble.GetDescription(), "\n"
print "3 - ", globalmarble.GetMetadata(), "\n"
print "4 - ", globalmarble.GetGeoTransform()
"""
Explanation: This looks really nice but it is just the original image plotted in its original shape.
The gdal object can tell you the projection which applies to the original data and various other attributes. This might not seems particularly useful at this point, but it is helpful to realise that there is a lot of information being passed around behind the scenes when you use these tools.
End of explanation
"""
fig = plt.figure(figsize=(12, 12), facecolor="none")
ax = plt.axes(projection=ccrs.Orthographic())
plt.imshow(globalmarble_img, zorder=0)
ax.coastlines(color="Yellow", zorder=1)
plt.show()
"""
Explanation: Projections
What if we want to use a different projection for the image. For example, we might want to use this as a background to plot some other information. How does this work ?
Let's try what we used before to plot with a different projection. Specifically, let's try an orthographic projection which should wrap the image around a sphere and show us the disk from a particular orientation. Here is one I made earlier ... the output should look like this:
<img src="../../Data/Reference/OrthographicProjectionBlueMarble.png" width=30%>
End of explanation
"""
fig = plt.figure(figsize=(12, 12), facecolor="none")
ax = plt.axes(projection=ccrs.Orthographic())
plt.imshow(globalmarble_img, zorder=0, transform=ccrs.PlateCarree())
ax.coastlines(color="Yellow", zorder=1)
plt.show()
"""
Explanation: OK, that didn't look like the sample image I claimed it should, because we didn't tell the plotting routines what the original projection of the data was. Here is the fix: tell the imshow command the transformation of the original data (this can take a little while to process).
End of explanation
"""
base_projection = ccrs.PlateCarree()
globalmag = gdal.Open("../../Data/Resources/EMAG2_image_V2.tif")
globalmag_img = globalmag.ReadAsArray().transpose(1,2,0).astype(numpy.float16)
globalmag_img_s = globalmag_img[::2,::2,::]
del(globalmag)
globaletopo = gdal.Open("../../Data/Resources/color_etopo1_ice_low.tif")
globaletopo_img = globaletopo.ReadAsArray().transpose(1,2,0).astype(numpy.float16)
del(globaletopo)
print globalmag_img.shape
print globalmag_img_s.shape
print globalmarble_img.shape
# Global pretty map ... can you make one centred on Australia and save it to an image ?
#projection = ccrs.Orthographic(central_longitude=80.0, central_latitude=30.0, globe=None)
this_projection = ccrs.PlateCarree()
global_extent = [-180.0, 180.0, -90.0, 90.0]
fig = plt.figure(figsize=(12, 12), facecolor="none")
ax = plt.axes(projection=this_projection)
ax.imshow(globalmag_img_s, origin='upper', transform=base_projection, extent=global_extent)
ax.imshow(globalmarble_img, origin='upper', transform=base_projection, extent=global_extent, alpha=0.5)
ax.coastlines(color="yellow")
plt.show()
"""
Explanation: You can try other projections here, though I have found quite a few do not behave in quite the way you expect !
Feel free to play with these data, which are global magnetic intensity, the global etopo database of topography and bathymetry in color format, and a black/white (height only) version of the same thing; all of them have the base projection of PlateCarree. Note that we can define a variable which points to this projection and pass it into the transform argument of imshow
End of explanation
"""
|
martadesimone/Protoplanetarydisks | New_Table.ipynb | gpl-2.0 | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy import units as u
from astropy.modeling.blackbody import blackbody_nu
"""
Explanation: NewTable
Code to create a merging table from some tables in input.
After comparing the tables (in this case the one in the article of Andrews+13) and understand the rows in common we can create a new table cointaining the columns of interest of the rows in common.
There is also a function to calculate the Mass of the disK from the continuum Flux.
End of explanation
"""
def Mdisk(F,d,r,k,Td,lam):
#Eq 2 Andrews et al. 2013
#F is *mm* Flux in *Jy*
#d is the distance in *pc*
#r is the dust-to-gas mass ratio *dimensionless*
#k is the dust opacity in *cm^2/g*
#lam is the wavelength in *mm*
c=2.99792458e10 # Light velocity *cm/s*
h=6.6260755e-27 # Planck constant in erg*s (cgs)
kb=1.3806488e-16 # Boltzmann constant in erg/K (cgs)
ni=c/(lam*0.1) #in Hz
B_ni=1e23*2*h*ni**3/c**2*(1./(np.exp(h*ni/kb/Td)-1)) # Planck function in *Jy* --> 1Jy=1e-23erg/s/cm^2/Hz
# B_ni=1e23*blackbody_nu(ni, Td) # Planck function in *Jy* --> 1Jy=1e-23erg/s/cm^2/Hz
# return np.log10(F)+2*np.log10(d*3.08568e18)-np.log10(r*k*1.988e33)-np.log10(B_ni) #log(Disk mass/*Msol*)
return F*(d*3.086e18)**2/k/1.988e33/r/B_ni #Disk mass in *Msol*
def eMd(errF,F,Md):
return Md*errF/F
"""
Explanation: Function to calculate the disk mass from the mm flux Fmm, and the corresponding error
End of explanation
"""
Tab3=Table.read("Tab3_totbin.fit")
Tab2=Table.read('Tab2_totbin.fit')
Tab4=Table.read('Tab4_totbin.fit')
Tab2.remove_rows([75,206]) #remove the row of CIDA11 B and the row not present in table 4 (J04290068+2755033)
Tab3.remove_rows([66,76,134,146])
#remove the rows not present in table 2 (J04263055+2443558, J04335245+2612548, J04361030+2159364)
#and in table 4 (J04290068+2755033)
Tab4.remove_rows([8,10,31,34,76,94,103,110,118,125,126,174,180,195,204,220])
#removing close pairs star (MHO2B,MHO3B, V892TAUB,DFTAUB,ZZTAUB,V710TAUB,GGTAUAb,UZTauEb,
#V807TauBa,V807TauBb,J04403979+2519061B,CoKuTau/4B,DQTauB,St34B,CIDA11B)
name=Tab2['Name'] #sources name
lF=Tab2['l_F1_3'] # sign for upper limit
F1_3=Tab2["F1_3"] # mm Flux at 1.3mm
eF1_3=Tab2['e_F1_3'] #error on mm Flux
SpT=Tab2["SpT"] # Stellar Spectral Type
logM=Tab3["logM_"] # Stellar host mass
b_logMlo=Tab3["b_logM_"] #low value of range of confidence (1sigma)
b_logMup=Tab3["b_logm__lc"] #upper value of range of confidence (1sigma)
logL=Tab4['logL_'] #Stellar host Luminosity
#logT=Tab4['logT_'] #Stellar host Temperature
logM2=Tab3["logM_2"] # Stellar host mass
b_logMlo2=Tab3["b_logM_2"] #low value of range of confidence (1sigma)
b_logMup2=Tab3["b_logm_2_lc"] #upper value of range of confidence (1sigma)
logM3=Tab3["logM_3"] # Stellar host mass
b_logMlo3=Tab3["b_logM_3"] #low value of range of confidence (1sigma)
b_logMup3=Tab3["b_logm_3_lc"] #upper value of range of confidence (1sigma)
##### Error on upper limits
nnan=np.where(np.isnan(eF1_3))
eF1_3[nnan]=F1_3[nnan]/3. #considering 1sigma
#eF1_3[nnan]=1e-20
"""
Explanation: Read the tables, compare them, and remove the rows not in common.
End of explanation
"""
d=140 #pc distance of Taurus
r=1 #dust-to-gas mass ratio
k=2.3 #cm^2/g at 1.3mm
lam=1.33 #wavelength in mm
L=10.**logL
L_true=np.delete(L,(39,75,95,117,148,165))
L[39]=L[75]=L[95]=L[117]=L[148]=L[165]=np.mean(L_true)
#Temperature of the disk
T_20=20 #K
Td=25.*(L)**(1./4.)
tmin = 10. ###minimum value of T in K
ncold = np.where(Td <tmin)
Td[ncold] = tmin
Md=Mdisk(F1_3,d,r,k,Td,lam)
Md_20=Mdisk(F1_3,d,r,k,T_20,lam)/3e-6 #Md in Earth masses!
logMd=np.log10(Md)
logMd_20=np.log10(Md_20)
calibF1_3=np.sqrt(eF1_3**2+(0.1*F1_3)**2) ##Error on Fmm + calibration error (10%) quadratic sum
e_Md=eMd(calibF1_3,F1_3,Md) #error on Md
e_Md_20=eMd(calibF1_3,F1_3,Md_20) #error on Md
add_Md=np.sqrt(e_Md**2+(0.15*Md)**2) ##Error on Md + additional error (15%) for the ambiguity in assuming distance quadratic sum
add_Md_20=np.sqrt(e_Md_20**2+(0.15*Md_20)**2) ##Error on Md + additional error (15%) for the ambiguity in assuming distance quadratic sum
###Errors on Md in log scale
Dp=np.log10(Md+add_Md)-logMd #superior error bar
Dn=logMd-np.log10(Md-add_Md) #inferior error bar
Dp_20=np.log10(Md_20+add_Md_20)-logMd_20 #superior error bar
Dn_20=logMd_20-np.log10(Md_20-add_Md_20) #inferior error bar
#plt.errorbar((Td),(L), fmt='s')
#plt.xscale('log')
#plt.yscale('log')
#plt.show()
T=Table([name,logM,b_logMlo,b_logMup,lF,logMd, Dp, Dn, logMd_20, Dp_20, Dn_20 ,F1_3,eF1_3,calibF1_3,SpT],
names=('Name','LogM*','bM*_lo','bM*_up','l','LogMd', 'Dp', 'Dn' ,'LogMd_20', 'Dp_20', 'Dn_20',
'F1_3','eF1_3','calibF','SpT'))
#T['LogMd_20'].unit='Mearth'
S=Table([name,logM2,b_logMlo2,b_logMup2,logM3,b_logMlo3,b_logMup3],
names=('Name','LogM*2','bM*_lo2','bM*_up2','LogM*3','bM*_lo3','bM*_up3'))
##### Errors for upper limits
#nnan=np.where(np.isnan(T['eF1_3']))
#T['eF1_3'][nnan]=1e-20
#T['Dp'].fill_value=0
#T['Dn'].fill_value=0
#T_new=T.filled()
#T_new.show_in_browser()
#T_new.show_in_notebook()
"""
Explanation: Calculate the Disk Mass with the function above
End of explanation
"""
T.write('Table.fit', format='fits', overwrite='True')
S.write('Table_Mass.fit', format='fits', overwrite='True')
New=Table.read("Table.fit")
#New.info()
#New.colnames
New.show_in_browser()
"""
Explanation: Write the new table to a new FITS file, and read it back
End of explanation
"""
|
Geosyntec/pycvc | examples/2 - Hydrologic Summaries.ipynb | bsd-3-clause | %matplotlib inline
import os
import sys
import datetime
import warnings
import numpy as np
import matplotlib.pyplot as plt
import pandas
import seaborn
seaborn.set(style='ticks', context='paper')
import wqio
from wqio import utils
import pybmpdb
import pynsqd
import pycvc
min_precip = 1.9999
big_storm_date = datetime.date(2013, 7, 8)
pybmpdb.setMPLStyle()
seaborn.set(style='ticks', rc={'text.usetex': False}, palette='deep')
POCs = [p['cvcname'] for p in filter(lambda p: p['include'], pycvc.info.POC_dicts)]
if wqio.testing.checkdep_tex() is None:
tex_msg = ("LaTeX not found on system path. You will "
"not be able to compile ISRs to PDF files")
warnings.warn(tex_msg, UserWarning)
warning_filter = "ignore"
warnings.simplefilter(warning_filter)
"""
Explanation: CVC Data Summaries (with simple method hydrology)
Setup the basic working environment
End of explanation
"""
# simple method file
tidy_file = "output/tidy/hydro_simple.csv"
# # SWMM Files
# tidy_file = "output/tidy/hydro_swmm.csv"
hydro = pandas.read_csv(tidy_file, parse_dates=['start_date', 'end_date'])
"""
Explanation: Load tidy data
Data using the Simple Method hydrology is suffixed with _simple.
You could also use the SWMM Model hydrology with the _SWMM files.
End of explanation
"""
def winsorize_duration(g):
winsor_limits = {
'ED-1': (0.0, 0.0),
'LV-1': (0.2, 0.1),
'LV-2': (0.2, 0.3),
'LV-4': (0.0, 0.0),
}
return wqio.utils.winsorize_dataframe(g, duration_hours=winsor_limits[g.name])
with pandas.ExcelWriter("output/xlsx/CVCHydro_StormInfo_Simple.xlsx") as xl_storminfo:
hydro.to_excel(xl_storminfo, sheet_name='Storm Info', index=False)
for timegroup in [None, 'year', 'season', 'grouped_season']:
stat_options = {
'minprecip': min_precip,
'groupby_col': timegroup,
}
(
hydro.groupby('site')
.apply(winsorize_duration)
.pipe(pycvc.summary.remove_load_data_from_storms, [big_storm_date], 'start_date')
.pipe(pycvc.summary.storm_stats, **stat_options)
.to_excel(xl_storminfo, sheet_name='Storm Stats - {}'.format(timegroup), index=False)
)
"""
Explanation: High-level summaries
Hydrologic info and stats
Does not include the July 8, 2013 storm event.
For LV-1 and LV-2, event durations are winsorized to replace outliers beyond the 97.5 percentile.
For more information, see:
scipy.stats.mstats.winsorize
wqio.utils.winsorize_dataframe
End of explanation
"""
for site in ['ED-1', 'LV-2', 'LV-4']:
for by in ['year', 'outflow', 'season', 'grouped_season']:
try:
pycvc.viz.hydro_pairplot(hydro, site, by=by)
except:
print('failed on {}, {}'.format(site, by))
"""
Explanation: Hydrologic Pairplots
Expected failures due to lack of data:
1. LV-2, outflow
1. LV-4, grouped_season
End of explanation
"""
sites = [
{'name': 'ED-1', 'color': seaborn.color_palette()[0]},
{'name': 'LV-1', 'color': seaborn.color_palette()[1]},
{'name': 'LV-2', 'color': seaborn.color_palette()[4]},
{'name': 'LV-4', 'color': seaborn.color_palette()[5]},
]
for site in sites:
pycvc.viz.hydro_jointplot(
hydro=hydro, site=site['name'],
xcol='total_precip_depth',
ycol='outflow_mm',
conditions="outflow_mm > 0",
one2one=True,
color=site['color'],
)
pycvc.viz.hydro_jointplot(
hydro=hydro, site=site['name'],
xcol='antecedent_days',
ycol='outflow_mm',
conditions="outflow_mm > 0",
one2one=False,
color=site['color'],
)
pycvc.viz.hydro_jointplot(
hydro=hydro, site=site['name'],
xcol='total_precip_depth',
ycol='antecedent_days',
conditions="outflow_mm == 0",
one2one=False,
color=site['color'],
)
pycvc.viz.hydro_jointplot(
hydro=hydro, site=site['name'],
xcol='peak_precip_intensity',
ycol='peak_outflow',
conditions=None,
one2one=False,
color=site['color'],
)
plt.close('all')
"""
Explanation: Hydrologic joint distribution plots
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cams/cmip6/models/sandbox-1/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
phnmnl/workflow-demo | OpenMS/OpenMS.ipynb | apache-2.0 | import os
workingDir="OpenMS"
if not os.path.exists(workingDir):
os.makedirs(workingDir)
os.chdir(workingDir)
"""
Explanation: OpenMS Workflow
OpenMS is an open source platform for LC/MS data pre-processing and analysis.
Several tools have been developed using OpenMS library including noise reduction, centroiding, quantification, and alignment.
The following workflow has been developed to perform a simple LC/MS metabolomics data processing and analysis using OpenMS, Python, and R, utilizing the Docker containers concept. All the Docker images are available through DockerHub and the Docker files can be obtained through the PhenoMeNal GitHub.
# A typical LC/MS data processing pipeline consists of the following components:
Peak picking (centroiding)
Feature Finding (quantification)
Feature Linking (matching)
Conversion (e.g. to tab-separated values files)
Downstream Analysis
# In the rest of this text, we will go through all the mentioned steps using OpenMS. For more information about specific tools, please refer to the OpenMS documentation. We will also use IPython within a Jupyter notebook as the interface and R for downstream data analysis.
Data prepration
Assuming you are in the root directory of Jupyter, we create a folder (named "OpenMS") and change the working directory to this folder.
End of explanation
"""
import urllib.request
urllib.request.urlretrieve("https://raw.githubusercontent.com/PayamE/Containers/master/data/data_list.txt","data_list.txt")
urllib.request.urlretrieve("https://raw.githubusercontent.com/PayamE/Containers/master/data/params.txt","params.txt")
"""
Explanation: We then download two text files (called "data_list.txt" and "params.txt"), where "data_list.txt" contains the names of the raw MS files (mzML) and a download link for each file.
In addition, since OpenMS tools read parameters from ".ini" files, we also need to download these files from the links provided in "params.txt".
End of explanation
"""
import csv
import os
rawDirectory="rawFiles"
if not os.path.exists(rawDirectory):
os.makedirs(rawDirectory)
paramDirectory="paramFiles"
if not os.path.exists(paramDirectory):
os.makedirs(paramDirectory)
param_path=[]
with open('params.txt','r') as f:
reader=csv.reader(f,delimiter='\t')
for row in reader:
urllib.request.urlretrieve(row[1],paramDirectory+"/"+row[0])
param_path.append(row)
data_path=[]
with open('data_list.txt','r') as f:
reader=csv.reader(f,delimiter='\t')
for row in reader:
urllib.request.urlretrieve(row[1],rawDirectory+"/"+row[0])
data_path.append(row)
"""
Explanation: Next, we create two folders ("rawFiles" and "paramFiles") and download the files using the links provided in "data_list.txt" and "params.txt" to their corresponding folders.
End of explanation
"""
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning) # suppress warnings
"""
Explanation: After that, we suppress warnings that might be issued due to the insecure (unverified TLS) connection.
End of explanation
"""
control=input()
"""
Explanation: We input the address of the control node
End of explanation
"""
import getpass
password=getpass.getpass()
"""
Explanation: as well as the password.
End of explanation
"""
url_peakPicker="https://admin:"+password+"@"+control+"/chronos/scheduler/iso8601"
url_featureFinder="https://admin:"+password+"@"+control+"/chronos/scheduler/dependency"
i=0
containerNamesfeatureFinder=[]
featureFinderoutNames=[]
peakPickerDir="peakPickerDir"
featureFinderDir="featureFinderDir"
if not os.path.exists(peakPickerDir):
os.makedirs(peakPickerDir)
if not os.path.exists(featureFinderDir):
os.makedirs(featureFinderDir)
for mzFile in data_path:
i=i+1
peakPickerInputFile=mzFile[0]
peakPickerOutputFile=mzFile[0]
containerNamePeakPicker="peakpickerhires"+"_"+peakPickerInputFile.replace(".mzML","")
json_peakpicker="""
{
"schedule" : "R1//PT1H",
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/peakpickerhires",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "PeakPickerHiRes -in /data/%s/%s -out /data/%s/%s -ini /data/%s",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (containerNamePeakPicker,workingDir,rawDirectory,peakPickerInputFile,peakPickerDir ,peakPickerOutputFile, paramDirectory+"/peakPickerParam.ini")
featureFinderoutput=peakPickerOutputFile.replace(".mzML",".featureXML")
containerNamefeatureFinder="featurefindermetabo"+"_"+peakPickerOutputFile.replace(".mzML","")
containerNamesfeatureFinder.append(containerNamefeatureFinder)
featureFinderoutNames.append(featureFinderoutput)
json_featurefinder="""
{
"parents" : ["%s"],
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/featurefindermetabo",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "FeatureFinderMetabo -in /data/%s/%s -out /data/%s/%s -ini /data/%s",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (containerNamePeakPicker,containerNamefeatureFinder,workingDir,peakPickerDir,peakPickerOutputFile,featureFinderDir ,featureFinderoutput,paramDirectory+"/featureFinderParam.ini")
response=requests.post(url_peakPicker, headers = {'content-type' : 'application/json'}, data=json_peakpicker, verify=False)
print("HTTP response code peakPicker: " + str(response.status_code))
response=requests.post(url_featureFinder, headers = {'content-type' : 'application/json'}, data=json_featurefinder, verify=False)
print("HTTP response code featureFinder: " + str(response.status_code))
"""
Explanation: Now we are ready to perform the first step of data pre-processing!
Peak Picking and feature finding
Here we begin with peak picking, which converts the raw data into peak lists used for further processing. We will then perform feature finding on the centroided data (the result of peak picking) to detect features across retention time and group their isotope patterns.
Briefly, for each raw data file, we prepare two "json" files. The first json ("json_peakpicker") will be used to spin up a "PeakPickerHiRes" Docker container on a raw mzML file, and the second json ("json_featurefinder") will be used to perform feature finding with the "FeatureFinderMetabo" container on the result of PeakPickerHiRes. The most important difference in how PeakPickerHiRes and FeatureFinderMetabo are spun up is that FeatureFinderMetabo is a dependent job, meaning that it will only run once "PeakPickerHiRes" has successfully finished. So, a FeatureFinderMetabo container set to work on file "X" will run once PeakPickerHiRes on the same file ("X") has finished. This means that a large number of PeakPickerHiRes-FeatureFinderMetabo pairs can run at the same time.
End of explanation
"""
featureLinkerDir="featureLinkerDir"
if not os.path.exists(featureLinkerDir):
os.makedirs(featureLinkerDir)
url_featureLinker="https://admin:"+password+"@"+control+"/chronos/scheduler/dependency"
featureLinkerInput=' '.join(["/data/"+featureFinderDir+"/" + fileName for fileName in featureFinderoutNames])
featureLinkerOutput="featureLinkerResult.consensusXML"
containerNamefeatureLinker="featureLinker"
parents="%s"%(containerNamesfeatureFinder)
json_featureLinker="""
{
"parents" : %s,
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/featurelinkerunlabeledqt",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "FeatureLinkerUnlabeledQT -in %s -out /data/%s/%s -ini /data/%s",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (parents.replace("\'","\""),containerNamefeatureLinker,workingDir,featureLinkerInput,featureLinkerDir,featureLinkerOutput,paramDirectory+"/featureLinkerParam.ini")
response=requests.post(url_featureLinker, headers = {'content-type' : 'application/json'}, data=json_featureLinker, verify=False)
print("HTTP response code featureFinder: " + str(response.status_code))
"""
Explanation: Feature linking
This process is used to match corresponding features across all the MS runs.
The method of spinning up is similar to that of feature finder. However, the container ("featurelinkerunlabeledqt") will be dependent on all of the FeatureFinderMetabo containers in the previous step. This means that "featurelinkerunlabeledqt" will only run if all the FeatureFinderMetabo processes successfully finish.
End of explanation
"""
textExporterDir="textExporterDir"
if not os.path.exists(textExporterDir):
os.makedirs(textExporterDir)
url_textExporter="https://admin:"+password+"@"+control+"/chronos/scheduler/dependency"
textExporterInput=featureLinkerDir+"/"+featureLinkerOutput
textExporterOutput="textExporterOutput.csv"
containerNameTextExporter="textexporter"
parents="%s"%(containerNamefeatureLinker)
json_textExporter="""
{
"parents" : ["%s"],
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/textexporter",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "TextExporter -in /data/%s -out /data/%s/%s -ini /data/%s",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (parents,containerNameTextExporter,workingDir,textExporterInput,textExporterDir,textExporterOutput,paramDirectory+"/textExporter.ini")
response=requests.post(url_textExporter, headers = {'content-type' : 'application/json'}, data=json_textExporter, verify=False)
print("HTTP response code featureFinder: " + str(response.status_code))
"""
Explanation: Exporting to CSV file
The consensusXML output of the linking process will be converted to a CSV file. This process is also dependent on the previous step.
End of explanation
"""
convertToDecyderDir="convertToDecyderDir"
if not os.path.exists(convertToDecyderDir):
os.makedirs(convertToDecyderDir)
url_convertToDecyder="https://admin:"+password+"@"+control+"/chronos/scheduler/dependency"
convertToDecyderInput=textExporterDir+"/"+textExporterOutput
convertToDecyderOutput=""
containerNameconvertToDecyder="converttodecyder"
textExporterOutputName="textExporterOutput.xls"
parents="%s"%(containerNameTextExporter)
json_textExporter="""
{
"parents" : ["%s"],
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/converttodecyder",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "Rscript convert_to_decyder.R -in=/data/%s -out=/data/%s -name=%s",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (parents,containerNameconvertToDecyder,workingDir,convertToDecyderInput,convertToDecyderDir,textExporterOutputName)
response=requests.post(url_convertToDecyder, headers = {'content-type' : 'application/json'}, data=json_textExporter, verify=False)
print("HTTP response code convertToDecyder: " + str(response.status_code))
"""
Explanation: Formatting the CSV file in R
The CSV file from the previous step is converted to an "Excel" like file. This step will only run if TextExporter has finished.
End of explanation
"""
plotDir="plotDir"
if not os.path.exists(plotDir):
os.makedirs(plotDir)
url_plot="https://admin:"+password+"@"+control+"/chronos/scheduler/dependency"
plotInput=convertToDecyderDir+"/"+textExporterOutputName
plotOutput="plot.png"
containerNameplot="plotmsdata"
parents="%s"%(containerNameconvertToDecyder)
json_plot="""
{
"parents" : ["%s"],
"cpus": "0.45",
"mem": "256",
"epsilon" : "PT10M",
"name" : "%s",
"container": {
"type": "DOCKER",
"image": "payamemami/plotmsdata",
"volumes": [{
"hostPath": "/mnt/container-volumes/jupyter/%s",
"containerPath": "/data",
"mode": "RW"
}]
},
"command" : "Rscript plotMSData.R -in=/data/%s -out=/data/%s/%s -pattern=intensity_ -impute=T -plottype=BOX -width=20 -height=20 -imagetype=PNG -log=T",
"owner" : "payam.emami@medsci.uu.se"
}
""" % (parents,containerNameplot,workingDir,plotInput,plotDir,plotOutput)
response=requests.post(url_plot, headers = {'content-type' : 'application/json'}, data=json_plot, verify=False)
print("HTTP response code featureFinder: " + str(response.status_code))
"""
Explanation: Using the formatted file, we plot the data using R functions from the ggplot2 package.
End of explanation
"""
from IPython.display import Image
Image(filename=plotDir+"/"+plotOutput)
"""
Explanation: Finally, we can see the plot!
End of explanation
"""
|
dcavar/python-tutorial-for-ipython | notebooks/Deep Learning Tutorial.ipynb | apache-2.0 | from typing import Callable
"""
Explanation: Deep Learning Tutorial
(C) 2019 by Damir Cavar
This notebook was inspired by numerous tutorials and other notebooks online, and books like Weidman (2019), ...
General Conventions
In the following Python code I will make use of type hints for Python to make explicit the variable and return types of all the functions used. This is supposed to make the code semantics more transparent.
End of explanation
"""
import numpy as np
"""
Explanation: Our Core Python Libraries
We will make use of the scientific computing package numpy in this notebook. In the following cell we import numpy and refer to it as np:
End of explanation
"""
from numpy import ndarray
"""
Explanation: We will need the ndarray object from numpy. By importing it directly here, we intend to simplify and reduce the code in the following:
End of explanation
"""
from matplotlib import pyplot as plt
"""
Explanation: Some of the functions that we will need to use will be plotted. We use the pyplot library from the matplotlib, refering to it as plt.
End of explanation
"""
def leaky_relu(x: ndarray) -> ndarray:
"""
Apply Leaky ReLU to each element in ndarray.
"""
return np.maximum(0.2 * x, x)
"""
Explanation: Activation Functions
The following function is taken from Weidman (2019):
End of explanation
"""
%matplotlib inline
x = np.arange(-2, 2, 0.05)
y = leaky_relu(x)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_title("Leaky ReLU")
ax.set_xlabel("x")
ax.set_ylabel("y")
"""
Explanation: We can plot the Leaky ReLU function as follows:
End of explanation
"""
%matplotlib inline
x = np.arange(-2, 2, 0.05)
y = x.clip(min=0)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_title("ReLU")
ax.set_xlabel("x")
ax.set_ylabel("y")
def deriv(func : Callable[[ndarray], ndarray],
input_ : ndarray,
delta : float = 0.001) -> ndarray:
"""
Evaluates the derivative of a function 'func' at every element in the 'input_' array.
"""
return (func(input_ + delta) - func(input_ - delta)) / (2 * delta)
"""
Explanation: We can reformulate the ReLU function as a special variant of the numpy.clip function, applied here to the ndarray x:
End of explanation
"""
%matplotlib inline
def f(x):
return 1/x
x = np.linspace(0.1,1.5,150)
y = f(x)
a = .4
h = 0.001
fprime = (f(a + h) - f(a)) / h # derivative
tan = f(a) + fprime * (x - a) # tangent
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y, 'b', a, f(a), 'om', x, tan, '--r')
ax.set_title("Slope")
ax.set_xlabel("x")
ax.set_ylabel("y")
"""
Explanation: Derivatives
Derivatives are the amount of change of the result of a function when changing the input slightly at a certain value a.
$$\frac{df}{du}(a) = \lim_{\Delta \rightarrow 0} \frac{f(a + \Delta) - f(a - \Delta)}{2 \times \Delta}$$
To approximate the limit, we can set a very small value for $\Delta$:
$$\frac{df}{du}(a) = \frac{f(a + 0.001) - f(a - 0.001)}{2 \times 0.001} = \frac{f(a + 0.001) - f(a - 0.001)}{0.002}$$
We can simplify this equation to optimize the computation by taking $\Delta = 0.001$ only once into account:
$$\frac{df}{du}(a) = \frac{f(a + 0.001) - f(a)}{0.001}$$
This is in fact the slope of the function f(x) at point a, represented in the following by the tangent (red line):
End of explanation
"""
def deriv(func: Callable[[ndarray], ndarray],
input_: ndarray,
delta: float = 0.001) -> ndarray:
"""
Computes the derivative of func for every value in the input array.
"""
return (func(input_ + delta) - func(input_)) / delta
"""
Explanation: A simplified derivative function could be formulated as follows:
End of explanation
"""
def softmax(x: ndarray) -> ndarray:
"""
Compute softmax values for each set of scores in x.
"""
return np.exp(x) / np.sum(np.exp(x), axis=0)
"""
Explanation: Softmax
$$softmax(z_i) = \frac{e^{z_i}}{\sum_{j=1}^d e^{z_j}}\mbox{ with } 1 \leq i \leq d$$
We can define the softmax function as follows:
End of explanation
"""
scores = [3.0, 1.0, 0.2]
softmaxscores = softmax(scores)
print("Softmax:", softmaxscores, "\tSum:", sum(softmaxscores))
"""
Explanation: The following example shows the effect:
End of explanation
"""
|
dereneaton/ipyrad | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | gpl-3.0 | # conda install ipyrad -c bioconda
# conda install htslib -c bioconda
# conda install bcftools -c bioconda
import ipyrad.analysis as ipa
import pandas as pd
"""
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> vcf_to_hdf5
View as notebook
Many genome assembly tools will write variant SNP calls to the VCF format (variant call format). This is a plain text file that stores variant calls relative to a reference genome in tabular format. It includes a lot of additional information about the quality of SNP calls, etc., but is not very easy to read or efficient to parse. To make analyses run a bit faster ipyrad uses a simplified format to store this information in the form of an HDF5 database. You can easily convert any VCF file to this HDF5 format using the ipa.vcf_to_hdf5() tool.
This tool includes an added benefit of allowing you to enter an (optional) ld_block_size argument when creating the file which will store information that can be used downstream by many other tools to subsample SNPs and perform bootstrap resampling in a way that reduces the effects of linkage among SNPs. If your data are assembled RAD data then the ld_block_size is not required, since we can simply use RAD loci as the linkage blocks. But if you want to combine reference-mapped RAD loci located nearby in the genome as being on the same linkage block then you can enter a value such as 50,000 to create 50Kb linkage block that will join many RAD loci together and sample only 1 SNP per block in each bootstrap replicate. If your data are not RAD data, e.g., whole genome data, then the ld_block_size argument will be required in order to encode linkage information as discrete blocks into your database.
Required software
If you are converting a VCF file assembled from some other tool (e.g., GATK, freebayes, etc.) then you will need to install the htslib and bcftools software and use them as described below.
End of explanation
"""
%%bash
# compress the VCF file if not already done (creates .vcf.gz)
bgzip data.vcf
# tabix index the compressed VCF (creates .vcf.gz.tbi)
tabix data.vcf.gz
# remove multi-allelic SNPs and INDELs and PIPE to next command
bcftools view -m2 -M2 -i'CIGAR="1X" & QUAL>30' data.vcf.gz -Ou |
# remove extra annotations/formatting info and save to new .vcf
bcftools annotate -x FORMAT,INFO > data.cleaned.vcf
# recompress the final file (create .vcf.gz)
bgzip data.cleaned.vcf
"""
Explanation: Pre-filter data from other programs (e.g., FreeBayes, GATK)
You can use the program bcftools to pre-filter your data to exclude indels and low quality SNPs. If you ran the conda install commands above then you will have all of the required tools installed. To achieve the format that ipyrad expects you will need to exclude indel containing SNPs (this may change in the future). Further quality filtering is optional.
The example below reduced the size of a VCF data file from 29Gb to 80Mb! VCF contains a lot of information that you do not need to retain through all of your analyses. We will keep only the final genotype calls.
Note that the code below is bash script. You can run this from a terminal, or in a jupyter notebook by appending the (%%bash) header like below.
End of explanation
"""
# load the VCF as a dataframe, reading in chunks
dfchunks = pd.read_csv(
"/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz",
sep="\t",
skiprows=1000,
chunksize=1000,
)
# show first few rows of first dataframe chunk
next(dfchunks).head()
"""
Explanation: A peek at the cleaned VCF file
End of explanation
"""
# init a conversion tool
converter = ipa.vcf_to_hdf5(
name="Macaque_LD20K",
data="/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz",
ld_block_size=20000,
)
# run the converter
converter.run()
"""
Explanation: Converting clean VCF to HDF5
Here I using a VCF file from whole geome data for 20 monkey's from an unpublished study (in progress). It contains >6M SNPs all from chromosome 1. Because many SNPs are close together and thus tightly linked we will likely wish to take linkage into account in our downstream analyses.
The ipyrad analysis tools can do this by encoding linkage block information into the HDF5 file. Here we encode ld_block_size of 20K bp. This breaks the 1 scaffold (chromosome) into about 10K linkage blocks. See the example below of this information being used in an ipyrad PCA analysis.
End of explanation
"""
# init a PCA tool and filter to allow no missing data
pca = ipa.pca(
data="./analysis-vcf2hdf5/Macaque_LD20K.snps.hdf5",
mincov=1.0,
)
"""
Explanation: Downstream analyses
The data file now contains 6M SNPs across 20 samples and N linkage blocks. By default the PCA tool subsamples a single SNP per linkage block. To explore variation over multiple random subsamplings we can use the nreplicates argument.
End of explanation
"""
pca.run_and_plot_2D(0, 1, seed=123);
"""
Explanation: Run a single PCA analysis from subsampled unlinked SNPs
End of explanation
"""
pca.run_and_plot_2D(0, 1, seed=123, nreplicates=25);
"""
Explanation: Run multiple PCAs over replicates of subsampled SNPs
Here you can see the results for a different 10K SNPs that are sampled in each replicate iteration. If the signal in the data is robust then we should expect to see the points clustering at a similar place across replicates. Internally ipyrad will rotate axes to ensure the replicate plots align despite axes swapping (which is arbitrary in PCA space). You can see this provides a better view of uncertainty in our estimates than the plot above (and it looks cool!)
End of explanation
"""
|
tiagoft/curso_audio | tdf_audio.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fs = 44100
T = 3 # seconds
N = fs*T # number of samples in the signal
f = 1000 # frequency of the sinusoid
t = np.linspace(0, T, N) # Here, we define the time instants at which the signal is sampled
x = np.cos(2 * np.pi * f * t)
plt.plot(t,x)
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: The Fourier Transform for Audio Analysis
Objectives
By the end of this unit, the student will be able to:
Use the Discrete Fourier Transform (DFT) to analyze audio signals, understanding its limitations.
Use spectrograms for audio analysis, and manipulate the parameters of a spectrogram so as to highlight acoustic events.
Prerequisites
At this point in the course, the student should be comfortable with the following concepts:
1) Fourier series are models capable of describing some properties of harmonic sounds
As we have known since Helmholtz's experiments, the oscillations related to the sensation of harmonic sounds (such as those emitted by vibrating strings and resonance tubes) can be described by a series of cosines whose frequencies are multiples of a fundamental $f_0$, in the form:
$$x(t) = \sum_{m=1}^M a_m \cos (2 \pi m f_0 t + \phi_m),$$
where:
* the coefficients $\phi_m$ relate to the phase of each cosine, and have little perceptual impact;
* the coefficients $a_m$ relate to the amplitude of each cosine, and are strongly related to the perceived timbre.
2) Any signal - periodic or not - can be represented by its Fourier Transform
The Fourier Transform of a signal carries its representation from the time domain to the frequency domain. A signal $x(t)$ in the time domain can be represented, in the frequency domain, by a signal $X(f)$ such that:
$$X(f) = \int x(t) e^{-j 2 \pi f t} dt,$$
where:
* $j$ is the imaginary unit ($j^2 = -1$),
* $t$ is the variable, in seconds, related to the time domain,
* $f$ is the variable, in Hertz, related to the frequency domain, and
* the complex exponential relates to sines and cosines through Euler's formula: $e^{j\theta} = \cos(\theta) + j \sin(\theta)$
3) Time-domain signals can be sampled, generating a corresponding numerical sequence
When a signal $x(t)$ is sampled with sampling frequency $f_s$, we generate a numerical sequence $x[n]$, where $n$ is the number (index) of a sample, in the form:
$$x[n] = x(\frac{n}{f_s}).$$
In this case, we know that the sample $x[n]$ represents the magnitude (or, simply, the measured value) of the signal $x(t)$ at the instant $t = \frac{n}{f_s}$. We also know that $x[n]$ contains all the information of $x(t)$, without distortion, if the sampling frequency $f_s$ is greater than twice the highest frequency used to represent the signal in its Fourier Transform. Finally, we know that, under these conditions, the signal $x[n]$ can be synthesized again (using a sound card, for example) so as to recover the continuous signal $x(t)$.
4) Discrete signals can be represented in the frequency domain by their Discrete Fourier Transform
The Discrete Fourier Transform (DFT) $X[k]$ of a discrete signal $x[n]$ is computed by:
$$X[k] = \sum_{n=0}^{N-1} x[n] e^{\frac{-j 2 \pi k n}{N}},$$
where:
* $N$ is the number of samples taken for computing the DFT,
* $n$ is the index of the sequence in the discrete-time domain, ranging from $0$ to $N-1$,
* $k$ is the index of the sequence in the frequency domain, ranging from $0$ to $N-1$.
If the Nyquist condition is satisfied, we know that the element $X[k]$ is a sample of the Fourier Transform of $x(t)$ at the frequency $f = f_s \frac{k}{N}$.
Stationary signals
First, we should develop some intuition about what "time domain" and "frequency domain" mean. To do so, we will emulate the sampling of an ideal signal $\cos(2 \pi f t)$ and then operate on it.
End of explanation
"""
# Define the maximum and minimum indices as a function of time instants given in seconds,
# keeping the physical meaning of the choices
tmax = 0.015 # seconds
tmin = 0. # seconds
# Conversion to indices:
nmax = int(tmax * fs) # Casting to int prevents nmax and nmin from taking fractional values
nmin = int(tmin * fs)
# New plot:
plt.plot(t[nmin:nmax], x[nmin:nmax]) # x[a:b] is a vector with the elements of x from index a to b-1
plt.title('nmin='+str(nmin)+' nmax='+str(nmax))
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: We can see that our figure does not show useful information. What happened is that, in this small screen area, $f_s \times T$ data points are shown, connected by lines drawn during plotting. Thus, to visualize our signal $x[n]$, we need to delimit a small range of indices to display:
End of explanation
"""
y = np.fft.fft(x) # We compute the DFT with a single line of code
k = np.linspace(0, len(x), len(x)) # Values of k
freq = fs * k / len(x) # Here, we determine the frequency associated with each value of k
plt.plot(freq, np.abs(y)) # Plot only the magnitude of y, discarding the phase
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: From our premises, we can expect the Discrete Fourier Transform (DFT) of our signal $x[n]$ to show a magnitude peak at the index $k$ corresponding to the frequency of our sinusoid. The numpy package already implements the DFT through the FFT (Fast Fourier Transform) algorithm. Thus, we have:
End of explanation
"""
fmax = 5000 # Hz
fmin = 0
kmax = int(fmax * len(x)/fs) # cast to int so the values can be used as slice indices
kmin = int(fmin * len(x)/fs)
plt.plot(freq[kmin:kmax], np.abs(y)[kmin:kmax]) # Plot only the magnitude of y, discarding the phase
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: In this plot, we can see the symmetry of the DFT around the Nyquist frequency ($\frac{1}{2}f_s$). However, it has the same problem as the first time-domain plot: because it shows too many data points, it is hard to read. Let us adopt a solution analogous to the one used in the time domain and delimit the frequencies we would like to display:
End of explanation
"""
f1 = 500
f2 = 2500
a1 = 1.0
a2 = 0.2
xs = (a1 * np.cos(2 * np.pi * f1 * t)) + (a2 * np.cos(2 * np.pi * f2 * t))
ys = np.fft.fft(xs)
plt.plot(t[nmin:nmax], xs[nmin:nmax])
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
plt.plot(freq[kmin:kmax], np.abs(ys)[kmin:kmax])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: Então, como esperávamos, vemos um pico de magnitude na DFT no índice relacionado à frequência de nossa cossenóide.
Uma característica interessante da DFT é que ela permite visualizar sinais que são somas de diferentes cossenóides, evidenciando as magnitudes dessas mesmas cossenóides:
End of explanation
"""
tc = np.linspace(0, T*2, N*2)
xc = np.hstack( (x, xs) )
# We can plot the first half and the second half of the signal in separate figures:
tmin1 = 0
tmax1 = 0.01
tmin2 = 4
tmax2 = 4.01
# Conversion to indices:
nmax1 = int(tmax1 * fs)
nmin1 = int(tmin1 * fs)
nmax2 = int(tmax2 * fs)
nmin2 = int(tmin2 * fs)
plt.plot(tc, xc)
plt.title('Full signal xc')
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
plt.plot(tc[nmin1:nmax1], xc[nmin1:nmax1])
plt.title('nmin='+str(nmin1)+' nmax='+str(nmax1))
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
plt.plot(tc[nmin2:nmax2], xc[nmin2:nmax2])
plt.title('nmin='+str(nmin2)+' nmax='+str(nmax2))
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: Sinais não-estacionários
Nos exemplos acima, utilizamos apenas sinais que permanecem constantes no tempo. Vejamos o que acontece num outro caso. Vamos supor um sinal criado pela concatenação de $x[n]$, tal qual definido no primeiro exemplo, e do sinal $x_s[n]$, tal qual definido no segundo exemplo. Nesse caso, geraremos o sinal $x_c[n] = [x[n], x_s[n]]$:
End of explanation
"""
yc = np.fft.fft(xc)
k2 = np.linspace(0, len(xc), len(xc)) # Veja que o mapeamento de valores de k muda, porque o tamanho da TDF mudou.
freq2 = fs * k2 / len(xc) # Aqui, determino a frequência relacionada a cada valor de k
kmax2 = fmax * len(xc)/fs
kmin2 = fmin * len(xc)/fs
plt.plot(freq2[kmin2:kmax2], np.abs(yc)[kmin2:kmax2])
plt.xlabel('Frequencia (Hz)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: As we expected, we can see that:
1. In the waveform of the full signal, there seems to be a "break" at the point where the two input signals are joined;
1. Expanding the first half of the signal, the waveform of $x[n]$ is visible;
1. Likewise, in the second half the waveform of $x_s[n]$ is visible.
Let us now compute the DFT of $x_c[n]$:
End of explanation
"""
yc1 = np.fft.fft(xc[0:N])
yc2 = np.fft.fft(xc[N:2*N])
k3 = np.linspace(0, len(yc1), len(yc1)) # Note that the mapping of k values changes, because the DFT size changed.
freq3 = fs * k3 / len(yc1) # Here we determine the frequency associated with each value of k
kmax3 = int(fmax * len(yc1)/fs)
kmin3 = int(fmin * len(yc1)/fs)
plt.plot(freq3[kmin3:kmax3], np.abs(yc1)[kmin3:kmax3])
plt.title('DFT of the first half of the signal')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude')
plt.show()
plt.plot(freq3[kmin3:kmax3], np.abs(yc2)[kmin3:kmax3])
plt.title('DFT of the second half of the signal')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude')
plt.show()
"""
Explanation: We see that this result looks like the sum of the DFTs of $x[n]$ and $x_s[n]$.
Thus, we find that the DFT is an important tool for highlighting information in the frequency domain, although it hampers the visualization of information in the time domain.
Resolution in the time domain
One solution to the time-resolution problem, in our case, is to compute two DFTs: one for the first half of the signal and another for the second half. We would then have:
End of explanation
"""
# This code overwrites the variables defined above. To go back, everything must be run again.
fs = 44100.
N = 256
zeros = 10 # The signal will be zero-padded until it reaches length zeros * N
f0 = 400
t = np.linspace(0, N/fs, N)
x = np.cos(2 * np.pi * f0 * t)
y = np.fft.fft(x, N*zeros) # The rectangular window is defined implicitly in this case.
freq = np.linspace(0, fs, len(y))
fmax = 1600.
fmin = 0.
kmax = int(fmax * len(y)/fs)
kmin = int(fmin * len(y)/fs)
# Logarithmic scale in the frequency domain
y = y/np.max(np.abs(y))
y = 20 * np.log10(np.abs(y)) # the magnitude is expressed in dB, where 0 dB is the maximum magnitude
plt.plot(freq[kmin:kmax], y[kmin:kmax])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.show()
"""
Explanation: In this case, what we did was compute the DFT over a short time interval. At this point, it should be clear that the resolution in the time domain has been increased.
However, an interesting phenomenon occurs when we take a DFT with fewer points - and, therefore, with higher time-domain resolution. In this case, we can immediately see that the number of points representing the frequency domain is reduced proportionally. This can make it harder to identify two frequency components that are very close together.
Let us model this resolution mathematically, and show that we can only reduce the number of points of our DFT within limits that depend on the spectral content we wish to visualize.
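This tradeoff can be made concrete with a quick sketch (the sampling rate here is an illustrative value, not tied to the cells above): the spacing between adjacent DFT bins is $f_s/N$, so halving $N$ doubles the bin spacing.

```python
fs = 44100.0  # sampling rate in Hz, illustrative value

# The bin spacing (frequency resolution) of an N-point DFT is fs/N
for N in (4096, 2048, 1024):
    df = fs / N
    print('N = %4d  ->  bin spacing = %.2f Hz' % (N, df))

# Two tones separated by less than one bin spacing cannot be told apart,
# so shrinking N (better time resolution) coarsens the frequency grid.
```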
Resolution in the frequency domain
Below I follow a well-known line of reasoning, presented with more mathematical rigor in several other sources. The reasoning is first stated and then demonstrated.
We know that the Fourier Transform of a continuous signal is computed within infinite limits.
We also know that the Fourier Transform of the product of two signals $x_1(t)x_2(t)$ equals the convolution of their spectra $X_1(f)*X_2(f)$.
Therefore, when we multiply a cosine signal $\cos(2 \pi f_0 t)$ by a rectangular window (which has value $1$ within known time limits $t0$ and $t1$ and value $0$ outside them), we obtain a signal whose spectrum is the convolution of an impulse $\delta(f-f_0)$ in the frequency domain with a sampling function $s(f,t0,t1)$, characterized by a central lobe and a series of adjacent side lobes.
This result is the sampling function itself, shifted by $f_0$ Hz: $s(f-f_0, t0, t1)$.
For the rectangular window, the lobe width coincides with the frequency difference between two adjacent DFT coefficients.
We can visualize the shape of the sampling function and the effects of its frequency shift using the following procedure:
Apply a sufficiently short window to a cosine signal
Use a logarithmic scale on the vertical axis, so that low-magnitude coefficients remain visible
Artificially append zeros to the end of the windowed signal (a technique called zero-padding), forcing the existence of more samples in the frequency domain.
Thus, we have:
End of explanation
"""
fs = 44100.
N = 256
zeros = 10 # The signal will be zero-padded until it reaches length zeros * N
f0 = 2000
t = np.linspace(0, N/fs, N)
x = np.cos(2 * np.pi * f0 * t)
w = np.hanning(N)
plt.plot(t, x)
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.title('Cosine signal in the time domain')
plt.show()
plt.plot(t, w)
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.title('Hanning window')
plt.show()
plt.plot(t, x*w)
plt.xlabel('Time (s)')
plt.ylabel('Magnitude')
plt.title('Signal multiplied by the window')
plt.show()
"""
Explanation: We thus see that the central lobe - which corresponds to our central frequency - has a definite width. Before moving on, see what happens when the number of points of our DFT changes (by modifying $N$).
One problem with the rectangular window is that the lobes adjacent to the central lobe have high amplitude. This means that a cosine signal at a given frequency also produces contributions that are measured in coefficients associated with other frequencies, and this happens quite strongly. One way to mitigate this is to use a window with smooth edges instead of the rectangular window.
In audio, it is common to use a Hanning window
End of explanation
"""
# This code overwrites the variables defined above. To go back, everything must be run again.
fs = 44100.*2
N = 256
zeros = 10 # The signal will be zero-padded until it reaches length zeros * N
f0 = 400
t = np.linspace(0, N/fs, N)
x = np.cos(2 * np.pi * f0 * t)
# Apply the Hanning window
w = np.hanning(N)
xw = x * w
y = np.fft.fft(x, N*zeros) # The rectangular window is defined implicitly in this case.
yw = np.fft.fft(xw, N*zeros)
freq = np.linspace(0, fs, len(y))
fmax = 1600.
fmin = 0.
kmax = int(fmax * len(y)/fs)
kmin = int(fmin * len(y)/fs)
# Logarithmic scale in the frequency domain
y = y/np.max(np.abs(y))
y = 20 * np.log10(np.abs(y)) # the magnitude is expressed in dB, where 0 dB is the maximum magnitude
yw = yw/np.max(np.abs(yw))
yw = 20 * np.log10(np.abs(yw)) # the magnitude is expressed in dB, where 0 dB is the maximum magnitude
plt.plot(freq[kmin:kmax], y[kmin:kmax], label='Rectangular')
plt.plot(freq[kmin:kmax], yw[kmin:kmax], 'r', label='Hanning')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.legend()
plt.show()
"""
Explanation: Below, we can see the effect of applying the Hanning window on the lobes observed in the DFT:
End of explanation
"""
|
bashalex/datapot | notebooks/DatapotUsageExamples.ipynb | gpl-3.0 |
import datapot as dp
from datapot import datasets
import pandas as pd
from __future__ import print_function
import sys
import bz2
import time
import xgboost as xgb
from sklearn.model_selection import cross_val_score
import datapot as dp
from datapot.utils import csv_to_jsonlines
"""
Explanation: Datapot Usage Examples
End of explanation
"""
transactions = pd.read_csv('../data/transactions.csv')
transactions.head()
"""
Explanation: A dataset with timestamp feature extraction.
Convert the CSV file to JSON lines
End of explanation
"""
datapot = dp.DataPot()
from datapot.utils import csv_to_jsonlines
csv_to_jsonlines('../data/transactions.csv', '../data/transactions.jsonlines')
data_trns = open('../data/transactions.jsonlines')
data_trns.readline()
"""
Explanation: Creating the DataPot object.
End of explanation
"""
datapot.detect(data_trns, limit=100)
t0 = time.time()
datapot.fit(data_trns, verbose=True)
print('fit time:', time.time()-t0)
datapot
"""
Explanation: Let's call the fit method. It automatically finds appropriate transformers for the fields of the jsonlines file. The 'limit' parameter sets how many objects will be used to detect the right transformers.
End of explanation
"""
datapot.remove_transformer('merchant_id', 0)
t0 = time.time()
df_trns = datapot.transform(data_trns)
print('transform time:', time.time()-t0)
df_trns.head()
"""
Explanation: Let's remove the SVDOneHotTransformer
End of explanation
"""
import datapot as dp
from datapot import datasets
"""
Explanation: Bag of Words Meets Bags of Popcorn
Usage example for unstructured textual bzip2-compressed data
https://www.kaggle.com/c/word2vec-nlp-tutorial/data
datapot.fit method subsamples the data to detect language and choose corresponding stopwords and stemming.
For each review datapot.transform generates an SVD-compressed 12-dimensional tfidf-vector representation.
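The tf-idf-plus-SVD idea can be sketched directly with scikit-learn. This is only an illustration of the technique described above, not datapot's internal code; the toy documents are invented, and the number of SVD components is capped by the tiny vocabulary (the real pipeline uses 12 components):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ['a great movie, truly great',
        'a terrible movie',
        'great acting, terrible plot',
        'truly terrible'] * 4  # repeat the toy reviews to get more rows

tfidf = TfidfVectorizer().fit_transform(docs)  # sparse (n_docs, n_terms)
n_components = min(12, tfidf.shape[1] - 1)     # 12-dimensional in the real case
svd = TruncatedSVD(n_components=n_components, random_state=0)
vectors = svd.fit_transform(tfidf)             # dense, one short vector per review
print(vectors.shape)
```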
End of explanation
"""
data_imdb = datasets.load_imdb()
"""
Explanation: Load data from datapot.datasets
End of explanation
"""
data_imdb = bz2.BZ2File('data/imdb.jsonlines.bz2')
datapot_imdb = dp.DataPot()
t0 = time.time()
datapot_imdb.detect(data_imdb)
print('detect time:', time.time()-t0)
datapot_imdb
datapot_imdb.remove_transformer('sentiment', 0)
t0 = time.time()
datapot_imdb.fit(data_imdb, verbose=True)
print('fit time:', time.time()-t0)
t0 = time.time()
df_imdb = datapot_imdb.transform(data_imdb)
print('transform time:', time.time()-t0)
df_imdb.head()
X = df_imdb.drop(['sentiment'], axis=1)
y = df_imdb['sentiment']
model = xgb.XGBClassifier()
cv_score = cross_val_score(model, X, y, cv=5)
assert all(i > 0.5 for i in cv_score), 'Low score!'
print('Cross-val score:', cv_score)
model.fit(X, y)
fi = model.feature_importances_
print('Feature importance:')
print(*(list(zip(X.columns, fi))), sep='\n')
"""
Explanation: Or load directly from file
End of explanation
"""
from datapot import datasets
data_job = datasets.load_job_salary()
# Or load from file:
# data_job = bz2.BZ2File('datapot/data/job.jsonlines.bz2')
datapot_job = dp.DataPot()
t0 = time.time()
datapot_job.detect(data_job)
print('detect time:', time.time()-t0)
datapot_job
t0 = time.time()
datapot_job.fit(data_job, verbose=True)
print('fit time:', time.time()-t0)
t0 = time.time()
df_job = datapot_job.transform(data_job)
print('transform time:', time.time()-t0)
print(df_job.columns)
print(df_job.shape)
df_job.head()
X_job = df_job.drop(['SalaryNormalized', 'Id'], axis=1)
y_job = pd.qcut(df_job['SalaryNormalized'].values, q=2, labels=[0,1]).ravel()
model = xgb.XGBClassifier()
cv_score_job = cross_val_score(model, X_job, y_job, cv=5)
print('Cross-val score:', cv_score_job)
assert all(i > 0.5 for i in cv_score_job), 'Low score!'
model.fit(X_job, y_job)
fi_job = model.feature_importances_
print('Feature importance:')
print(*(list(zip(X_job.columns, fi_job))), sep='\n')
"""
Explanation: Job Salary Prediction
Usage example for unstructured textual bzip2-compressed data
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/eb0c29f55af0173daab811d4f4dc2f40/simulated_raw_data_using_subject_anatomy.ipynb | bsd-3-clause |
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# In this example, raw data will be simulated for the sample subject, so its
# information needs to be loaded. This step will download the data if it is not
# already on your machine. The subjects directory is also set, so it doesn't
# need to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
# To simulate raw data, we need to define when the activity occurs using events
# matrix and specify the IDs of each event.
# Noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
# Take only a few events for speed
events = events[:80]
"""
Explanation: Simulate raw data using subject anatomy
This example illustrates how to generate source estimates and simulate raw data
using subject anatomy with the :class:mne.simulation.SourceSimulator class.
Once the raw data is simulated, generated source estimates are reconstructed
using dynamic statistical parametric mapping (dSPM) inverse operator.
End of explanation
"""
activations = {
'auditory/left':
[('G_temp_sup-G_T_transv-lh', 30), # label, activation (nAm)
('G_temp_sup-G_T_transv-rh', 60)],
'auditory/right':
[('G_temp_sup-G_T_transv-lh', 60),
('G_temp_sup-G_T_transv-rh', 30)],
'visual/left':
[('S_calcarine-lh', 30),
('S_calcarine-rh', 60)],
'visual/right':
[('S_calcarine-lh', 60),
('S_calcarine-rh', 30)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
for activation_list in activations.values()
for activation in activation_list))
region_names = list(activations.keys())
"""
Explanation: In order to simulate source time courses, labels of desired active regions
need to be specified for each of the 4 simulation conditions.
Make a dictionary that maps conditions to activation strengths within
aparc.a2009s :footcite:DestrieuxEtAl2010 labels.
In the aparc.a2009s parcellation:
'G_temp_sup-G_T_transv' is the label for primary auditory area
'S_calcarine' is the label for primary visual area
In each of the 4 conditions, only the primary area is activated. This means
that during the activations of auditory areas, there are no activations in
visual areas and vice versa.
Moreover, for each condition, contralateral region is more active (here, 2
times more) than the ipsilateral.
End of explanation
"""
def data_fun(times, latency, duration):
"""Function to generate source time courses for evoked responses,
parametrized by latency and duration."""
f = 15 # oscillating frequency, beta band [Hz]
sigma = 0.375 * duration
sinusoid = np.sin(2 * np.pi * f * (times - latency))
gf = np.exp(- (times - latency - (sigma / 4.) * rng.rand(1)) ** 2 /
(2 * (sigma ** 2)))
return 1e-9 * sinusoid * gf
"""
Explanation: Create simulated source activity
Generate source time courses for each region. In this example, we want to
simulate source activity for a single condition at a time. Therefore, each
evoked response will be parametrized by latency and duration.
End of explanation
"""
times = np.arange(150, dtype=np.float64) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
for i in range(2):
label_name = activations[region_name][i][0]
label_tmp = mne.read_labels_from_annot(subject, annot,
subjects_dir=subjects_dir,
regexp=label_name,
verbose=False)
label_tmp = label_tmp[0]
amplitude_tmp = activations[region_name][i][1]
if region_name.split('/')[1][0] == label_tmp.hemi[0]:
latency_tmp = 0.115
else:
latency_tmp = 0.1
wf_tmp = data_fun(times, latency_tmp, duration)
source_simulator.add_data(label_tmp,
amplitude_tmp * wf_tmp,
events_tmp)
# To obtain a SourceEstimate object, we need to use `get_stc()` method of
# SourceSimulator class.
stc_data = source_simulator.get_stc()
"""
Explanation: Here, :class:~mne.simulation.SourceSimulator is used, which allows us to
specify where (label), what (source_time_series), and when (events) each event
type will occur.
We will add data for 4 areas, each of which contains 2 labels. Since the add_data
method accepts one label per call, it will be called twice per area.
Evoked responses are generated such that the main component peaks at 100ms
with a duration of around 30ms, which first appears in the contralateral
cortex. This is followed by a response in the ipsilateral cortex with a peak
about 15ms after. The amplitude of the activations will be 2 times higher in
the contralateral region, as explained before.
When the activity occurs is defined using events. In this case, they are
taken from the original raw data. The first column is the sample of the
event, the second is not used. The third one is the event id, which is
different for each of the 4 areas.
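A minimal sketch of that events layout (toy sample numbers, not the sample dataset): each row is (sample, unused, event id), and selecting one condition amounts to filtering on the third column, mirroring the filtering used above.

```python
import numpy as np

# Each row: (sample index, unused, event id)
events_toy = np.array([[27977, 0, 2],
                       [28345, 0, 3],
                       [28771, 0, 1],
                       [29219, 0, 4],
                       [29652, 0, 1]])

region_id = 1  # e.g. the first simulated condition
selected = events_toy[np.where(events_toy[:, 2] == region_id)[0], :]
print(selected[:, 0])  # samples at which this condition occurs
```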
End of explanation
"""
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd)
raw_sim.set_eeg_reference(projection=True)
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')
"""
Explanation: Simulate raw data
Project the source time series to sensor space. Three types of noise will be
added to the simulated raw data:
multivariate Gaussian noise obtained from the noise covariance from the
sample data
blink (EOG) noise
ECG noise
The :class:~mne.simulation.SourceSimulator can be given directly to the
:func:~mne.simulation.simulate_raw function.
End of explanation
"""
epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()
# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=True)
"""
Explanation: Extract epochs and compute evoked responses
End of explanation
"""
method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
evoked_aud_left, inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
evoked_vis_right, inv, lambda2, method)
stc_diff = stc_aud - stc_vis
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
hemi='split', views=['lat', 'med'])
"""
Explanation: Reconstruct simulated source time courses using dSPM inverse operator
Here, source time courses for auditory and visual areas are reconstructed
separately and their difference is shown. This was done merely for better
visual representation of source reconstruction.
As expected, when high activations appear in primary auditory areas, primary
visual areas will have low activations and vice versa.
End of explanation
"""
|
caseyjlaw/FRB121102 | AOVLA_spectrum.ipynb | bsd-3-clause |
%matplotlib inline
import numpy as np
import pylab as pl
import astropy.io.fits as fits
import rtpipe
import rtlib_cython as rtlib
import astropy.units as units
import astropy.coordinates as coord
from astropy.time import Time
# confirm the version is earlier than 1.54 if using the old DM scale
print(rtpipe.__version__)
"""
Explanation: Comparing Arecibo, VLA, and Effelsberg constraints on FRB 121102 burst spectra
Build broad band spectra of FRB for Law et al 2017
Note that this notebook assumes:
- rtpipe <= v1.54
- sdmpy<=1.35
This is due to an error in the DM scale that must be preserved to properly redetect the FRB. Starting at v1.55, the DM scale factor changed from 4.2e-3 to 4.1488e-3.
The DM scale error is 1.2%, so DM=560.5 (in v 1.54) is actually 567.4 (in v1.55 and in other codes and in reality).
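The 1.2% figure follows directly from the ratio of the two scale constants; a quick arithmetic check:

```python
old_k, new_k = 4.2e-3, 4.1488e-3  # rtpipe <= v1.54 vs v1.55+ dispersion constants
ratio = old_k / new_k
print('scale error: %.1f%%' % (100 * (ratio - 1)))                  # ~1.2%
print('DM 560.5 (old scale) = %.1f (new scale)' % (560.5 * ratio))  # ~567.4
```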
End of explanation
"""
dmdelay_new = lambda dm, f0, f1: 4.1488e-3*dm*(1./f0**2 - 1./f1**2) # inconsistent with rtpipe
dmdelay = lambda dm, f0, f1: 4.2e-3*dm*(1./f0**2 - 1./f1**2) # consistent with rtpipe, but requires scaling by 1.2%
"""
Explanation: Useful functions
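The two lambdas above implement the standard cold-plasma dispersion sweep, $\Delta t = k\,\mathrm{DM}\,(f_0^{-2} - f_1^{-2})$, with frequencies in GHz and the delay in seconds. A quick illustrative evaluation (the DM value comes from this notebook; which constant $k$ applies depends on the rtpipe version, as noted above):

```python
dm = 560.5          # pc cm^-3, as used in this notebook
k = 4.1488e-3       # dispersion constant (seconds, GHz, pc cm^-3 units)
f0, f1 = 1.4, 10.0  # GHz

delay = k * dm * (1.0 / f0 ** 2 - 1.0 / f1 ** 2)
print('%.3f s' % delay)  # roughly 1.16 s of sweep for 1.4 GHz relative to 10 GHz
```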
End of explanation
"""
#name = 'puppi_57648_C0531+33_0048_3756.99.ar.paz'
name = 'puppi_57648_C0531+33_0048_3757.00.ar.paz.pT'
fits.info(name)
hdu = fits.open(name)
hdu0, hdu1, hdu2, hdu3, hdu4 = hdu[0], hdu[1], hdu[2], hdu[3], hdu[4]
hdu0.header
"""
Explanation: Read coherently dedispersed Arecibo dynamic spectrum
End of explanation
"""
nch0 = 512
obsfreq = 1780.
df = 1.5625
tint0 = 2.04780080448418e-05
#dm0 = 560.5
dm0 = 553.7 # equivalent to 560.5 in correct scaling
dmt_ao = dmdelay(dm0, obsfreq*1e-3, 1e4)/(24*3600)
bary_ao = 75.7555109 # seconds, from presto
mjd0 = int(57648) # subtract this for convenience. must be an integer mjd
#mjdfrac_ao = (37750 + 0.319370962786796)/(24*3600) - dmt_ao # start time from header
#mjdfrac_ao = (37750 + 0.336629511829426)/(24*3600) - dmt_ao # start time from header, second round
mjdfrac_ao = 57648.43692057008 - dmt_ao - mjd0 + bary_ao/(24*3600)
nint0 = 4096
dt_ao = tint0*nint0
"""
Explanation: Define python names for Arecibo header info
End of explanation
"""
tbin = 8
nint1 = nint0/tbin
tint0 = dt_ao/nint0
print('Orig time resolution {0}'.format(tint0))
tint1 = tint0*tbin
print('New time resolution {0}'.format(tint1))
data = fits.getdata(name, ext=4)
spec = data[0][9] # spectrum read as (npol, nchan, nint) array
specI = spec[:2].mean(axis=0) # average over first two to get stokes I **confirm**
specI = specI.reshape(nch0, nint1, tbin).mean(axis=2) # use array slicing to rebin
# flag bad channels
flagch = range(0, 16) + [99, 100, 101, 286, 287, 310, 311] + range(320, 356) + range(nch0-110, nch0)
specI[flagch] = 0
specI = np.ma.masked_equal(specI, 0)
fbin = 2
nch1 = nch0/fbin
print('Orig freq resolution {0}'.format(df))
print('New freq resolution {0}'.format(df*fbin))
specAO = specI.reshape(nch1, fbin, nint1).mean(axis=1) # use array slicing to rebin
meanspec = specAO.mean(axis=1)
specAO = specAO - meanspec[:,None]
specAO = (specAO-np.ma.mean(specAO))/np.ma.std(specAO) # convert to snr per pixel
#specAO = specAO/5 # scale by gain relative to VLA?
print(specAO.shape)
tbary_ao = np.linspace(mjdfrac_ao, mjdfrac_ao+(dt_ao/(24*3600)), nint1)
freqs_ao = np.linspace(obsfreq - fbin*df*nch1, obsfreq, nch1)
extent_ao = (tbary_ao[0], tbary_ao[-1], freqs_ao[0], freqs_ao[-1])
fig = pl.figure(figsize=(12,8))
pl.imshow(specAO, interpolation='nearest', aspect='auto', extent=extent_ao)
pl.colorbar()
pl.figure(figsize=(8,8))
pl.subplot(211)
pl.plot(tbary_ao, specAO.mean(axis=0))
pl.ylabel('Amp (arb)')
pl.xlabel('Time (s)')
pl.subplot(212)
pl.plot(freqs_ao, meanspec)
pl.xlabel('Freq (MHz)')
pl.ylabel('Amp (arb)')
"""
Explanation: Read dynamic spectrum and average down a bit
Define time/freq binning and barycentric time grid for Arecibo data
End of explanation
"""
sp = specAO.reshape(nch1, nint1/16, 16).mean(axis=2)
print(sp.mean(axis=0).max()/sp.mean(axis=0)[:20].std())
tpk_ao = tbary_ao[np.where(specAO.mean(axis=0) == specAO.mean(axis=0).max())][0]
print('%.10f' % tpk_ao)
"""
Explanation: Arecibo burst SNR
End of explanation
"""
sigma = 4
peakbins = np.where(specAO.mean(axis=0) >= sigma*specAO.mean(axis=0)[:256].std())
peaktimes = tbary_ao[peakbins]
window = (peaktimes.max()-peaktimes.min())*24*3600
print('{0} sigma limit selects {1} ms of pulse ({2} bins)'.format(sigma,
window*1e3,
len(peaktimes)))
print('Window width is a bit wider than properly-dedispersed pulse, due to extra sweep')
Sint_sys = specAO.mean(axis=0)[peakbins].mean()
noise = specAO.mean(axis=0)[:peakbins[0][0]].std()/np.sqrt(len(peaktimes))
Sint = (3./np.sqrt(600e6*window*2))*Sint_sys/noise
print('Integrated flux density over {0} ms pulse: {1} mJy'.format(window, Sint*1e3))
Sspec_sys = specAO[:, peakbins[0]].mean(axis=1)
noise_spec = specAO[:, :peakbins[0][0]].std()/np.sqrt(len(peaktimes))
Sspec = (3./np.sqrt(3.125e6*window*2))*Sspec_sys/noise_spec
print('A (less good) integrated flux density over 2 ms pulse: {0} mJy'.format(int(Sspec.mean()*1e3)))
"""
Explanation: Crude integrated flux measurement
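The scaling in the cell above is a form of the radiometer equation, $\Delta S = \mathrm{SEFD}/\sqrt{n_{pol}\,\Delta\nu\,\tau}$; the leading factor of 3 plays the role of an assumed ~3 Jy sensitivity (SEFD-like) term. A hedged sketch with illustrative numbers:

```python
import math

sefd = 3.0   # Jy; assumed sensitivity term, mirroring the factor of 3 used above
bw = 600e6   # Hz; bandwidth
npol = 2     # summed polarizations
tau = 2e-3   # s; pulse window

noise = sefd / math.sqrt(npol * bw * tau)  # rms flux density on that timescale
print('1-sigma noise: %.2f mJy' % (noise * 1e3))
# A pulse detected at S/N of 10 then corresponds to ~10x this flux density.
```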
End of explanation
"""
fig = pl.figure(figsize=(15,7))
ax = fig.add_subplot(111)
pl.plot(1e-3*freqs_ao[::-1], Sspec, 'k.')
pl.text(0.75, 0.88, '57648, Arecibo', horizontalalignment='left', fontsize=24,
verticalalignment='center', transform=ax.transAxes)
pl.errorbar(1.600, 1.2*Sspec.max(), yerr=(3./np.sqrt(3.125e6*window*2)), fmt='k.', ecolor='k')
pl.ylim(-0.03, Sspec.max()*1.4)
pl.xlabel('Frequency (GHz)', fontsize=18)
pl.ylabel('Flux density (Jy)', fontsize=18)
xt = pl.setp(ax.get_xticklabels(), fontsize=18)
yt = pl.setp(ax.get_yticklabels(), fontsize=18)
ax.xaxis.set_tick_params(width=4, color='k')
ax.yaxis.set_tick_params(width=4, color='k')
fig.savefig('specAO_57648.pdf', format='pdf')
"""
Explanation: Make nice Arecibo burst spectrum like the VLA ones
End of explanation
"""
calstring = """2.0520 2.89698 0.00279
2.1800 ******* *******
2.3080 ******* *******
2.4360 3.53585 0.00377
2.5640 3.69554 0.00376
2.6920 3.85507 0.00423
2.8200 4.00438 0.00486
2.9480 4.11069 0.00562
3.0520 4.20375 0.00631
3.1800 4.29385 0.00662
3.3080 4.36557 0.00715
3.4360 4.43684 0.00786
3.5640 4.46937 0.00850
3.6920 4.52488 0.00860
3.8200 4.53571 0.00969
3.9480 4.54625 0.00859"""
# parse flux scale
freq = []
flux = []
eflux = []
for line in calstring.split('\n'):
if '*' not in line:
result = line.split()
freq.append(float(result[0]))
flux.append(float(result[1]))
eflux.append(float(result[2]))
calfreq = np.array(freq)
calflux = np.array(flux)
print(calfreq, calflux)
def getscannum(sdmfile):
sdm = rtpipe.parsesdm.getsdm(sdmfile)
for scan in sdm.scans():
try:
print('Scan {0} binary data file: {1}'.format(scan.idx, scan.bdf.fname))
bdfscan = int(scan.idx)
except IOError:
pass
return bdfscan
def read_cut(sdmfile, scan, segment, dm=558., dt=1, gainfile=None, **kwargs):
if not gainfile:
gainfile = '.'.join(sdmfile.split('.')[:-1] + ['GN'])
st = rtpipe.RT.set_pipeline(sdmfile, scan, dmarr=[dm], dtarr=[dt], flaglist=[('badap', 3., 0.2)],
uvoversample=1.5, gainfile=gainfile, flagantsol=True,
timesub='mean', logfile=False, savecands=False,
savenoise=False, **kwargs)
data = rtpipe.RT.pipeline_reproduce(st, candloc=[segment,0,0,0,0], product='data')
u, v, w = rtpipe.parsesdm.get_uvw_segment(st, segment)
return st, data, u, v, w
def correctdata(st, data, u, v, w, corr='ph,dm', lm = (-3.835e-04,5.406e-04)):
""" lm gives (ra, dec) = (5 31 58.703708986 33 8 52.5067634154)
as quoted in Chatterjee et al (2017)
"""
data2 = data.copy()
if 'ph' in corr:
l1, m1 = lm
rtlib.phaseshift_threaded(data2, st, l1, m1, u, v)
if 'dm' in corr:
rtlib.dedisperse_par(data2, st['freq'], st['inttime'], st['dmarr'][0], [0, st['nbl']])
return data2
# get array2 for bin in array near value
def find_nearest(array, array2, value):
idx = (np.abs(array-value)).argmin()
return array2[idx]
def getscale(st):
# get flux scaling at nearest frequency
scale = []
for i in range(len(st['freq'])):
freq = st['freq'][i]
scale.append(find_nearest(calfreq, calflux, freq))
# print(i, st['freq'][i], scale)
scale = np.array(scale, dtype='complex64')[None,None,:,None]
return scale
def correct_all(st, data, u, v, w):
scale = getscale(st)
dataph = correctdata(st, data*scale, u, v, w, corr='ph')
dataphdm = correctdata(st, dataph, u, v, w, corr='dm')
return dataphdm
"""
Explanation: VLA
Set the calibration table and define useful helper functions for rtpipe
End of explanation
"""
key = '57648'
read = {}
sdmfile = '16A-496_sb32698778_1_02h00m_000.57648.37452900463.cut/'
scannum = getscannum(sdmfile)
read[key] = read_cut(sdmfile, scannum, 7, npix_max=7400, chans=range(2,256))
"""
Explanation: Read data with rtpipe and phase it to FRB 121102
End of explanation
"""
st, data, u, v, w = read[key]
st['dmarr'] = [dm0]
scale = getscale(st)
dataph = correctdata(st, data*scale, u, v, w, corr='ph,dm')
intrange = (30, 60)
dint = intrange[1] - intrange[0]
specVLA = dataph[intrange[0]:intrange[1]].mean(axis=3).mean(axis=1).real
specVLA = (specVLA-specVLA.mean())/specVLA.std() # in SNR units
print(specAO.shape, specVLA.shape)
"""
Explanation: Select VLA data for comparison to Arecibo
End of explanation
"""
dmt_vla = dmdelay(dm0, st['freq'][-1], 1e4)/(24*3600)
topo_vla = 75.7533100 # seconds, from presto
mjdfrac_vla = st['segmenttimes'][7][0] + intrange[0]*st['inttime']/(24*3600) - mjd0 - dmt_vla + topo_vla/(24*3600)
print('VLA dMJD: {0}'.format(mjdfrac_vla))
print('AO dMJD: {0}'.format(mjdfrac_ao))
print('Diff: {0} s'.format((mjdfrac_ao-mjdfrac_vla)*24*3600))
tbary_vla = np.linspace(mjdfrac_vla, mjdfrac_vla+(dint*st['inttime']/(24*3600)), dint)
freqs_vla = 1e3*st['freq']
extent_vla = (tbary_vla[0], tbary_vla[-1], freqs_vla[0], freqs_vla[-1])
# put them together
fig = pl.figure(figsize=(8,8))
fig.add_subplot(211)
pl.imshow(specVLA.transpose(), interpolation='nearest', origin='bottom', extent=extent_vla, aspect='auto')
fig.add_subplot(212)
pl.imshow(specAO, interpolation='nearest', aspect='auto', extent=extent_ao)
tpk_vla = tbary_vla[np.where(specVLA.mean(axis=1) == specVLA.mean(axis=1).max())][0]
print('%.10f' % tpk_vla)
"""
Explanation: Calculate dm delay and topocentric correction for VLA
End of explanation
"""
gap = 30
specregrid = np.zeros(shape=(nch1+st['nchan']+gap, len(tbary_ao)))
for idx in range(len(tbary_ao)):
specregrid[254+gap:, idx] = specAO[:, idx]
idxs_vla = [np.argmin(np.abs(tbary_vla-tbary_ao[i])) for i in range(len(tbary_ao))]
for idx_ao in range(len(idxs_vla)):
idx_vla = idxs_vla[idx_ao]
specregrid[:254, idx_ao] += specVLA[idx_vla, ::-1]
"""
Explanation: Regrid VLA and AO data to same fixed image grid
End of explanation
"""
# assumes fixed relative gain between VLA and AO == 3
fig = pl.figure(figsize=(12,12))
ax = fig.add_subplot(211)
pl.imshow(specregrid, interpolation='nearest', aspect='equal', vmax=0.8*specregrid.max(), cmap='Greys')#, vmin=-0.8)
ax.fill_between(np.arange(0, len(tbary_ao)), 254*np.ones(len(tbary_ao)),
(254+gap)*np.ones(len(tbary_ao)), facecolor='k')
pl.xlabel('Time (ms)', fontsize=14)
pl.ylabel('Frequency (GHz)', fontsize=14)
xticks = np.arange(0, 600, 100)
pl.xticks(xticks, np.array(tbin*2e-2*xticks, dtype='int'))
ntot = nch1+len(st['freq'])+gap
yticks = np.arange(0, ntot, 80)
print(yticks)
pl.yticks(yticks)
pl.yticks(yticks, [st['freq'][-1], st['freq'][-80], st['freq'][-160], st['freq'][-240],
np.round(1e-3*freqs_ao[ntot-320], 3),
np.round(1e-3*freqs_ao[ntot-400], 3),
np.round(1e-3*freqs_ao[ntot-480], 3)])
pl.xlim(150, len(tbary_ao))
pl.ylim(ntot-50, 0)
xt = pl.setp(ax.get_xticklabels(), fontsize=14)
yt = pl.setp(ax.get_yticklabels(), fontsize=14)
ax.xaxis.set_tick_params(width=3, color='k')
ax.yaxis.set_tick_params(width=3, color='k')
ax.text(170, 330, "Arecibo", rotation=90, fontsize=20)
ax.text(170, 130, "VLA", rotation=90, fontsize=20)
fig.savefig('aovla_spec.pdf', format='pdf')
"""
Explanation: Plot it
End of explanation
"""
# some residual dm sweep?
print(24*3600*(tbary_ao[np.where(specAO[128:].mean(axis=0) == specAO[128:].mean(axis=0).max())][0] -
tbary_ao[np.where(specAO[:128].mean(axis=0) == specAO[:128].mean(axis=0).max())][0]))
dmt_ao_delta = dmdelay(dm0, obsfreq*1e-3, st['freq'][-1]) - dmdelay(560, obsfreq*1e-3, st['freq'][-1])
dmt_ao_inter = dmdelay(dm0, obsfreq*1e-3, (obsfreq-500)*1e-3) - dmdelay(560, obsfreq*1e-3, (obsfreq-500)*1e-3)
print(dmt_ao_delta, dmt_ao_inter)
"""
Explanation: Calculate residual sweep and compare to DM model
End of explanation
"""
# 4 bursts with ao, vla, effelsberg coverage
# (ami-la covers all 4 vla bursts)
# 57643 (AO-C, *VLA-S)
# 57645 (AO-L, *VLA-S)
# 57648 (*AO-L, *VLA-S, Eff-C)
# 57649 (AO-L, *VLA-S, Eff-C)
# * shows detections
# limits assume 2 ms pulse width
#s43d = np.array([[3.0, 4.0], [0.508, 0.0036*5]])
#s43l = np.array([[4.0], [0.0036*5]]) # useless data!
s45d = np.array([[1.38, 3.0], [0.002*5, (5/2.)*0.064]])
s45l = np.array([[1.38], [0.002*5]]) # too high?
s48d = np.array([[1.4, 3.0, 4.85], [0.057, (5/2.)*0.111, 0.028*5]]) # fixed for 2 ms width
s48l = np.array([[4.85], [0.028*5]])
s49d = np.array([[1.42, 3.0, 4.9], [0.002*5, (5/2.)*0.167, 0.028*5]])
s49l = np.array([[1.42, 4.9], [0.002*5, 0.028*5]])
fig = pl.figure(figsize=(10,5))
ax = fig.add_subplot(111)
# overplot upper limit symbols
#ax.plot(s43l[0], s43l[1], 'cv', ms=10)
ax.plot(s45l[0], s45l[1], 'kv', ms=10)
ax.plot(s48l[0], s48l[1], 'kv', ms=10)
ax.plot(s49l[0], s49l[1], 'kv', ms=10)
# plot lines with points first
#ax.plot(s43d[0], s43d[1], 'c.-', ms=10, label='12 Sep 2016')
ax.plot(s45d[0], s45d[1], 'k.--', ms=10, label='57645')
ax.plot(s48d[0], s48d[1], 'k.-.', ms=10, label='57648')
ax.plot(s49d[0], s49d[1], 'k.-', ms=10, label='57649')
ax.set_xlabel('Frequency (GHz)', fontsize=14)
ax.set_ylabel('Integrated Flux density (Jy; 2 ms)', fontsize=14)
ax.legend(fontsize=14)
xt = pl.setp(ax.get_xticklabels(), fontsize=14)
yt = pl.setp(ax.get_yticklabels(), fontsize=14)
ax.xaxis.set_tick_params(width=3, color='k')
ax.yaxis.set_tick_params(width=3, color='k')
fig.savefig('multispec.pdf', format='pdf')
"""
Explanation: Burst spectra with and without detections
Compare Effelsberg, VLA, Arecibo measured burst fluxes and limits
End of explanation
"""
def speclim(s):
freqs = s[0]
fluxes = s[1]
for i in range(len(freqs)-1):
freqi = freqs[i]
freqi1 = freqs[i+1]
# correct for plotting offset
if freqi <= 1.5:
freqi = 1.4
        if freqi1 > 4.2:  # apply independently of the freqi correction above
freqi1 = 4.85
print(freqi, freqi1)
print(fluxes[i], fluxes[i+1])
print(np.log10(fluxes[i]/fluxes[i+1])/np.log10(freqi/freqi1))
speclim(s45d)
speclim(s48d)
speclim(s49d)
"""
Explanation: Calculate implied spectral index and limits
End of explanation
"""
|
tien-le/uranus | machine_learning_project.ipynb | mit | # To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
"""
Explanation: Chapter 2 – End-to-end Machine Learning project
Welcome to Machine Learning Housing Corp.! Your task is to predict median house values in Californian districts, given a number of features from these districts.
This notebook contains all the sample code and solutions to the exercises in chapter 2.
Setup
First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:
End of explanation
"""
DATASETS_URL = "https://github.com/ageron/handson-ml/raw/master/datasets"
import os
import tarfile
from six.moves import urllib
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DATASETS_URL + "/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.exists(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
print(housing.describe())
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(11,8))
save_fig("attribute_histogram_plots")
plt.show()
import numpy as np
import numpy.random as rnd
rnd.seed(42) # to make this notebook's output identical at every run
def split_train_test(data, test_ratio):
shuffled_indices = rnd.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), len(test_set))
import hashlib
def test_set_check(identifier, test_ratio, hash):
return bytearray(hash(np.int64(identifier)).digest())[-1] < 256 * test_ratio
def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
return data.loc[~in_test_set], data.loc[in_test_set]
housing_with_id = housing.reset_index() # adds an `index` column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
test_set.head()
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
test_set.head()
housing["median_income"].hist()
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)
housing["income_cat"].value_counts()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
def income_cat_proportions(data):
return data["income_cat"].value_counts() / len(data)
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
compare_props = pd.DataFrame({
"Overall": income_cat_proportions(housing),
"Stratified": income_cat_proportions(strat_test_set),
"Random": income_cat_proportions(test_set),
}).sort_index()
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
for set_ in (strat_train_set, strat_test_set):  # avoid shadowing the built-in set
    set_.drop("income_cat", axis=1, inplace=True)
"""
Explanation: Get the data
End of explanation
"""
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="longitude", y="latitude")
save_fig("bad_visualization_plot")
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
save_fig("better_visualization_plot")
housing.plot(kind="scatter", x="longitude", y="latitude",
s=housing['population']/100, label="population",
c="median_house_value", cmap=plt.get_cmap("jet"),
colorbar=True, alpha=0.4, figsize=(10,7),
)
plt.legend()
save_fig("housing_prices_scatterplot")
plt.show()
import matplotlib.image as mpimg
california_img=mpimg.imread(PROJECT_ROOT_DIR + '/images/end_to_end_project/california.png')
ax = housing.plot(kind="scatter", x="longitude", y="latitude", figsize=(10,7),
s=housing['population']/100, label="Population",
c="median_house_value", cmap=plt.get_cmap("jet"),
colorbar=False, alpha=0.4,
)
plt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5)
plt.ylabel("Latitude", fontsize=14)
plt.xlabel("Longitude", fontsize=14)
prices = housing["median_house_value"]
tick_values = np.linspace(prices.min(), prices.max(), 11)
cbar = plt.colorbar()
cbar.ax.set_yticklabels(["$%dk"%(round(v/1000)) for v in tick_values], fontsize=14)
cbar.set_label('Median House Value', fontsize=16)
plt.legend(fontsize=16)
save_fig("california_housing_prices_plot")
plt.show()
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing.plot(kind="scatter", x="median_income", y="median_house_value",
alpha=0.3)
plt.axis([0, 16, 0, 550000])
save_fig("income_vs_house_value_scatterplot")
plt.show()
from pandas.plotting import scatter_matrix  # pandas.tools.plotting before pandas 0.20
attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(11, 8))
save_fig("scatter_matrix_plot")
plt.show()
housing["rooms_per_household"] = housing["total_rooms"] / housing["population"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing.plot(kind="scatter", x="rooms_per_household", y="median_house_value",
alpha=0.2)
plt.axis([0, 5, 0, 520000])
plt.show()
housing.describe()
"""
Explanation: Discover and visualize the data to gain insights
End of explanation
"""
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
housing_copy = housing.copy().iloc[21:24]
housing_copy
housing_copy.dropna(subset=["total_bedrooms"]) # option 1
housing_copy = housing.copy().iloc[21:24]
housing_copy.drop("total_bedrooms", axis=1) # option 2
housing_copy = housing.copy().iloc[21:24]
median = housing_copy["total_bedrooms"].median()
housing_copy["total_bedrooms"].fillna(median, inplace=True) # option 3
housing_copy
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.iloc[21:24]
imputer.statistics_
housing_num.median().values
imputer.strategy
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
print(encoder.classes_)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))
housing_cat_1hot
housing_cat_1hot.toarray()
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
encoder.fit_transform(housing_cat)
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self # nothing else to do
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:, household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
housing_extra_attribs = pd.DataFrame(housing_extra_attribs, columns=list(housing.columns)+["rooms_per_household", "population_per_household"])
housing_extra_attribs.head()
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', Imputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
num_pipeline.fit_transform(housing_num)
from sklearn.pipeline import FeatureUnion
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', Imputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', LabelBinarizer()),
])
preparation_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
housing_prepared = preparation_pipeline.fit_transform(housing)
housing_prepared
housing_prepared.shape
"""
Explanation: Prepare the data for Machine Learning algorithms
End of explanation
"""
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# let's try the full pipeline on a few training instances
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = preparation_pipeline.transform(some_data)
print("Predictions:\t", lin_reg.predict(some_data_prepared))
print("Labels:\t\t", list(some_labels))
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
from sklearn.metrics import mean_absolute_error
lin_mae = mean_absolute_error(housing_labels, housing_predictions)
lin_mae
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
"""
Explanation: Select and train a model
End of explanation
"""
from sklearn.model_selection import cross_val_score
tree_scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-tree_scores)
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(tree_rmse_scores)
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
housing_predictions = forest_reg.predict(housing_prepared)
forest_mse = mean_squared_error(housing_labels, housing_predictions)
forest_rmse = np.sqrt(forest_mse)
forest_rmse
from sklearn.model_selection import cross_val_score
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
pd.Series(np.sqrt(-scores)).describe()
from sklearn.svm import SVR
svm_reg = SVR(kernel="linear")
svm_reg.fit(housing_prepared, housing_labels)
housing_predictions = svm_reg.predict(housing_prepared)
svm_mse = mean_squared_error(housing_labels, housing_predictions)
svm_rmse = np.sqrt(svm_mse)
svm_rmse
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring='neg_mean_squared_error')
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
grid_search.best_estimator_
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
pd.DataFrame(grid_search.cv_results_)
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distribs = {
'n_estimators': randint(low=1, high=200),
'max_features': randint(low=1, high=8),
}
forest_reg = RandomForestRegressor()
rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,
n_iter=10, cv=5, scoring='neg_mean_squared_error')
rnd_search.fit(housing_prepared, housing_labels)
cvres = rnd_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
extra_attribs = ["rooms_per_household", "population_per_household", "bedrooms_per_room"]
cat_one_hot_attribs = list(encoder.classes_)
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_transformed = preparation_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_transformed)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
"""
Explanation: Fine-tune your model
End of explanation
"""
class SupervisionFriendlyLabelBinarizer(LabelBinarizer):
def fit_transform(self, X, y=None):
return super(SupervisionFriendlyLabelBinarizer, self).fit_transform(X)
# Replace the Labelbinarizer with a SupervisionFriendlyLabelBinarizer
cat_pipeline.steps[1] = ("label_binarizer", SupervisionFriendlyLabelBinarizer())
# Now you can create a full pipeline with a supervised predictor at the end.
full_pipeline = Pipeline([
("preparation", preparation_pipeline),
("linear", LinearRegression())
])
full_pipeline.fit(housing, housing_labels)
full_pipeline.predict(some_data)
"""
Explanation: Extra material
Label Binarizer hack
LabelBinarizer's fit_transform() method accepts only one parameter, y, because it was meant for labels rather than predictors. It therefore does not work in a pipeline whose final estimator is supervised, since in that case the pipeline calls fit() and fit_transform() with two parameters, X and y.
This hack creates a supervision-friendly LabelBinarizer.
End of explanation
"""
from sklearn.externals import joblib
joblib.dump(final_model, "my_random_forest_regressor.pkl")
final_model_loaded = joblib.load("my_random_forest_regressor.pkl")
final_model_loaded
"""
Explanation: Model persistence using joblib
End of explanation
"""
from scipy.stats import geom, expon
geom_distrib=geom(0.5).rvs(10000)
expon_distrib=expon(scale=1).rvs(10000)
plt.hist(geom_distrib, bins=50)
plt.show()
plt.hist(expon_distrib, bins=50)
plt.show()
"""
Explanation: Example SciPy distributions for RandomizedSearchCV
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/1c191178a3423d922910711c4b574821/50_configure_mne.ipynb | bsd-3-clause | import os
import mne
"""
Explanation: Configuring MNE-Python
This tutorial covers how to configure MNE-Python to suit your local system and
your analysis preferences.
We begin by importing the necessary Python modules:
End of explanation
"""
print(mne.get_config('MNE_USE_CUDA'))
print(type(mne.get_config('MNE_USE_CUDA')))
"""
Explanation: Getting and setting configuration variables
Configuration variables are read and written using the functions
:func:mne.get_config and :func:mne.set_config. To read a specific
configuration variable, pass its name to :func:~mne.get_config as the
key parameter (key is the first parameter so you can pass it unnamed
if you want):
End of explanation
"""
try:
mne.set_config('MNE_USE_CUDA', True)
except TypeError as err:
print(err)
"""
Explanation: Note that the string values read from the JSON file are not parsed in any
way, so :func:~mne.get_config returns a string even for true/false config
values, rather than a Python boolean <bltin-boolean-values>.
Similarly, :func:~mne.set_config will only set string values (or None
values, to unset a variable):
End of explanation
"""
print(mne.get_config('missing_config_key', default='fallback value'))
"""
Explanation: If you're unsure whether a config variable has been set, there is a
convenient way to check it and provide a fallback in case it doesn't exist:
:func:~mne.get_config has a default parameter.
End of explanation
"""
print(mne.get_config()) # same as mne.get_config(key=None)
"""
Explanation: There are also two convenience modes of :func:~mne.get_config. The first
will return a :class:dict containing all config variables (and their
values) that have been set on your system; this is done by passing
key=None (which is the default, so it can be omitted):
End of explanation
"""
print(mne.get_config(key=''))
"""
Explanation: The second convenience mode will return a :class:tuple of all the keys that
MNE-Python recognizes and uses, regardless of whether they've been set on
your system. This is done by passing an empty string '' as the key:
End of explanation
"""
mne.set_config('MNEE_USE_CUUDAA', 'false')
"""
Explanation: It is possible to add config variables that are not part of the recognized
list, by passing any arbitrary key to :func:~mne.set_config. This will
yield a warning, however, which is a nice check in cases where you meant to
set a valid key but simply misspelled it:
End of explanation
"""
mne.set_config('MNEE_USE_CUUDAA', None)
assert 'MNEE_USE_CUUDAA' not in mne.get_config('')
"""
Explanation: Let's delete that config variable we just created. To unset a config
variable, use :func:~mne.set_config with value=None. Since we're still
dealing with an unrecognized key (as far as MNE-Python is concerned) we'll
still get a warning, but the key will be unset:
End of explanation
"""
print(mne.get_config_path())
"""
Explanation: Where configurations are stored
MNE-Python stores configuration variables in a JSON_ file. By default, this
file is located in :file:{%USERPROFILE%}\\.mne\\mne-python.json on Windows
and :file:{$HOME}/.mne/mne-python.json on Linux or macOS. You can get the
full path to the config file with :func:mne.get_config_path.
End of explanation
"""
# make sure it's not in the JSON file (no error means our assertion held):
assert mne.get_config('PATH', use_env=False) is None
# but it *is* in the environment:
print(mne.get_config('PATH'))
"""
Explanation: However it is not a good idea to directly edit files in the :file:.mne
directory; use the getting and setting functions described in the
previous section <config-get-set>.
If for some reason you want to load the configuration from a different
location, you can pass the home_dir parameter to
:func:~mne.get_config_path, specifying the parent directory of the
:file:.mne directory where the configuration file you wish to load is
stored.
Using environment variables
For compatibility with :doc:MNE-C <../../install/mne_c>, MNE-Python
also reads and writes environment variables_ to specify configuration. This
is done with the same functions that read and write the JSON configuration,
and is controlled with the parameters use_env and set_env. By
default, :func:~mne.get_config will check :data:os.environ before
checking the MNE-Python JSON file; to check only the JSON file use
use_env=False. To demonstrate, here's an environment variable that is not
specific to MNE-Python (and thus is not in the JSON config file):
End of explanation
"""
mne.set_config('foo', 'bar', set_env=False)
print('foo' in os.environ.keys())
mne.set_config('foo', 'bar')
print('foo' in os.environ.keys())
mne.set_config('foo', None) # unsetting a key deletes var from environment
print('foo' in os.environ.keys())
"""
Explanation: Also by default, :func:~mne.set_config will set values in both the JSON
file and in :data:os.environ; to set a config variable only in the JSON
file use set_env=False. Here we'll use :func:print statements to confirm
that an environment variable is being created and deleted (we could have used
the Python assert statement <assert> instead, but it doesn't print any
output when it succeeds so it's a little less obvious):
End of explanation
"""
print(mne.get_config('MNE_LOGGING_LEVEL'))
"""
Explanation: Logging
One important configuration variable is MNE_LOGGING_LEVEL. Throughout the
module, messages are generated describing the actions MNE-Python is taking
behind-the-scenes. How you set MNE_LOGGING_LEVEL determines how many of
those messages you see. The default logging level on a fresh install of
MNE-Python is info:
End of explanation
"""
kit_data_path = os.path.join(os.path.abspath(os.path.dirname(mne.__file__)),
'io', 'kit', 'tests', 'data', 'test.sqd')
raw = mne.io.read_raw_kit(kit_data_path, verbose='warning')
"""
Explanation: The logging levels that can be set as config variables are debug,
info, warning, error, and critical. Around 90% of the log
messages in MNE-Python are info messages, so for most users the choice is
between info (tell me what is happening) and warning (tell me only if
something worrisome happens). The debug logging level is intended for
MNE-Python developers.
In an earlier section <config-get-set> we saw how
:func:mne.set_config is used to change the logging level for the current
Python session and all future sessions. To change the logging level only for
the current Python session, you can use :func:mne.set_log_level instead.
The :func:~mne.set_log_level function takes the same five string options
that are used for the MNE_LOGGING_LEVEL config variable; additionally, it
can accept :class:int or :class:bool values that are equivalent to those
strings. The equivalencies are given in this table:
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+
With many MNE-Python functions it is possible to change the logging level
temporarily for just that function call, by using the verbose parameter.
To illustrate this, we'll load some sample data with different logging levels
set. First, with log level warning:
End of explanation
"""
raw = mne.io.read_raw_kit(kit_data_path, verbose='info')
"""
Explanation: No messages were generated, because none of the messages were of severity
"warning" or worse. Next, we'll load the same file with log level info
(the default level):
End of explanation
"""
raw = mne.io.read_raw_kit(kit_data_path, verbose='debug')
"""
Explanation: This time, we got a few messages about extracting information from the file,
converting that information into the MNE-Python :class:~mne.Info format,
etc. Finally, if we request debug-level information, we get even more
detail:
End of explanation
"""
|
jacobdein/alpine-soundscapes | source detection/Region of interest.ipynb | mit | import numpy as np
from scipy.ndimage import label, find_objects
from scipy.ndimage.morphology import generate_binary_structure
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from nacoustik import Wave
from nacoustik.spectrum import psd
from nacoustik.noise import remove_background_noise
%matplotlib inline
"""
Explanation: Region of interest
This notebook detects signals, or regions of interest, in a spectrogram generated from a recording of the natural acoustic environment.
Required packages
scipy <br />
numpy <br />
nacoustik <br />
matplotlib
Import statements
End of explanation
"""
filepath = ""
"""
Explanation: Variable definitions
filepath — path to a WAV file
End of explanation
"""
sound = Wave(filepath)
sound.read()
sound.normalize()
f, t, a = psd(sound)
"""
Explanation: Compute spectrogram
End of explanation
"""
ale = remove_background_noise(a, iterations=10)
"""
Explanation: Remove background noise
refer to the Adaptive level equalization notebook for details
End of explanation
"""
s = generate_binary_structure(2, 2)
s
labels = np.empty_like(ale, dtype=np.int32)
n_features = np.empty(shape=(2), dtype=np.int32)
for channel in range(sound.n_channels):
labels[channel], n_features[channel] = label(ale[channel], structure=s)
"""
Explanation: Label regions of interest
define structure connectivity
End of explanation
"""
# figure configuration
dpi = 192
channels = sound.n_channels
fig, ax = plt.subplots(channels, 1)
fig.set_dpi(dpi)
fig.set_figwidth((920 / dpi) * 3)
fig.set_figheight((460 / dpi) * 3)
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0)
fig.set_frameon(False)
# specify frequency bins (width of 1 kiloherz)
bins = np.arange(0, (sound.rate / 2), 1000)
# calculate the t_step and f_step
t_step = t[1] - t[0]
f_step = f[1] - f[0]
# psd spectrogram ale
for channel in range(channels):
spec = ax[channel].pcolormesh(t, f, ale[channel], cmap='viridis')
ax[channel].set(ylim=([0, sound.rate / 2]),
#xticks = np.arange(30, sound.duration, 30).astype(np.int),
yticks = bins.astype(np.int) + 1000)
ax[channel].tick_params(length=12, color='white',
bottom=False, labelbottom=False,
top=False, labeltop=False,
labelleft=False,
labelright=False)
ax[channel].set_frame_on(False)
# draw bounding boxes
    for i in range(1, labels[channel].max() + 1):  # label 0 is the background
loc = find_objects(labels[channel] == i)[0]
x = loc[1].start * t_step
y = loc[0].start * f_step
width = (loc[1].stop - loc[1].start) * t_step
height = (loc[0].stop - loc[0].start) * f_step
rec = Rectangle((x, y), width = width, height = height, color='#00FF80', fill=False)
p = ax[channel].add_patch(rec)
"""
Explanation: Plot regions of interest
End of explanation
"""
|
jorisvandenbossche/2015-EuroScipy-pandas-tutorial | solved - 06 - Reshaping data.ipynb | bsd-2-clause | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: <p><font size="6"><b>Reshaping data</b></font></p>
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
excelample = pd.DataFrame({'Month': ["January", "January", "January", "January",
"February", "February", "February", "February",
"March", "March", "March", "March"],
'Category': ["Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment"],
'Amount': [74., 235., 175., 100., 115., 240., 225., 125., 90., 260., 200., 120.]})
excelample
excelample_pivot = excelample.pivot(index="Category", columns="Month", values="Amount")
excelample_pivot
"""
Explanation: Pivoting data
Cf. Excel
People who know Excel probably know its pivot functionality:
The data of the table:
End of explanation
"""
# sum columns
excelample_pivot.sum(axis=1)
# sum rows
excelample_pivot.sum(axis=0)
"""
Explanation: Interested in Grand totals?
End of explanation
"""
df = pd.DataFrame({'Fare': [7.25, 71.2833, 51.8625, 30.0708, 7.8542, 13.0],
'Pclass': [3, 1, 1, 2, 3, 2],
'Sex': ['male', 'female', 'male', 'female', 'female', 'male'],
'Survived': [0, 1, 0, 1, 0, 1]})
df
df.pivot(index='Pclass', columns='Sex', values='Fare')
df.pivot(index='Pclass', columns='Sex', values='Survived')
"""
Explanation: Pivot is just reordering your data
Small subsample of the titanic dataset:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: So far, so good...
Let's now use the full titanic dataset:
End of explanation
"""
try:
df.pivot(index='Sex', columns='Pclass', values='Fare')
except Exception as e:
print("Exception!", e)
"""
Explanation: And try the same pivot (no worries about the try-except; it is just here to catch a loooong error):
End of explanation
"""
df.loc[[1, 3], ["Sex", 'Pclass', 'Fare']]
"""
Explanation: This does not work, because we would end up with multiple values for one cell of the resulting frame, as the error says: duplicated values for the columns in the selection. As an example, consider the following rows of our three columns of interest:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.pivot_table(index='Sex', columns='Pclass', values='Fare')
"""
Explanation: Since pivot is just restructuring data, where would both values of Fare for the same combination of Sex and Pclass need to go?
Well, they need to be combined according to an aggregation function, and that is exactly what the function pivot_table supports.
<div class="alert alert-danger">
<b>NOTE</b>:
<ul>
<li>**Pivot** is purely restructuring: a single value for each index/column combination is required.</li>
</ul>
</div>
Pivot tables - aggregating while pivoting
End of explanation
"""
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='max')
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='count')
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>By default, `pivot_table` takes the **mean** of all values that would end up in one cell. However, you can also specify other aggregation functions using the `aggfunc` keyword.</li>
</ul>
</div>
End of explanation
"""
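aggfunc also accepts a list of functions, producing one column level per aggregation. A quick sketch on a toy frame (these fares are made up):

```python
import pandas as pd

fares = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female'],
                      'Pclass': [1, 1, 3, 3],
                      'Fare': [50.0, 70.0, 8.0, 10.0]})

# One column level per aggregation function: ('mean', 1), ('count', 3), ...
summary = fares.pivot_table(index='Sex', columns='Pclass',
                            values='Fare', aggfunc=['mean', 'count'])
summary
```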
pd.crosstab(index=df['Sex'], columns=df['Pclass'])
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>There is a shortcut function for a `pivot_table` with `aggfunc='count'` as aggregation: `crosstab`</li>
</ul>
</div>
End of explanation
"""
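crosstab can go one step further than raw counts: in recent pandas versions it accepts a normalize keyword to turn the counts into fractions. A sketch on made-up data:

```python
import pandas as pd

sex = pd.Series(['male', 'male', 'female', 'female', 'female'], name='Sex')
pclass = pd.Series([1, 3, 1, 3, 3], name='Pclass')

# normalize='index' divides each row by its row total, so rows sum to 1.
fractions = pd.crosstab(index=sex, columns=pclass, normalize='index')
fractions
```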
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
fig, ax1 = plt.subplots()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean').plot(kind='bar',
rot=0,
ax=ax1)
ax1.set_ylabel('Survival ratio')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a pivot table with the survival rates (= number of persons survived / total number of persons) for Pclass vs Sex.</li>
<li>Plot the result as a bar plot.</li>
</ul>
</div>
End of explanation
"""
df['Underaged'] = df['Age'] <= 18
df.pivot_table(index='Underaged', columns='Sex',
               values='Fare', aggfunc='median')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a table of the median Fare paid by aged/underaged vs Sex.</li>
</ul>
</div>
End of explanation
"""
pivoted = df.pivot_table(index='Sex', columns='Pclass', values='Fare').reset_index()
pivoted.columns.name = None
pivoted
"""
Explanation: Melt
The melt function performs the inverse operation of a pivot. This can be used to make your frame longer, i.e. to make a tidy version of your data.
End of explanation
"""
pd.melt(pivoted)
"""
Explanation: Assume we have a DataFrame like the above. The observations (the average Fare people paid) are spread over different columns. In a tidy dataset, each observation is stored in one row. To obtain this, we can use the melt function:
End of explanation
"""
pd.melt(pivoted, id_vars=['Sex']) #, var_name='Pclass', value_name='Fare')
"""
Explanation: As you can see above, the melt function puts all column labels in one column, and all values in a second column.
In this case, this is not fully what we want. We would like to keep the 'Sex' column separately:
End of explanation
"""
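melt and pivot really are inverses; a round trip on a small made-up frame recovers the original values:

```python
import pandas as pd

wide = pd.DataFrame({'Sex': ['female', 'male'],
                     'first': [80.0, 45.0],
                     'second': [20.0, 12.0]})

# Wide -> long with melt, keeping 'Sex' as an identifier column ...
long = pd.melt(wide, id_vars=['Sex'], var_name='Pclass', value_name='Fare')

# ... and back to wide with pivot.
back = long.pivot(index='Sex', columns='Pclass', values='Fare')
back
```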
df = pd.DataFrame({'A':['one', 'one', 'two', 'two'],
'B':['a', 'b', 'a', 'b'],
'C':range(4)})
df
"""
Explanation: Reshaping with stack and unstack
The docs say:
Pivot a level of the (possibly hierarchical) column labels, returning a
DataFrame (or Series in the case of an object with a single level of
column labels) having a hierarchical index with a new inner-most level
of row labels.
Indeed...
<img src="img/schema-stack.svg" width=50%>
Before we speak about hierarchical index, first check it in practice on the following dummy example:
End of explanation
"""
df = df.set_index(['A', 'B']) # Indeed, you can combine two indices
df
result = df['C'].unstack()
result
df = result.stack().reset_index(name='C')
df
"""
Explanation: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.head()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>**stack**: make your data *longer* and *smaller* </li>
<li>**unstack**: make your data *shorter* and *wider* </li>
</ul>
</div>
Mimick pivot table
To better understand and reason about pivot tables, we can express this method as a combination of more basic steps. In short, the pivot is a convenient way of expressing the combination of a groupby and stack/unstack.
End of explanation
"""
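Before mimicking the pivot table, the longer/wider rule of thumb from the box above can be checked on the dummy frame from earlier (re-created here so the sketch is self-contained):

```python
import pandas as pd

df_small = pd.DataFrame({'A': ['one', 'one', 'two', 'two'],
                         'B': ['a', 'b', 'a', 'b'],
                         'C': range(4)}).set_index(['A', 'B'])

wider = df_small['C'].unstack()   # shorter and wider: a 2x2 frame
longer = wider.stack()            # longer and smaller: back to 4 values
wider.shape, longer.shape
```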
df.groupby(['Pclass', 'Sex'])['Survived'].mean().unstack()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above based on a combination of `groupby` and `unstack`</li>
<li>First use `groupby` to calculate the survival ratio for all groups</li>
<li>Then, use `unstack` to reshape the output of the groupby operation</li>
</ul>
</div>
End of explanation
"""
pivoted = df.pivot_table(index='Sex', columns='Pclass', values='Fare').reset_index()
pivoted.columns.name = None
pivoted
pd.melt(pivoted, id_vars=['Sex'], var_name='Pclass', value_name='Fare')
"""
Explanation: Mimick melt
Like the pivot table above, we can now also obtain the result of melt with stack/unstack.
Let's use the same pivoted frame as above, and look at the final melt result:
End of explanation
"""
temp = pivoted.set_index('Sex')
temp
temp.columns.name = 'Pclass'
temp = temp.stack()
temp
temp.reset_index(name='Fare')
# alternative: rename columns at the end
temp = pivoted.set_index('Sex').stack().reset_index()
temp.rename(columns={'level_1': 'Pclass', 0: 'Fare'})
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above using `stack`/`unstack` (combined with `set_index` / `reset_index`)</li>
<li>Tip: set those columns as the index that you do not want to stack</li>
</ul>
</div>
End of explanation
"""
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
"""
Explanation: Exercises: use the reshaping methods with the movie data
These exercises are based on the PyCon tutorial of Brandon Rhodes (so credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
"""
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
table.plot()
cast.pivot_table(index='year', columns='type', values="character", aggfunc='count').plot()
# for `values`, take a column with no NaN values in order to count all rows effectively -> at this stage: aha-erlebnis about the crosstab function(!)
pd.crosstab(index=cast['year'], columns=cast['type']).plot()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
"""
pd.crosstab(index=cast['year'], columns=cast['type']).plot(kind='area')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year. Use kind='area' as plot type</li>
</ul>
</div>
End of explanation
"""
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
(table['actor'] / (table['actor'] + table['actress'])).plot(ylim=[0,1])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the fraction of roles that have been 'actor' roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
"""
sel = cast[(cast.character == 'Superman') | (cast.character == 'Batman')]
sel = sel.groupby(['year', 'character']).size()
sel = sel.unstack()
sel = sel.fillna(0)
sel.head()
d = sel['Superman'] - sel['Batman']
print('Superman years:')
print(len(d[d > 0.0]))
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Define a year as a "Superman year" when films of that year feature more Superman characters than Batman characters. How many years in film history have been Superman years?</li>
</ul>
</div>
End of explanation
"""
|
chengsoonong/crowdastro | notebooks/11_classification.ipynb | mit | import collections
import itertools
import logging
import pprint
import sys
import warnings
import matplotlib.pyplot
import numpy
import scipy.linalg
import skimage.feature
import sklearn.cross_validation
import sklearn.decomposition
import sklearn.ensemble
import sklearn.linear_model
import sklearn.metrics
import sklearn.preprocessing
import sklearn.svm
sys.path.insert(1, '..')
import crowdastro.data
import crowdastro.rgz_analysis.consensus
import crowdastro.show
%matplotlib inline
warnings.simplefilter('ignore', UserWarning) # astropy always raises warnings on Windows.
def make_annotation_signature(annotation):
"""Generates a unique signature from a single radio annotation."""
if 'radio' not in annotation:
# Metadata (ignore this).
return None
    # No contours is a special case; return an empty tuple rather than None so it still counts as a valid signature.
if annotation['radio'] == 'No Contours':
return ()
# Round to fix floating point precision loss.
# Sort to make sure this is deterministic.
xmaxs = tuple(sorted([round(float(r['xmax']), 14) for r in annotation['radio'].values()]))
ymaxs = tuple(sorted([round(float(r['ymax']), 14) for r in annotation['radio'].values()]))
signature = (xmaxs, ymaxs)
return signature
def make_classification_signature(classification):
"""Generates a unique signature from a complete radio classification."""
classification_signature = []
for annotation in classification['annotations']:
annotation_signature = make_annotation_signature(annotation)
if annotation_signature is not None:
classification_signature.append(annotation_signature)
classification_signature = tuple(sorted(classification_signature))
return classification_signature
def plurality_classification_signature(subject):
"""Finds the most-chosen radio classification signature for a subject."""
n_signatures = collections.Counter()
for classification in crowdastro.data.db.radio_classifications.find({'subject_ids': subject['_id']}):
classification_signature = make_classification_signature(classification)
n_signatures[classification_signature] += 1
return max(n_signatures, key=n_signatures.get)
def number_of_radio_sources(subject):
"""Finds the number of radio sources in a subject, according to volunteers."""
return len(plurality_classification_signature(subject))
def filter_nice(subjects):
"""Yields nice subjects."""
for subject in subjects:
if number_of_radio_sources(subject) == 1:
yield subject
def atlas_subjects():
"""Yields complete ATLAS subjects."""
return crowdastro.data.db.radio_subjects.find({'metadata.survey': 'atlas', 'state': 'complete'})
def raw_training_data(subjects, radius=20):
"""Returns examples matrix X and targets matrix T.
Each row of X is one example.
Each row of T is one target.
"""
xs = []
ts = []
subjects = list(subjects) # In case this is a generator.
for subject in subjects:
# Find potential hosts.
ir = crowdastro.data.get_ir(subject)
blobs = skimage.feature.blob_log(ir/255, min_sigma=5, max_sigma=15, num_sigma=10, threshold=0.00002,
overlap=0.75, log_scale=False)
potential_hosts = numpy.fliplr(blobs[:, :2])
# Find the label of each host.
consensus = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])
answers = list(consensus['answer'].values())
classifications = numpy.zeros(potential_hosts.shape[0])
if len(answers) != 1:
# My plurality radio combination differs from Kyle Willett's - odd, but does happen.
# Haven't solved this yet, so we'll take the noise hit for now and ignore the problem.
logging.warning('Ignoring a subject due to unexpected number of answers ({}).'.format(len(answers)))
continue
if 'ir_peak' in answers[0]:
true_host = numpy.array(answers[0]['ir_peak']) * 200 / 500 # Magic constant from web -> fits.
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
# Find the potential host closest to the true host. This is labelled 1; all else labelled 0.
classifications[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()] = 1
elif 'ir' in answers[0]:
true_host = numpy.array(answers[0]['ir']) * 200 / 500
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
classifications[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()] = 1
else:
logging.warning('No click peaks found.')
# Fetch the large image - that way, we don't need to impose artificial restrictions
# on the edges of the image.
radio = crowdastro.data.get_radio(subject, size='5x5')
# Distance from edge of large image to edge of small image.
border_radius = (crowdastro.config.get('fits_image_width') * 5 // 2 -
crowdastro.config.get('fits_image_width')) // 2
# Get neighbourhoods around each host.
for (host_x, host_y), label in zip(potential_hosts, classifications):
host_x, host_y = int(host_x), int(host_y)
neighbourhood = radio[border_radius + host_x - radius : border_radius + host_x + radius,
border_radius + host_y - radius : border_radius + host_y + radius]
xs.append(neighbourhood.flatten())
ts.append(label)
return numpy.array(xs), numpy.array(ts)
N = 100 # This uses about 2.2 GB of memory. I should try and compress on-the-fly when I have some features.
holdout = 100 # Skip + store the first holdout so we can test later on the full subjects.
subjects = filter_nice(atlas_subjects())
heldout = [s for _, s in zip(range(holdout), subjects)]
X, T = raw_training_data(itertools.islice(subjects, N), radius=20)
X_train, X_test, T_train, T_test = sklearn.cross_validation.train_test_split(X, T, test_size=0.2, random_state=0)
"""
Explanation: Classification Experiments
In this notebook, I will collate the code from the pipeline notebook in a more efficient way, and then use it to try some different classification methods and features.
I'm taking a very naïve approach to trying different classifiers — I'm just going to use whatever the scikit-learn default is.
End of explanation
"""
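One detail worth isolating from raw_training_data above is the labelling rule: a candidate host gets label 1 only if it is the candidate closest to the consensus click. In isolation, with hypothetical coordinates (a sketch, assuming NumPy; the axis keyword replaces the list comprehension used above but computes the same norms):

```python
import numpy as np

# Hypothetical candidate host positions (x, y) and a hypothetical consensus click.
potential_hosts = np.array([[10.0, 12.0], [55.0, 40.0], [90.0, 88.0]])
true_host = np.array([52.0, 43.0])

# Label 1 for the nearest candidate, 0 for all others.
labels = np.zeros(len(potential_hosts))
labels[np.linalg.norm(potential_hosts - true_host, axis=1).argmin()] = 1
labels
```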
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(X_train, T_train)
# Raw confusion matrix.
pprint.pprint(sklearn.metrics.confusion_matrix(T_test, lr.predict(X_test)))
# Perform the actual task, i.e., clicking the image in the right spot.
radius = 20
def predict(subject, lr, evaluate=False, preprocessor=None):
# Get input data.
xs = []
ir = crowdastro.data.get_ir(subject)
blobs = skimage.feature.blob_log(ir/255, min_sigma=5, max_sigma=15, num_sigma=10, threshold=0.00002,
overlap=0.75, log_scale=False)
potential_hosts = numpy.fliplr(blobs[:, :2])
radio = crowdastro.data.get_radio(subject, size='5x5')
border_radius = (crowdastro.config.get('fits_image_width') * 5 // 2 -
crowdastro.config.get('fits_image_width')) // 2
for host_x, host_y in potential_hosts:
host_x, host_y = int(host_x), int(host_y)
neighbourhood = radio[border_radius + host_x - radius : border_radius + host_x + radius,
border_radius + host_y - radius : border_radius + host_y + radius]
xs.append(neighbourhood.flatten())
xs = numpy.array(xs)
if preprocessor is None:
ts = lr.predict_proba(xs)
else:
ts = lr.predict_proba(preprocessor.transform(xs))
predicted_host = potential_hosts[ts[:, 1].argmax()]
if not evaluate:
return predicted_host
# Get actual label for evaluation.
consensus = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])
answers = list(consensus['answer'].values())
if len(answers) != 1:
raise ValueError('Unexpected number of answers ({}).'.format(len(answers)))
if 'ir_peak' in answers[0]:
true_host = numpy.array(answers[0]['ir_peak']) * 200 / 500 # Magic constant from web -> fits.
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
# Find the potential host closest to the true host. This is labelled 1; all else labelled 0.
closest = potential_hosts[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()]
elif 'ir' in answers[0]:
true_host = numpy.array(answers[0]['ir']) * 200 / 500
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
closest = potential_hosts[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()]
else:
closest = [-99, -99]
logging.warning('No click peaks found.')
return numpy.allclose(closest, predicted_host)
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, lr, evaluate=True)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: Logistic regression
I've already done logistic regression in the pipeline notebook, but I would like to use it as a benchmark, so I reproduce it here. I also want to see the effect of preprocessing the data with sklearn (e.g. to scale it).
End of explanation
"""
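A note on reading the confusion matrices printed in this section: with rows as true classes and columns as predictions (sklearn's convention), precision, recall, and accuracy fall straight out. A pure-Python sketch with made-up counts:

```python
# Made-up counts from a hypothetical 2x2 confusion matrix [[tn, fp], [fn, tp]].
tn, fp, fn, tp = 90, 10, 5, 20

precision = tp / (tp + fp)                 # of predicted hosts, fraction that were real
recall = tp / (tp + fn)                    # of real hosts, fraction that were found
accuracy = (tp + tn) / (tn + fp + fn + tp)
print(precision, recall, accuracy)
```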
scaler = sklearn.preprocessing.StandardScaler().fit(X_train)
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(scaler.transform(X_train), T_train)
pprint.pprint(sklearn.metrics.confusion_matrix(T_test, lr.predict(X_test)))
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, lr, evaluate=True, preprocessor=scaler)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: Scaling
End of explanation
"""
svc = sklearn.svm.SVC(class_weight='balanced', probability=True)
svc.fit(X_train, T_train)
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, svc, evaluate=True)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: Interestingly, scaling seems to reduce performance. This may be due to the physical interpretation of each pixel (i.e., each pixel represents light).
SVM
End of explanation
"""
svd = sklearn.decomposition.TruncatedSVD(n_components=200)
svd.fit(X_train)
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(svd.transform(X_train), T_train)
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, lr, evaluate=True, preprocessor=svd)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: This is considerably worse than logistic regression. It's possible that this is due to the low number of data points, but I can't increase that at the moment — I need to reconsider how I handle the data so I can avoid MemoryErrors first.
Logistic regression with dimensionality reduction
End of explanation
"""
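The idea behind truncating the SVD, keeping only the directions with the largest singular values, can be sanity-checked on random data: the reconstruction error can only shrink as the retained rank grows (a sketch, assuming NumPy):

```python
import numpy

rng = numpy.random.RandomState(0)
A = rng.randn(50, 30)
U, S, Vt = numpy.linalg.svd(A, full_matrices=False)

def reconstruct(k):
    # Best rank-k approximation of A (Eckart-Young).
    return (U[:, :k] * S[:k]) @ Vt[:k]

err_5 = numpy.linalg.norm(A - reconstruct(5))
err_20 = numpy.linalg.norm(A - reconstruct(20))
err_5 > err_20
```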
svd = scipy.linalg.svd(X_train)
U, S, V = svd
S.sort()
S = S[::-1]
matplotlib.pyplot.plot(numpy.log(S))
matplotlib.pyplot.xlabel('Singular value index')
matplotlib.pyplot.ylabel('log(singular value)')
matplotlib.pyplot.show()
"""
Explanation: That's not too bad, and the data are many, many times smaller. We should look at the singular value spectrum to find a good place to cut it off (I chose 200 really arbitrarily).
End of explanation
"""
svd = sklearn.decomposition.TruncatedSVD(n_components=200)
svd.fit(X_train)
poly = sklearn.preprocessing.PolynomialFeatures(2)
poly.fit(svd.transform(X_train))
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(poly.transform(svd.transform(X_train)), T_train)
n_correct = 0
n_total = 0
class PolySVD(object):
def transform(self, xs):
return poly.transform(svd.transform(xs))
for subject in heldout:
try:
c = predict(subject, lr, evaluate=True, preprocessor=PolySVD())
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: Interestingly, it looks like 200 might be a good choice after all. Now I can use this information to go back and preprocess all my data — it'll be 8 times smaller, so I can fit 8 times as many data points in. But first, I'll use it to preprocess data for the polynomial kernel — I couldn't do this earlier as I would get a MemoryError.
Logistic regression with dimensionality reduction and polynomial kernel
End of explanation
"""
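The MemoryError mentioned above is unsurprising once you count the expanded features: a degree-2 polynomial expansion of d inputs (including the bias term, as PolynomialFeatures produces by default) has (d+2 choose 2) columns. A quick sketch (math.comb needs Python 3.8+):

```python
from math import comb

def n_poly_features(d, degree=2):
    # Monomials of total degree <= `degree` in d variables, bias term included.
    return comb(d + degree, degree)

n_poly_features(200)  # columns produced from the 200 SVD components
```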
rfc = sklearn.ensemble.RandomForestClassifier()
rfc.fit(X_train, T_train)
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, rfc, evaluate=True)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: This was pretty slow, and not better than linear.
Random Forest
End of explanation
"""
def raw_training_data_svd(subjects, radius=20):
"""Returns examples matrix X and targets matrix T.
Each row of X is one example.
Each row of T is one target.
"""
xs = []
ts = []
subjects = list(subjects) # In case this is a generator.
for subject in subjects:
# Find potential hosts.
ir = crowdastro.data.get_ir(subject)
blobs = skimage.feature.blob_log(ir/255, min_sigma=5, max_sigma=15, num_sigma=10, threshold=0.00002,
overlap=0.75, log_scale=False)
potential_hosts = numpy.fliplr(blobs[:, :2])
# Find the label of each host.
consensus = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])
answers = list(consensus['answer'].values())
classifications = numpy.zeros(potential_hosts.shape[0])
if len(answers) != 1:
# My plurality radio combination differs from Kyle Willett's - odd, but does happen.
# Haven't solved this yet, so we'll take the noise hit for now and ignore the problem.
logging.warning('Ignoring a subject due to unexpected number of answers ({}).'.format(len(answers)))
continue
if 'ir_peak' in answers[0]:
true_host = numpy.array(answers[0]['ir_peak']) * 200 / 500 # Magic constant from web -> fits.
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
# Find the potential host closest to the true host. This is labelled 1; all else labelled 0.
classifications[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()] = 1
elif 'ir' in answers[0]:
true_host = numpy.array(answers[0]['ir']) * 200 / 500
true_host[1] = crowdastro.config.get('fits_image_height') - true_host[1] # Clicks are flipped vertically.
classifications[numpy.array([numpy.linalg.norm(diff) for diff in potential_hosts - true_host]).argmin()] = 1
else:
logging.warning('No click peaks found.')
# Fetch the large image - that way, we don't need to impose artificial restrictions
# on the edges of the image.
radio = crowdastro.data.get_radio(subject, size='5x5')
# Distance from edge of large image to edge of small image.
border_radius = (crowdastro.config.get('fits_image_width') * 5 // 2 -
crowdastro.config.get('fits_image_width')) // 2
# Get neighbourhoods around each host.
for (host_x, host_y), label in zip(potential_hosts, classifications):
host_x, host_y = int(host_x), int(host_y)
neighbourhood = radio[border_radius + host_x - radius : border_radius + host_x + radius,
border_radius + host_y - radius : border_radius + host_y + radius]
xs.append(svd.transform(numpy.array([neighbourhood.flatten()])))
ts.append(label)
return numpy.array(xs), numpy.array(ts)
N = 546 * 3 // 4
holdout = 546 // 4 # Skip + store the first holdout so we can test later on the full subjects.
subjects = filter_nice(atlas_subjects())
heldout = [s for _, s in zip(range(holdout), subjects)]
X, T = raw_training_data_svd(itertools.islice(subjects, N), radius=20)
X_train, X_test, T_train, T_test = sklearn.cross_validation.train_test_split(X, T, test_size=0.2, random_state=0)
X_train = X_train.reshape((-1, 200))
X_test = X_test.reshape((-1, 200))
T_train = T_train.reshape((-1, 1))
T_test = T_test.reshape((-1, 1))
svc = sklearn.svm.SVC(class_weight='balanced', probability=True)
svc.fit(X_train, T_train)
n_correct = 0
n_total = 0
for subject in heldout:
try:
c = predict(subject, svc, evaluate=True, preprocessor=svd)
except ValueError:
continue
n_correct += c
n_total += 1
print('{:.02%}'.format(n_correct / n_total))
"""
Explanation: SVM with dimensionality reduction
To put more data points into SVM, I'll need to do dimensionality reduction while processing the training data — I can only hold about 100 data points in memory otherwise.
End of explanation
"""
|
lyoung13/deep-learning-nanodegree | p2-image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
    normalized = np.array(x) / 255
return normalized
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
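A quick sanity check of the normalization above, on a fake batch of 8-bit pixel data (a sketch, assuming NumPy; dividing by 255 only maps into [0, 1] because the raw values are 0-255):

```python
import numpy as np

fake_batch = np.random.randint(0, 256, size=(4, 32, 32, 3))
scaled = fake_batch / 255

scaled.min() >= 0.0, scaled.max() <= 1.0, scaled.shape == fake_batch.shape
```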
from sklearn import preprocessing
encoding_map = None
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
global encoding_map
if encoding_map is None:
encoding_map = preprocessing.LabelBinarizer()
encoding_map.fit(x)
one_hot_encoding = encoding_map.transform(x)
return one_hot_encoding
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
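For intuition, here is a minimal NumPy one-hot encoder for integer labels 0-9. It matches the LabelBinarizer approach above whenever all ten classes appear in the fitted labels (a sketch, not the graded solution):

```python
import numpy as np

def one_hot(labels, n_classes=10):
    # Row i of the identity matrix is the one-hot vector for class i.
    return np.eye(n_classes)[labels]

one_hot([0, 9, 3])
```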
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
image_input = tf.placeholder(tf.float32, shape=[None, *image_shape], name="x")
return image_input
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
label_input = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return label_input
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
### Convolutional Layer
# Image height,image width, and colour channels
batch_size, image_height, image_width, colour_channels = x_tensor.get_shape().as_list()
# Filter height and width
filter_height, filter_width = conv_ksize
# Conv Strides Dimensions
stride_height, stride_width = conv_strides
# Weight input for convolutional layer
weight = tf.Variable(tf.truncated_normal(
[filter_height, filter_width, colour_channels, conv_num_outputs], stddev=0.05))
# Bias input for convolutional layer
bias = tf.Variable(tf.zeros(conv_num_outputs))
# Apply Convolution
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, stride_height, stride_width, 1], padding='SAME')
# Add bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
# Apply relu activation function
conv_layer = tf.nn.relu(conv_layer)
### Max Pooling Layer
# Pool Size Dimensions
pool_size_height, pool_size_width = pool_ksize
# Pool Strides Dimensions
pool_strides_height, pool_strides_width = pool_strides
# Max Pooling Function
conv_maxpool = tf.nn.max_pool(conv_layer,
ksize=[1, pool_size_height, pool_size_width, 1],
strides=[1, pool_strides_height, pool_strides_width, 1],
padding="SAME")
return conv_maxpool
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
unflattened_shape = x_tensor.get_shape().as_list()
flattened_dimensions = np.prod(unflattened_shape[1:])
flattened_shape = tf.reshape(x_tensor, [-1, flattened_dimensions])
return flattened_shape
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
# Batch_size, depth
batch_size, depth = x_tensor.get_shape().as_list()
# Weights and Biases
weight = tf.Variable(tf.truncated_normal([depth, num_outputs], stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
fully_connected_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fully_connected_layer = tf.nn.relu(fully_connected_layer)
return fully_connected_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
# Batch_size, depth
batch_size, depth = x_tensor.get_shape().as_list()
# Weights and Biases
weight = tf.Variable(tf.truncated_normal([depth, num_outputs], stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
output = tf.add(tf.matmul(x_tensor, weight), bias)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
x_tensor = x
conv_num_outputs = [32, 48, 64]
conv_ksize = (3,3)
conv_strides = (1,1)
pool_ksize = (2,2)
pool_strides = (2,2)
num_outputs = [256, 512, 1024, 10]
# Three convolutional layers
conv_max1 = conv2d_maxpool(x_tensor, conv_num_outputs[0], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_max2 = conv2d_maxpool(conv_max1, conv_num_outputs[1], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_max3 = conv2d_maxpool(conv_max2, conv_num_outputs[2], conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
flat = flatten(conv_max3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# First fully connected layer then dropout applied
fcl1 = fully_conn(flat, num_outputs[0])
fcl1 = tf.nn.dropout(fcl1, keep_prob)
# Second fully connected layer then dropout applied
fcl2 = fully_conn(fcl1, num_outputs[1])
fcl2 = tf.nn.dropout(fcl2, keep_prob)
# Third fully connected layer then dropout applied
fcl3 = fully_conn(fcl2, num_outputs[2])
fcl3 = tf.nn.dropout(fcl3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
out = output(fcl3, num_outputs[3])
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
pass
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 25
batch_size = 256
keep_probability = .5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
"""
Explanation: Working with pandas DataFrames
Pandas (http://pandas.pydata.org) is great for data analysis. We met it briefly in the Software Carpentry course, but it's worth revisiting.
Note the book on that website, 'Python for Data Analysis'; it is a useful text from which much of this section was drawn.
We're also going to look at how we might use pandas to work with data read in with CIS.
End of explanation
"""
from datetime import datetime
s = pd.Series([0.13, 0.21, 0.15, 'NaN', 0.29, 0.09, 0.24, -10], dtype='f',
              index=[datetime(2015, 11, 16, 15, 41, 23), datetime(2015, 11, 16, 15, 42, 22),
                     datetime(2015, 11, 16, 15, 43, 25), datetime(2015, 11, 16, 15, 44, 20),
                     datetime(2015, 11, 16, 15, 45, 22), datetime(2015, 11, 16, 15, 46, 23),
                     datetime(2015, 11, 16, 15, 47, 26), datetime(2015, 11, 16, 15, 48, 21)])
print(s)
"""
Explanation: Series
A Series is essentially a container for series data (think time-series, but more general).
Let's create a basic time-series:
End of explanation
"""
s = s[s>0]
print(s)
"""
Explanation: As you can see, it has dealt with our missing value nicely; this is one of the nice things about Pandas.
We can get rid of the negative value easily as well:
End of explanation
"""
s.resample('5min').max()
"""
Explanation: Note this also got rid of our NaN (since comparisons involving NaN always evaluate to False)
Now, as you probably noticed, I added a lot of datetimes to this data which represent the timings of the measurements. Pandas uses these times as an index on the data, and gives us access to some very powerful tools.
For example, resampling our data to a minutely average is easy:
End of explanation
"""
colours = pd.Series({'Blue': 42, 'Green': 12, 'Yellow': 37})
colours
"""
Explanation: Another way of creating a Series is from a dictionary:
End of explanation
"""
print(colours[1])
print(colours[:-1])
print(colours['Blue'])
"""
Explanation: We can index Series just like numpy arrays, or using the named index:
End of explanation
"""
print(colours[1:]['Green'])
"""
Explanation: Or both:
End of explanation
"""
more_colours = pd.Series({'Blue': 16, 'Red': 22, 'Purple': 34, 'Green': 25,})
more_colours + colours
"""
Explanation: Another nice benefit of the indices is in data alignment. For example, when performing operations on two Series, Pandas lines up the indices first:
End of explanation
"""
colours.mean(), colours.max()
"""
Explanation: As you can see, if an index label is not present in both Series then Pandas returns NaN for it.
Pandas uses numpy heavily underneath, so many of the numpy array operations work on Series as well:
End of explanation
"""
df = pd.DataFrame({'First': colours, 'Second': more_colours})
print(df)
"""
Explanation: DataFrames
Data frames are essentially collections of Series, with a shared index. Each column of data is labelled and the whole frame can be pictured as a table, or spreadsheet of data.
End of explanation
"""
# Column by index
print(df['First'])
# Column as attribute
print(df.First)
# Row via ix
print(df.ix['Blue'])
"""
Explanation: Columns can be selected by name or as attributes, and rows by label via the ix attribute (in modern pandas, prefer .loc):
End of explanation
"""
df.max()
df.sum()
"""
Explanation: We can then apply many of the same numpy functions on this data, on a per column basis:
End of explanation
"""
example_csv = pd.read_csv('../resources/B1_mosquito_data.csv',
parse_dates=True, index_col=0)
example_csv[0:10]
example_csv.corr()
"""
Explanation: Reading CSV files
End of explanation
"""
from cis import read_data_list
aerosol_cci_collocated = read_data_list('col_output.nc', '*')
cis_df = aerosol_cci_collocated.as_data_frame()
cis_df
# Now we can do cool Pandas stuff!
cis_df.ix[cis_df['NUMBER_CONCENTRATION'].argmin()]
cis_short = cis_df.dropna()
cis_short.ix[cis_short['NUMBER_CONCENTRATION'].argmin()]
"""
Explanation: Using Pandas with CIS data
We can easily convert CIS data into pandas data to take advantage of this time-series functionality.
End of explanation
"""
%matplotlib inline
cis_df['NUMBER_CONCENTRATION'].plot(kind='kde', xlim=[0,1000], label='Raw')
cis_df['NUMBER_CONCENTRATION'].resample('10min').mean().plot(kind='kde', label='10min')
ax=cis_df['NUMBER_CONCENTRATION'].resample('120min').mean().plot(kind='kde', label='120min')
ax.legend()
"""
Explanation: Exercise
In pairs, plot probability distributions (use kde) of the raw, 10-minutely, and 2-hourly averaged number concentration
End of explanation
"""
from pandas.tools.plotting import scatter_matrix
m = scatter_matrix(cis_df, alpha=0.2, figsize=(8, 8), diagonal='kde', edgecolors='none')
"""
Explanation: Extras
End of explanation
"""
lines = sc.textFile("gs://pyspark-workshop/so-posts/*")
# or a smaller piece of them
lines = sc.textFile("gs://pyspark-workshop/so-posts/Posts.xml-*a")
"""
Explanation: Let's read the data
End of explanation
"""
lines.take(5)
"""
Explanation: Let's check what's inside these files...
End of explanation
"""
rows = lines.filter(lambda x: x.lstrip().startswith('<row'))
"""
Explanation: Only proper rows with posts
End of explanation
"""
import xml.etree.ElementTree as ET
parsed = lines.map(lambda x: x.lstrip()).filter(lambda x: x.startswith('<row')).map(lambda x: ET.fromstring(x))
from pprint import pprint
pprint(parsed.take(2))
"""
Explanation: Let's parse this mess...
End of explanation
"""
pprint(parsed.map(lambda x: x.attrib).take(3))
"""
Explanation: Better:
End of explanation
"""
def parse_tags(x):
return x[1:-1].split("><")
tags = parsed.map(lambda x: parse_tags(x.attrib['Tags']) if 'Tags' in x.attrib else [])
tags.take(5)
counts = tags.flatMap(lambda x: x).groupBy(lambda x: x).map(lambda x: (x[0], len(x[1])))
"""
Explanation: Let's compute tag counts!
End of explanation
"""
counts.sortBy(lambda x: x[1], ascending=False).take(10)
"""
Explanation: Taking long? Go to http://cluster-1-m:8088 and explore it (if you're using the default cluster name).
Did you know flatMap? If yes, rewrite the statement above to use flatMap.
End of explanation
"""
# if you hate xml (you do), then save it as json on hdfs!
import json
parsed.map(lambda x: json.dumps(x.attrib)).saveAsTextFile("posts.jsons")
"""
Explanation: Shout if you're the first one here! Congrats!
Puzzles:
Can you compute how many times someone asked about Python this month (you can compute posts with python tag only)?
Can you measure Pythons monthly popularity over last year? Can you plot it?
Can you do the same but only for main posts (questions)?
(*) Can you find the question that has the most posts attached?
Do the same but use ranking by total score of subposts.
End of explanation
"""
import numpy as np
import copy
from statsmodels.sandbox.regression.gmm import IV2SLS
from mc_exploration_functions import *
import statsmodels.api as sm
import seaborn.apionly as sns
import grmpy
model_base = get_model_dict('mc_exploration.grmpy.ini')
model_base['SIMULATION']['source'] = 'mc_data'
df_base = grmpy.simulate('mc_exploration.grmpy.ini')
df_base.head()
"""
Explanation: Monte Carlo Explorations
We will conduct some basic Monte Carlo explorations with the grmpy package. This allows us to revisit the key message of the course.
End of explanation
"""
d_treated = (df_base['D'] == 1)
ate = np.mean(df_base['Y1'] - df_base['Y0'])
tt = np.mean(df_base['Y1'].loc[d_treated] - df_base['Y0'].loc[d_treated])
tut = np.mean(df_base['Y1'].loc[~d_treated] - df_base['Y0'].loc[~d_treated])
true_effect = ate
print('Effect ', ate, tt, tut)
"""
Explanation: Questions
What are the remaining sources of heterogeneity?
What is the average treatment effect? Do you expect it to differ from the other conventional treatment effect parameters?
What does this imply for the marginal benefit of treatment?
End of explanation
"""
plot_distribution_of_benefits(df_base)
"""
Explanation: However, we still have a considerable amount of treatment effect heterogeneity.
End of explanation
"""
def update_correlation_structure(model_dict, rho):
"""This function takes a valid model specification and updates the correlation structure
among the unobservables."""
# We first extract the baseline information from the model dictionary.
sd_v = model_dict['DIST']['all'][-1]
sd_u = model_dict['DIST']['all'][0]
# Now we construct the implied covariance, which is relevant for the initialization file.
cov = rho * sd_v * sd_u
model_dict['DIST']['all'][2] = cov
# We print out the specification to an initialization file with the name mc_init.grmpy.ini.
print_model_dict(model_dict)
def collect_effects(model_base, which, grid_points):
"""This function collects numerous effects for alternative correlation structures."""
model_mc = copy.deepcopy(model_base)
effects = []
for rho in np.linspace(0.00, 0.99, grid_points):
# We create a new initialization file with an updated correlation structure.
update_correlation_structure(model_mc, rho)
# We use this new file to simulate a new sample.
df_mc = grmpy.simulate('mc_init.grmpy.ini')
# We extract auxiliary objects for further processing.
endog, exog, instr = df_mc['Y'], df_mc[['X_0', 'D']], df_mc[['X_0', 'Z_1']]
d_treated = df_mc['D'] == 1
# We calculate our parameter of interest.
label = which.lower()
if label == 'randomization':
stat = np.mean(endog.loc[d_treated]) - np.mean(endog.loc[~d_treated])
elif label == 'ordinary_least_squares':
stat = sm.OLS(endog, exog).fit().params[1]
elif label == 'conventional_instrumental_variables':
stat = IV2SLS(endog, exog, instr).fit().params[1]
elif label == 'local_instrumental_variables':
grmpy.estimate('mc_init.grmpy.ini')
stat = get_effect_grmpy()
elif label == 'conventional_average_effects':
ate = np.mean(df_mc['Y1'] - df_mc['Y0'])
tt = np.mean(df_mc['Y1'].loc[d_treated] - df_mc['Y0'].loc[d_treated])
stat = (ate, tt)
else:
raise NotImplementedError
effects += [stat]
return effects
"""
Explanation: Now let us get ready to explore the effects of essential heterogeneity with some auxiliary functions.
End of explanation
"""
effects = collect_effects(model_base, 'conventional_average_effects', 10)
plot_effects(effects)
"""
Explanation: How do the different treatment effect parameters diverge if we introduce essential heterogeneity?
End of explanation
"""
df_mc = pkl.load(open('mc_data.grmpy.pkl', 'rb'))
for df in [df_base, df_mc]:
plot_joint_distribution_unobservables(df)
"""
Explanation: Let us investigate the essential heterogeneity with respect to the distribution of the unobservables.
End of explanation
"""
for fname in ['data', 'mc_data']:
plot_marginal_effect(get_marginal_effect_grmpy(fname + '.grmpy.info'))
"""
Explanation: Let us revisit the shape of the marginal benefit of treatment with and without essential heterogeneity.
End of explanation
"""
effect = np.mean(df_base['Y'].loc[d_treated]) - np.mean(df_base['Y'].loc[~d_treated])
print('Effect ', effect)
"""
Explanation: Estimation Strategies
Randomization
Ordinary Least Squares
Instrumental Variables
Conventional
Local
Randomization
We start with the baseline model.
End of explanation
"""
effects = collect_effects(model_base, 'randomization', 10)
plot_estimates(true_effect, effects)
"""
Explanation: Now we can directly look at the effect of essential heterogeneity.
End of explanation
"""
results = sm.OLS(df_base['Y'], df_base[['X_0', 'D']]).fit()
results.summary()
"""
Explanation: Ordinary Least Squares
We first look at a regression in the baseline sample in more detail.
End of explanation
"""
effects = collect_effects(model_base, 'ordinary_least_squares', 10)
plot_estimates(true_effect, effects)
"""
Explanation: Now we again investigate the effect of essential heterogeneity on our estimates.
End of explanation
"""
result = IV2SLS(df_base['Y'], df_base[['X_0', 'D']], df_base[['X_0', 'Z_1']]).fit()
result.summary()
"""
Explanation: Coventional Instrumental Variables Regression
End of explanation
"""
effects = collect_effects(model_base, 'conventional_instrumental_variables', 10)
plot_estimates(true_effect, effects)
"""
Explanation: Now we introduce essential heterogeneity.
End of explanation
"""
rslt = grmpy.estimate('mc_exploration.grmpy.ini')
print('Effect ', get_effect_grmpy())
effects = collect_effects(model_base, 'local_instrumental_variables', 10)
plot_estimates(true_effect, effects)
"""
Explanation: Local Instrumental Variables
We look at our baseline specification first.
End of explanation
"""
plot_joint_distribution_potential(df)
"""
Explanation: Other Objects of Interest
Let us conclude by revisiting some additional objects of interest such as the joint distribution of potential outcomes and benefits and surplus. All of these can be obtained when, for example, the requirements for a factor structure approach as in Carneiro (2003) are met.
End of explanation
"""
plot_joint_distribution_benefits_surplus(model_base, df)
"""
Explanation: Now we turn to the joint distribution of benefits and costs. What is the meaning of each quadrant?
End of explanation
"""
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
# make plots inline
%matplotlib inline
# later try %matplotlib notebook
#%matplotlib notebook
# import pygslib
import pygslib
"""
Explanation: Tutorial: Resource estimation with PyGSLIB
This tutorial will guide you on doing resource estimation with PyGSLIB. The informing data is from the BABBITT zone of the KEWEENAWAN DULUTH COMPLEX.
The sequence of data preparation and estimation used in this example is as it follows:
import drillhole tables
create a drillhole object
composite drillhole intervals
desurvey drillholes
code drillholes with domains
declustering and basic statistics
create a block model object
populate a wireframe domain with blocks
estimate with ordinary kriging, inverse of the power of the distance, and nearest neighbour
basic model validation
To run this demo on your computer, download the demo files.
In this tutorial we will use the development version. You will have to install pygslib by compiling the source code.
shell
git clone https://github.com/opengeostat/pygslib.git
cd pygslib
python setup.py build
python setup.py install
To compile on Windows we recommend reading the wiki page Before compiling in Windows. It explains how to install the prerequisites (MinGW-w64 and MSVC, among other dependencies). Compiling on Linux or OSX is much easier.
Loading python packages
End of explanation
"""
# importing drillhole tables into pandas dataframes
collar = pd.read_csv('Babbitt/collar_BABBITT.csv')
survey = pd.read_csv('Babbitt/survey_BABBITT.csv')
assay = pd.read_csv('Babbitt/assay_BABBITT.csv')
# print first 4 lines of the table collar
collar.head(4)
# print first 3 lines of the table survey
survey.head(3)
# print first 3 lines of the table assay
assay.head(3)
"""
Explanation: Need some help? Just type
help(pygslib)
There is also an online manual at https://opengeostat.github.io/pygslib/ but it is not up to date
Loading the drillhole tables
PyGSLIB requires drillhole tables loaded into Pandas DataFrames. Two tables are compulsory:
- the collar, with the compulsory fields BHID, XCOLLAR, YCOLLAR and ZCOLLAR and the optional field LENGTH
- and survey, with the compulsory fields BHID, AT, AZ, DIP
In addition, you may have any number of optional interval tables with the compulsory fields BHID, FROM and TO
End of explanation
"""
# dropping some columns
assay.drop(['NI','S','FE'], axis=1, inplace=True)
# making non-sampled intervals equal to zero
assay.loc[~np.isfinite(assay['CU']), 'CU']=0
"""
Explanation: Pandas provides a large set of functions to modify your data. Let's remove some columns and make non-assayed intervals equal to zero.
End of explanation
"""
#creating a drillhole object
mydholedb=pygslib.drillhole.Drillhole(collar=collar, survey=survey)
# now you can add as many interval tables as you want, for example, assays, lithology and RQD.
mydholedb.addtable(assay, 'assay', overwrite = False)
"""
Explanation: Creating a drillhole object
To get access to the drillhole functions implemented in PyGSLIB, such as desurvey and compositing, you need to create a drillhole object (an instance of the class Drillhole, defined on the submodule gslib.drillhole)
End of explanation
"""
# validating a drillhole object
mydholedb.validate()
"""
Explanation: The output above is a warning message. It complains because the field LENGTH was not included in the collar table. You will see similar warnings any time PyGSLIB detects a potential issue in your data.
End of explanation
"""
# fixing the issue of single interval at survey table
mydholedb.fix_survey_one_interval_err(90000.)
"""
Explanation: The warning above is serious. There are drillholes with only one survey record, but to desurvey we need at least two records; the first one may be at the collar of the drillhole.
End of explanation
"""
#validating interval tables
mydholedb.validate_table('assay')
"""
Explanation: Note: To validate interval tables you may use the function validate_table.
End of explanation
"""
# Calculating length of sample intervals
mydholedb.table['assay']['Length']= mydholedb.table['assay']['TO']-mydholedb.table['assay']['FROM']
# plotting the interval lengths
mydholedb.table['assay']['Length'].hist(bins=np.arange(15)+0.5)
# printing length mode
print ('The Length Mode is:', mydholedb.table['assay']['Length'].mode()[0])
"""
Explanation: Compositing
Before doing any statistical or geostatistical analysis you may verify that all samples have approximately the same length. If samples have different lengths you may composite the drillhole intervals using a compositing algorithm.
End of explanation
"""
# compositing
mydholedb.downh_composite('assay', variable_name= "CU", new_table_name= "CMP",
cint = 10, minlen=-1, overwrite = True)
# last 5 rows of the table
mydholedb.table["CMP"].tail(5)
"""
Explanation: Most samples (the mode) are 10 ft length. This value or any of its multiples are good options for composite length, they minimize the oversplitting of sample intervals.
End of explanation
"""
# desurveying an interval table
mydholedb.desurvey('CMP',warns=False, endpoints=True)
# first 3 rows of a table
mydholedb.table["CMP"].head(3)
"""
Explanation: Note that some special fields were created; those fields have the prefix _. _acum is the grade accumulated in the composite interval (sum of grades from sample intervals contributing to the composite interval) and _len is the actual length of the composite.
In the table CMP the interval at row 54188 has FROM : 1010.0 and TO: 1020.0 but the sample length is only 7.0 ft. In this way the FROM and TO intervals of any drillhole or table are always at the same position and you can safely use the fields [BHID, FROM] to merge tables.
Desurveying
To plot drillholes in 3D or to estimate grade values you need to calculate the coordinates of the composites. This process is known as desurvey. There are many techniques to desurvey, PyGSLIB uses minimum curvature.
Desurvey will add the fields azm, dipm and xm, ym, zm, these are directions and the coordinates at the mid point of composite intervals. You have the option to add endpoint coordinates xb, yb, zb and xe, ye, ze, these are required to export drillholes in vtk format.
End of explanation
"""
# creating BHID of type integer
mydholedb.txt2intID('CMP')
# last 3 rows of a subtable
mydholedb.table["CMP"][['BHID', 'BHIDint', 'FROM', 'TO']].tail(3)
"""
Explanation: Creating a BHID of type integer
The compiled FORTRAN code of GSLIB does not handle data of type str well; sometimes you need to transform the BHID to type int, for example, if you use a maximum number of samples per drillhole on kriging or when calculating downhole variograms. The function txt2intID will do this work for you.
End of explanation
"""
# exporting results to VTK
mydholedb.export_core_vtk_line('CMP', 'cmp.vtk', title = 'Drillhole created in PyGSLIB')
"""
Explanation: Rendering drillhole intervals in Paraview and exporting drillhole data
PyGSLIB can export drillhole intervals to VTK. Drag and drop the VTK file on Paraview to see the drillholes in 3D. For a better image quality add a tube filter and update the color scale.
End of explanation
"""
# inspecting interval tables in drillhole object
print ("Table names ", mydholedb.table_mames)
print ("Tables names", mydholedb.table.keys())
print ("table is ", type(mydholedb.table))
# exporting to csv
mydholedb.table["CMP"].to_csv('cmp.csv', index=False)
"""
Explanation: This is how it looks in Paraview
<img src="Figure1.JPG" />
Interval tables are stored as a python dictionary of {Table Name : Pandas Dataframes}. To export data to *.csv format use the Pandas function Dataframe.to_csv. You can also export to any other format supported by Pandas, this is the list of formats supported.
End of explanation
"""
# importing a wireframe (this one was created with https://geomodelr.com)
domain=pygslib.vtktools.loadSTL('Babbitt/Mpz.stl')
"""
Explanation: Tagging samples with domain code
Use the function pygslib.vtktools.pointinsolid to label composites in a domain defined by a closed wireframe. You can also use this function to label samples relative to open surfaces (e.g. between two surfaces, below a surface, or above a surface).
End of explanation
"""
# creating array to tag samples in domain1
inside1=pygslib.vtktools.pointinsolid(domain,
x=mydholedb.table['CMP']['xm'].values,
y=mydholedb.table['CMP']['ym'].values,
z=mydholedb.table['CMP']['zm'].values)
# creating a new domain field
mydholedb.table['CMP']['Domain']=inside1.astype(int)
# first 3 rows of a subtable
mydholedb.table['CMP'][['BHID', 'FROM', 'TO', 'Domain']].head(3)
# exporting results to VTK
mydholedb.export_core_vtk_line('CMP', 'cmp.vtk', title = 'Generated with PyGSLIB')
# exporting to csv
mydholedb.table["CMP"].to_csv('cmp.csv', index=False)
"""
Explanation: Only Stereo Lithography (*.STL) and XML VTK Polydata (VTP) file formats are implemented. If your data is in a different format, e.g. DXF, you can use a file format converter; my favorite is meshconv
End of explanation
"""
# The model definition
xorg = 2288230
yorg = 415200
zorg = -1000
dx = 100
dy = 100
dz = 30
nx = 160
ny = 100
nz = 90
# Creating an empty block model
mymodel=pygslib.blockmodel.Blockmodel(nx,ny,nz,xorg,yorg,zorg,dx,dy,dz)
# filling wireframe with blocks
mymodel.fillwireframe(domain)
# the fillwireframe function generates a field named __in,
# this is the proportion inside the wireframe. Here we rename __in to D1
mymodel.bmtable.rename(columns={'__in': 'D1'},inplace=True)
# creating a partial model by filtering out blocks with zero proportion inside the solid
mymodel.set_blocks(mymodel.bmtable[mymodel.bmtable['D1']> 0])
# export partial model to a vtk unstructured grid (*.vtu)
mymodel.blocks2vtkUnstructuredGrid(path='model.vtu')
"""
Explanation: A section of the wireframe and the drillholes may look as follows
<img src="Figure2.JPG" />
Block modeling
Cu grades will be estimated on blocks inside the mineralized domain. To create those blocks you may:
create a block model object pygslib.blockmodel.Blockmodel
fill the mineralized domain with blocks
In PyGSLIB we use percent blocks, similar to GEMS ®. In the future we will implement subcell style, similar to Surpac ®, using Adaptive Mesh Refinement (AMR).
Blocks are stored in the class member bmtable; this is a Pandas DataFrame with the special index field IJK or [IX,IY,IZ] and coordinates [XC, YC, ZC]. We use GSLIB order; in other words, IJK is the equivalent of the row number in a GSLIB grid.
Block model tables can be full or partial (with some missing blocks). Only one table will be available in a block model object.
The block model definition is stored in the members nx, ny, nz, xorg, yorg, zorg, dx, dy, dz. The origin xorg, yorg, zorg refers to the lower left corner of the lower left block (not the centroid), like in Datamine Studio ®.
End of explanation
"""
#declustering parameters
parameters_declus = {
'x' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'xm'],
'y' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'ym'],
'z' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'zm'],
'vr' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
'anisy' : 1.,
'anisz' : 0.05,
'minmax' : 0,
'ncell' : 100,
'cmin' : 100.,
'cmax' : 5000.,
'noff' : 8,
'maxcel' : -1}
# declustering
wtopt,vrop,wtmin,wtmax,error, \
xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)
#Plotting declustering optimization results
plt.plot (rxcs, rvrcr, '-o')
plt.xlabel('X cell size')
plt.ylabel('declustered mean')
plt.show()
plt.plot (rycs, rvrcr, '-o')
plt.xlabel('Y cell size')
plt.ylabel('declustered mean')
plt.show()
plt.plot (rzcs, rvrcr, '-o')
plt.xlabel('Z cell size')
plt.ylabel('declustered mean')
plt.show()
# parameters for declustering with the cell size selected
parameters_declus = {
'x' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'xm'],
'y' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'ym'],
'z' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'zm'],
'vr' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
'anisy' : 1., # y == x
'anisz' : 0.1, # z = x/10
'minmax' : 0,
'ncell' : 1,
'cmin' : 1000.,
'cmax' : 1000.,
'noff' : 8,
'maxcel' : -1}
# declustering
wtopt,vrop,wtmin,wtmax,error, \
xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)
# Adding declustering weight to a drillhole interval table
mydholedb.table["CMP"]['declustwt'] = 1
mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'] = wtopt
# calculating declustered mean
decl_mean = rvrcr[0]
"""
Explanation: Note that fillwireframe creates or overwrites mymodel.bmtable. The blocks outside the wireframe were filtered out and the final output is a partial model with blocks inside or touching the wireframe domain.
Note that fillwireframe works with closed surfaces only.
A section view of the blocks colored by percentage inside the solid and the wireframe (white lines) may look as follows:
<img src="Figure3.JPG" />
Some basic stats
You may spend some time doing exploratory data analysis, looking at statistical plots, 3D views and 2D sections of your data. A good commercial software for this is Supervisor ®; open source options are Pandas, Statsmodels, Seaborn and glueviz.
PyGSLIB includes some minimal functionality for statistical plots and calculations, with support for declustering weights. Here we demonstrate how you can do a declustering analysis of the samples in the mineralized domain and how to evaluate the declustered mean. The declustered mean will be compared later with the mean of CU estimates.
Note: In this section we are not including all the statistical analysis usually required for resource estimation.
End of explanation
"""
# prepare parameters dictionary
parameters = {
'hmin' : None, #in/output rank-0 array(float,'d')
'hmax' : None, #in/output rank-0 array(float,'d')
'ncl' : 30, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'], # array('d') with bounds (nd)
'wt' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'], # array('d') with bounds (nd), weight variable (obtained with declus)
'figure' : None , # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Hist Cu', # string. Figure title
'xlabel' : 'Cu (%)', # string. X axis label
'ylabel' : 'f(%)', # string. Y axis label
# visual parameter for the histogram
'color' : 'red', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a)
'legend': 'Non - Declustered', # string (Optional, default "NA")
'alpha' : 0.5, # float [0-1]. Transparency of the fill colour
'lwidth': 1, # float. Line width
# legend
'legendloc': 'top_left'}
# calculate histogram
stats, fig = pygslib.plothtml.histgplt(parameters)
print ('CV', stats['xcvr'])
print ('Mean', stats['xmen'])
print ('Min', stats['xmin'])
print ('Max', stats['xmax'])
# show the figure
pygslib.plothtml.show(fig)
# plot CDF
parameters_probplt = {
# gslib parameters for histogram calculation
'iwt' : 1, # input boolean (Optional: set True). Use weight variable?
'va' : mydholedb.table["CMP"].loc[(mydholedb.table['CMP']['Domain']==1) & (mydholedb.table['CMP']['CU']>0), 'CU'], # array('d') with bounds (nd)
'wt' : mydholedb.table["CMP"].loc[(mydholedb.table['CMP']['Domain']==1) & (mydholedb.table['CMP']['CU']>0), 'declustwt'], # array('d') with bounds (nd), weight variable (obtained with declus)
# visual parameters for figure (if a new figure is created)
'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Prob plot', # string (Optional, "Histogram"). Figure title
'xlabel' : 'Cu', # string (Optional, default "Z"). X axis label
'ylabel' : 'P[Cu<c]', # string (Optional, default "f(%)"). Y axis label
'xlog' : 1, # boolean (Optional, default True). If true plot X axis in log scale.
'ylog' : 1, # boolean (Optional, default True). If true plot Y axis in log scale.
# visual parameter for the probplt
'style' : 'cross', # string with valid bokeh chart type
'color' : 'blue', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered Cu', # string (Optional, default "NA").
'alpha' : 1, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
'lwidth': 0, # float (Optional, default 1). Line width
# legend
'legendloc': 'bottom_right'} # string (Optional, default 'top_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left or center
results, fig2 = pygslib.plothtml.probplt(parameters_probplt)
# show the plot
pygslib.plothtml.show(fig2)
results
"""
Explanation: Now we can calculate some declustered stats and plot declustered histograms
End of explanation
"""
# TODO:
"""
Explanation: Variography
End of explanation
"""
# creating parameter dictionary for estimation in one block
kt3d_Parameters = {
# Input Data (Only using intervals in the mineralized domain)
# ----------
'x' : mydholedb.table["CMP"]['xm'][mydholedb.table["CMP"]['Domain']==1].values,
'y' : mydholedb.table["CMP"]['ym'][mydholedb.table["CMP"]['Domain']==1].values,
'z' : mydholedb.table["CMP"]['zm'][mydholedb.table["CMP"]['Domain']==1].values,
'vr' : mydholedb.table["CMP"]['CU'][mydholedb.table["CMP"]['Domain']==1].values,
'bhid' : mydholedb.table["CMP"]['BHIDint'][mydholedb.table["CMP"]['Domain']==1].values, # an interger BHID
# Output (Target)
# ----------
'nx' : nx,
'ny' : ny,
'nz' : nz,
'xmn' : xorg,
'ymn' : yorg,
'zmn' : zorg,
'xsiz' : dx,
'ysiz' : dy,
'zsiz' : dz,
'nxdis' : 5,
'nydis' : 5,
'nzdis' : 3,
'outx' : mymodel.bmtable['XC'][mymodel.bmtable['IJK']==1149229].values, # filter to estimate only on block with IJK 1149229
'outy' : mymodel.bmtable['YC'][mymodel.bmtable['IJK']==1149229].values,
'outz' : mymodel.bmtable['ZC'][mymodel.bmtable['IJK']==1149229].values,
# Search parameters
# ----------
'radius' : 850,
'radius1' : 850,
'radius2' : 250,
'sang1' : -28,
'sang2' : 34,
'sang3' : 7,
'ndmax' : 12,
'ndmin' : 4,
'noct' : 0,
'nbhid' : 3,
# Kriging parameters and options
# ----------
'ktype' : 1, # 1 Ordinary kriging
'idbg' : 1, # 0 no debug
# Variogram parameters
# ----------
'c0' : 0.35 * 0.109758094158, # we require not normalized variance for GCOS, fix... multiply for actual variance
'it' : [2,2],
'cc' : [0.41*0.109758094158,0.23*0.109758094158],
'aa' : [96,1117],
'aa1' : [96,1117],
'aa2' : [96,300],
'ang1' : [-28,-28],
'ang2' : [ 34, 34],
'ang3' : [ 7, 7]}
"""
Explanation: Estimating Cu grade in one block
For estimation you may use the function pygslib.gslib.kt3d, which is GSLIB's KT3D program modified and embedded into Python. KT3D now includes a maximum number of samples per drillhole in the search ellipsoid, and the estimation is done only in the blocks provided as arrays.
The input parameters of pygslib.gslib.kt3d are defined in a large and complicated dictionary. You can get this dictionary by typing
print(pygslib.gslib.kt3d.__doc__)
Note that some parameters are optional. PyGSLIB will initialize those parameters to zero or to arrays of zeros; for example, if you exclude the coordinate Z, PyGSLIB will create an array of zeros in its place.
To understand GSLIB’s KT3D parameters you may read the GSLIB user manual or the kt3d gslib program parameter documentation.
Note that in PyGSLIB the parameters nx, ny and nz are only used by the superblock search algorithm; if these parameters are arbitrary the output will be correct but the running time may be longer.
End of explanation
"""
# estimating in one block
estimate, debug, summary = pygslib.gslib.kt3d(kt3d_Parameters)
#saving debug to a csv file using Pandas
pd.DataFrame({'x':debug['dbgxdat'],'y':debug['dbgydat'],'z':debug['dbgzdat'],'wt':debug['dbgwt']}).to_csv('dbg_data.csv', index=False)
#pd.DataFrame({'x':[debug['dbgxtg']],'y':[debug['dbgytg']],'z':[debug['dbgztg']],'na':[debug['na']]}).to_csv('dbg_target.csv', index=False)
# save the search ellipse to a VTK file
pygslib.vtktools.SavePolydata(debug['ellipsoid'], 'search_ellipsoid')
"""
Explanation: The variogram was calculated and modelled in different software.
The variogram types are as explained in http://www.gslib.com/gslib_help/vmtype.html, for example, 'it' : [2,2] means two exponential models, in other words [Exponential 1,Exponential 2]
Only the block with index IJK equal to 1149229 was used this time, and 'idbg' was set to one in order to get a full output of the last (and only) block estimate, including the samples selected, the kriging weights and the search ellipsoid.
End of explanation
"""
# calculate block variance, we need it for global change of support validation
# you can also calculate this with the function pygslib.gslib.block_covariance(...)
cbb=debug['cbb']
"""
Explanation: The results may look like this in Paraview.
<img src="Figure4.JPG" />
End of explanation
"""
# update parameter file
kt3d_Parameters['idbg'] = 0 # set debug off
kt3d_Parameters['outx'] = mymodel.bmtable['XC'].values # use all the blocks
kt3d_Parameters['outy'] = mymodel.bmtable['YC'].values
kt3d_Parameters['outz'] = mymodel.bmtable['ZC'].values
# estimating in all blocks
estimate, debug, summary = pygslib.gslib.kt3d(kt3d_Parameters)
# adding the estimate into the model
mymodel.bmtable['CU_OK'] = estimate['outest']
mymodel.bmtable['CU_ID2'] = estimate['outidpower']
mymodel.bmtable['CU_NN'] = estimate['outnn']
mymodel.bmtable['CU_Lagrange'] = estimate['outlagrange']
mymodel.bmtable['CU_KVar']= estimate['outkvar']
# exporting block model to VTK (unstructured grid)
mymodel.blocks2vtkUnstructuredGrid(path='model.vtu')
# exporting to csv using Pandas
mymodel.bmtable['Domain']= 1
mymodel.bmtable[mymodel.bmtable['CU_OK'].notnull()].to_csv('model.csv', index = False)
"""
Explanation: Estimating in all blocks
After testing the estimation parameters in few blocks you may be ready to estimate in all the blocks within the mineralized domain. Just update the parameter file to remove the debug option and reassign the target coordinates as the actual blocks coordinate arrays.
End of explanation
"""
print ("Mean in model OK :", mymodel.bmtable['CU_OK'].mean())
print ("Mean in model ID2 :", mymodel.bmtable['CU_ID2'].mean())
print ("Mean in model NN :", mymodel.bmtable['CU_NN'].mean())
print ("Mean in data :", mydholedb.table["CMP"]['CU'][mydholedb.table["CMP"]['Domain']==1].mean())
print ("Declustered mean:", decl_mean)
"""
Explanation: Validating the results
There are a few validations you may do:
visual validation
comparison of mean grade
swath plots
global change of support (GCOS)
Swath plots and GCOS are not implemented in PyGSLIB. For visual validations you can use Paraview, for example:
<img src="Figure5.JPG" />
End of explanation
"""
mymodel.bmtable.groupby('XC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
mymodel.bmtable.groupby('YC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
mymodel.bmtable.groupby('ZC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
"""
Explanation: Create swath plots
There are two ways of doing swath plots
Slicing block model and data and comparing the declustered means of each slice
Calculating nearest neighbour in blocks (this is equivalent to declustered values) and comparing means of nearest neighbour estimates with means of other estimation methods along row, columns and levels.
We do not have a function in pygslib to do that, but we can implement the second option with one line of pandas
End of explanation
"""
# Fit anamorphosis by changing, zmax, zmin, and extrapolation function
PCI, H, raw, zana, gauss, z, P, raw_var, PCI_var, fig1 = pygslib.nonlinear.anamor(
z = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
w = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'],
zmin = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'].min(),
zmax = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'].max(),
zpmin = None, zpmax = None,
ymin=-5, ymax=5,
ndisc = 5000,
ltail=1, utail=4, ltpar=1, utpar=1.5, K=40)
# calculate the support correction coefficient r
r = pygslib.nonlinear.get_r(Var_Zv = cbb, PCI = PCI)
print ('cbb :', cbb)
print ('r :', r)
"""
Explanation: Global change of support
End of explanation
"""
# fit block anamorphosis
ZV, PV, fig2 = pygslib.nonlinear.anamor_blk( PCI, H, r = r, gauss = gauss, Z = z,
ltail=1, utail=1, ltpar=1, utpar=1,
raw=raw, zana=zana)
cutoff = np.arange(0,0.6, 0.01)
tt = []
gg = []
label = []
# calculate GTC from gaussian in block support
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=ZV, p=PV, varred = 1, ivtyp = 0, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('DGM with block support')
fig = pygslib.nonlinear.plotgt(cutoff = cutoff, t = tt, g = gg, label = label)
# to compare global resources with the one estimated we calculate the CDF of the blocks
# cdf of kriging estimate
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].values, # array('d') with bounds (nd)
'wt' : np.ones(mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].shape[0])} # array('d') with bounds (nd), wight variable (obtained with declust?)
binval_ok,cl_ok,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
# cdf of id2
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : mymodel.bmtable['CU_ID2'][mymodel.bmtable['CU_OK'].notnull()].values, # array('d') with bounds (nd)
'wt' : np.ones(mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].shape[0])} # array('d') with bounds (nd), wight variable (obtained with declust?)
binval_id2,cl_id2,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
# calculate GTC ok
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=cl_ok, p=binval_ok, varred = 1, ivtyp = 2, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('Ordinary Kriging')
# calculate GTC id2
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=cl_id2, p=binval_id2, varred = 1, ivtyp = 2, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('Inverse of the Distance 2')
fig = pygslib.nonlinear.plotgt(cutoff = cutoff, t = tt, g = gg, label = label)
# we can plot differences (relative error in grade)
plt.plot (cutoff, gg[0]-gg[1], label = 'DGM - OK')
plt.plot (cutoff, gg[0]-gg[2], label = 'DGM - ID2')
plt.plot (cutoff, np.zeros(cutoff.shape[0]),'--k', label = 'Zero error')
plt.title('relative error in grade')
plt.legend()
# we can plot differences (relative error in tonnage)
plt.plot (cutoff, tt[0]-tt[1], label = 'DGM - OK')
plt.plot (cutoff, tt[0]-tt[2], label = 'DGM - ID2')
plt.plot (cutoff, np.zeros(cutoff.shape[0]),'--k', label = 'Zero error')
plt.legend()
plt.title('relative error in tonnage')
# To get tonnes right just multiply by the total tonnage
# calculate total tonnage (million tonnes)
ttonnes = mymodel.bmtable['D1'][mymodel.bmtable['CU_OK'].notnull()].sum()*100*100*30* 0.0283168 * 2.7 /1000000
# cubic feet to cubic metres -> 0.0283168; density 2.7 t/m3
ttt = tt[0]*ttonnes
#plot
plt.plot(cutoff, ttt)
plt.ylabel('Mt')
plt.xlabel('Cutoff')
"""
Explanation: Note that r is very low...
End of explanation
"""
|
AguaParaelPueblo/plant_notebooks | Gracias/ConductionLine.ipynb | mit | from aide_design.play import *
from IPython.display import display
pipe.ID_sch40 = np.vectorize(pipe.ID_sch40)
################## Constants #################
flow_branch = 60 *u.L/u.s
flow_full = flow_branch * 2
nd_pipe_train_4 = 4 *u.inch
sdr_pipe = 17
nd_pipe_train_6 = 6 * u.inch
# these measurements are from Minty's notebook TODO: change to reflect topography study
l_total = 455.06 *u.m
height_pressure_break_4 = 1090.12 * u.m
height_pressure_break_6 = 1091.29 * u.m
# this measurement is from AutoCAD TODO: change to reflect topography study
height_plant = 1058 * u.m
PVC_ROUGHNESS = mat.PIPE_ROUGH_PVC
NU_WATER = exp.NU_WATER
# a conservative estimate for k TODO: change to reflect actual accessories
k_conduction_line = exp.K_MINOR_EL90 * 7
# Getting function inputs into simple form
head_4 = height_pressure_break_4 - height_plant
head_6 = height_pressure_break_6 - height_plant
id_4 = pipe.ID_SDR(nd_pipe_train_4, sdr_pipe)
id_6 = pipe.ID_SDR(nd_pipe_train_6, sdr_pipe)
#################### headloss calculations ############################
headloss_train_4 = pc.headloss(flow_branch, id_4, l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
headloss_train_6 = pc.headloss(flow_branch, id_6, l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
print("Headloss in 4 inch line: " + str(headloss_train_4) + " and available head is: " + str(head_4))
print("Headloss in 6 inch line: " + str(headloss_train_6) + " and available head is: " + str(head_6))
##################### total flow calculation ###########################
flow_4 = pc.flow_pipe(id_4,head_4,l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
flow_6 = pc.flow_pipe(id_6,head_6,l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
flow_actual_with_two_lines = (flow_4 + flow_6).to(u.L/u.s)
print("Flow to the plant with both lines and available head is: " + str(flow_actual_with_two_lines))
"""
Explanation: Flow Distribution for the Two Treatment Trains
Problem Definition
The two 60 L/s trains need a flow control system that splits the plant flow evenly between them and enables fine-grained control. This distribution system should keep flow control for each train independent, such that decreasing one train's flow doesn't increase the other's.
Existing Conduction Line
The existing conduction line is composed of two independent pipes of 4" and 6" size. Presumably, one was added after the other in an attempt to augment the flow rate. Two pressure breaks, one for each line, are located 30 meters higher in elevation and 455 meters away from the proposed plant site. By definition, these two pressure breaks have a free surface, and therefore the difference in elevation between the pressure break and the plant's entrance tank represents the maximum available head for delivering, splitting and controlling the flow. The diagram below summarizes the existing system components:
<img src="https://docs.google.com/drawings/d/e/2PACX-1vTYoz334ZI_fy6hpKUyfmm7Ap24bQDkuBVZXC4JJvACmSd-VeLFAUI5RsWscA-FHlxnKEQmn-Kz-H0U/pub?w=1056&h=816">
Use the Existing 4" and 6" Lines
The simplest solution is to use the current pressure break as the flow distribution system with the two existing lines (4" and 6") as the incoming lines for each train. To make sure this will work, we need to ensure the 4" line can handle the full 60 L/s.
End of explanation
"""
# Make a table with available pipe sizes
pipe_sdr = 26
pipe_diameters_nd = [6,8,10,12]#*u.inch
pipe_diameters_id = pipe.ID_sch40(pipe_diameters_nd)
headloss_various_diameters = pc.headloss(flow_full, pipe_diameters_id*u.inch,
l_total,NU_WATER,PVC_ROUGHNESS,k_conduction_line)
df = pd.DataFrame(np.array(headloss_various_diameters.magnitude), index=pipe_diameters_id, columns=['Headloss (m)'])
#Graph headloss for different pipe diameters
df.index.name = 'Pipe Diameter (ID, inch)'
df.name = 'Total Headloss Through Various Pipe Diameters'
df.plot().set(ylabel="Headloss (m)", title = df.name)
plt.show()
display(df)
"""
Explanation: Changing the Pipes
The headloss in both the 4" and 6" lines is too great to handle the {{flow_branch}} flow rate. Therefore, larger-diameter pipe needs to be installed to reduce the headloss in the conduction line(s). There are multiple options for how to both increase the conduction line capacity and split the flow efficiently:
Distribution box at the plant with one large conduction line running from the existing plants.
Distribution box at the location of the current pressure breaks, with two lines running to the plant, one for each train.
Combine the flow with Ys from the two current pressure breaks into a large line, and split at the plant into each train
The first two options involve the construction of a distribution box, an unnecessary, more complex and expensive solution. All options will use two gate valves (one for each train) at each train entrance tank for fine-grain control of each flow rate. The third option will be investigated first, as it is the simplest to construct, the least expensive, and has no functional drawbacks.
To size the main trunk line, an appropriate target headloss must be chosen. Below is a graph that lists the headloss at different pipe sizes given the parameters of this plant:
End of explanation
"""
id_12 = pipe.ID_SDR(12, sdr_pipe)
# conservative minor loss coefficient in both lines pressure break to tee:
k_value_pressure_break_to_tee_6_inch = exp.K_MINOR_PIPE_ENTRANCE + \
exp.K_MINOR_90 + k.k_value_expansion(id_6, id_12, flow_branch)
k_value_pressure_break_to_tee_4_inch = exp.K_MINOR_PIPE_ENTRANCE + \
exp.K_MINOR_90 + exp.K_MINOR_EL45 + k.k_value_expansion(id_4, id_12, flow_branch)
print("k value in 6 inch line: " + str(k_value_pressure_break_to_tee_6_inch))
print('k value in 4 inch line: ' + str(k_value_pressure_break_to_tee_4_inch))
# conservative pipe lengths from pressure break to tee:
l_pressure_break_to_tee_6_inch = 4 * u.m
l_pressure_break_to_tee_4_inch = 4 * u.m
# determine headloss through both 4" and 6" pipes by defining headloss range:
headloss_range_pressure_break_to_tee = np.linspace(0.1,10,100) * u.m
# calculate the added flow rates for all the headlosses in the range:
flow_range_pressure_break_to_tee = pc.flow_pipe(id_4, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_4_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_4_inch) + \
pc.flow_pipe(id_6, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_6_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_6_inch)
# graph of flow rates for various headlosses:
df = pd.DataFrame(np.array(flow_range_pressure_break_to_tee.to(u.L/u.s)),
index=np.array(headloss_range_pressure_break_to_tee),
columns = ['4" and 6" lines'])
df.index.name = 'Headloss (m)'
df.columns.name = 'flow (L/s)'
df.name = 'Headloss v. Flow rate for Pressure Break to Tee'
df.plot().set(ylabel=df.columns.name, title=df.name)
plt.show()
"""
Explanation: Using a 10 inch or 12 inch pipe would leave enough head available to use for flow control.
Flow Distribution
Now the question is flow distribution. Shutting off one train potentially affects the flow rate of the other. Determining the extent of this effect is a flow distribution problem, much like those solved throughout plant design. By studying the various flow paths, one can determine the effect of shutting off a flow path in the worst case. There are several steps to designing the optimal system. First, the goal is to reduce the headloss in the shared line, because that headloss changes when one branch is turned off and the flow rate is halved. As the shared headloss is reduced, the leftover head is taken up by the remaining line, increasing that train's flow. The steps to define the optimal pipe configuration are as follows:
Pipe Length Geometry: make a guess for the ideal pipe geometry, attempting to minimize shared headloss and maximize train branch headloss.
Headloss Calculations: determine minor and major losses throughout the system.
System Error (Cross-Talk Effect): calculate the effect of cross-talk over a range of flow rates.
1. Pipe Length Geometry
The initial pipe design is based on limited knowledge of the site and is meant to convey a conservative guess for the conduction and distribution line geometry. Once a full topography of the site and the two upstream pressure breaks is available, a more precise design will be made and analyzed. The video below is a rendering of the preliminary design of the conduction and train-distribution system:
In summary, the proposed plan is to upsize both lines running from the pressure breaks to 8" lines. The two lines will immediately plumb into a main 12" conduction line. The main line will run 455 m to the plant site, where it splits at a tee into two 4" lines. The following calculations ensure that the cross-talk between the two trains is minimized.
2. Headloss Calculations
The headloss in the various components of the system is critical in calculating the effect of cross-talk.
Headloss From the Pressure Break to the Tee
The first section of the conduction line is where the two smaller lines join the 12" conduction line. To calculate the headloss through the two pipes, an iterative approach is used. First, the combined flow rates through the 6" and 4" lines are calculated for a range of headlosses. Because the head of the 4" and 6" lines is known to be the same at the tee, the headloss through each is assumed to be the same (the pressure breaks have the same free surface). The headloss at which the two flow rates sum to the full plant flow rate represents the first loss in the distribution system:
End of explanation
"""
# id of 8" pipe
diam_8 = pipe.ID_SDR(8, sdr_pipe)
# calculate the added flow rates for all the headlosses in the range:
flow_range_pressure_break_to_tee = pc.flow_pipe(diam_8, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_4_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_4_inch) + \
pc.flow_pipe(diam_8, headloss_range_pressure_break_to_tee,
l_pressure_break_to_tee_6_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,
k_value_pressure_break_to_tee_6_inch)
# dataframe of flow rates for various headlosses:
df = df.assign(flow_8_inch=np.array(flow_range_pressure_break_to_tee.to(u.L/u.s)))
df.plot()
plt.show()
"""
Explanation: 3 m of head is lost at the entrance in order to supply the full plant flow. This is a significant portion of the total head available, and it sits in the shared section, so it will worsen the flow distribution cross-talk error. Increasing the two lines to 8" would decrease the headloss substantially without requiring too much work:
End of explanation
"""
# set a conservative guess for a pressure break to tee headloss determined above:
headloss_pressure_break_to_tee = 1 * u.m
# headloss in the combined trunk:
headloss_conduction_line = pc.headloss(flow_full, 12 * u.inch, l_total,
                                       exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_conduction_line)
# total shared headloss:
headloss_shared = headloss_conduction_line + headloss_pressure_break_to_tee
# set the headloss available as the difference in height from pressure break to plant entrance:
head_available_total = min(height_pressure_break_4, height_pressure_break_6) - height_plant
head_available_for_trains = head_available_total - headloss_shared
print('The total shared headloss is: ' + str(headloss_shared))
print('The remaining headloss available for each train: ' + str(head_available_for_trains))
# calculate the headloss for various pipe sizes for a singe train branch:
pipe_diameters_nd_branch = [3,4,6,8]*u.inch
pipe_diameters_id_branch = pipe.ID_sch40(pipe_diameters_nd_branch)*u.inch
# calculate minor losses:
k_value_tee_to_plant_entrance = k.k_value_reduction(id_12, pipe_diameters_id_branch, flow_branch)\
+ exp.K_MINOR_90*4 + exp.K_MINOR_GATE_VALVE
# calculate length:
l_branch = 5 * u.m + 5 * u.m + 2 * u.m
headloss_branch = pc.headloss(flow_branch, pipe_diameters_id_branch, l_branch,
exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_value_tee_to_plant_entrance)
pd.DataFrame(np.array([np.array(pipe_diameters_id_branch),np.array(headloss_branch)]),
columns=pipe_diameters_nd_branch, index=['Pipe Size (inch)', 'Headloss (m)'])
"""
Explanation: Now the required headloss is less than 1 m, which will dramatically reduce the shared headloss.
Total Shared (Trunk) Headloss
Now a conservative estimate of the headloss in the main conduction line is added to form the total shared headloss. The remaining head is spent in the branch lines. Using this much head to drive the flow through each branch allows smaller pipes and thus smaller gate valves. The following calculations show that a 6" branch pipe diameter can be used while still achieving the full train flow rate.
End of explanation
"""
# k values for the gate valve at various positions
gate_valve_positions = [1, 0.75, 0.5, 0.25]
k_values_gate_valve = [0.17, 0.9, 4.5, 24]
gate_valve_pipe_section_guess = 10*u.inch
k_value_gate_valve_section = k.k_value_orifice(gate_valve_pipe_section_guess, diam_4,
                                               l_gate_orifice, flow_branch) + np.array(k_values_gate_valve)
headloss_various_positions_gate_valve = pc.headloss(flow_branch, diam_4, l_gate_orifice,
                                                    exp.NU_WATER, mat.PIPE_ROUGH_PVC,
                                                    k_value_gate_valve_section)
pd.options.display.float_format = '{:,.1f}'.format
# headloss_whole_system_various_flow_rates = pc.flow_pipe(diam_12,l_tee_to_plant_entrance, flow_branch,)
pd.DataFrame(np.array([k_value_gate_valve_section, headloss_various_positions_gate_valve.magnitude]),
             columns=gate_valve_positions,
             index=['Gate valve k values for different positions (1 is fully open, 0 fully closed)',
                    'headloss (m)'])
l_pipes_final_design = np.array([l_pressure_break_to_tee_4_inch.magnitude, pipe_length_trains, l_branch, l_gate_orifice])
id_pipes_final_design = np.array([diam_8, id_12, id_6, id_4])
k_pipes_final_design = np.array([k_value_pressure_break_to_tee_6_inch, k_pipe, k_value_tee_to_plant_entrance, k_value_gate_valve_section])
pipeline.flow_pipeline(id_pipes_final_design, l_pipes_final_design, k_pipes_final_design)
"""
Explanation: There is an extreme difference in headloss between the 4" and 6" options. The 6" branch diameter would not have enough headloss to enable fine-grained flow control, because the valve must be at least 3/4 closed before it even begins reducing the flow rate below the full branch flow rate. Therefore, a short section containing the gate valve can be reduced to 4". The following calculation shows the maximum headloss of the proposed system:
Gate Valve Reduction Headloss
A 4" gate valve is proposed to simultaneously increase headloss and decrease price. To calculate the headloss used by the new configuration, flow through the reduced gate valve is modeled as a thick orifice with an additional coefficient for the valve itself. Our goal is to determine what length the 4" valve section should be to enable fine grain control with the gate valve. This is done by trying to use the remaining headloss in this section.
End of explanation
"""
# Calculating the flow throughout the whole system with only one train on:
# pc.flow_pipe()
"""
Explanation: The headloss table reveals that a 4" gate valve will yield a reasonable resolution across gate valve positions. The flow row further shows that a single branch will have a favorable flow distribution across the gate valve's range.
3. System Error (Cross-Talk Effect)
End of explanation
"""
height_pressure_break_after_plant_4 = 1008 * u.m
height_pressure_break_after_plant_6 = 1009 * u.m
#################### headloss calculations ############################
# a conservative estimate for k TODO: change to reflect actual accessories
k_exit_line = exp.K_MINOR_EL90 * 7
# dimensions derived from the topography study
d_z = 45.83 * u.m
d_x = 444.77 *u.m
d_y = 372.49 * u.m
length_exit_line = (d_z**2 + d_x**2 + d_y**2)**0.5
head_exit_line = d_z
print(length_exit_line)
headloss_exit_4 = pc.headloss(flow_branch, id_4, length_exit_line,
                              exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_exit_line)
headloss_exit_6 = pc.headloss(flow_branch, id_6, length_exit_line,
                              exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_exit_line)
print("Headloss in 4 inch line: {} and available head is: {}".format(headloss_exit_4, head_exit_line))
print("Headloss in 6 inch line: " + str(headloss_exit_6) + " and available head is: " + str(head_exit_line))
##################### total flow calculation ###########################
flow_exit_4 = pc.flow_pipe(id_4, head_exit_line, length_exit_line,
                           exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_exit_line)
flow_exit_6 = pc.flow_pipe(id_6, head_exit_line, length_exit_line,
                           exp.NU_WATER, mat.PIPE_ROUGH_PVC, k_exit_line)
flow_actual_exit_with_two_lines = (flow_exit_4 + flow_exit_6).to(u.L/u.s)
print("Flow in the 4 inch line is: "+str(flow_exit_4.to(u.L/u.s)))
print("Flow in the 6 inch line is: "+str(flow_exit_6.to(u.L/u.s)))
print("Flow to the plant with both lines and available head is: " + str(flow_actual_exit_with_two_lines))
"""
Explanation: Confirming Exit Line Flow Rates
End of explanation
"""
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogeneous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogeneous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
jdekozak/dirac5d | Pauli with Geometric Algebra 4,0 over the Reals.ipynb | gpl-3.0 | from sympy import *
variables = (x, y, z, w) = symbols('x y z w', real=True)
print(variables)
metric=[ 1
,1
,1
,1]
myBasis='e_1 e_2 e_3 e_4'
sp4d = Ga(myBasis, g=metric, coords=variables,norm=True)
(e_1, e_2, e_3, e_4) = sp4d.mv()
"""
Explanation: ALGEBRA & DEFINITIONS
Clifford algebra is $$Cl_{4,0}(\mathbb{R})$$
Flat Euclidean space: signature $(+,+,+,+)$, so the metric is the identity
All metric coefficients are equal to 1
End of explanation
"""
sigma_1w=e_2*e_3
sigma_2w=e_3*e_1
sigma_3w=-e_1*e_2
sigma_1w.texLabel='\\sigma_{1w}'
sigma_2w.texLabel='\\sigma_{2w}'
sigma_3w.texLabel='\\sigma_{3w}'
display(Math('(1,'+sigma_1w.texLabel+','+sigma_2w.texLabel+','+sigma_3w.texLabel+')'))
CheckProperties(sigma_1w,sigma_2w,sigma_3w)
sigma_1x=e_2*e_3
sigma_2x=e_3*e_4
sigma_3x=-e_4*e_2
sigma_1x.texLabel='\\sigma_{1x}'
sigma_2x.texLabel='\\sigma_{2x}'
sigma_3x.texLabel='\\sigma_{3x}'
display(Math('(1,'+sigma_1x.texLabel+','+sigma_2x.texLabel+','+sigma_3x.texLabel+')'))
CheckProperties(sigma_1x,sigma_2x,sigma_3x)
sigma_1y=e_1*e_3
sigma_2y=e_3*e_4
sigma_3y=-e_4*e_1
sigma_1y.texLabel='\\sigma_{1y}'
sigma_2y.texLabel='\\sigma_{2y}'
sigma_3y.texLabel='\\sigma_{3y}'
display(Math('(1,'+sigma_1y.texLabel+','+sigma_2y.texLabel+','+sigma_3y.texLabel+')'))
CheckProperties(sigma_1y,sigma_2y,sigma_3y)
sigma_1z=e_1*e_2
sigma_2z=e_2*e_4
sigma_3z=-e_4*e_1
sigma_1z.texLabel='\\sigma_{1z}'
sigma_2z.texLabel='\\sigma_{2z}'
sigma_3z.texLabel='\\sigma_{3z}'
display(Math('(1,'+sigma_1z.texLabel+','+sigma_2z.texLabel+','+sigma_3z.texLabel+')'))
CheckProperties(sigma_1z,sigma_2z,sigma_3z)
gamma_4=e_1*e_2*e_3
gamma_1=e_4*e_2*e_3
gamma_2=e_1*e_4*e_3
gamma_3=e_1*e_2*e_4
displayWithTitle(gamma_4,title='\\gamma_4')
displayWithTitle(gamma_4*gamma_4,title='\\gamma_4^2')
displayWithTitle(gamma_1,title='\\gamma_1')
displayWithTitle(gamma_1*gamma_1,title='\\gamma_1^2')
displayWithTitle(gamma_2,title='\\gamma_2')
displayWithTitle(gamma_2*gamma_2,title='\\gamma_2^2')
displayWithTitle(gamma_3,title='\\gamma_3')
displayWithTitle(gamma_3*gamma_3,title='\\gamma_3^2')
"""
Explanation: Quaternions - Pauli matrices
http://en.wikipedia.org/wiki/Quaternion
https://en.wikipedia.org/wiki/Pauli_matrices
End of explanation
"""
m, q = symbols('m q', real=True)
rquat = [sigma_1w, sigma_2w, sigma_3w]
B_x, B_y, B_z = symbols('B_x B_y B_z', real=True)
Bv =[B_x, B_y, B_z]
B = S(0)
for (dim, var) in zip(Bv, rquat):
B += var * dim
B.texLabel='\\mathbf{B}'
display(Latex('Magnetic field $'+B.texLabel+'$ is defined with $B_x, B_y, B_z \\in \\mathbb{R}$'))
display(Math(B.texLabel+'=B_x'+sigma_1w.texLabel+'+B_y'+sigma_2w.texLabel+'+B_z'+sigma_3w.texLabel))
displayWithTitle(B, title=B.texLabel)
a, b, c, d = symbols('a b c d', real=True)
psiv =[b, c, d]
psi = S(a)
for (dim, var) in zip(psiv, rquat):
psi += var * dim
psi.texLabel='\\psi'
display(Latex('Pauli spinor $'+psi.texLabel+'$ is defined with $a, b, c, d\\in \\mathbb{R}$'))
display(Math(psi.texLabel+'=a+b'+sigma_1w.texLabel+'+c'+sigma_2w.texLabel+'+d'+sigma_3w.texLabel))
displayWithTitle(psi, title=psi.texLabel)
psiv_dagger =[-b, -c, -d]
psi_dagger = S(a)
for (dim, var) in zip(psiv_dagger, rquat):
psi_dagger += var * dim
psi_dagger.texLabel='\\psi^\\dagger'
display(Math(psi_dagger.texLabel+'=a-b'+sigma_1w.texLabel+'-c'+sigma_2w.texLabel+'-d'+sigma_3w.texLabel))
displayWithTitle(psi_dagger, title=psi_dagger.texLabel)
rho=psi*psi_dagger
rho.texLabel = '\\rho'
display(Math(rho.texLabel+'='+psi.texLabel+psi_dagger.texLabel))
displayWithTitle(rho, title=rho.texLabel)
spin_vector_w = psi*sigma_3w*psi_dagger
spin_vector_w.texLabel = '\\boldsymbol{s}_w'
display(Math(spin_vector_w.texLabel+'='+psi.texLabel+sigma_3w.texLabel+psi_dagger.texLabel))
displayWithTitle(spin_vector_w, title=spin_vector_w.texLabel)
up=1/2*(1+e_3)
up.texLabel='\\uparrow'
displayWithTitle(up, title=up.texLabel)
displayWithTitle(up*up, title=up.texLabel+'^2')
down=1/2*(1-e_3)
down.texLabel='\\downarrow'
displayWithTitle(down, title=down.texLabel)
displayWithTitle(down*down, title=down.texLabel+'^2')
gamma_5 = e_1*e_2*e_3*e_4
gamma_5.texLabel = '\\gamma_5'
left=1/2*(1-gamma_5)
left.texLabel='['
displayWithTitle(left, title=left.texLabel)
displayWithTitle(left*left, title=left.texLabel+'^2')
right=1/2*(1+gamma_5)
right.texLabel=']'
displayWithTitle(right, title=right.texLabel)
displayWithTitle(right*right, title=right.texLabel+'^2')
"""
Explanation: PHYSICS
The following symbols are defined :
Charge $$q \in \mathbb{R}$$
Mass $$m \in \mathbb{R}$$
End of explanation
"""
|
ngcm/summer-academy-2017-basics | basics_B/Recap/Basics_examples.ipynb | mit | list1 = [10, 12, 14, 16, 18]
print(list1[0]) # Index starts at 0
print(list1[-1]) # Last index at -1
"""
Explanation: <font color='mediumblue'> Lists
<font color='midnightblue'> Example: Indexed
End of explanation
"""
print(list1[0:3]) # Slicing: exclusive of end value
# i.e. get i=(0, 1, .. n-1)
print(list1[3:]) # "slice from i=3 to end"
"""
Explanation: <font color='midnightblue'> Example: Slicable
End of explanation
"""
list1.append(20)
print(list1)
list1.extend([22,24,26])
print(list1)
list1[3]='squirrel'
print(list1)
list1.remove('squirrel')
print(list1)
list1.insert(3,16)
print(list1)
"""
Explanation: <font color='midnightblue'> Example: Mutable & Mixed Data Types
End of explanation
"""
tuple1 = (10, 12, 14, 16, 18)
print(tuple1)
print(tuple1[0])
print(tuple1[1:3])
print(tuple1[3:])
tuple1.append(20)
print(tuple1)
"""
Explanation: <font color='mediumblue'> Tuples
<font color='midnightblue'> Example: Immutable
End of explanation
"""
%timeit tuple1=(10,12,14,16,18)
%timeit list1=[10,12,14,16,18]
#%timeit tuple1[3:]
#%timeit list1[3:]
"""
Explanation: <font color='midnightblue'> Example: Tuples are faster
End of explanation
"""
tuple2 = 'Lucy','Ryan'
a, b = tuple2
print('{} is OK, {} is amazing!'.format(a, b))
b, a = a, b
print('{} is OK, {} is amazing!'.format(a, b))
ages = [('Lucy', 25), ('Ryan', 24)]
for name, age in ages:
print('{} is {}.'.format(name, age))
"""
Explanation: <font color='midnightblue'> Example: Unpacking Tuples
End of explanation
"""
list2 = [list1, tuple1]
list1 = [1, 2 , 3]
print(list2)
"""
Explanation: <font color='midnightblue'> Example: Names are references (rebinding vs mutation)
End of explanation
"""
# To use numpy, we first have to import the package
import numpy as np
# Can convert a list to an array:
array1=np.array(list1)
print(array1)
# Can make an evenly spaced array between 2 values using linspace or arange.
# linspace takes the number of points to use as an argument and returns floats by default
print(np.linspace(0, 10, 11))
# arange takes the spacing as an argument and returns the type given as the spacing, e.g.
print(np.arange(0, 11, 1.))
print(np.arange(0, 11, 1))
"""
Explanation: <font color='mediumblue'> Numpy Arrays
<font color='midnightblue'> Example: How to use
End of explanation
"""
print('The average of array1 is', np.average(array1))
print('The sum of array1 is', np.sum(array1))
# Apply functions
print(np.exp(array1))
print(np.reciprocal(array1))
array2 = array1.astype(float)  # np.reciprocal truncates on integer input, so convert first
print(np.reciprocal(array2))
angles=np.array([0, np.pi/2., np.pi, 3*np.pi/4.])
np.sin(angles)
"""
Explanation: <font color='midnightblue'> Example: Useful functions
End of explanation
"""
M1 = np.array([[2,3],[6,3]])
M2 = np.array([[5,6],[2,9]])
print('M1:')
print(M1)
print('M2:')
print(M2)
M3 = M1 * M2 # Element-wise multiplication
print(M3, '\n')
M4 = np.dot(M1, M2) # Matrix multiplication
print(M4)
"""
Explanation: <font color='midnightblue'> Example: 2d Arrays
End of explanation
"""
premier_league_data = np.loadtxt('example.csv')
print(premier_league_data)
print(type(premier_league_data[0][0]))
"""
Explanation: <font color='midnightblue'> Example: Creating an array from a file
End of explanation
"""
price_table = {'apples': 50, 'pears': 60, 'bananas': 20}
print(price_table)
fruit = [('apples', 50), ('bananas', 20), ('pears', 60)]
price_table1 = dict(fruit)
print(price_table==price_table1)
# NOTE: the order when you define a dictionary doesn't matter, it's ordered with a hashtable not
# with indexing lists and tuples
# To get a value out, you use square brackets but instead of an index, you use the key:
akey = 'apples'
print("The price of {} is {}p.".format(akey, price_table[akey]))
# Trying to use an index wouldn't work:
print(price_table[0])
price_table.keys()
# Example usage:
shopping_list = [('apples', 50), ('bananas', 20)]
total = 0
for item, quantity in shopping_list:
price = price_table[item]
print('Adding {} {} at {}p each.'.format(quantity, item, price))
total += price * quantity
print('Total shopping cost is £%.2f.' %(total/100.))
"""
Explanation: <font color='mediumblue'> Dictionaries
<font color='midnightblue'> Example: How to use
End of explanation
"""
price_table['kiwis']=30
print(price_table)
del price_table['bananas']
print(price_table)
price_table['apples']=25
print(price_table)
"""
Explanation: <font color='midnightblue'> Example: Mutable
End of explanation
"""
# Iterating over the dictionary will iterate over its keys
for key in price_table:
print("{} cost {}p".format(key, price_table[key]))
# Or use the items method:
for key, val in price_table.items():
print("{} cost {}p".format(key, val))
"""
Explanation: <font color='midnightblue'> Example: Iterating
End of explanation
"""
def square_root(x):
"""Useful docstring: Calculates and returns square root of x"""
i = x ** 0.5
return i
x = 10
y = square_root(x)
print('The square root of {} is {}'.format(x, y))
# We can set a default value to the function
def square_root(x=20):
i = x ** 0.5
return i
print(square_root())
# Loops, functions and appending
mylist = []
for i in range(1,5):
mylist.append(square_root(i))
print(mylist)
"""
Explanation: <font color='dodgerblue'> Functions
<font color='midnightblue'> Example: Simple function
End of explanation
"""
def update_integer(i):
    # rebinds the local name only; integers are immutable, so the caller is unaffected
i += 1
def update_list_end(arglist):
arglist[-1] = 50 # Lists are mutable: updates args directly!
a = 1
update_integer(a)
print(a)
mylist = [0, 1, 2, 3, 4]
update_list_end(mylist)
print(mylist)
"""
Explanation: <font color='midnightblue'> Example: Arguments and mutability
End of explanation
"""
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi)
y = np.sin(x)
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(x, y,'o-')
ax.margins(0.1)
ax.set_title('2D plot')
ax.set_xlabel('$x$')
ax.set_ylabel(r'$sin(x)$')
ax.plot()
"""
Explanation: <font color='dodgerblue'> Plotting
<font color='mediumblue'> Matplotlib
<font color='midnightblue'> Example: Simple Plot
End of explanation
"""
xtick_values = np.linspace(0, 2*np.pi, 5)
xtick_labels = ['$0$', r'$\frac{\pi}{2}$', r'$\pi$', r'$\frac{3\pi}{2}$',
r'$2\pi$']
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111); ax.plot(x, y,'-o')
ax.set_title('2D plot')
ax.margins(0.1)
ax.set_xlabel('$x$'); ax.set_ylabel(r'$sin(x)$')
ax.set_xticks(xtick_values)
ax.set_xticklabels(xtick_labels, fontsize=25);
"""
Explanation: <font color='midnightblue'> Example: Labels, ticks and other appenditories
End of explanation
"""
f1 = open('textfile.txt', 'r+')
print(f1.read())
f1.close()
with open('textfile.txt', 'r+') as f1:
print(f1.readline())
print(f1.readline())
with open('textfile.txt', 'r+') as f1:
print(f1.readlines())
with open('textfile.txt', 'r+') as f1:
print(list(f1))
with open('textfile.txt', 'r+') as f1:
for line in f1:
print(line)
with open('textfile.txt', 'r+') as f1:
f1.write('Hello')
print(f1.readline())
f1.write('Second Hello')
print(f1.read())
with open('textfile.txt', 'r+') as f1:
print(f1.read())
with open('textfile.txt', 'r+') as f1:
lines = f1.readlines()
del lines[-1]
lines[2] = 'I have changed the third line\n'
with open('textfile.txt', 'w') as f1:
f1.writelines(lines)
with open('textfile.txt') as f1:
print(f1.read())
"""
Explanation: <font color='dodgerblue'> Reading from and writing to files
End of explanation
"""
|
bryanwweber/thermostate | docs/regen-reheat-rankine-cycle-example.ipynb | bsd-3-clause | from thermostate import State, Q_, units, SystemInternational as SI
from thermostate.plotting import IdealGas, VaporDome
"""
Explanation: Regen-Reheat Rankine Cycle Example
Imports
End of explanation
"""
substance = 'water'
T_1 = Q_(480.0, 'degC')
p_1 = Q_(12.0, 'MPa')
p_2 = Q_(2.0, 'MPa')
p_3 = p_2
T_3 = Q_(440.0, 'degC')
p_4 = p_7 = p_8 = p_12 = Q_(0.3, 'MPa')
p_5 = Q_(6.0, 'kPa')
x_6 = Q_(0.0, 'dimensionless')
x_8 = Q_(0.0, 'dimensionless')
p_10 = Q_(12.0, 'MPa')
T_10 = Q_(210.0, 'degC')
x_11 = Q_(0.0, 'dimensionless')
p_11 = p_2
"""
Explanation: Definitions
End of explanation
"""
# State 1
st_1 = State(substance, T=T_1, p=p_1)
h_1 = st_1.h.to(SI.h)
s_1 = st_1.s.to(SI.s)
# State 2
s_2 = s_1
st_2 = State(substance, p=p_2, s=s_2)
h_2 = st_2.h.to(SI.h)
T_2 = st_2.T.to(SI.T)
x_2 = st_2.x
# State 3
st_3 = State(substance, p=p_3, T=T_3)
h_3 = st_3.h.to(SI.h)
s_3 = st_3.s.to(SI.s)
# State 4
s_4 = s_3
st_4 = State(substance, p=p_4, s=s_4)
h_4 = st_4.h.to(SI.h)
T_4 = st_4.T.to(SI.T)
x_4 = st_4.x
# State 5
s_5 = s_4
st_5 = State(substance, p=p_5, s=s_5)
h_5 = st_5.h.to(SI.h)
T_5 = st_5.T.to(SI.T)
x_5 = st_5.x
# State 6
p_6 = p_5
st_6 = State(substance, p=p_6, x=x_6)
h_6 = st_6.h.to(SI.h)
s_6 = st_6.s.to(SI.s)
T_6 = st_6.T.to(SI.T)
# State 7
s_7 = s_6
st_7 = State(substance, p=p_7, s=s_7)
h_7 = st_7.h.to(SI.h)
T_7 = st_7.T.to(SI.T)
# State 8
st_8 = State(substance, p=p_8, x=x_8)
h_8 = st_8.h.to(SI.h)
s_8 = st_8.s.to(SI.s)
T_8 = st_8.T.to(SI.T)
# State 9
s_9 = s_8
p_9 = p_10
st_9 = State(substance, p=p_9, s=s_9)
h_9 = st_9.h.to(SI.h)
T_9 = st_9.T.to(SI.T)
# State 10
st_10 = State(substance, p=p_10, T=T_10)
h_10 = st_10.h.to(SI.h)
s_10 = st_10.s.to(SI.s)
# State 11
st_11 = State(substance, p=p_11, x=x_11)
h_11 = st_11.h.to(SI.h)
s_11 = st_11.s.to(SI.s)
T_11 = st_11.T.to(SI.T)
# State 12
h_12 = h_11
st_12 = State(substance, p=p_12, h=h_12)
s_12 = st_12.s.to(SI.s)
T_12 = st_12.T.to(SI.T)
x_12 = st_12.x
"""
Explanation: Problem Statement
Consider a regenerative vapor power cycle with two feedwater heaters, a closed one and an open one, and reheat. Steam enters the first turbine stage at 12.0 MPa, 480.0 celsius, and expands to 2.0 MPa. Some steam is extracted at 2.0 MPa and bled to the closed feedwater heater. The remainder is reheated at 2.0 MPa to 440.0 celsius and then expands through the second stage turbine, where an additional amount of steam is extracted and bled into the open feedwater heater operating at 0.3 MPa. The steam expanding through the third stage turbine exits at the condenser pressure of 6.0 kPa, and the steam exits the condenser as a saturated liquid at 6.0 kPa. Feedwater leaves the closed heater at 210.0 celsius, 12.0 MPa, and condensate exiting as saturated liquid at 2.0 MPa is trapped into the open feedwater heater. Saturated liquid at 0.3 MPa leaves the open feedwater heater. Assume all pumps and turbine stages operate isentropically. Determine for the cycle
the heat transfer to the working fluid passing through the steam generator and reheater, in kJ per kg of steam entering the first stage turbine
the thermal efficiency
the heat transfer from the working fluid passing through the condenser to the cooling water, in kJ per kg of steam entering the first stage turbine
Solution
1. the heat transfer in the steam generator and reheater
The heat transfer can be found by drawing a control volume around the boiler/steam generator, including the reheat portion. Then, the first law reduces to
\begin{align}
\dot{Q}_{in} &= \dot{m}_1\left(h_1 - h_{10}\right) + \dot{m}_3\left(h_3 - h_2\right) \\
\frac{\dot{Q}_{in}}{\dot{m}_1} &= \left(h_1 - h_{10}\right) + \left(1 - y'\right)\left(h_3 - h_2\right)
\end{align}
where $y'$ is the fraction of the flow extracted to the closed feedwater heater. To find the value of $y'$, we draw a control volume around the closed feedwater heater and solve for $y'$
\begin{align}
y' = \frac{h_{10} - h_9}{h_2 - h_{11}}
\end{align}
First, fix the values for all the states using given information.
The regenerative reheat Rankine cycle is made up of 14 processes:
1. Isentropic expansion
2. Isobaric heat exchange
3. Isentropic expansion
4. Isentropic expansion
5. Isobaric condensation
6. Isentropic compression
7. Isobaric heat exchange
8. Isentropic compression
9. Isobaric heat exchange
10. Isobaric heat addition
11. Isoenthalpic throttling
12. Isobaric mixing
13. Isobaric condensation
14. Isobaric mixing
The following properties are used to fix the twelve states:
State | Property 1 | Property 2
:-----:|:-----:|:-----:
1|$$T_1 $$|$$p_1 $$
2|$$p_2 $$|$$s_2=s_1 $$
3|$$p_3=p_2 $$|$$T_3 $$
4|$$p_4 $$|$$s_4=s_3 $$
5|$$p_5 $$|$$s_5=s_4 $$
6|$$p_6=p_5 $$|$$x_6 $$
7|$$p_7=p_4 $$|$$s_7=s_6 $$
8|$$p_8=p_7 $$|$$x_8 $$
9|$$p_9=p_{10} $$|$$s_9=s_8 $$
10|$$p_{10} $$|$$T_{10} $$
11|$$p_{11}=p_2 $$|$$x_{11} $$
12|$$p_{12}=p_8 $$|$$h_{12}=h_{11} $$
End of explanation
"""
Regen = VaporDome(substance, ('s', 'T'))
Regen.add_process(st_1, st_2, 'isentropic')
Regen.add_process(st_2, st_3, 'isobaric')
Regen.add_process(st_3, st_4, 'isentropic')
Regen.add_process(st_4, st_5, 'isentropic')
Regen.add_process(st_5, st_6, 'isobaric')
Regen.add_process(st_6, st_7, 'isentropic')
Regen.add_process(st_7, st_8, 'isobaric')
Regen.add_process(st_8, st_9, 'isentropic')
Regen.add_process(st_9, st_10, 'isobaric')
Regen.add_process(st_10, st_1, 'isobaric')
Regen.add_process(st_11, st_12, 'isoenthalpic')
Regen.add_process(st_12, st_7, 'isobaric')
Regen.add_process(st_2, st_11, 'isobaric')
Regen.add_process(st_4, st_7, 'isobaric')
y_p = (h_10 - h_9)/(h_2 - h_11)
Qdot_in = (h_1 - h_10 + (1 - y_p)*(h_3 - h_2)).to('kJ/kg')
"""
Explanation: Summarizing the states,
| State | T | p | h | s | x | phase |
|-------|---------------------------------|---------------------------------|---------------------------------|---------------------------------|---------------------------------|-----------------|
| 1 | 480.00 celsius | 12.00 MPa | 3295.29 kJ/kg | 6.4186 kJ/(K kg) | --- | supercritical |
| 2 | 225.46 celsius | 2.00 MPa | 2837.44 kJ/kg | 6.4186 kJ/(K kg) | --- | gas |
| 3 | 440.00 celsius | 2.00 MPa | 3336.25 kJ/kg | 7.2560 kJ/(K kg) | --- | supercritical |
| 4 | 187.07 celsius | 0.30 MPa | 2839.24 kJ/kg | 7.2560 kJ/(K kg) | --- | gas |
| 5 | 36.16 celsius | 6.00 kPa | 2234.74 kJ/kg | 7.2560 kJ/(K kg) | 86.26% | twophase |
| 6 | 36.16 celsius | 6.00 kPa | 151.48 kJ/kg | 0.5208 kJ/(K kg) | 0.00% | twophase |
| 7 | 36.17 celsius | 0.30 MPa | 151.77 kJ/kg | 0.5208 kJ/(K kg) | --- | liquid |
| 8 | 133.52 celsius | 0.30 MPa | 561.43 kJ/kg | 1.6717 kJ/(K kg) | 0.00% | twophase |
| 9 | 134.62 celsius | 12.00 MPa | 573.95 kJ/kg | 1.6717 kJ/(K kg) | --- | liquid |
| 10 | 210.00 celsius | 12.00 MPa | 901.29 kJ/kg | 2.4077 kJ/(K kg) | --- | liquid |
| 11 | 212.38 celsius | 2.00 MPa | 908.50 kJ/kg | 2.4468 kJ/(K kg) | 0.00% | twophase |
| 12 | 133.52 celsius | 0.30 MPa | 908.50 kJ/kg | 2.5252 kJ/(K kg) | 16.04% | twophase |
Plotting the T-s diagram of the cycle,
End of explanation
"""
y_pp = (h_8 - h_7 + y_p*(h_7 - h_12))/(h_4 - h_7)
"""
Explanation: <div class="alert alert-success">
**Answer:** The heat transfer input is $\dot{Q}_{in}/\dot{m}_1 =$ 2808.15 kJ/kg
</div>
2. the thermal efficiency
To find the thermal efficiency, we need to calculate the net work output of the power plant. This involves all three turbine stages and both of the pump stages. First, we need to calculate the mass fractions of the flow, $y'$ and $y''$. To find these values, we need to draw control volumes around the feedwater heaters, moving from high pressure to low pressure. $y'$ has already been found, so we need to find $y''$
\begin{align}
0 &= \dot{m}_7 h_7 + \dot{m}_{12} h_{12} + \dot{m}_4 h_4 - \dot{m}_8 h_8
\end{align}
Dividing through by $\dot{m}_1$ and noting that $\dot{m}_8 = \dot{m}_1$, $\dot{m}_{12} = \dot{m}_2$, and $\dot{m}_7 = \dot{m}_5$,
\begin{align}
0 &= (1 - y' - y'')h_7 + y' h_{12} + y'' h_4 - h_8 \\
y'' &= \frac{h_8 - h_7 + y'(h_7 - h_{12})}{h_4 - h_7}
\end{align}
End of explanation
"""
Wdot_t = h_1 - h_2 + (1 - y_p)*(h_3 - h_4) + (1 - y_p - y_pp)*(h_4 - h_5)
"""
Explanation: Now, for the turbine stages,
\begin{align}
\dot{W}_t &= \dot{m}_1(h_1 - h_2) + \dot{m}_3 (h_3 - h_4) + \dot{m}_5 (h_4 - h_5) \\
\frac{\dot{W}_t}{\dot{m}_1} &= h_1 - h_2 + (1 - y')(h_3 - h_4) + (1 - y' - y'')(h_4 - h_5)
\end{align}
End of explanation
"""
Wdot_p = (1 - y_p - y_pp)*(h_6 - h_7) + h_8 - h_9
eta = ((Wdot_t + Wdot_p)/Qdot_in).to('dimensionless')
"""
Explanation: and for the pump stages
\begin{align}
\dot{W}_p &= \dot{m}_6(h_6 - h_7) + \dot{m}_8(h_8 - h_9) \\
\frac{\dot{W}_p}{\dot{m}_1} &= (1 - y' - y'')(h_6 - h_7) + h_8 - h_9
\end{align}
End of explanation
"""
Qdot_out = ((1 - y_p - y_pp)*(h_6 - h_5)).to('kJ/kg')
"""
Explanation: <div class="alert alert-success">
**Answer:** The thermal efficiency of the cycle is $\eta =$ 0.4617 = 46.17%
</div>
3. the rate of heat transfer out of the condenser into the cooling water
Drawing a control volume around the condenser, we can find the rate of heat transfer as
\begin{align}
\dot{Q}_{out} &= \dot{m}_5(h_6 - h_5) \\
\frac{\dot{Q}_{out}}{\dot{m}_1} &= (1 - y' - y'') (h_6 - h_5)
\end{align}
End of explanation
"""
|
okkhoy/pyDataAnalysis | ml-regression/week1/PhillyCrime.ipynb | mit | import graphlab
"""
Explanation: Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
"""
Explanation: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
End of explanation
"""
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
"""
Explanation: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
End of explanation
"""
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
"""
Explanation: Fit the regression model using crime as the feature
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
"""
Explanation: Let's see what our fit looks like
Matplotlib is a Python plotting library that we can use to visualize the fit. You can install it with:
'pip install matplotlib'
End of explanation
"""
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
"""
Explanation: Above: blue dots are original data, green line is the fit from the simple regression.
Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
End of explanation
"""
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
"""
Explanation: Refit our simple regression model on this modified dataset:
End of explanation
"""
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
         sales_noCC['CrimeRate'],crime_model_noCC.predict(sales_noCC),'-')
"""
Explanation: Look at the fit:
End of explanation
"""
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
"""
Explanation: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
End of explanation
"""
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
"""
Explanation: Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
End of explanation
"""
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
"""
Explanation: Do the coefficients change much?
End of explanation
"""
|