<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Public-key encryption
Step2: Packing and padding integers
Step3: Key generation
Step4: RSA encryption
Step5: How decryption works
Step6: How Fidelio generates the RSA number and key pair
Step7: Finding the private key
<ASSISTANT_TASK:>
Python Code:
from fidelio_functions import *
message = "Most of the department chiefs are already mine."
print(message)
ints = text_to_ints(message)
print(ints,'\n')
packets = packetize(ints)
print(packets,'\n')
test_ints = unpacketize(packets)
print(test_ints,'\n')
test_text = ints_to_text(test_ints)
print(test_text)
rsa_number, public_key, private_key = generate_keys(verbose=True)
ciphertext = rsa_encrypt(message,rsa_number,public_key)
print(ciphertext)
plaintext = rsa_decrypt(ciphertext,rsa_number,private_key)
print(plaintext)
# Close doesn't count in RSA encryption. You need the exact private key.
rsa_decrypt(ciphertext,rsa_number,private_key-1)
# Good luck guessing the private key. There are many possibilities.
# (But most guesses decrypt to invalid characters, which is a potential weakness of Fidelio!)
for j in range(10):
badkey = random.randint(0,rsa_number)
print( rsa_decrypt(ciphertext,rsa_number,badkey) )
packets = packetize(text_to_ints("Hello, world!"))
print(packets,'\n')
cipher = [ pow(m,public_key,rsa_number) for m in packets ]
print(cipher,'\n')
decipher = [ pow(c,private_key,rsa_number) for c in cipher ]
print(decipher,'\n')
plaintext = ints_to_text(unpacketize(decipher))
print(plaintext)
# Let's use tiny primes for this example
small_primes = load_primes(too_large=50)
print(small_primes)
# Choose p and q at random from our list of primes
small_n, small_totient = choose_rsa_number(small_primes,verbose=True)
# Choose a public key which meets the criteria
small_public_key = choose_public_key(small_primes,small_totient,verbose=True)
# Check that the public key and totient are relatively prime
check_gcd = gcd(small_public_key,small_totient)
show_numbers = (small_public_key,small_totient)
if check_gcd == 1:
print( "%s and %s are relatively prime" % show_numbers )
else:
raise ValueError( "%s and %s are not relatively prime!" % show_numbers )
# Find the private key x such that (k*x) % totient = 1, where k is the public key
check_gcd, small_private_key = gcd_and_inverse(small_public_key,small_totient)
print( "Private key is %s" % small_private_key )
# Is the private key really the inverse of the public key (mod totient)?
check_inverse = (small_public_key * small_private_key) % small_totient
show_numbers = (small_private_key,small_public_key,small_totient)
if check_inverse == 1:
print( "%s is the multiplicative inverse of %s (mod %s)" % show_numbers )
else:
raise ValueError( "%s is not the multiplicative inverse of %s (mod %s)" % show_numbers )
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: ```{admonition} Observation
Step3: Objective
Step4: Timing measurement
Step5: Test that the problem is solved correctly
Step6: Magic command
Step7: ```{admonition} Comments
Step8: ```{admonition} Observation
Step9: The cProfile output shows
Step11: If we want to filter the results that match the words lambda, listcomp or math taken as regular expressions in the filename column
Step12: And we can use strip_dirs to remove the path name for better readability
Step13: The number of calls to primitive or built-in functions is
Step14: pstats also lets us obtain which functions call others or are called by them
Step15: Analogously we can set the filters used in print_stats
Step16: ```{admonition} Comment
Step17: ```{admonition} Comment
Step19: The line_profiler output shows
Step21: Using NumPy functionality
Step23: ```{admonition} Comments
Step25: Using python3 <script.py>
Step26: ```{admonition} Exercise
Step27: And we can run the following lines per how-do-i-profile-memory-usage-in-python, for consistency with the results that will be obtained with %memit and %mprun
Step28: To use %memit and %mprun, the jupyter extension is loaded
Step29: %memit returns the peak memory used in a jupyter notebook cell and uses the same ideas as %timeit to report its measurements
Step31: ```{margin}
Step32: The %mprun output contains
Step33: ```{admonition} Comments
Step34: The previous output shows
Step35: Count, a counter of the number of objects in the report according to Kind. Next, its percentage relative to the number of objects analyzed is shown.
Step36: Using generators
Step37: Using NumPy functionality
Step38: ```{admonition} Exercise
Step39: ````{admonition} Comments
Step40: or in an update form
Step41: ```{admonition} Comment
Step42: we have
Step43: If $y$ has entries different from $0$, an update is performed
Step44: ````{admonition} Comments
Step45: ````{admonition} Comments
Step46: The row-oriented gaxpy algorithm can be written more compactly using the definition of the standard dot product
Step47: Matrix-matrix multiplication operation (BLAS level 3)
Step48: Matrix multiplication
Step49: ```{admonition} Observation
Step50: ```{admonition} Comments
Step51: ```{admonition} Comment
Step52: Results using the script_no_OpenBLAS.sh script
Step53: ```{admonition} Observation
Step55: ````{admonition} Comments
Step56: ```{admonition} Comment
Step57: ```{margin}
Step58: ```{admonition} Comments
Step59: Other metrics can be obtained if we run perf only with the -r flag
Step60: ```{admonition} Comments
Step61: To contrast the previous outputs with a better use of the cache and the cores, the numpy package is used next
Step62: ```{admonition} Comments
Step63: ```{admonition} Comments
Step64: perf OpenBLAS
Step65: Using lists to build the matrix and the vectors
Step66: gaxpy column oriented, non-vectorized case
Step67: Metrics per core
Step68: gaxpy column oriented, vectorized case
Step69: Metrics per core
Step70: gaxpy row oriented, vectorized case
Step71: ```{admonition} Comments
Step72: Comparison of matrix multiplication versions using BLAS level 1 vs level 2
<ASSISTANT_TASK:>
Python Code:
%%bash
lscpu
%%bash
sudo lshw -C memory
%%bash
uname -ar #r for kernel, a for all
import math
import time
import numpy as np
from pytest import approx
from scipy.integrate import quad
def Rcf(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
f = lambda x: math.exp(-x**2) #using math library
a = 0
b = 1
obj, err = quad(f, a, b)
print(obj)
n = 10**6
start_time = time.time()
res = Rcf(f,a,b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf took",secs,"seconds" )
print(res == approx(obj))
%time Rcf(f,a, b,n)
%timeit -n 5 -r 10 Rcf(f,a, b,n)
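Outside of IPython the same measurement can be reproduced with the standard-library `timeit` module; a minimal sketch with a toy workload (`toy` is a stand-in, not from the notebook):

```python
import timeit

def toy(n):
    # Stand-in workload: sum of squares
    return sum(i * i for i in range(n))

# Equivalent of `%timeit -n 5 -r 10 toy(10**4)`: 10 repeats of 5 calls each
times = timeit.repeat(lambda: toy(10**4), number=5, repeat=10)
best = min(times) / 5  # best average seconds per call, which is what %timeit reports
print("best of 10 repeats: {:0.2e} s per call".format(best))
```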
import cProfile
n=10**6
cprof = cProfile.Profile()
cprof.enable()
res = Rcf(f,a,b,n)
cprof.disable()
cprof.print_stats(sort='cumtime')
cprof.dump_stats("Rcf_stats")
import pstats
p_rcf_stats = pstats.Stats("Rcf_stats")
print(p_rcf_stats.sort_stats("cumulative").print_stats(10))
print(p_rcf_stats.sort_stats("cumulative").print_stats("lambda|listcomp|math"))
print(p_rcf_stats.strip_dirs().sort_stats("cumulative").print_stats("lambda|listcomp|math"))
print(p_rcf_stats.prim_calls)
p_rcf_stats.strip_dirs().sort_stats("cumulative").print_callers()
p_rcf_stats.strip_dirs().sort_stats("cumulative").print_callees()
p_rcf_stats.strip_dirs().sort_stats("cumulative").print_callers(10)
p_rcf_stats.strip_dirs().sort_stats("cumulative").print_callees("Rcf|lambda")
def myfunc():
A = np.array([[1, 2, 0],
[3, 4, 5]], dtype=float)
a_1 = A[:, 0]
return a_1
%prun -s cumulative myfunc()
import line_profiler
line_prof = line_profiler.LineProfiler()
n = 10**6
print(line_prof(Rcf)(f,a,b,n))
print(line_prof.print_stats())
def Rcf_2(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = (a+(i+1/2)*h_hat for i in range(n))
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
line_prof.add_function(Rcf_2)
print(line_prof(Rcf_2)(f, a, b, n))
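The point of the generator version is memory: a generator expression yields one node at a time instead of materializing the whole list. A small sketch with `sys.getsizeof` illustrates the difference in container size:

```python
import sys

n = 10**6
h_hat = 1 / n
nodes_list = [(i + 1/2) * h_hat for i in range(n)]   # materializes every node
nodes_gen = ((i + 1/2) * h_hat for i in range(n))    # lazy: one node at a time

print(sys.getsizeof(nodes_list))  # several MB for the list container alone
print(sys.getsizeof(nodes_gen))   # ~100 bytes, independent of n
```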
f_np = lambda x: np.exp(-x**2)
def Rcf_3(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
aux_vec = np.linspace(a, b, n+1)
nodes = (aux_vec[:-1]+aux_vec[1:])/2
return h_hat*np.sum(f(nodes))
line_prof.add_function(Rcf_3)
print(line_prof(Rcf_3)(f_np, a, b, n))
print(line_prof.print_stats())
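As a quick sanity check of the vectorized midpoint rule (a rewrite of `Rcf_3` under the same formulas; `rcf_vectorized` is a hypothetical name), the result can be compared against the closed form of the integral, since ∫₀¹ e^(−x²) dx = (√π/2)·erf(1):

```python
import math
import numpy as np

def rcf_vectorized(f, a, b, n):
    # Midpoint rule: nodes are the midpoints of n equal subintervals
    h_hat = (b - a) / n
    edges = np.linspace(a, b, n + 1)
    nodes = (edges[:-1] + edges[1:]) / 2
    return h_hat * np.sum(f(nodes))

approx_val = rcf_vectorized(lambda x: np.exp(-x**2), 0, 1, 10**4)
exact = math.sqrt(math.pi) / 2 * math.erf(1)  # closed form on [0, 1]
print(abs(approx_val - exact))
```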
%%file Rcf_line_profiler.py
import math
from pytest import approx
from scipy.integrate import quad
@profile #decorate the functions you want to profile with kernprof
def Rcf(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
if __name__ == "__main__":
a = 0
b = 1
f = lambda x: math.exp(-x**2)
obj, err = quad(f, a, b)
print("objective: {:0.6e}".format(obj))
n = 10**6
res=Rcf(f,a,b,n)
print("approximation: {:0.6e}".format(res))
print(res == approx(obj))
%%bash
$HOME/.local/bin/kernprof -l -v Rcf_line_profiler.py
%%file Rcf_line_profiler_2.py
import math
import line_profiler
from pytest import approx
from scipy.integrate import quad
def Rcf(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
if __name__ == "__main__":
a = 0
b = 1
f = lambda x: math.exp(-x**2)
obj, err = quad(f, a, b)
print("objective: {:0.6e}".format(obj))
n = 10**6
res = Rcf(f,a,b,n)
print("approximation: {:0.6e}".format(res))
print(res == approx(obj))
line_prof = line_profiler.LineProfiler()
print(line_prof(Rcf)(f,a,b,n))
line_prof.print_stats()
%%bash
python3 Rcf_line_profiler_2.py
from memory_profiler import memory_usage
t = (Rcf, (f, a, b, n))
print(memory_usage(t, max_usage=True))
start_mem = memory_usage(max_usage=True)
res = memory_usage(t, max_usage=True, retval=True)
print('start mem', start_mem)
print('max mem', res[0])
print('used mem', res[0]-start_mem)
print('fun output', res[1])
%load_ext memory_profiler
%memit #how much RAM this process is consuming
%memit -c Rcf(f, a, b, n)
%memit -c Rcf_2(f, a, b, n)
%memit -c Rcf_3(f_np, a, b, n)
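If `memory_profiler` is not available, the standard-library `tracemalloc` gives a comparable peak-memory figure; a minimal sketch for the allocation that dominates `Rcf`:

```python
import tracemalloc

tracemalloc.start()
# The allocation that dominates Rcf: the list of 10**6 node floats
nodes = [(i + 1/2) * 1e-6 for i in range(10**6)]
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("peak: {:0.1f} MiB".format(peak / 2**20))
```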
%%file Rcf_memory_profiler.py
import math
from pytest import approx
from scipy.integrate import quad
from memory_profiler import profile
@profile #decorate the functions you want to profile with memory_profiler
def Rcf(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
if __name__ == "__main__":
a = 0
b = 1
f = lambda x: math.exp(-x**2)
obj, err = quad(f, a, b)
print("objective: {:0.6e}".format(obj))
n = 10**6
res = Rcf(f,a,b,n)
print("approximation: {:0.6e}".format(res))
print(res == approx(obj))
import math
a = 0
b = 1
f = lambda x: math.exp(-x**2)
n = 10**6
from Rcf_memory_profiler import Rcf
%mprun -f Rcf Rcf(f, a, b, n)
%%bash
python3 Rcf_memory_profiler.py
%%file heapy_ex_1
import math
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f = lambda x: math.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
%%bash
python3 heapy_ex_1
%%file heapy_ex_1_b
import math
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f = lambda x: math.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
print("index 1:")
print(h[1])
%%bash
python3 heapy_ex_1_b
%%file heapy_ex_1_2
import math
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f = lambda x: math.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
print("index 1:")
print(h[1])
nodes = [a+(i+1/2)*h_hat for i in range(n)]
h = hp.heap()
print("After creating nodes")
print(h)
%%bash
python3 heapy_ex_1_2
%%file heapy_ex_1_3
import math
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f = lambda x: math.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
print("index 1:")
print(h[1])
nodes = [a+(i+1/2)*h_hat for i in range(n)]
h = hp.heap()
print("After creating nodes")
print(h)
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
h = hp.heap()
print("After sum")
print(h)
%%bash
python3 heapy_ex_1_3
%%file heapy_ex_2
import math
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f = lambda x: math.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
print("index 1:")
print(h[1])
nodes = (a+(i+1/2)*h_hat for i in range(n))
h = hp.heap()
print("After creating nodes")
print(h)
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
h = hp.heap()
print("After sum")
print(h)
%%bash
python3 heapy_ex_2
%%file heapy_ex_3
import numpy as np
from guppy import hpy
hp = hpy()
hp.setrelheap() #Everything allocated before this call will not be in the objects you get later.
a = 0
b = 1
n = 10**6
f_np = lambda x: np.exp(-x**2)
h_hat = (b-a)/n
h = hp.heap()
print(h)
print("index 1:")
print(h[1])
aux_vec = np.linspace(a, b, n+1)
nodes= (aux_vec[:-1]+aux_vec[1:])/2
h = hp.heap()
print("After creating nodes")
print(h)
res = h_hat*np.sum(f_np(nodes))
print("After sum")
print(h)
%%bash
python3 heapy_ex_3
c = 0
n = 5
x = [-1]*n
y = [1.5]*n
for i in range(n):
c += x[i]*y[i]
print(c)
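The loop above is the BLAS level-1 dot operation; in plain Python it can be written in one line with `zip` and `sum`:

```python
n = 5
x = [-1] * n
y = [1.5] * n

# Same dot product as the explicit loop, in one line
c = sum(xi * yi for xi, yi in zip(x, y))
print(c)  # -7.5
```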
alpha = 2
n = 5
x = [-2]*n
y = [0]*n
for i in range(n):
y[i] += alpha*x[i]
print(y)
alpha = 2
n = 5
x = [-2]*n
y = [3,4,-1,0,1]
for i in range(n):
y[i] += alpha*x[i]
print(y)
m = 2
n = 5
A = [[1.2]*n if i%2==0 else [1]*n for i in range(m)]
print(A)
x = [2]*n
y = [0]*m
for i in range(m):
for j in range(n):
y[i] += A[i][j]*x[j]
print(y)
x = [2]*n
y = [-1]*m
for i in range(m):
for j in range(n):
y[i] += A[i][j]*x[j]
print(y)
x = [2]*n
y = [0]*m
for j in range(n):
for i in range(m):
y[i] += A[i][j]*x[j]
print(y)
x = 2*np.ones(n)
y = np.zeros(m)
print(x)
print(y)
A = np.array([[1.2,1.2,1.2,1.2,1.2],[1,1,1,1,1]])
print(A)
for j in range(n):
y += A[:,j]*x[j]
print(y)
x = 2*np.ones(n)
y = np.zeros(m)
A = np.array([[1.2,1.2,1.2,1.2,1.2],[1,1,1,1,1]])
for i in range(m):
y[i] += A[i,:].dot(x)
print(y)
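Both gaxpy orderings above collapse into a single BLAS level-2 matrix-vector product with NumPy's `@` operator; a small check with the same matrix and vector:

```python
import numpy as np

A = np.array([[1.2, 1.2, 1.2, 1.2, 1.2],
              [1.0, 1.0, 1.0, 1.0, 1.0]])
x = 2 * np.ones(5)

# One matrix-vector product replaces the explicit double loop
y = A @ x
print(y)  # [12. 10.]
```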
print(np.show_config())
np.random.seed(2020)
m=10**4
r=10**4
A=np.random.rand(m,r)
np.random.seed(2021)
r=10**4
n=10**4
B=np.random.rand(r,n)
%timeit -n 1 -r 7 A@B
sessionInfo()
set.seed(2020)
m<-10**4
r<-10**4
A <- matrix(runif(m*r), m, r)
set.seed(2021)
r<-10**4
n<-10**4
B <- matrix(runif(r*n), r, n)
library(microbenchmark)
mbk<-microbenchmark(
A%*%B,
times=7
)
print(mbk)
sessionInfo()
set.seed(2020)
m<-10**4
r<-10**4
A <- matrix(runif(m*r), m, r)
set.seed(2021)
r<-10**4
n<-10**4
B <- matrix(runif(r*n), r, n)
library(microbenchmark)
mbk<-microbenchmark(
A%*%B,
times=1 #just one time as it takes too much time
)
print(mbk) #after ten minutes it didn't finish
%%file Rcf.py
import math
from pytest import approx
from scipy.integrate import quad
def Rcf(f,a,b,n):
"""Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (callable): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
if __name__=="__main__": #we add this block to run the Rcf function
a = 0
b = 1
f = lambda x: math.exp(-x**2)
obj, err = quad(f, a, b)
print("objective: {:0.6e}".format(obj))
n = 10**6
res = Rcf(f,a,b,n)
print("approximation: {:0.6e}".format(res))
print(res == approx(obj))
%%bash
/usr/bin/time -p python3 Rcf.py
%%bash
/usr/bin/time -v python3 Rcf.py
%%file norm_square.py
n = 10**7
vector = list(range(n))
norm = 0
for v in vector:
norm += v*v
%%bash
echo "-1" |sudo tee -a /proc/sys/kernel/perf_event_paranoid
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 20 python3 norm_square.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 20 python3 norm_square.py
%%bash
perf stat -S -r 20 python3 norm_square.py
%%bash
perf stat -S --all-cpus -A -r 20 python3 norm_square.py
%%file norm_square_numpy.py
import numpy as np
n = 10**7
vector = np.arange(n, dtype=np.float64)  # float dtype: the int64 dot product would overflow for this n
vector.dot(vector)
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 20 python3 norm_square_numpy.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 20 python3 norm_square_numpy.py
%%bash
perf stat -S -r 20 python3 norm_square_numpy.py
%%bash
perf stat -S --all-cpus -A -r 20 python3 norm_square_numpy.py
np.random.seed(2020)
m = 10**4
n = 10**4
A = np.random.rand(m,n)
file = 'A.txt'
np.savetxt(file,A)
%%file mult_matrix_vector.py
m = 10**4
n = 10**4
x = [2.5]*n
y = [0]*m
A = []
file = 'A.txt'
with open(file,'r') as f:
for l in f:
A.append([float(k) for k in l.replace('\n','').replace(' ',',').split(',')])
for j in range(n):
for i in range(m):
y[i]+=A[i][j]*x[j]
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector.py
%%file mult_matrix_vector_numpy.py
import numpy as np
m = 10**4
n = 10**4
x = 2.5*np.ones(n)
y = np.zeros(m)
file = 'A.txt'
A = np.loadtxt(file)
for j in np.arange(n):
y += A[:,j]*x[j]
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector_numpy.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector_numpy.py
%%file mult_matrix_vector_numpy_row_oriented.py
import numpy as np
m = 10**4
n = 10**4
x = 2.5*np.ones(n)
y = np.zeros(m)
file = 'A.txt'
A = np.loadtxt(file)
for i in np.arange(m):
y[i] += A[i,:].dot(x)
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector_numpy_row_oriented.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_vector_numpy_row_oriented.py
%%file mult_matrix_matrix_numpy_openblas.py
import numpy as np
m = 10**4
r = 10**4
n = 10**4
np.random.seed(2020)
m = 10**4
r = 10**4
A = np.random.rand(m,r)
np.random.seed(2021)
r = 10**4
n = 10**4
B = np.random.rand(r,n)
A@B
%%bash
perf stat -S -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_matrix_numpy_openblas.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_matrix_numpy_openblas.py
np.random.seed(2020)
m = 10**3
r = 10**3
A = np.random.rand(m,r)
fileA_10_3 = 'A_10_3.txt'
np.savetxt(fileA_10_3,A)
np.random.seed(2021)
m = 10**3
r = 10**3
B = np.random.rand(m,r)
fileB_10_3 = 'B_10_3.txt'
np.savetxt(fileB_10_3,B)
%%file mult_matrix_matrix_numpy_dot_product.py
import numpy as np
m = 10**3
n = 10**3
fileA_10_3 = 'A_10_3.txt'
fileB_10_3 = 'B_10_3.txt'
A = np.loadtxt(fileA_10_3)
B = np.loadtxt(fileB_10_3)
C = np.zeros((m,n))
for i in np.arange(m):
for j in np.arange(n):
C[i][j] += A[i,:].dot(B[:,j])
%%bash
perf stat -S --all-cpus -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_matrix_numpy_dot_product.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 7 python3 mult_matrix_matrix_numpy_dot_product.py
%%file mult_matrix_matrix_numpy_dot_product_gaxpy.py
import numpy as np
m = 10**3
n = 10**3
fileA_10_3 = 'A_10_3.txt'
fileB_10_3 = 'B_10_3.txt'
A = np.loadtxt(fileA_10_3)
B = np.loadtxt(fileB_10_3)
C = np.zeros((m,n))
for i in np.arange(m):
C[i,:] = A[i,:]@B
%%bash
perf stat -S --all-cpus -e cycles,instructions,cache-references,cache-misses -r 5 python3 mult_matrix_matrix_numpy_dot_product_gaxpy.py
%%bash
perf stat -S --all-cpus -A -e cycles,instructions,cache-references,cache-misses -r 5 python3 mult_matrix_matrix_numpy_dot_product_gaxpy.py
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Making batches
Step5: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step6: Building the graph
Step7: Embedding
Step8: Negative sampling
Step9: Validation
Step10: Training
Step11: Restore the trained network if you need to
Step12: Visualizing the word vectors
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(words[:100])
print(int_words[:100])
## Your code here
from collections import Counter
import random
threshold = 1e-5
int_word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {int_word: count / total_count for (int_word, count) in int_word_counts.items()}
prob_drop = {int_word: 1 - np.sqrt(threshold / freqs[int_word]) for int_word in int_word_counts}
train_words = [int_word for int_word in int_words if (1 - prob_drop[int_word]) > random.random()]
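The subsampling above follows Mikolov's formula P_drop(w) = 1 − √(t/f(w)); a quick numeric check with assumed frequencies (the 0.05 and 1e-6 values are made up for illustration):

```python
import math

t = 1e-5            # the threshold used above
freq_common = 0.05  # assumed relative frequency of a very common word like "the"
freq_rare = 1e-6    # assumed frequency of a rare word

p_drop_common = 1 - math.sqrt(t / freq_common)
p_drop_rare = 1 - math.sqrt(t / freq_rare)
print(round(p_drop_common, 3))  # ~0.986: common words are dropped almost always
print(p_drop_rare < 0)          # True: negative, so rare words are always kept
```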
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size + 1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
left_surrounding_words = words[start:idx]
right_surrounding_words = words[idx+1:stop+1]
target_words = set(left_surrounding_words + right_surrounding_words)
return target_words
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
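Because `get_target` draws a random window radius, the generator's output is not reproducible; a deterministic variant with a fixed radius (a hypothetical `get_batches_fixed`, for illustration only) makes the batching logic easy to verify:

```python
def get_batches_fixed(words, batch_size, R):
    # Deterministic variant of get_batches: fixed window radius R instead of a random one
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]  # drop the trailing partial batch
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx + batch_size]
        for ii in range(len(batch)):
            start = max(ii - R, 0)
            targets = batch[start:ii] + batch[ii+1:ii+R+1]
            y.extend(targets)
            x.extend([batch[ii]] * len(targets))  # one row per (input, target) pair
        yield x, y

x, y = next(get_batches_fixed(list(range(7)), batch_size=3, R=1))
print(x)  # [0, 1, 1, 2]
print(y)  # [1, 0, 2, 1]
```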
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, shape=(None), name="inputs")
labels = tf.placeholder(tf.int32, shape=(None, None), name="labels")
n_vocab = len(int_to_vocab)
n_hidden_embedding = 300
with train_graph.as_default():
embedding = tf.Variable(
tf.random_uniform(shape=(n_vocab, n_hidden_embedding), minval=-1, maxval=1),
name="embedding"
)
embed = tf.nn.embedding_lookup(embedding, inputs)
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
w = tf.Variable(
tf.truncated_normal(shape=(n_vocab, n_hidden_embedding), stddev=0.1),
name = "w"
)
b = tf.Variable(
tf.zeros(shape=(n_vocab)),
name = "b"
)
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(
weights = w,
biases = b,
labels = labels,
inputs = embed,
num_sampled = n_sampled,
num_classes = n_vocab
)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
print("Start Training")
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulating Thousands of Possible Allocations
Step2: Log Returns vs Arithmetic Returns
Step3: Single Run for Some Random Allocation
Step4: Great! Now we can just run this many times over!
Step5: Plotting the data
Step7: Mathematical Optimization
Step8: To fully understand all the parameters, check out
Step9: Optimization works as a minimization function, since we actually want to maximize the Sharpe Ratio, we will need to turn it negative so we can minimize the negative sharpe (same as maximizing the postive sharpe)
Step10: All Optimal Portfolios (Efficient Frontier)
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Download and get Daily Returns
aapl = pd.read_csv('AAPL_CLOSE',
index_col = 'Date',
parse_dates = True)
cisco = pd.read_csv('CISCO_CLOSE',
index_col = 'Date',
parse_dates = True)
ibm = pd.read_csv('IBM_CLOSE',
index_col = 'Date',
parse_dates = True)
amzn = pd.read_csv('AMZN_CLOSE',
index_col = 'Date',
parse_dates = True)
stocks = pd.concat([aapl, cisco, ibm, amzn],
axis = 1)
stocks.columns = ['aapl','cisco','ibm','amzn']
stocks.head()
mean_daily_ret = stocks.pct_change(1).mean()
mean_daily_ret
stocks.pct_change(1).corr()
stocks.head()
stock_normed = stocks/stocks.iloc[0]
stock_normed.plot()
stock_daily_ret = stocks.pct_change(1)
stock_daily_ret.head()
log_ret = np.log(stocks / stocks.shift(1))
log_ret.head()
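One reason log returns are preferred is that they add across periods, while arithmetic returns compound multiplicatively. A standalone sketch with made-up returns (plain Python, independent of the stock data above):

```python
import math

# Hypothetical two-period example: +5% then -2% arithmetic returns.
r1, r2 = 0.05, -0.02

# Arithmetic returns compound multiplicatively.
total_arith = (1 + r1) * (1 + r2) - 1

# Log returns simply add across periods.
log1, log2 = math.log(1 + r1), math.log(1 + r2)
total_log = log1 + log2

# Exponentiating the summed log return recovers the same growth factor.
assert abs(math.exp(total_log) - (1 + total_arith)) < 1e-12

# For small r, log(1 + r) is close to r, so the two are nearly interchangeable day-to-day.
print(round(log1, 4))  # 0.0488, close to 0.05
```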
log_ret.hist(bins = 100,
figsize = (12, 6));
plt.tight_layout()
log_ret.describe().transpose()
log_ret.mean() * 252
# Compute pairwise covariance of columns
log_ret.cov()
log_ret.cov() * 252 # multiply by days
# Set seed (optional)
np.random.seed(101)
# Stock Columns
print('Stocks')
print(stocks.columns)
print('\n')
# Create Random Weights
print('Creating Random Weights')
weights = np.array(np.random.random(4))
print(weights)
print('\n')
# Rebalance Weights
print('Rebalance to sum to 1.0')
weights = weights / np.sum(weights)
print(weights)
print('\n')
# Expected Return
print('Expected Portfolio Return')
exp_ret = np.sum(log_ret.mean() * weights) *252
print(exp_ret)
print('\n')
# Expected Variance
print('Expected Volatility')
exp_vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
print(exp_vol)
print('\n')
# Sharpe Ratio
SR = exp_ret/exp_vol
print('Sharpe Ratio')
print(SR)
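The expected-volatility line above is the quadratic form sqrt(w' cov w). A hand-rolled two-asset check with toy numbers (not the real covariances from the data) makes the formula concrete:

```python
import math

# Toy 2-asset portfolio: 60/40 weights and a made-up annualized covariance matrix.
w = [0.6, 0.4]
cov = [[0.04, 0.01],
       [0.01, 0.09]]

# Compute w^T (cov w) without numpy.
cov_w = [sum(cov[i][j] * w[j] for j in range(2)) for i in range(2)]
variance = sum(w[i] * cov_w[i] for i in range(2))
vol = math.sqrt(variance)

# Cross-check against the expanded formula w1^2*s11 + 2*w1*w2*s12 + w2^2*s22.
expanded = 0.6**2 * 0.04 + 2 * 0.6 * 0.4 * 0.01 + 0.4**2 * 0.09
assert abs(variance - expanded) < 1e-12
print(round(variance, 4))  # 0.0336
```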
num_ports = 15000
all_weights = np.zeros((num_ports, len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)
for ind in range(num_ports):
# Create Random Weights
weights = np.array(np.random.random(4))
# Rebalance Weights
weights = weights / np.sum(weights)
# Save Weights
all_weights[ind,:] = weights
# Expected Return
ret_arr[ind] = np.sum((log_ret.mean() * weights) *252)
# Expected Variance
vol_arr[ind] = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
# Sharpe Ratio
sharpe_arr[ind] = ret_arr[ind] / vol_arr[ind]
sharpe_arr.max()
sharpe_arr.argmax()
max_sr_idx = sharpe_arr.argmax()
all_weights[max_sr_idx, :]
max_sr_ret = ret_arr[max_sr_idx]
max_sr_vol = vol_arr[max_sr_idx]
plt.figure(figsize = (12, 8))
plt.scatter(vol_arr,
ret_arr,
c = sharpe_arr,
cmap = 'plasma')
plt.colorbar(label = 'Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')
# Add red dot for max SR
plt.scatter(max_sr_vol,
max_sr_ret,
c = 'red',
s = 50,
edgecolors = 'black')
def get_ret_vol_sr(weights):
'''Takes in weights; returns an array of return, volatility, sharpe ratio'''
weights = np.array(weights)
ret = np.sum(log_ret.mean() * weights) * 252
vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
sr = ret/vol
return np.array([ret, vol, sr])
from scipy.optimize import minimize
help(minimize)
def neg_sharpe(weights):
return get_ret_vol_sr(weights)[2] * -1
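The sign-flip trick is generic: any maximization can be handed to a minimizer by negating the objective. A toy sketch, with a crude grid search standing in for SLSQP and a made-up function f:

```python
# f(x) = -(x - 2)^2 + 3 has its maximum at x = 2, where f(2) = 3.
def f(x):
    return -(x - 2) ** 2 + 3

def neg_f(x):
    # Minimizing -f is the same as maximizing f.
    return -f(x)

xs = [i / 100 for i in range(-500, 501)]
best_x = min(xs, key=neg_f)
print(best_x)  # 2.0
```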
# Constraints
def check_sum(weights):
'''
Returns 0 if sum of weights is 1.0
'''
return np.sum(weights) - 1
# By the minimize convention, each constraint should be a function that returns zero when the condition is satisfied
cons = ({'type' : 'eq', 'fun': check_sum})
# 0-1 bounds for each weight
bounds = ((0, 1), (0, 1), (0, 1), (0, 1))
# Initial Guess (equal distribution)
init_guess = [0.25, 0.25, 0.25, 0.25]
# Sequential Least SQuares Programming (SLSQP).
opt_results = minimize(neg_sharpe,
init_guess,
method = 'SLSQP',
bounds = bounds,
constraints = cons)
opt_results
opt_results.x
get_ret_vol_sr(opt_results.x)
# Our returns go from 0 to somewhere around 0.3
# Create a linspace number of points to calculate x on
frontier_y = np.linspace(0, 0.3, 100) # Change 100 to a lower number for slower computers!
def minimize_volatility(weights):
return get_ret_vol_sr(weights)[1]
frontier_volatility = []
for possible_return in frontier_y:
# function for return
cons = ({'type':'eq','fun': check_sum},
{'type':'eq','fun': lambda w: get_ret_vol_sr(w)[0] - possible_return})
result = minimize(minimize_volatility,
init_guess,
method = 'SLSQP',
bounds = bounds,
constraints = cons)
frontier_volatility.append(result['fun'])
plt.figure(figsize = (12, 8))
plt.scatter(vol_arr,
ret_arr,
c = sharpe_arr,
cmap = 'plasma')
plt.colorbar(label = 'Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')
# Add frontier line
plt.plot(frontier_volatility,
frontier_y,
'g--',
linewidth = 3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Keep only main and second S1
Step2: Select events with good S1 areas
Step3: Process S1s
Step4: There is essentially no difference between SR0 and SR1. This indicates the drift field has at most a very limited influence on the S1.
Step5: While it's expected that the average KS-distance is larger for the secondary than the primary S1 (as it has fewer photons), we see the secondary S1 has a large population of outliers. The primary S1 has only a handful of outliers. We'll focus on the primary S1 model below; for the moment, let's examine some of the outlying waveforms
Step6: For the primary S1, outliers have a large secondary signal merged in, perhaps a big afterpulse, a single electron, or a combination of these. It could actually be the secondary Kr S1, though then the secondary Kr S1 as identified by pax has to arise from some artifact.
Step7: Since we remove 3 outliers out of ~50k waveforms, the difference is essentially negligible; we need not have bothered recomputing the waveform.
Step8: Extract n_sampler
Step10: Model
Step12: Fitting
Step13: Simulate, write, load, process, and load the processed data... Here this goes at 5000 S1s/sec (on my laptop).
Step14: What went wrong in my SR0 S1 shape analysis (https
Step15: Compare widths
Step16: Positive
Step17: According to the current afterpulse config, the total afterpulse probability is 1 - 2 % (it varies per PMT).
Step18: Check if 50% area computation is the same
Step19: Bla
Step20: OLD JUNK
Step21: OLD junk
|
<ASSISTANT_TASK:>
Python Code:
import os
def load_data(filename):
if not os.path.exists(filename + '.hdf5'):
raise FileNotFoundError("Need the S1 metadata dataframe from file %s.hdf5" % filename)
if not os.path.exists(filename + '.npz'):
raise FileNotFoundError("Need the S1 waveforms from file %s.npz" % filename)
data = pd.read_hdf(filename + '.hdf5')
waveforms = np.load(filename + '.npz')['waveforms'].T
return {'data': data, 'waveforms': waveforms}
def cut(d, mask):
d['data'] = d['data'][mask]
d['waveforms'] = d['waveforms'][:, mask]
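`cut` applies one boolean mask to the metadata rows and to the waveform columns so the two stay aligned. The same pattern in plain Python, with toy values:

```python
# Toy stand-ins: one value per event, and waveform rows of one sample per event.
data = [10, 20, 30, 40]
waveforms = [[1, 2, 3, 4], [5, 6, 7, 8]]  # shape: (n_samples, n_events)
mask = [True, False, True, False]

# Filter both structures with the same mask to keep them aligned.
data = [d for d, keep in zip(data, mask) if keep]
waveforms = [[v for v, keep in zip(row, mask) if keep] for row in waveforms]

print(data)       # [10, 30]
print(waveforms)  # [[1, 3], [5, 7]]
```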
sr1 = load_data('sr1_kr_s1s')
sr0 = load_data('sr0_kr_s1s')
dsets = {'sr1': sr1, 'sr0':sr0}
def select_primary(data):
return data['s1_a_center_time'] == data['center_time']
for d in dsets.values():
# Keep only main and second S1
mask = ((d['data']['s1_a_center_time'] == d['data']['center_time']) |
(d['data']['s1_b_center_time'] == d['data']['center_time'])).values
cut(d, mask)
# Check Secondary is always after primary
assert np.all((d['data']['s1_a_center_time'] < d['data']['s1_b_center_time']).values)
# Add corrected areas
d['data']['cs1_a'] = d['data']['s1_a'] * d['data']['int_a_s1_area_correction']
d['data']['cs1_b'] = d['data']['s1_b'] * d['data']['int_b_s1_area_correction']
# data = hax.cuts.selection(data, mask, desc='Keep main and second S1')
# waveforms = waveforms[:, mask]
# ### Compute corrected S1 area
# prim_mask = select_primary(data).values
# c_area = np.copy(data['cs1_b'].values) # Don't forget copy here! Nasty business otherwise!
# c_area[prim_mask] = data['cs1_a'][prim_mask]
# data['c_area'] = c_area
# assert np.all(data[prim_mask]['cs1_a'] == data[prim_mask]['c_area'])
# assert np.all(data[True ^ prim_mask]['cs1_b'] == data[True ^ prim_mask]['c_area'])
primary_area_cut = np.array((180, 320))
secondary_area_cut = np.array((40, 160))
for dname, d in dsets.items():
Histdd(d['data']['cs1_a'], d['data']['cs1_b'],
bins=(np.linspace(0, 500, 100), np.linspace(0, 200, 100))).plot(log_scale=True)
draw_box(primary_area_cut, secondary_area_cut, edgecolor='red')
plt.xlabel("Corrected primary S1 area")
plt.ylabel("Corrected secondary S1 area")
plt.title(dname)
finish('selection_%s' % dname)
for d in dsets.values():
# Apply area cut
mask = (primary_area_cut[0] < d['data']['cs1_a'].values) & (
d['data']['cs1_a'].values < primary_area_cut[1])
mask &= (secondary_area_cut[0] < d['data']['cs1_b'].values ) & (
d['data']['cs1_b'].values < secondary_area_cut[1])
cut(d, mask)
# Normalize waveforms
d['data']['waveform_area'] = d['waveforms'].sum(axis=0)
d['waveforms'] /= d['data']['waveform_area'].values[np.newaxis, :]
# import warnings
# with warnings.catch_warnings():
# warnings.filterwarnings('error')
alignment_options = dict(method='area_fraction', area_fraction=0.1)
proc = xs.MatrixProcessor(ts, alignment_options=alignment_options)
for d in dsets.values():
prim_mask = select_primary(d['data']).values
d['prim'] = dict(waveforms=d['waveforms'][:, prim_mask],
data=d['data'][prim_mask],
label='Primary S1 (32.2 keV)')
d['secn'] = dict(waveforms=d['waveforms'][:, ~prim_mask],
data=d['data'][~prim_mask],
label='Secondary S1 (9.4 keV)')
# Do the processing
for q in d['prim'], d['secn']:
q.update(proc.process(q['waveforms'], full=True))
stats.norm.cdf(-1), stats.norm.cdf(1)
t_range = (-100, 600)
def comp_both():
for dname, d in dsets.items():
for q in 'prim', 'secn':
plt.plot(ts, d[q]['average_waveform'],
drawstyle='steps-mid',
linestyle={'sr1': '-', 'sr0': ':'}[dname],
color={'prim': 'b', 'secn': 'g'}[q],
label={'sr0': '\SI{117}{V/cm}', 'sr1': '\SI{81}{V/cm}'}[dname] + ': ' + d[q]['label'])
plt.xlim(-200, 500)
plt.ylim(1e-6, None)
inset_plot(comp_both, inset_log=True, inset_lims=(1e-5, 1e-1))
plt.tight_layout()
finish('s1_shape_comparison')
for dname, d in dsets.items():
for q in 'prim', 'secn':
plt.hist(d[q]['data_model_ks'], bins=np.linspace(0, 0.3, 100), histtype='step', label=d[q]['label'])
plt.xlabel('KS distance between model and data')
plt.ylabel('Observations')
plt.yscale('log')
plt.legend(loc='upper right')
ks_cut = 0.15
#labeled_vline(ks_cut, "Cut (primary only)", 10, color='red', textoffset=0.01)
plt.title(dname)
finish('gof_with_template_%s' % dname)
# mh = Histdd(prim['ks'], secn['ks'], bins=100)
# mh.plot(log_scale=True)
# # plt.ylim(0, 0.4)
# # plt.xlim(0, 0.2)
# # Verify alignment is done correctly
# for wv_i in range(10):
# w = prim_wvs[:, wv_i]
# plt.plot(ts, w, linestyle='steps-mid')
# plt.plot(ts, mm[:, i_noshift + t_shifts[wv_i]//dt], linestyle='steps-mid')
# plt.xlim(-150, 150)
# plt.show()
# Plot some of the worst offenders
for d in [dsets['sr1']['prim'], dsets['sr1']['secn']]:
#for wv_i in np.random.randint(0, high=wvs.shape[1], size=3):
print(d['label'])
for wv_i in np.argsort(d['data_model_ks'])[-3:]:
w = d['waveforms'][:, wv_i]
plt.plot(ts, w, linestyle='steps-mid', label='Data')
plt.plot(ts, d['model_matrix'][:, d['i_noshift'] + d['time_shifts'][wv_i]//dt], linestyle='steps-mid',
label='Template')
#plt.xlim(-500, 500)
plt.xlabel("Time (ns) since center of gravity")
plt.legend(loc='upper right')
plt.ylabel("Amplitude (fraction of total)")
finish('waveform_%d' % wv_i)
# prim_outlier_mask = prim['ks'] < ks_cut
# prim_wvs = prim_wvs[:, prim_outlier_mask]
# old_model = prim['mean_template']
# plabel = prim['label']
# prim = doit(prim_wvs)
# prim['label'] = plabel
# (~prim_outlier_mask).sum()/len(prim_outlier_mask)
# np.sum(True ^ prim_outlier_mask)
# len(prim_outlier_mask)
# plt.plot(ts, prim['mean_template'], label='Template')
# plt.plot(ts, old_model - prim['mean_template'], label='Difference')
# plt.legend(loc='upper right')
# plt.ylim(1e-11, None)
# plt.yscale('log')
dpe_fraction=0.16
for dname, d in dsets.items():
x = np.linspace(0, 400, 1000)
h1 = Hist1d(d['prim']['data']['area'].values, xs.utils.centers_to_edges(x))
h1.bin_edges /= (1 + dpe_fraction)
h1.plot()
d['prim']['n_sampler'] = h1.get_random
plt.title(dname)
plt.show()
q = np.load('../spe_pulse_shape/x1t_sr1_spe.npz')
spe_y = q['ys']
spe_t = q['ts']
spe_y_cum = np.cumsum(spe_y)
spe_pulse = interpolate.interp1d(spe_t, spe_y_cum,
kind='linear', bounds_error=False, fill_value=(0, 1))
def simulate_s1s(n_sampler, n_photons=int(1e7),
ts=ts,
t1=3.1, t3=24, fs=0.2, tts=3, **kwargs):
'''Return wv_matrix for simulated S1s consisting of n_photons in total
* wv_matrix and time_matrix have shape (n_samples, n_s1s). t_shift_vector has shape (n_s1s).
* wv_matrix contains amplitude, time_matrix contains center time of digitizer bin to which amplitude applies
* t_shift_vector contains, for each S1, the time shift applied to align it,
e.g. for alignment on maximum, the time between the interaction time and the maximum time.
'''
kwargs.setdefault('alignment_options', alignment_options)
kwargs.setdefault('spe_pulse', spe_pulse)
times = np.zeros(n_photons)
# Singlet/triplet decay
n_singlets = np.random.binomial(n=n_photons, p=fs)
times += np.concatenate([
np.random.exponential(t1, n_singlets),
np.random.exponential(t3, n_photons - n_singlets)
])
np.random.shuffle(times) # So we don't put all triplets in the later S1s..
# PMT transition time spread / other effects
times += np.random.normal(0, tts, size=n_photons)
return xs.simulate_peak_waveforms(times, ts=ts, n_sampler=n_sampler, **kwargs)
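The singlet/triplet part of the simulation is a two-component exponential mixture. A stdlib-only sketch of drawing such decay times (the tau and fraction values are just the defaults above, not fit results):

```python
import random

random.seed(42)
t1, t3, fs = 3.1, 24.0, 0.2  # singlet tau, triplet tau, singlet fraction (ns)
n = 100_000

times = []
for _ in range(n):
    # Pick a component, then draw an exponential decay time with that tau.
    tau = t1 if random.random() < fs else t3
    times.append(random.expovariate(1 / tau))

# The sample mean should sit near the mixture mean fs*t1 + (1-fs)*t3.
mean = sum(times) / n
expected = fs * t1 + (1 - fs) * t3
print(round(expected, 2))  # 19.82
assert abs(mean - expected) < 0.5
```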
# wv_matrix = simulate_s1s(n_photons=int(1e5), fs=0)
# for i in range(10):
# plt.plot(ts, wv_matrix[:, i], linestyle='steps-mid')
# plt.xlim(-500, 500)
# t_matrix, t_shifts = xeshape.utils.aligned_time_matrix(ts, wv_matrix)
d = dsets['sr1']['prim']
wv_matrix = simulate_s1s(n_sampler=d['n_sampler'], fs=0, t3=40, tts=12)
res = proc.process(wv_matrix)
plt.plot(ts, d['average_waveform'], linestyle='steps-mid')
plt.plot(ts, res['average_waveform'], linestyle='steps-mid')
#plt.plot(ts, secn['mean_template'], linestyle='steps-mid')
plt.xlim(-100, 300)
#fit_range = (-100, 301)
#fit_mask = ((fit_range[0] <= ts)) & (ts <= fit_range[1])
fit_mask = np.ones_like(ts).astype(np.bool)
def gof(dset='sr1', s1type='prim', echo=True, **params):
'''Return goodness of fit of parameters. The parameters also select which dataset to use.'''
d = dsets[dset][s1type]
data_template = d['average_waveform']
sim_template = proc.process(simulate_s1s(n_sampler=d['n_sampler'], **params))['average_waveform']
ks = np.abs(np.cumsum(sim_template[fit_mask]) - np.cumsum(data_template[fit_mask])).max()
if echo:
print('ks=%0.5f, %s' % (ks, str(params)))
return ks
# Don't trust regular minimizers for MC work: they will ask for gradients and you will die
# Should use blueice integration.. too lazy
optresult = optimize.brute(
lambda x: gof(fs=0, t3=x[0], tts=x[1]),
#[50, 12],
#bounds=[[20, 70], [5, 25]],
#options=dict(maxfev=100),
#method='Powell',
# The first time you fit, obviously choose a wider range!
#ranges=[(42.5, 44), (12, 13)],
#ranges=[(42, 44), (12, 13.5)],
ranges=[(41, 43), (12, 14)],
Ns=10,
finish=None,
)
optresult
len(dsets['sr1']['prim']['data'])
#D = 0.00552
D = 5.6e-3
c_alpha = (-0.5 * np.log(0.5/2))**0.5
c_alpha**2 * 2 / D**2
d = dsets['sr1']['prim']
sim_wvs = proc.process(simulate_s1s(n_sampler=d['n_sampler'], n_photons=int(1e7),
fs=0, t3=optresult[0], tts=optresult[1]),
full=True)
h1 = Hist1d(sim_wvs['areas'] * 10, bins=100)
#Hist1d(d['areas'] * 10, bins=h1.bin_centers).plot()
h2 = Hist1d(d['data']['s1'].values, bins=h1.bin_centers)
for h, label in zip([h1, h2], ['Simulation', 'Data']):
plt.plot(h.bin_centers, h.normalized_histogram, linestyle='steps-mid', label=label)
plt.xlabel("Area (PE)")
plt.ylabel("Fraction of observations")
plt.legend(loc='upper right')
finish('area_comparison')
np.median(sim_wvs['areas']) * 10 / np.median(d['data']['s1'].values)
def delta_pulse(ts):
y = np.zeros_like(ts)
y[ts>0] = 1
return y
sim_wvs_delta_pulse = proc.process(simulate_s1s(n_sampler=d['n_sampler'], n_photons=int(1e7),
spe_pulse=delta_pulse,
fs=0, t3=optresult[0], tts=optresult[1]),
full=True)
from pax import configuration
pax_config = configuration.load_configuration('XENON1T')
y = np.cumsum(pax_config['WaveformSimulator']['pe_pulse_ys'])
y /= y[-1]
old_pulse = interpolate.interp1d(pax_config['WaveformSimulator']['pe_pulse_ts'], y,
kind='linear', bounds_error=False, fill_value=(0, 1))
sim_wvs_old = proc.process(simulate_s1s(n_sampler=d['n_sampler'], fs=0, n_photons=int(2e6),
t3=pax_config['WaveformSimulator']['s1_decay_time'],
spe_pulse=old_pulse,
tts=pax_config['WaveformSimulator']['pmt_transit_time_spread']),
full=True)
d['wv_matrix'].shape, sim_wvs['wv_matrix'].shape
def bla():
plt.plot(ts, d['average_waveform'],
linestyle='steps-mid', label='Data')
plt.plot(ts, sim_wvs['average_waveform'],
linestyle='steps-mid', label='Simulation')
plt.plot(ts, sim_wvs_delta_pulse['average_waveform'],
drawstyle='steps-mid', dashes=(0.5, 0.5),
label='Simulation, $\delta$ pulse')
# plt.plot(ts, sim_wvs_old['average_waveform'],
# drawstyle='steps-mid',
# label='Simulation, current model')
plt.xlim(-50, 400)
inset_plot(bla, inset_log=True, inset_lims=(1e-5, 1e-1))
finish('s1_shape_fit')
# from pax import configuration
# ap_conf = configuration.load_configuration('pmt_afterpulsing')
# for name, c in ap_conf['WaveformSimulator']['each_pmt_afterpulse_types'][126].items():
# print(c)
f, axes = plt.subplots(2, 2, sharey=True)
axes = axes.ravel()
def robust_std(x):
return (np.percentile(x, 100 * stats.norm.cdf(1)) -
np.percentile(x, 100 * stats.norm.cdf(-1)))/2
for af_i, af in enumerate([0.25, 0.5, 0.75, 0.9]):
ax = axes[af_i]
plt.sca(ax)
bins = dict()
means = []
stds = []
for q, label, color in zip((d, sim_wvs), ('Data', 'Sim.'), 'bg'):
x = q['fraction_area_width'][af] * 10
means.append(x.mean())
# Use robust estimate of std, don't want to get massive disagreement because of a few outliers
stds.append(robust_std(x))
if af not in bins:
# Figure out sensible bins:
mu = x.mean()
bins[af] = np.linspace(mu - 25, mu + 25, 50)
h1 = Hist1d(x, bins[af]);
plt.plot(h1.bin_centers, h1.normalized_histogram,
color=color, linestyle='steps-mid',
label=label)
# ytext = h1.normalized_histogram.max() * 0.35
# plt.plot(means, 0.025 * np.ones(2), c='k')
# plt.text(np.mean(means), 0, "%0.2f ns" % abs(means[1] - means[0]),
# horizontalalignment='center')
plt.text(0.59, 0.85,
(r'$\Delta \mu / \mu = %0.1f \%%$' + "\n" + r'$\Delta \sigma / \sigma = %0.1f \%%$') % (
-100 * (means[1] - means[0])/means[1],
-100 * (stds[1] - stds[0])/stds[1]),
transform=ax.transAxes,
verticalalignment='top',
)
plt.text(0.1, 0.85,
"%d\%%" % (af * 100),
transform=ax.transAxes,
verticalalignment='top',
#horizontalalignment='center'
)
plt.grid(alpha=0.2)
plt.xlim(h1.bin_edges[0], h1.bin_edges[-1])
axes[0].legend(loc='lower left', handlelength=1, frameon=False)
axes[2].set_ylabel("Fraction of observations")
axes[2].set_xlabel("Width (ns) of central area fraction")
#plt.ylabel("Fraction of total")
#plt.ylim(0, plt.ylim()[-1] * 1.2)
#plt.yscale('log')
plt.tight_layout()
plt.gcf().subplots_adjust(hspace=0.2, wspace=0.05)
finish('s1_width_comparison')
bla = proc.process(simulate_s1s(n_dist=stats.randint(69, 70),
n_photons=int(1e6),
fs=0, t3=optresult[0], tts=optresult[1]),
full=True)
robust_std(sim_wvs['fraction_area_width'][0.5] * 10)
robust_std(bla['fraction_area_width'][0.5] * 10)
# plt.scatter(bla['areas'] * 10,
# bla['fraction_area_width'][0.5] * 10, marker='.')
x = np.linspace(0, 5000, 100)
ap_hist = Hist1d(bins=x)
for name, c in ap_conf['WaveformSimulator']['each_pmt_afterpulse_types'][126].items():
tps = c['time_parameters']
if c['time_distribution'] == 'normal':
dist = stats.norm(loc=tps['loc'], scale=tps['scale'])
elif c['time_distribution'] == 'uniform':
dist = stats.uniform(tps['low'], tps['high'])
#print(c['time_parameters'], c['time_distribution'])
y = np.diff(dist.cdf(x)) * c['p']
ap_hist += y
plt.plot((x[1:] + x[:-1])/2, y, label=name)
ap_hist.plot(c='k', label='Total')
plt.legend(loc='upper right')
#plt.yscale('log')
plt.show()
ap_hist.histogram.sum()
for af in [0.25, 0.5, 0.75, 0.9]:
wbins = np.arange(0, 251)
for q, label, color in zip((d, sim_wvs), ('Data', 'Simulation'), 'bg'):
h1 = Hist1d(q['fraction_area_width'][af] * 10, wbins);
plt.plot(h1.bin_centers, h1.normalized_histogram,
color=color, linestyle='steps-mid',
label=label if af == 0.25 else None)
plt.text(h1.mean, min(h1.normalized_histogram.max() * 1.2, 0.7), "%d\%%" % (af * 100),
horizontalalignment='center')
plt.legend(loc='upper right')
plt.xlabel("Width (ns) of central area fraction")
plt.ylabel("Fraction of total")
plt.ylim(0, 0.2)
plt.tight_layout()
plt.show()
# for wv_i in np.random.choice(np.where((d['width_std'] > 70))[0], 5):
# plt.plot(ts - d['time_shifts'][wv_i], d['waveforms'][:, wv_i], c='k', alpha=0.3, linestyle='steps-mid')
# plt.plot(ts, sim_wvs['average_waveform'], c='b', linestyle='steps-mid')
# plt.xlim(None, None)
# plt.show()
wbins = np.arange(50, 100)
for q in (d, sim_wvs):
plt.hist(q['width_std'], wbins, histtype='step', normed=True);
from scipy import stats
mu = 1
sigma = 0.5
x = np.linspace(-1, 3, 100)
dist = stats.truncnorm(-mu/sigma, float('inf'), loc=mu, scale=sigma)
plt.plot(x, dist.pdf(x))
plt.axvline(dist.mean(), color='r')
a = dsets['sr1']['prim']['fraction_area_width'][0.5] * 10
b = dsets['sr1']['prim']['data']['s1_range_50p_area'].values
np.median(np.abs((a-b)/a))
#1/data['run_number'] + data['event_number']
np.in1d(data['left']
Histdd(data2['area'], data2['range_50p_area'],
bins=(np.linspace(0, 500, 100), 100)).plot(log_scale=True)
# (the SEs are binned based on the average SE width at their position, not based on their actual width (which is very stochastic))
raise ZeroDivisionError
#get_params(dict())['hit_distribution']
# class S1WaveformSource(xeshape.bi.WaveformSource):
# def simulate_wv_matrix(self, n_events):
# c = self.config
# # Simulat a bit more photons than we need
# n_photons = (c['hit_dist'].mean() + c['hit_dist'].std()) * n_events
# wv_matrix = simulate_s1s(n_photons, **c)
# return wv_matrix[:,:n_events]
# config = dict(
# sources=[dict(name='Freddie')],
# analysis_space=(('t', ts),),
# default_source_class=S1WaveformSource,
# n_events_for_model=int(1e3),
# )
# config.update(default_params)
# config['fs'] = 0
from pax import units
plt.hist(data['left'] * 10 * units.ns / units.ms, bins=np.linspace(0, 2.5, 100));
plt.yscale('log')
mh = Histdd(data['left'] * 10 * units.ns / units.ms,
data['c_area'],
bins=(np.linspace(0, 2.5, 100), np.linspace(0, 450, 100)))
mh.plot(log_scale=True)
#plt.yscale('log')
plt.xlabel("Time (ms) in event")
plt.ylabel("Corrected S1 Area (pe)")
x = 0.5
#t1 = plt.text(x, 4, "Few hits", backgroundcolor='white')
#t2 = plt.text(x, 20, "SE")
t3 = plt.text(x, 80, "9.4 keV")
t4 = plt.text(x, 250, "32 keV")
t5 = plt.text(1.5, 170, "Pile-up / early trigger", verticalalignment='center')
for t in [t3, t4, t5]:
t.set_bbox(dict(facecolor='white', alpha=0.8, edgecolor='white'))
#labeled_hline(50, "Cut below", 0.2, color='red', textoffset=10)
plt.tight_layout()
prim_mask = select_primary(data)
for secondary in False, True:
d = data[secondary ^ prim_mask]
Histdd((d['center_time'] - 1 * units.ms) / units.us,
d['c_area'],
bins=(np.linspace(0, 3, 100),
np.logspace(1, 4, 100))).plot(log_scale=True)
offset = 0.6
if secondary:
plt.axvspan(offset + 0.6, offset + 2, color='red', alpha=0.1)
plt.text(offset + (0.6+2)/2, 1000, "Preselection\npreferred", horizontalalignment='center', color='red')
plt.axhspan(*secondary_area_cut, color='blue', alpha=0.2)
plt.text(0.2,
np.exp(np.mean(np.log(secondary_area_cut))),
"Area\nsel.",
horizontalalignment='center', verticalalignment='center',
color='blue')
plt.title("Secondary S1")
else:
#labeled_vline(offset, "Trigger offset", 1500, linestyle=':', textoffset = 0.1, color='r')
plt.axhspan(*primary_area_cut, color='blue', alpha=0.2)
plt.text(0.2,
np.exp(np.mean(np.log(primary_area_cut))),
"Area\nsel.",
horizontalalignment='center', verticalalignment='center',
color='blue')
t = plt.text(1, 700, "Merged with SE?")
plt.title("Primary S1")
plt.yscale('log')
plt.xlabel("Center time (\si{\micro s}) since trigger time")
plt.ylabel("Corrected S1 Area (pe)")
plt.show()
# plt.hist(data2['cs1_b'], bins=np.logspace(1, 4, 100))
# plt.xscale('log')
# plt.yscale('log')
hax.cuts.history(data2)
plt.hist(data2['cs1_a'], np.logspace(0, 4, 100), histtype='step');
plt.hist(data2['cs1_b'], np.logspace(0, 4, 100), histtype='step');
plt.xscale('log')
# pm.sum() / len(data2)
hax.cuts.history(data2)
def select_primary(data):
return data['s1_a_center_time'] == data['center_time']
prim_mask = select_primary(data2)
for secondary in False, True:
d = data2[secondary ^ prim_mask]
Histdd((d['center_time'] - 1 * units.ms) / units.us,
d['area'],
bins=(np.linspace(0, 3, 100),
np.logspace(1, 4, 100))).plot(log_scale=True)
offset = 0.6
if secondary:
plt.axvspan(offset + 0.6, offset + 2, color='red', alpha=0.2)
plt.text(offset + (0.6+2)/2, 1000, "Preselection", horizontalalignment='center', color='red')
plt.axhspan(*secondary_area_cut, color='blue', alpha=0.2)
plt.text(0.2,
np.exp(np.mean(np.log(secondary_area_cut))),
"Area\nsel.",
horizontalalignment='center', verticalalignment='center',
color='blue')
plt.title("Secondary S1")
else:
labeled_vline(offset, "Trigger offset", 1500, linestyle=':', textoffset = 0.1, color='r')
plt.axhspan(*primary_area_cut, color='blue', alpha=0.2)
plt.text(0.2,
np.exp(np.mean(np.log(primary_area_cut))),
"Area\nsel.",
horizontalalignment='center', verticalalignment='center',
color='blue')
t = plt.text(1, 700, "Merged with SE?")
plt.title("Primary S1")
plt.yscale('log')
plt.xlabel("Center time (\si{\micro s}) since trigger time")
plt.ylabel("Area (pe)")
plt.show()
plt.hist((data['s1_a_center_time'] - 1 * units.ms) / units.us,
bins=np.linspace(0, 3, 100), histtype='step');
plt.hist((data['s1_b_center_time'] - 1 * units.ms) / units.us,
bins=np.linspace(0, 3, 100), histtype='step');
plt.yscale('log')
(data['s1_b_center_time'] - data['s1_a_center_time']).min()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hubs
Step2: Exercise
Step3: Approach 2
Step4: If you inspect the dictionary closely, you will find that node 51 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.
Step5: Exercise
Step7: What can you deduce about the structure of the network, based on this visualization?
Step8: If you write an algorithm that runs breadth-first, the recursion pattern is likely to follow what we have done above. If you do a depth-first search (i.e. DFS), the recursion pattern is likely to look a bit different. Take it as a challenge exercise to figure out how a DFS looks like.
Step9: NetworkX also has other shortest path algorithms implemented.
Step10: Incidentally, the node list is in order as well.
Step11: Challenge Exercise (at home)
Step13: Challenge Exercises (at home)
Step14: Hubs Revisited
Step15: Exercise
Step16: Think about it...
|
<ASSISTANT_TASK:>
Python Code:
# Load the sociopatterns network data.
G = cf.load_sociopatterns_network()
# Let's find out the number of neighbors that individual #7 has.
len(G.neighbors(7))
# Possible Answers:
# sorted(G.nodes(), key=lambda x:len(G.neighbors(x)), reverse=True)
sorted([(n, G.neighbors(n)) for n in G.nodes()], key=lambda x: len(x[1]), reverse=True)
nx.degree_centrality(G)
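Degree centrality is just degree divided by (n - 1), the maximum possible number of neighbors. A networkx-free check on a toy star graph:

```python
# A star on 4 nodes: node 0 connects to everyone else.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
n = len(adj)

# Degree centrality: degree / (n - 1).
centrality = {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}
print(centrality[0])  # 1.0 -> the hub is adjacent to every other node
print(centrality[1])  # ~0.333
```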
# Possible Answers:
fig = plt.figure(0)
# Get a list of degree centrality scores for all of the nodes.
degree_centralities = list(nx.degree_centrality(G).values())
# Plot the histogram of degree centralities.
plt.hist(degree_centralities)
# Set the plot title.
plt.title('Degree Centralities')
fig = plt.figure(1)
neighbors = [len(G.neighbors(node)) for node in G.nodes()]
plt.hist(neighbors)
# plt.yscale('log')
plt.title('Number of Neighbors')
fig = plt.figure(2)
plt.scatter(degree_centralities, neighbors, alpha=0.1)
plt.xlabel('Degree Centralities')
plt.ylabel('Number of Neighbors')
from circos import CircosPlot
import numpy as np
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes)) # be sure to use viridis!
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/sociopatterns.png', dpi=300)
def path_exists(node1, node2, G):
'''
This function checks whether a path exists between two nodes (node1, node2) in graph G.
Special thanks to @ghirlekar for suggesting that we keep track of the "visited nodes" to
prevent infinite loops from happening.
Reference: https://github.com/ericmjl/Network-Analysis-Made-Simple/issues/3
'''
visited_nodes = set()
queue = [node1]
for node in queue:
neighbors = G.neighbors(node)
if node2 in neighbors:
print('Path exists between nodes {0} and {1}'.format(node1, node2))
return True
else:
queue.remove(node)
visited_nodes.add(node)
queue.extend([n for n in neighbors if n not in visited_nodes])
if len(queue) == 0:
print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
return False
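A more conventional breadth-first search keeps a FIFO queue (collections.deque) instead of mutating a list while iterating over it. A standalone sketch on a toy adjacency dict:

```python
from collections import deque

def bfs_path_exists(adj, start, goal):
    # adj maps node -> list of neighbors.
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

toy = {1: [2], 2: [1, 3], 3: [2], 4: []}
print(bfs_path_exists(toy, 1, 3))  # True
print(bfs_path_exists(toy, 1, 4))  # False
```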
# Test your answer below
def test_path_exists():
assert path_exists(18, 5, G)
assert path_exists(22, 318, G)
test_path_exists()
nx.has_path(G, 400, 1)
nx.shortest_path(G, 4, 400)
# Possible Answer:
def extract_path_edges(G, source, target):
# Check to make sure that a path does exists between source and target.
if nx.has_path(G, source, target):
nodes = nx.shortest_path(G, source, target)
newG = G.subgraph(nodes)
return newG
else:
raise Exception('Path does not exist between nodes {0} and {1}.'.format(source, target))
newG = extract_path_edges(G, 4, 400)
nx.draw(newG, with_labels=True)
# Possible Answer
def extract_neighbor_edges(G, node):
neighbors = G.neighbors(node)
newG = nx.Graph()
for n1, n2 in G.edges():
if (n1 == node and n2 in neighbors) or (n1 in neighbors and n2 == node):
newG.add_edge(n1, n2)
return newG
fig = plt.figure(0)
newG = extract_neighbor_edges(G, 19)
nx.draw(newG, with_labels=True)
def extract_neighbor_edges2(G, node):
neighbors = G.neighbors(node)
newG = nx.Graph()
for neighbor in neighbors:
if (node, neighbor) in G.edges() or (neighbor, node) in G.edges():
newG.add_edge(node, neighbor)
return newG
fig = plt.figure(1)
newG = extract_neighbor_edges2(G, 19)
nx.draw(newG, with_labels=True)
# Possible answer to Question 1:
# All we need here is the length of the path.
def compute_transmission_time(G, source, target):
'''Compute the total transmission time along the shortest path between source and target.'''
length = nx.shortest_path_length(G, source, target)
time = sum([i for i in range(1, length+1)])
return time
compute_transmission_time(G, 14, 4)
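Since each hop on a length-L path contributes 1, 2, ..., L to the total, the sum above is just the triangular number L(L+1)/2; a closed form avoids the loop entirely:

```python
def transmission_time(path_length):
    # Closed form for sum(range(1, path_length + 1)).
    return path_length * (path_length + 1) // 2

assert transmission_time(4) == 1 + 2 + 3 + 4
print(transmission_time(4))  # 10
```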
# Possible answer to Question 2:
# We need to know the length of every single shortest path between every pair of nodes.
# If we don't put a source and target into the nx.shortest_path_length(G) function call, then
# we get a dictionary of dictionaries, where all source-->target-->lengths are shown.
lengths = []
times = []
for source, sink_length in nx.shortest_path_length(G).items():
for sink, length in sink_length.items():
times.append(sum(range(1, length+1)))
lengths.append(length)
plt.figure(0)
plt.bar(Counter(lengths).keys(), Counter(lengths).values())
plt.figure(1)
plt.bar(Counter(times).keys(), Counter(times).values())
btws = nx.betweenness_centrality(G, normalized=False)
plt.bar(btws.keys(), btws.values())
# Possible answer:
deg_centrality = nx.degree_centrality(G)
btw_centrality = nx.betweenness_centrality(G)
deg_cent_sorted = [i[1] for i in sorted(zip(deg_centrality.keys(), deg_centrality.values()))]
btw_cent_sorted = [i[1] for i in sorted(zip(btw_centrality.keys(), btw_centrality.values()))]
plt.scatter(deg_cent_sorted, btw_cent_sorted)
plt.xlabel('degree')
plt.ylabel('betweeness')
plt.title('centrality scatterplot')
nx.draw(nx.barbell_graph(5, 1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data from file
Step2: Construct histogram from data
Step3: Interpolate histogram output -> p(z); n.b. that you can also use numerical quadrature to get $P(z)$ directly.
Step4: Use numerical integration to get $P(z) = \int_0^z p(z') dz'$
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import scipy as sp
zz = np.loadtxt('wiggleZ_DR1_z.dat',dtype='float'); # Load WiggleZ redshifts
np.min(zz) # Check bounds
np.max(zz)
nbins = 50; # Is this a good choice?
n, bins, patches = hist(zz,nbins)
x = bins[0:nbins] + (bins[1]-bins[0])/2; # Convert bin edges to centres, dropping the last edge
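# Sanity check of the edge->centre conversion on a toy histogram (standalone
# illustrative values, not the WiggleZ data): the centre of each bin should
# sit midway between consecutive edges.
import numpy as np
toy_edges = np.linspace(0.0, 1.0, 6)                       # 5 bins
toy_centres = toy_edges[:-1] + (toy_edges[1] - toy_edges[0]) / 2
assert np.allclose(toy_centres, (toy_edges[:-1] + toy_edges[1:]) / 2)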
# Import the function you need
from scipy.interpolate import interp1d
# Build an interpolation function for p(z) that accepts an arbitrary redshift z
zdist = interp1d(x,n, kind='cubic', bounds_error = False, fill_value=0)
z = linspace(0,2,100); plot(z,zdist(z)) # Test your interpolation function out
# Import the function you need
from scipy import integrate
intPz = lambda z0: integrate.quad(zdist,0,z0)[0]
total = intPz(5) # Get normalisation constant by evaluating P(z->\infty)
total # Check that this worked
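# Standalone illustration of the same normalisation idea with a known density
# (nothing here comes from the survey data): numerically integrate the toy
# density p(z) = exp(-z) and check that P(z -> infinity) is ~1, as it should
# be for a normalised distribution.
import numpy as np
toy_z = np.linspace(0.0, 50.0, 200001)
toy_vals = np.exp(-toy_z)
toy_h = toy_z[1] - toy_z[0]
toy_area = toy_h * (toy_vals.sum() - 0.5 * (toy_vals[0] + toy_vals[-1]))  # trapezoid rule
assert abs(toy_area - 1.0) < 1e-4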
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data can be accessed through a URL that I'll store in a string below.
Step4: Next, I'll write a couple of functions. The first to get the data from the url. The second function will parse the text returned by the first function and put in a Pandas DataFrame. This second function makes more sense after inspecting the content of the page at the url above.
Step5: This DataFrame is quite large and unwieldy, with 212 columns. But Pandas makes it easy to extract the necessary data for a particular project. For my current project, which I'll go over in a subsequent post, I need field data relevant to the SeaWiFS sensor, in particular optical data at wavelengths 412, 443, 490, 510, 555, and 670 nm. First let's look at the available bands as they appear in spectral surface irradiance column labels, which start with 'es'.
Step6: Now I can extract data with bands that are the closest to what I need. In the process I'm going to use water leaving radiance and spectral surface irradiance to compute remote sensing reflectance, rrs. I will store this new data in a new DataFrame, dfSwf.
Step7: For the projects I'm currently working on, I'll need to select a few more features from the inital dataset.
Step8: Tallying the features I've gathered...
Step9: That seems like a good dataset to start with. I'll pickle this DataFrame just in case.
Step10: The first project that I'll first tackle is a recasting of the OCx empirical band ratio algorithms within a Bayesian framework. For that I can further cull the dataset following the "Data Source" section in a paper I am using for comparison by Hu et al., 2012. This study draws from this same data set, applying the following criteria
Step11: Apparently the only null entries are in the hplc_chl column. Dropping the nulls in that column takes care of the first of the criteria listed above.
Step12: According to the summary table above, I don't need to worry about 0 chl as per the criteria above. However, it appears several reflectances have spurious 1.0000 values. Since these were never mentioned in the paper, I'll first cull the dataset according to depth and lat criteria, see if that takes care of cleaning those values as well. This should land me with 136 observations
Step13: Nope. We're down to 964 observations. So much for reproducibility via publication. Getting rid of spurious rrs values...
Step14: 136 values. Success! Once again, I'll pickle this DataFrame.
|
<ASSISTANT_TASK:>
Python Code:
import requests
import pandas as pd
import re
import numpy as np
import pickle
from IPython.core.display import display, HTML
display(HTML("<style>.container {width:90% !important;}</style>"))
NOMADV2url='https://seabass.gsfc.nasa.gov/wiki/NOMAD/nomad_seabass_v2.a_2008200.txt'
def GetNomad(url=NOMADV2url):
    """Download the NOMAD data file and return its contents as a list of lines."""
    resp = requests.get(url)
    content = resp.text.splitlines()
    resp.close()
    return content
def ParseTextFile(textFile, topickle=False, convert2DateTime=False, **kwargs):
    """Parse the SeaBASS text data into a pandas DataFrame.

    * topickle: pickle resulting DataFrame if True
    * convert2DateTime: join date/time columns and convert entries to datetime objects
    * kwargs:
        pkl_fname: pickle file name to save DataFrame by, if topickle=True
    """
# Pre-compute some regex
columns = re.compile('^/fields=(.+)') # to get field/column names
units = re.compile('^/units=(.+)') # to get units -- optional
endHeader = re.compile('^/end_header') # to know when to start storing data
# Set some milestones
noFields = True
getData = False
# loop through the text data
for line in textFile:
if noFields:
fieldStr = columns.findall(line)
if len(fieldStr)>0:
noFields = False
fieldList = fieldStr[0].split(',')
dataDict = dict.fromkeys(fieldList)
continue # nothing left to do with this line, keep looping
if not getData:
if endHeader.match(line):
# end of header reached, start acquiring data
getData = True
else:
dataList = line.split(',')
for field,datum in zip(fieldList, dataList):
if not dataDict[field]:
dataDict[field] = []
dataDict[field].append(datum)
df = pd.DataFrame(dataDict, columns=fieldList)
if convert2DateTime:
datetimelabels=['year', 'month', 'day', 'hour', 'minute', 'second']
        df['Datetime'] = pd.to_datetime(df[datetimelabels])
df.drop(datetimelabels, axis=1, inplace=True)
if topickle:
fname=kwargs.pop('pkl_fname', 'dfNomad2.pkl')
df.to_pickle(fname)
return df
df = ParseTextFile(GetNomad(), topickle=True, convert2DateTime=True,
pkl_fname='./bayesianChl_DATA/dfNomadRaw.pkl')
df.head()
bandregex = re.compile('es([0-9]+)')
bands = bandregex.findall(''.join(df.columns))
print(bands)
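# Minimal standalone check of the band-extraction pattern (toy column names,
# not the real NOMAD fields): only names starting with 'es' should match.
import re
toy_cols = 'id,es411,es443,lw411,es489'
toy_bands = re.findall('es([0-9]+)', toy_cols)
assert toy_bands == ['411', '443', '489']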
swfBands = ['411','443','489','510','555','670']
dfSwf = pd.DataFrame(columns=['rrs%s' % b for b in swfBands])
for b in swfBands:
dfSwf.loc[:,'rrs%s'%b] = df.loc[:,'lw%s' % b].astype('f8') / df.loc[:,'es%s' % b].astype('f8')
dfSwf.head()
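# Remote-sensing reflectance is simply the ratio of water-leaving radiance to
# surface irradiance, rrs = lw / es. A toy check of the element-wise division
# with illustrative values (not real NOMAD data):
import numpy as np
toy_lw = np.array([0.5, 1.0])
toy_es = np.array([100.0, 200.0])
toy_rrs = toy_lw / toy_es
assert np.allclose(toy_rrs, [0.005, 0.005])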
dfSwf['id'] = df.id.astype('i4') # in case I need to relate this data to the original
dfSwf['datetime'] = df.Datetime
dfSwf['hplc_chl'] = df.chl_a.astype('f8')
dfSwf['fluo_chl'] = df.chl.astype('f8')
dfSwf['lat'] = df.lat.astype('f8')
dfSwf['lon'] = df.lon.astype('f8')
dfSwf['depth'] = df.etopo2.astype('f8')
dfSwf['sst'] = df.oisst.astype('f8')
for band in swfBands:
addprods=['a','ad','ag','ap','bb']
for prod in addprods:
dfSwf['%s%s' % (prod,band)] = df['%s%s' % (prod, band)].astype('f8')
dfSwf.replace(-999,np.nan, inplace=True)
print(dfSwf.columns)
dfSwf.to_pickle('./bayesianChl_DATA/dfNomadSWF.pkl')
rrsCols = [col for col in dfSwf.columns if 'rrs' in col]
iwantcols=rrsCols + ['id', 'depth','hplc_chl','sst','lat','lon']
dfSwfHu = dfSwf[iwantcols].copy()
del dfSwf, df
dfSwfHu.info()
dfSwfHu.dropna(inplace=True)
dfSwfHu.describe()
dfSwfHu=dfSwfHu.loc[((dfSwfHu.depth>30) &\
(dfSwfHu.lat>=-60) & (dfSwfHu.lat<=60)),:]
dfSwfHu.describe()
dfSwfHu = dfSwfHu.loc[((dfSwfHu.rrs411<1.0) & (dfSwfHu.rrs510<1.0)&\
(dfSwfHu.rrs555<1.0) & (dfSwfHu.rrs670<1.0)),:]
dfSwfHu.describe()
dfSwfHu.to_pickle('/accounts/ekarakoy/DATA/NOMAD/dfSwfHuOcxCI_2012.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gradient Boosted Trees: understanding the model
Step2: For a description of the features, see the previous tutorial.
Step3: Build the input pipeline
Step4: Train the model
Step5: For performance reasons, when your data fits in memory we recommend using the argument train_in_memory=True in the tf.estimator.BoostedTreesClassifier function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the tf.estimator.BoostedTrees API shown above.
Step6: Model interpretation and plotting
Step7: Local interpretability
Step8: DFCs have a very nice property: the sum of the contributions plus the bias equals the prediction for a given example.
Step11: Plot the DFCs for an individual passenger, coloring them by the directionality of their contribution and adding the feature values.
Step12: A larger contribution magnitude means a larger influence on the model's prediction. A negative contribution indicates that the feature's value for this example decreased the model's prediction, while a positive contribution indicates that it increased the prediction.
Step13: Plot this example.
Step14: Finally, third-party tools such as LIME and shap can also help in understanding a model's individual predictions.
Step15: Mean absolute DFCs
Step16: You can see how the DFCs vary as the value of the feature changes.
Step19: Permutation feature importances
Step20: Visualizing the model's fitting process
Step22: You can visualize this function; red represents larger values.
Step23: First, let's try to fit the data with a linear model.
Step24: As you can see, the fit is not very good. Next, let's fit a GBDT model and see how it fits the function.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install statsmodels
import numpy as np
import pandas as pd
from IPython.display import clear_output
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
import tensorflow as tf
tf.random.set_seed(123)
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
# When the dataset is small, use the whole dataset as a single batch.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
        # Let the data repeat as many times as needed during training (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(NUM_EXAMPLES))
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)
# Train model.
est.train(train_input_fn, max_steps=100)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
in_memory_params = dict(params)
in_memory_params['n_batches_per_layer'] = 1
# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
y = np.expand_dims(y, axis=1)
def input_fn():
return dict(X), y
return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
# Train the model.
est = tf.estimator.BoostedTreesClassifier(
feature_columns,
train_in_memory=True,
**in_memory_params)
est.train(train_input_fn)
print(est.evaluate(eval_input_fn))
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
# Sum of DFCs + bias == probabality.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
# Boilerplate code for plotting :)
def _get_color(value):
    """To make positive DFCs plot green, negative DFCs plot red."""
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
    """Display feature's values on left of plot."""
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
def plot_example(example):
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=colors,
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
return ax
# Plot results.
ID = 182
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
# Initialize plot.
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
# Create example dataframe.
TOP_N = 8 # View top 8 features.
example = df_dfc.iloc[ID]
ix = example.abs().sort_values()[-TOP_N:].index
example = example[ix]
example_df = example.to_frame(name='dfc')
# Add contributions of entire distribution.
parts=ax.violinplot([df_dfc[w] for w in ix],
vert=False,
showextrema=False,
widths=0.7,
positions=np.arange(len(ix)))
face_color = sns_colors[0]
alpha = 0.15
for pc in parts['bodies']:
pc.set_facecolor(face_color)
pc.set_alpha(alpha)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
# Add local contributions.
ax.scatter(example,
np.arange(example.shape[0]),
color=sns.color_palette()[2],
s=100,
marker="s",
label='contributions for example')
# Legend
# Proxy plot, to show violinplot dist on legend.
ax.plot([0,0], [1,1], label='eval set contributions\ndistributions',
color=face_color, alpha=alpha, linewidth=10)
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
frameon=True)
legend.get_frame().set_facecolor('white')
# Format plot.
ax.set_yticks(np.arange(example.shape[0]))
ax.set_yticklabels(example.index)
ax.grid(False, axis='y')
ax.set_xlabel('Contribution to predicted probability', size=14)
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
plt.show()
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)
# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6)))
ax.grid(False, axis='y')
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(feature.index.values, feature.values, lowess=True)
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
def permutation_importances(est, X_eval, y_eval, metric, features):
    """Column by column, shuffle values and observe effect on eval set.

    source: http://explained.ai/rf-importance/index.html

    A similar approach can be done during training. See "Drop-column importance"
    in the above article.
    """
baseline = metric(est, X_eval, y_eval)
imp = []
for col in features:
save = X_eval[col].copy()
X_eval[col] = np.random.permutation(X_eval[col])
m = metric(est, X_eval, y_eval)
X_eval[col] = save
imp.append(baseline - m)
return np.array(imp)
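# Toy, self-contained check of the permutation idea (no TensorFlow needed;
# all names below are illustrative, not from the tutorial): shuffling the one
# feature a simple rule depends on should destroy its accuracy.
import numpy as np
rng_ = np.random.RandomState(0)
feat = rng_.rand(1000)
labels_ = (feat > 0.5).astype(int)
acc_ = lambda f: np.mean((f > 0.5).astype(int) == labels_)
baseline_ = acc_(feat)                       # 1.0 by construction
drop_ = baseline_ - acc_(rng_.permutation(feat))
assert baseline_ == 1.0 and drop_ > 0.3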
def accuracy_metric(est, X, y):
    """TensorFlow estimator accuracy."""
eval_input_fn = make_input_fn(X,
y=y,
shuffle=False,
n_epochs=1)
return est.evaluate(input_fn=eval_input_fn)['accuracy']
features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric,
features)
df_imp = pd.Series(importances, index=features)
sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
from numpy.random import uniform, seed
from scipy.interpolate import griddata
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
xy = np.zeros((2,np.size(x)))
xy[0] = x
xy[1] = y
xy = xy.T
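# The synthetic surface z = x*exp(-x^2 - y^2) has its peak on the x-axis at
# x = 1/sqrt(2) (set dz/dx = 0). A quick numerical check of that claim,
# independent of the randomly sampled points above:
import numpy as np
xg = np.linspace(-2.0, 2.0, 100001)
zg = xg * np.exp(-xg**2)
x_peak = xg[np.argmax(zg)]
assert abs(x_peak - 1.0 / np.sqrt(2)) < 1e-3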
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi, yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
    CS = plt.contourf(x, y, z, 15,
                      vmax=abs(z).max(), vmin=-abs(z).max(), cmap='RdBu_r')
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
zi = griddata(xy, z, (xi, yi), method='linear', fill_value=0)
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
def predict(est):
    """Predictions from a given estimator."""
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
n_trees = 37 #@param {type: "slider", min: 1, max: 80, step: 1}
est = tf.estimator.BoostedTreesRegressor(fc, n_batches_per_layer=1, n_trees=n_trees)
est.train(train_input_fn, max_steps=500)
clear_output()
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Checking the simulation
Step2: Background rate
Step3: Aftershocks
Step4: We only sample the process in a finite time interval, so we'll miss aftershocks which occur after the end of our time window.
Step5: To get the normalisation correct above, we need to think about what the "intensity" parameter $\theta$ represents.
Step6: For a largish sample size, this is typical behaviour-- we converge to an estimate which under-estimates $\theta$ and slightly over-estimates $\omega$ and $\mu$.
Step7: This is a slightly extreme graph, but it can certainly occur.
Step8: This is a typical graph, though more extreme behaviour can be observed as well!
Step9: Choosing different parameters
Step10: Finally try with $\omega=10$ and a small(er) sample.
Step11: Recreation of Lewis and Mohler
Step12: Except when $\omega^{-1}=10$ these look like the graphs from the paper.
Step13: This is definitely an improvement, but despite having tried this a number of times, I cannot reproduce the graphs from Lewis and Mohler.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import open_cp.sources.sepp as source_sepp
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=0.05))
totality = []
trials = 50000
for _ in range(trials):
result = process.sample_with_details(0,100)
totality.extend(result.backgrounds)
bins = np.linspace(0, 100, 21)
counts = np.histogram(totality, bins)[0]
counts = counts / (trials * (bins[1] - bins[0]))
fig, ax = plt.subplots(ncols=2, figsize=(16,5))
ax[0].plot((bins[:-1] + bins[1:])/2, counts)
ax[0].set(xlim=[0,100], ylim=[0,.15])
ax[1].plot((bins[:-1] + bins[1:])/2, counts)
ax[1].set(xlim=[0,100])
for i in range(2):
ax[i].set_xlabel("time")
ax[i].set_ylabel("intensity")
None
totality = []
trials = 10000
for _ in range(trials):
result = process.sample_with_details(0,100)
totality.extend(result.trigger_deltas)
fig, ax = plt.subplots(ncols=2, figsize=(16,5))
bins = np.linspace(0,1,11)
xcoords = (bins[:-1] + bins[1:]) / 2
y = np.random.exponential(1 / 0.05, size=100000)
x = 1 - np.exp(-0.05 * np.asarray(y))
c = np.histogram(x, bins)[0]
ax[0].scatter(xcoords, c / 10000)
ax[0].set(xlim=[0,1], ylim=[0,1.1], title="Direct simulation from numpy")
y = np.asarray(totality)
x = 1 - np.exp(-0.05 * np.asarray(y))
c = np.histogram(x, bins)[0]
ax[1].scatter(xcoords, c / trials * 10)
ax[1].set(xlim=[0,1], title="From our process, showing edge effects")
None
totality = []
trials = 1000
total_points = 0
for _ in range(trials):
result = process.sample_with_details(0,1000)
mask = result.trigger_points <= 900
totality.extend( result.trigger_deltas[mask] )
count = np.sum(result.points <= 900)
total_points += count
fig, ax = plt.subplots(ncols=2, figsize=(16,5))
y = np.asarray(totality)
x = 1 - np.exp(-0.05 * np.asarray(y))
c = np.histogram(x, bins)[0]
ax[0].scatter(xcoords, c / trials)
ax[0].set(xlim=[0,1], title="From our process, corrected for edge effects")
scale = 1 / ((bins[1] - bins[0]) * total_points)
ax[1].scatter(xcoords, c * scale)
ax[1].set(xlim=[0,1], title="Normalised to estimate $\\theta$")
None
total_time = 10000
result = process.sample_with_details(0, total_time)
points = result.points
points.shape, result.trigger_deltas.shape
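# Sanity check on the number of events: for a stationary Hawkes process the
# expected overall rate is mu / (1 - theta). With mu = 0.1 and theta = 0.5
# that is 0.2 events per unit time, so roughly 2000 points in a window of
# length 10000 (edge effects aside).
mu_, theta_ = 0.1, 0.5
expected_rate = mu_ / (1.0 - theta_)
assert abs(expected_rate - 0.2) < 1e-12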
def p_matrix_col(points, col, theta=0.5, omega=0.05, mu=0.1):
p = np.empty(col + 1)
p[col] = mu
dt = points[col] - points[0:col]
p[0:col] = theta * omega * np.exp(-omega * dt)
return p, np.sum(p), dt
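# Why theta is the "branching ratio": the triggering kernel
# g(t) = theta * omega * exp(-omega * t) integrates to theta over [0, inf),
# i.e. each event spawns theta direct offspring on average. Numerical check
# with the parameters used above (trapezoid rule on a long finite grid):
import numpy as np
th_, om_ = 0.5, 0.05
tg = np.linspace(0.0, 400.0, 400001)
g = th_ * om_ * np.exp(-om_ * tg)
h_ = tg[1] - tg[0]
kernel_mass = h_ * (g.sum() - 0.5 * (g[0] + g[-1]))
assert abs(kernel_mass - th_) < 1e-4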
def m_step(points, total_time, theta=0.5, omega=0.05, mu=0.1):
omega_1, omega_2, mu_e = 0.0, 0.0, 0.0
for col in range(len(points)):
p, norm, dt = p_matrix_col(points, col, theta, omega, mu)
wp = p[0:col] * dt
omega_1 += np.sum(p[0:col]) / norm
omega_2 += np.sum(wp) / norm
mu_e += p[-1] / norm
return omega_1 / len(points), omega_1 / omega_2, mu_e / total_time
theta, omega, mu = m_step(points, total_time)
theta, omega, mu
def apply_algorithm(points, total_time, steps=200, theta_in=0.5, omega_in=0.05, mu_in=0.1,
convergence_criteria=None, func=m_step):
theta, omega, mu = theta_in, omega_in, mu_in
thetas, omegas, mus = [theta], [omega], [mu]
for _ in range(steps):
theta, omega, mu = func(points, total_time, theta, omega, mu)
diff = max(abs(thetas[-1] - theta), abs(omegas[-1] - omega), abs(mus[-1] - mu))
thetas.append(theta)
omegas.append(omega)
mus.append(mu)
if convergence_criteria is not None and diff <= convergence_criteria:
break
return thetas, omegas, mus
thetas, omegas, mus = apply_algorithm(points, total_time, 100)
thetas[-1], omegas[-1], mus[-1]
def plot_convergence(thetas, omegas, mus, inv_omega=False):
fig, ax = plt.subplots(figsize=(16,5))
x = list(range(len(thetas)))
ax.plot(x, thetas)
legend_txt = ["$\\theta$"]
if inv_omega:
legend_txt.append("$\\omega^{-1}$")
ax.plot(x, 1 / np.asarray(omegas))
else:
legend_txt.append("$\\omega$")
ax.plot(x, omegas)
ax.plot(x, mus)
legend_txt.append("$\\mu$")
ax.legend(legend_txt)
ax.set(xlabel="Iteration")
plot_convergence(thetas, omegas, mus)
total_time = 1000
result = process.sample_with_details(0, total_time)
points = result.points
points.shape, result.trigger_deltas.shape
plot_convergence(*apply_algorithm(points, total_time, 200))
total_time = 2000
result = process.sample_with_details(0, total_time)
points = result.points
points = points[points>=1000]
plot_convergence(*apply_algorithm(points, 1000, 200))
total_time = 11000
result = process.sample_with_details(0, total_time)
points = result.points
points = points[points>=1000]
plot_convergence(*apply_algorithm(points, 10000, 50))
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1))
points = process.sample(0, 1000)
fig, ax = plt.subplots(figsize=(16,1))
ax.scatter(points, np.random.random(len(points))*.02, alpha=0.3)
None
total_time = 10000
result = process.sample_with_details(0, total_time)
points = result.points
points.shape, result.trigger_deltas.shape
plot_convergence(*apply_algorithm(points, total_time, 200, theta_in=0.5, omega_in=1, mu_in=0.1), True)
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=10))
points = process.sample(0, 1000)
fig, ax = plt.subplots(figsize=(16,1))
ax.scatter(points, np.random.random(len(points))*.02, alpha=0.3)
None
total_time = 1000
result = process.sample_with_details(0, total_time)
points = result.points
points.shape, result.trigger_deltas.shape
plot_convergence(*apply_algorithm(points, total_time, 200, theta_in=0.5, omega_in=10, mu_in=0.1), True)
all_results_dict = {}
for omega_inv in [0.01, 0.1, 1, 10]:
results = []
for _ in range(10):
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv))
points = process.sample(0, 2000)
convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1,
convergence_criteria = 10**(-5), func=m_step)
convergents = np.asarray(convergents)
# (theta, omega, mu), iterations
results.append((convergents[:,-1], convergents.shape[-1]))
all_results_dict[omega_inv] = results
thetas = {key: [result[0][0] for result in all_results_dict[key]] for key in all_results_dict}
mus = {key: [result[0][2] for result in all_results_dict[key]] for key in all_results_dict}
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
def plot(ax, data, true_value):
x = list(data.keys())
y = np.asarray([ np.mean(data[k]) for k in x ])
ax.scatter(x, y, color="black")
yy = np.asarray([ np.std(data[k], ddof=1) for k in x ])
for x, y, dy in zip(x,y,yy):
ax.plot([x]*2, [y-dy,y+dy], color="black")
ax.plot([0.001,100], [true_value]*2, color="black", linestyle="--", linewidth=1)
ax.set(xscale="log", xlim=[0.001,100])
plot(ax[0], thetas, 0.5)
plot(ax[1], mus, 1)
ax[0].set(xlabel="$\\omega^{-1}$", ylabel="$\\theta$")
ax[1].set(xlabel="$\\omega^{-1}$", ylabel="$\\mu$")
None
def corrected_m_step(points, total_time, theta=0.5, omega=0.05, mu=0.1):
omega_1, omega_2, mu_e = 0.0, 0.0, 0.0
for col in range(len(points)):
p, norm, dt = p_matrix_col(points, col, theta, omega, mu)
wp = p[0:col] * dt
omega_1 += np.sum(p[0:col]) / norm
omega_2 += np.sum(wp) / norm
mu_e += p[-1] / norm
from_end = total_time - points
exp_from_end = np.exp(-omega * from_end)
corrected_n = len(points) - np.sum(exp_from_end)
corrected_omega_2 = omega_2 + theta * np.sum(from_end * exp_from_end)
return omega_1 / corrected_n, omega_1 / corrected_omega_2, mu_e / total_time
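# Standalone check of the edge-effect correction used above: an event at time
# t0 in a window ending at T produces, on average, theta*(1 - exp(-omega*(T - t0)))
# *observed* offspring -- the integral of the kernel over the remaining time.
import numpy as np
th_, om_, T_, t0_ = 0.5, 0.05, 100.0, 80.0
ts_ = np.linspace(0.0, T_ - t0_, 200001)
g_ = th_ * om_ * np.exp(-om_ * ts_)
h2_ = ts_[1] - ts_[0]
observed_mass = h2_ * (g_.sum() - 0.5 * (g_[0] + g_[-1]))  # trapezoid rule
closed_form = th_ * (1.0 - np.exp(-om_ * (T_ - t0_)))
assert abs(observed_mass - closed_form) < 1e-6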
all_results_dict = {}
for omega_inv in [0.01, 0.1, 1, 10]:
results = []
for _ in range(10):
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv))
points = process.sample(0, 2000)
convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1,
convergence_criteria = 10**(-5), func=corrected_m_step)
convergents = np.asarray(convergents)
# (theta, omega, mu), iterations
results.append((convergents[:,-1], convergents.shape[-1]))
all_results_dict[omega_inv] = results
thetas = {key: [result[0][0] for result in all_results_dict[key]] for key in all_results_dict}
mus = {key: [result[0][2] for result in all_results_dict[key]] for key in all_results_dict}
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(ax[0], thetas, 0.5)
plot(ax[1], mus, 1)
ax[0].set(xlabel="$\\omega^{-1}$", ylabel="$\\theta$")
ax[1].set(xlabel="$\\omega^{-1}$", ylabel="$\\mu$")
None
def likelihood(points, time_range, theta, omega, mu):
n = len(points)
first_sum = np.empty(n)
first_sum[0] = mu
for i in range(1, n):
dt = points[i] - points[:i]
first_sum[i] = mu + theta * omega * np.sum(np.exp(-omega * dt))
second_sum = np.sum(np.exp(-omega * (time_range - points)))
return np.sum(np.log(first_sum)) - (mu * time_range + n * theta + theta - theta * second_sum)
omega_inv = 10
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv))
points = process.sample(0, 2000)
convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1,
convergence_criteria = 10**(-5), func=corrected_m_step)
convergents = np.asarray(convergents)
theta, omega, mu = convergents[:,-1]
theta, omega, mu
likelihood(points, 2000, theta, omega, mu)
likelihood(points, 2000, 0.5, 0.1, 1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
Step8: LSTM Cell
Step9: RNN Output
Step10: Training loss
Step11: Optimizer
Step12: Build the network
Step13: Hyperparameters
Step14: Time for training
Step15: Saved checkpoints
Step16: Sampling
Step17: Here, pass in the path to a checkpoint and sample from the network.
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
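# Round-trip sanity check of the integer encoding, on a standalone toy
# vocabulary (not the real text):
toy_text = 'abba'
toy_v2i = {c: i for i, c in enumerate(sorted(set(toy_text)))}
toy_i2v = {i: c for c, i in toy_v2i.items()}
toy_encoded = [toy_v2i[c] for c in toy_text]
toy_decoded = ''.join(toy_i2v[i] for i in toy_encoded)
assert toy_decoded == toy_text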
text[:100]
encoded[:100]
len(vocab)
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches*batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
# print(arr.shape)
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros((x.shape[0], x.shape[1]))
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
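# Standalone check of the batching arithmetic used above: 20 items split into
# 2 sequences of 5 steps per batch gives batch_size 10 and 2 full batches,
# reshaped to (n_seqs, n_batches * n_steps).
import numpy as np
toy_arr = np.arange(20)
ns_, st_ = 2, 5
bsz_ = ns_ * st_
nb_ = len(toy_arr) // bsz_
toy_arr = toy_arr[:nb_ * bsz_].reshape((ns_, -1))
assert toy_arr.shape == (2, 10) and nb_ == 2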
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name = "inputs")
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name = "targets")
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name = "keep_prob")
return inputs, targets, keep_prob
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell with dropout applied to its outputs.
# Build a fresh cell for each layer: reusing a single cell object across
# layers would share weights (and raises an error in TensorFlow >= 1.1).
def build_cell():
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
tf.train.get_checkpoint_state('checkpoints')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classification on imbalanced data
Step2: Data processing and exploration
Step3: Examine the class label imbalance
Step4: This lets you inspect a few of the positive samples.
Step5: Split the dataset into train, validation, and test sets. The validation set is used during model fitting to evaluate the loss and metrics, but the model is never fit to this data. The test set is not used at all during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where overfitting is a significant concern due to the lack of training data.
Step6: Normalize the input features using the sklearn StandardScaler, which sets the mean to 0 and the standard deviation to 1.
Step7: Caution
Step8: Define the model and metrics
Step9: Understanding useful metrics
Step10: Let's run the model to test it.
Step11: Optional
Step12: The correct bias to set can be derived as follows.
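As an aside, the bias value used in the code below, initial_bias = np.log([pos/neg]), can be derived by asking the sigmoid output to match the positive fraction $p_0 = \mathrm{pos}/(\mathrm{pos}+\mathrm{neg})$ at initialization:

$$\frac{1}{1 + e^{-b_0}} = p_0 \quad\Longrightarrow\quad b_0 = \log\frac{p_0}{1 - p_0} = \log\frac{\mathrm{pos}}{\mathrm{neg}}$$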
Step13: Setting this as the initial bias lets the model make a much more reasonable initial guess.
Step14: With this initialization, the initial loss should be approximately the following.
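The "initial loss" being referred to is just the binary cross-entropy of a classifier that always predicts the base rate $p_0$; a minimal standalone sketch (independent of the Keras model in this notebook, with an illustrative base rate):

```python
import math

def expected_initial_loss(p0):
    # Cross-entropy when the model always outputs the base rate p0
    return -p0 * math.log(p0) - (1 - p0) * math.log(1 - p0)

print(expected_initial_loss(0.5))      # naive 50/50 init: log(2) ~= 0.693
print(expected_initial_loss(0.0017))   # ~0.17% positives: much smaller
```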
Step15: This initial loss is about 50 times less than it would have been with naive initialization.
Step16: Confirm that the bias fix helps
Step17: As the figure above clearly shows, this careful initialization gives a clear advantage in terms of validation loss.
Step18: Check the training history
Step19: Note
Step20: Evaluate the model on the test dataset and display the results for the metrics created above.
Step21: If the model had predicted everything perfectly, the matrix would be diagonal, with the off-diagonal entries, which indicate incorrect predictions, equal to zero. In this case, the matrix shows relatively few false positives, meaning that relatively few legitimate transactions were incorrectly flagged. However, you may want even fewer false negatives despite the cost of more false positives, because a false negative lets a fraudulent transaction go through, whereas a false positive may merely trigger an email asking a customer to verify their card activity.
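The four cells discussed above can be computed by hand on a toy example (the labels below are made up purely for illustration):

```python
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # fraud caught
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # legit flagged
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # fraud missed
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # legit passed
print(tp, fp, fn, tn)  # 2 1 1 2
```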
Step22: Plot the AUPRC
Step23: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) are not as high. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of the different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction incorrectly flagged as fraudulent) may decrease user satisfaction.
Step24: Train the model with class weights
Step25: Check the training history
Step26: Evaluate metrics
Step27: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because there are more true positives. Despite having lower accuracy, this model has higher recall (it identifies more fraudulent transactions). Of course, both types of error have a cost (it is also undesirable to inconvenience users by flagging many legitimate transactions as fraudulent), so the trade-offs between these different types of errors should be considered carefully.
Step28: Plot the AUPRC
Step29: Oversampling
Step30: Using NumPy
Step31: Using tf.data
Step32: Each dataset consists of (feature, label) pairs.
Step33: Merge the two together using experimental.sample_from_datasets.
Step34: To use this dataset, you need the number of steps per epoch.
Step35: Train on the oversampled data
Step36: If the training process considered the whole dataset on each gradient update, this oversampling would be basically identical to class weighting.
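The 50/50 sampling being described can be sketched with plain NumPy (the 1%-positive toy labels and batch size below are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.array([0] * 990 + [1] * 10)   # hypothetical 1% positive class
pos = np.flatnonzero(labels == 1)
neg = np.flatnonzero(labels == 0)

# Draw each class with equal probability -> batches are 50% positive
half = 256
batch_idx = np.concatenate([rng.choice(pos, half), rng.choice(neg, half)])
print(labels[batch_idx].mean())   # exactly 0.5 by construction
```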
Step37: Re-train
Step38: Re-check the training history
Step39: Evaluate metrics
Step40: Plot the ROC
Step41: Plot the AUPRC
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_bias = np.log([pos/neg])
initial_bias
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
def plot_prc(name, labels, predictions, **kwargs):
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
plt.plot(precision, recall, label=name, linewidth=2, **kwargs)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
plot_metrics(weighted_history)
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
plot_metrics(resampled_history)
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
plot_metrics(resampled_history)
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Which of these functions have at least one root?
Step2: Notice that we need to define hasroot above.
Step3: We get about 5 digits of accuracy. Why? How fast did we get there?
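A back-of-the-envelope answer, assuming the bisect implementation in this notebook (which stops once the bracket is narrower than 1e-5) and the interval (0, 2) used for f0: each step halves the bracket, so the iteration count is about $\log_2((b-a)/\mathrm{tol})$:

```python
import math

a, b, tol = 0.0, 2.0, 1e-5    # starting bracket and stopping width
halvings = math.ceil(math.log2((b - a) / tol))
print(halvings)   # 18 -> final width 2 / 2**18 ~ 7.6e-6, about 5 correct digits
```

The convergence is linear with rate 1/2: one binary digit (about 0.3 decimal digits) per function evaluation, regardless of the function.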
Step4: Can you find any problems with this implementation? List them below
Step5: What's going wrong here? How can we improve the implementation and what are fundamental limitations of the algorithm?
Step6: Oops, how can we fix this?
Step7: Did we solve all of these equations?
Step8: We only get three digits correct despite a very small residual (and it takes many iterations to get there).
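The slow, inaccurate convergence is characteristic of a multiple root: fquartic has a root of multiplicity 4 at x = 0.9, where Newton's method degrades to linear convergence and the residual goes flat long before the root is accurate. A standard remedy, sketched here with a hypothetical helper that is not part of the notebook, is to scale the step by the known multiplicity m:

```python
def newton_mult(f, x, m, tol=1e-12, maxit=100):
    # Newton's method for a root of known multiplicity m: x -= m * f / f'
    for i in range(maxit):
        fx, dfx = f(x)
        if abs(fx) < tol:
            return x, i
        x -= m * fx / dfx
    return x, maxit

def fquartic(x):
    return (x - 0.9)**4, 4*(x - 0.9)**3

print(newton_mult(fquartic, 0.0, 4))   # recovers 0.9 almost immediately
```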
Step9: Relative condition number
Step11: Automatically choosing a suitable $\epsilon$
Step12: This algorithm is imperfect, leaving some scaling responsibility to the user.
Step13: In class 2016-09-14
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from matplotlib import pyplot
import numpy
tests = []
@tests.append
def f0(x):
return x*x - 2, 2*x
@tests.append
def f1(x):
return numpy.cos(x) - x, -numpy.sin(x) - 1
@tests.append
def f2(x):
return numpy.exp(-numpy.abs(x)) + numpy.sin(x), numpy.exp(-numpy.abs(x))*(-numpy.sign(x)) + numpy.cos(x)
@tests.append
def f3(x):
return x*x - x + 0.25, 2*x - 1
@tests.append
def f4(x):
return numpy.tan(x+2), numpy.cos(x+2)**(-2)
# unreachable leftover from an earlier version: numpy.exp(-x*x), numpy.exp(-x*x)*(-2*x)
@tests.append
def f5(x):
return (x <= 1)*1.0, 0*x
@tests.append
def f6(x):
return x*numpy.sin(5/x), numpy.sin(5/x) - numpy.cos(5/x)*5/x
x = numpy.linspace(-2,2,100)
pyplot.plot(x, 0*x, color='k')
for f in tests:
pyplot.plot(x, f(x)[0], label=f.__name__)
pyplot.legend(loc='upper right')
pyplot.style.use('ggplot')
pyplot.show()
def hasroot(f, a, b):
return f(a)[0]*f(b)[0] < 0
def bisect(f, a, b, verbose=False):
mid = (a + b)/2.
if b-a < 1e-5:
return mid
if verbose:
print('bisect', mid)
if hasroot(f, a, mid):
return bisect(f, a, mid, verbose)
else:
return bisect(f, mid, b, verbose)
bisect(tests[0], 0, 2)
numpy.sqrt(2) - bisect(tests[0], 0, 2)
bisect(tests[0], 0, 2, verbose=True)
for f in tests:
print(f.__name__, bisect(f, -2, 2.1))
def newton(f, x, verbose=False):
for i in range(100):
fx, dfx = f(x)
if verbose:
print(f.__name__, i, x, fx)
if numpy.abs(fx) < 1e-12:
return x, fx, i
try:
x -= fx / dfx
except ZeroDivisionError:
return x, numpy.NaN, i
for f in tests:
print(f.__name__, newton(f, 1))
for f in tests:
print(f.__name__, '{0:15.12f} {1:8.2e} {2:2d}'.format(*newton(f, -0.1)))
def fquartic(x):
return (x - 0.9)**4, 4*(x - 0.9)**3
newton(fquartic, 0)
format((.2 - 1/3) + 2/15, '.20f')
# format((.2 - 1/3) + (1/3 - 0.2), '.20f')
# format((1 + 1e-12) - 1, '.20f')
eps = 1
while 1 + eps > 1:
eps /= 2
eps_machine = eps # We call this "machine epsilon"
numpy.log(1 + 1e-12) - numpy.log1p(1e-12)
(numpy.log(1 + 1e-12) - numpy.log1p(1e-12)) / numpy.log1p(1e-12)
x = numpy.array([1,1e5,1e10,1e15])
numpy.sin(numpy.pi*x)
numpy.sin(x)**2 + numpy.cos(x)**2 - 1
[numpy.tan((3.14159+eps)/2) for eps in [1e-6,1e-8]], 1/numpy.cos(3.14159)**2
def diff(f, x, epsilon=1e-5):
return (f(x + epsilon) - f(x)) / epsilon
diff(numpy.sin, 0.7, 1e-8) - numpy.cos(0.7)
x = .5
diff(numpy.tan, x) - 1/numpy.cos(x)**2
x = 3.14/2
[(eps, diff(numpy.tan, x, eps) - 1/numpy.cos(x)**2) for eps in [1e-14, 1e-12, 1e-10, 1e-8, 1e-6, 1e-4]]
x = 1e4
[(eps, diff(numpy.log, x, eps) - 1/x) for eps in [1e-14, 1e-12, 1e-10, 1e-8, 1e-6, 1e-4, 1e-2]]
def diff_wp(f, x, eps=1e-8):
'''Numerical derivative with Walker and Pernice (1998) choice of step'''
h = eps * (1 + abs(x))
return (f(x+h) - f(x)) / h
x = 1
[(eps, diff_wp(numpy.log, x, eps) - 1/x) for eps in [1e-14, 1e-12, 1e-10, 1e-8, 1e-6, 1e-4, 1e-2]]
x = 1e-4
[(eps, diff_wp(numpy.log, x, eps) - 1/x) for eps in [1e-14, 1e-12, 1e-10, 1e-8, 1e-6, 1e-4, 1e-2]]
numpy.log(-1)
x = numpy.sqrt(-1)
1/numpy.inf
x = numpy.linspace(0,3,100)
pyplot.plot(x, numpy.sqrt(x + 1e-1))
pyplot.show()
numpy.tan(1e100)
numpy.tan(1e100*(1 + 2*eps_machine))
1e100 - 1e100*(1 + 2*eps_machine)
x = numpy.linspace(-5,5)
pyplot.figure()
f = numpy.sin(x) + 1.1*x
pyplot.plot(x, f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step3: Question 1 - Feature Observation
Step4: Question 2 - Goodness of Fit
Step5: Yes, this model appears to have sufficiently captured the variation of the target variable, as the R^2 score is .923. As stated above, the closer the score is to 1, the better the model is able to predict future values.
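As a reminder of what that score measures, R^2 can be computed directly from its definition, $1 - SS_{res}/SS_{tot}$; a standalone sketch, independent of the notebook's performance_metric:

```python
def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p)**2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean)**2 for t in y_true)
    return 1 - ss_res / ss_tot

print(r2([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # ~0.949: close to a perfect fit
```

A score of 0 means the model does no better than always predicting the mean; negative scores are possible and mean it does worse.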
Step6: Question 3 - Training and Testing
Step7: Question 4 - Learning the Data
Step9: Question 5 - Bias-Variance Tradeoff
Step10: Making Predictions
Step11: In this model, the maximum depth is 5 for the optimal model. My guess was that the most optimal max depth would be 4 in this case so my guess was close.
Step12: The model shows that for Client 1's home, the price should be approximately 418k. For Client 2, it should be 193k. Lastly for client 3 the price should be 908k. These prices seem reasonable considering the average price for homes in the area is 454k with the max and min ranging from just over a million dollars to slightly more than 100k. Additionally, Client 2's house has features that are less desirable; it has a higher rate of neighborhood poverty and a higher student-teacher ratio. Thus the price that is suggested makes sense even though it is towards the minimum of the price range. In comparison, Client 3's home has a much lower rate of neighborhood poverty and a much lower student-teacher ratio. The price that is suggested for this house also makes sense considering the features.
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
# TODO: Minimum price of the data
minimum_price = np.amin(prices)
# TODO: Maximum price of the data
maximum_price = np.amax(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
# TODO: Import 'r2_score'
def performance_metric(y_true, y_predict):
'''Calculates and returns the performance score between
true and predicted values based on the metric chosen.'''
from sklearn.metrics import r2_score
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
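As a sanity check, the same score can be reproduced by hand, since R^2 is just 1 - SS_res/SS_tot. A standalone sketch in plain NumPy (no sklearn) that should agree with the value printed above:

```python
import numpy as np

# R^2 = 1 - SS_res / SS_tot, computed directly
y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])

ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print("Manual R^2: {:.3f}".format(r2))
```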
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=.2,random_state=65)
# Success
print "Training and testing split was successful."
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
vs.ModelComplexity(X_train, y_train)
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
"""Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth' : list(range(1,11))}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring = scoring_fnc,cv = cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
vs.PredictTrials(features, prices, fit_model, client_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ratio and logarithm
Step2: Plot on the linear scale using the scatter() function.
Step3: Plot on the log scale.
Step4: What do you see from the two plots? Why do we need to use log scale to visualize ratios?
Step5: Log-bin
Step6: As we can see, most votes fall in the first bin, and we cannot see the values from the second bin.
Step7: Change the number of bins to 1000.
Step8: Now, let's try log-bins. Recall that when plotting histograms we can specify the edges of the bins through the bins parameter. For example, we can specify the edges of bins to [1, 2, 3, ... , 10] as follows.
Step9: Here, we can specify the edges of bins in a similar way. Instead of specifying them on the linear scale, we do it in log space. Some useful resources
Step10: Now we can plot the histogram with log-bins.
Step11: KDE
Step12: We can plot histogram and KDE using pandas
Step13: Or using seaborn
Step14: Can you plot the histogram and KDE of the log of movie votes?
Step15: We can get a random sample using pandas' sample() function. The kdeplot() function in seaborn provides many options (like kernel types) to do KDE.
Step16: Regression
Step17: Actually, the dataset is included in seaborn and we can load it.
Step18: All four datasets are in this single data frame and the 'dataset' indicator is one of the columns. This is a form often called tidy data, which is easy to manipulate and plot. In tidy data, each row is an observation and columns are the properties of the observation. Seaborn makes use of the tidy form.
Step19: What do these parameters mean? The documentation for the lmplot() is here.
Step20: 2-D scatter plot and KDE
Step21: We can draw a scatter plot of movie votes and ratings using the scatter() function.
Step22: Too many data points. We can decrease symbol size, set symbols empty, and make them transparent.
Step23: Number of votes is broadly distributed. So set the x axis to log scale.
Step24: We can combine the scatter plot with 1D histograms of each marginal using seaborn's jointplot() function.
Step25: Hexbin
Step26: KDE
Step27: Or using jointplot() by setting the kind parameter.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import scipy.stats as ss
import warnings
warnings.filterwarnings("ignore")
sns.set_style('white')
%matplotlib inline
x = np.array([1, 1, 1,1, 10, 100, 1000])
y = np.array([1000, 100, 10, 1, 1, 1, 1])
ratio = x/y
print(ratio)
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) # plot the line ratio = 1
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.yscale('log')
plt.ylim( (0.0001,10000) ) # set the scope the y axis
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
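The reason the log scale is the natural choice here: a ratio and its reciprocal become mirror images about 1 in log space, since log(a/b) = -log(b/a). A quick standalone check with the same arrays:

```python
import numpy as np

x = np.array([1, 1, 1, 1, 10, 100, 1000])
y = np.array([1000, 100, 10, 1, 1, 1, 1])
log_r = np.log10(x / y)

print(log_r)  # symmetric about log10(1) = 0
assert np.allclose(log_r, -log_r[::-1])
```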
# TODO: generate random numbers and calculate ratios between two consecutive numbers
x = np.random.rand(10)
print(x)
ratio = [ i/j for i,j in zip(x[1:],x[:-1]) ]
print(ratio)
# TODO: plot the ratios on the linear scale
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
# TODO: plot the ratios on the log scale
plt.scatter( np.arange(len(ratio)), ratio, s=100 )
plt.yscale('log')
plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 )
# TODO: plot the histogram of movie votes
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
plt.hist(movie_df['Votes'])
# TODO: change the y scale to log
plt.hist(movie_df['Votes'])
plt.yscale('log')
# TODO: set the bin number to 1000
plt.hist(movie_df['Votes'], bins=1000)
plt.yscale('log')
plt.hist( movie_df['Rating'], bins=range(0,11) )
# TODO: specify the edges of bins using np.logspace
bins = np.logspace( np.log10(min(movie_df['Votes'])), np.log10(max(movie_df['Votes'])), 20)
plt.hist(movie_df['Votes'], bins=bins)
plt.xscale('log')
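np.logspace returns edges that are evenly spaced in the exponent, so consecutive edges share a constant ratio rather than a constant width, which is exactly what log-binning requires. A minimal standalone check:

```python
import numpy as np

edges = np.logspace(0, 3, num=4)  # from 10**0 to 10**3
print(edges)                      # [   1.   10.  100. 1000.]
ratios = edges[1:] / edges[:-1]
assert np.allclose(ratios, 10.0)  # constant multiplicative step
```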
# TODO: correct the plot
plt.hist(movie_df['Votes'], bins=bins, normed=True)
plt.xscale('log')
plt.yscale('log')
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
movie_df.head()
movie_df['Rating'].hist(bins=10, normed=True)
movie_df['Rating'].plot(kind='kde')
sns.distplot(movie_df['Rating'], bins=10)
# TODO: implement this using pandas
logs = np.log(movie_df['Votes'])
logs.hist(bins=10, normed=True)
logs.plot(kind='kde')
plt.xlim(0, 25)
# TODO: implement this using seaborn
sns.distplot(logs, bins=10)
f = plt.figure(figsize=(15,8))
plt.xlim(0, 10)
sample_sizes = [10, 50, 100, 500, 1000, 10000]
for i, N in enumerate(sample_sizes, 1):
plt.subplot(2,3,i)
plt.title("Sample size: {}".format(N))
for j in range(5):
s = movie_df['Rating'].sample(N)
sns.kdeplot(s, kernel='gau', legend=False)
X1 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
X2 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
X3 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
Y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
X4 = [8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 19.0, 8.0, 8.0, 8.0]
Y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]
data = [ (X1,Y1),(X2,Y2),(X3,Y3),(X4,Y4) ]
plt.figure(figsize=(10,8))
for i,p in enumerate(data, 1):
X, Y = p[0], p[1]
plt.subplot(2, 2, i)
plt.scatter(X, Y, s=30, facecolor='#FF4500', edgecolor='#FF4500')
slope, intercept, r_value, p_value, std_err = ss.linregress(X, Y)
plt.plot([0, 20], [intercept, slope*20+intercept], color='#1E90FF') #plot the fitted line Y = slope * X + intercept
# TODO: display the fitted equations using the text() function.
plt.text(2, 11, r'$Y = {:1.2f} \cdot X + {:1.2f}$'.format(slope,intercept))
plt.xlim(0,20)
plt.xlabel('X'+str(i))
plt.ylabel('Y'+str(i))
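The point of Anscombe's quartet is that all four datasets share (to two decimal places) the same fitted line, y = 0.5x + 3, despite looking completely different when plotted. A standalone check of that property using np.polyfit:

```python
import numpy as np

anscombe = [
    ([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5],
     [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    ([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5],
     [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    ([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5],
     [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]
fits = [np.polyfit(X, Y, 1) for X, Y in anscombe]  # (slope, intercept) per dataset
for slope, intercept in fits:
    print("Y = {:.2f} * X + {:.2f}".format(slope, intercept))
    assert abs(slope - 0.5) < 0.01 and abs(intercept - 3.0) < 0.01
```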
df = sns.load_dataset("anscombe")
df.head()
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 1})
sns.lmplot(x="y", y="x", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 25, "alpha": 0.8})
geq = movie_df['Year'] >= 1990
leq = movie_df['Year'] <= 1999
subset = movie_df[ geq & leq ]
subset.head()
plt.scatter(subset['Votes'], subset['Rating'])
plt.xlabel('Votes')
plt.ylabel('Rating')
plt.scatter(subset['Votes'], subset['Rating'], s=20, alpha=0.6, facecolors='none', edgecolors='b')
plt.xlabel('Votes')
plt.ylabel('Rating')
plt.scatter(subset['Votes'], subset['Rating'], s=10, alpha=0.6, facecolors='none', edgecolors='b')
plt.xscale('log')
plt.xlabel('Votes')
plt.ylabel('Rating')
sns.jointplot(np.log(subset['Votes']), subset['Rating'])
# TODO: draw a joint plot with hexbins and two histograms for each marginal distribution
sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='hexbin')
sns.kdeplot(np.log(subset['Votes']), subset['Rating'], cmap="Reds", shade=True, shade_lowest=False)
# TODO: draw a joint plot with bivariate KDE as well as marginal distributions with KDE
sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='kde', shade_lowest=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is what our initial covariance matrix looks like. Intuitively, each data point's Y-value correlates most strongly with nearby points, with the correlation decaying according to the squared distance between their X-values.
Step2: The following generates predictions from the GP model in a grid of values
Step3: Sample from the posterior GP
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import pymc3 as pm
from pymc3 import Model, MvNormal, HalfCauchy, sample, traceplot, summary, find_MAP, NUTS, Deterministic
import theano.tensor as T
from theano import shared
from theano.tensor.nlinalg import matrix_inverse
x = np.array([-5, -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1, -4,
-3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1, -3, -2.9,
-2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1, -2, -1.9, -1.8,
-1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1, -0.9, -0.8, -0.7,
-0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8,
1.9, 2, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3, 3.1,
3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4, 4.1, 4.2, 4.3, 4.4,
4.5, 4.6, 4.7, 4.8, 4.9, 5])
y = np.array([1.04442478194401, 0.948306088493654, 0.357037759697332, 0.492336514646604,
0.520651364364746, 0.112629866592809, 0.470995468454158, -0.168442254267804,
0.0720344402575861, -0.188108980535916, -0.0160163306512027,
-0.0388792158617705, -0.0600673630622568, 0.113568725264636,
0.447160403837629, 0.664421188556779, -0.139510743820276, 0.458823971660986,
0.141214654640904, -0.286957663528091, -0.466537724021695, -0.308185884317105,
-1.57664872694079, -1.44463024170082, -1.51206214603847, -1.49393593601901,
-2.02292464164487, -1.57047488853653, -1.22973445533419, -1.51502367058357,
-1.41493587255224, -1.10140254663611, -0.591866485375275, -1.08781838696462,
-0.800375653733931, -1.00764767602679, -0.0471028950122742, -0.536820626879737,
-0.151688056391446, -0.176771681318393, -0.240094952335518, -1.16827876746502,
-0.493597351974992, -0.831683011472805, -0.152347043914137, 0.0190364158178343,
-1.09355955218051, -0.328157917911376, -0.585575679802941, -0.472837120425201,
-0.503633622750049, -0.0124446353828312, -0.465529814250314,
-0.101621725887347, -0.26988462590405, 0.398726664193302, 0.113805181040188,
0.331353802465398, 0.383592361618461, 0.431647298655434, 0.580036473774238,
0.830404669466897, 1.17919105883462, 0.871037583886711, 1.12290553424174,
0.752564860804382, 0.76897960270623, 1.14738839410786, 0.773151715269892,
0.700611498974798, 0.0412951045437818, 0.303526087747629, -0.139399513324585,
-0.862987735433697, -1.23399179134008, -1.58924289116396, -1.35105117911049,
-0.990144529089174, -1.91175364127672, -1.31836236129543, -1.65955735224704,
-1.83516148300526, -2.03817062501248, -1.66764011409214, -0.552154350554687,
-0.547807883952654, -0.905389222477036, -0.737156477425302, -0.40211249920415,
0.129669958952991, 0.271142753510592, 0.176311762529962, 0.283580281859344,
0.635808289696458, 1.69976647982837, 1.10748978734239, 0.365412229181044,
0.788821368082444, 0.879731888124867, 1.02180766619069, 0.551526067300283])
N = len(y)
squared_distance = lambda x, y: np.array([[(x[i] - y[j])**2 for i in range(len(x))] for j in range(len(y))])
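The nested comprehension above builds a len(y) x len(x) matrix with entry [j, i] = (x[i] - y[j])**2; for larger grids, NumPy broadcasting gives the same result far faster. A quick equivalence check:

```python
import numpy as np

squared_distance = lambda x, y: np.array(
    [[(x[i] - y[j]) ** 2 for i in range(len(x))] for j in range(len(y))])

def squared_distance_fast(x, y):
    # broadcasting: entry [j, i] = (x[i] - y[j])**2
    x, y = np.asarray(x), np.asarray(y)
    return (x[None, :] - y[:, None]) ** 2

a = np.linspace(-5, 5, 7)
b = np.linspace(-6, 6, 11)
assert squared_distance_fast(a, b).shape == (11, 7)
assert np.allclose(squared_distance(a, b), squared_distance_fast(a, b))
```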
with Model() as gp_fit:
μ = np.zeros(N)
η_sq = HalfCauchy('η_sq', 5)
ρ_sq = HalfCauchy('ρ_sq', 5)
σ_sq = HalfCauchy('σ_sq', 5)
D = squared_distance(x, x)
# Squared exponential
Σ = T.fill_diagonal(η_sq * T.exp(-ρ_sq * D), η_sq + σ_sq)
obs = MvNormal('obs', μ, Σ, observed=y)
sns.heatmap(Σ.tag.test_value, xticklabels=False, yticklabels=False)
with gp_fit:
# Prediction over grid
xgrid = np.linspace(-6, 6)
D_pred = squared_distance(xgrid, xgrid)
D_off_diag = squared_distance(x, xgrid)
# Covariance matrices for prediction
Σ_pred = η_sq * T.exp(-ρ_sq * D_pred)
Σ_off_diag = η_sq * T.exp(-ρ_sq * D_off_diag)
# Posterior mean
μ_post = Deterministic('μ_post', T.dot(T.dot(Σ_off_diag, matrix_inverse(Σ)), y))
# Posterior covariance
Σ_post = Deterministic('Σ_post', Σ_pred - T.dot(T.dot(Σ_off_diag, matrix_inverse(Σ)), Σ_off_diag.T))
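A quick sanity check on the posterior formulas above (mean K*' K^-1 y and covariance K** - K*' K^-1 K*): with no observation noise, the GP posterior mean evaluated back at the training inputs must reproduce the targets exactly. A standalone NumPy sketch with a toy squared-exponential kernel:

```python
import numpy as np

def rbf(a, b, eta_sq=1.0, rho_sq=2.0):
    # same squared-exponential form as above: eta_sq * exp(-rho_sq * d^2)
    return eta_sq * np.exp(-rho_sq * (a[:, None] - b[None, :]) ** 2)

X = np.linspace(-2, 2, 5)
y = np.sin(X)

K = rbf(X, X) + 1e-10 * np.eye(len(X))  # tiny jitter in place of sigma_sq -> 0
mu_post = rbf(X, X) @ np.linalg.solve(K, y)

assert np.allclose(mu_post, y, atol=1e-6)  # a noise-free GP interpolates the data
```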
with gp_fit:
gp_trace = pm.variational.svgd(n=300, n_particles=50)
traceplot(gp_trace, varnames=['η_sq', 'ρ_sq', 'σ_sq']);
y_pred = [np.random.multivariate_normal(m, S) for m, S in zip(gp_trace['μ_post'], gp_trace['Σ_post'])]
for yp in y_pred:
plt.plot(np.linspace(-6, 6), yp, 'c-', alpha=0.1);
plt.plot(x, y, 'r.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image classification with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Simple End-to-End Example
Step4: You could replace image_path with your own image folders. As for uploading data to colab, you could find the upload button in the left sidebar shown in the image below with the red rectangle. Just have a try to upload a zip file and unzip it. The root file path is the current path.
Step5: Step 2. Customize the TensorFlow model.
Step6: Step 3. Evaluate the model.
Step7: Step 4. Export to TensorFlow Lite model.
Step8: After these simple 4 steps, we could further use TensorFlow Lite model file in on-device applications like in image classification reference app.
Step 1
Step9: Use DataLoader class to load data.
Step10: Split it to training data (80%), validation data (10%, optional) and testing data (10%).
Step11: Show 25 image examples with labels.
Step12: Step 2
Step13: Have a look at the detailed model structure.
Step14: Step 3
Step15: We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
Step16: If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.
Step 4
Step17: See the image classification examples guide for more details about how to integrate the TensorFlow Lite model into mobile apps.
Step18: You can also evaluate the tflite model with the evaluate_tflite method.
Step19: Advanced Usage
Step20: Then we export the TensorFlow Lite model with such configuration.
Step21: In Colab, you can download the model named model_fp16.tflite from the left sidebar, same as the uploading part mentioned above.
Step22: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
Step23: Change to the model in TensorFlow Hub
Step24: Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.
Step25: Evaluate the newly retrained model with 10 training epochs.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker
import os
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader
import matplotlib.pyplot as plt
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.gen_dataset().unbatch().take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
model = image_classifier.create(train_data, validation_data=validation_data)
model.summary()
loss, accuracy = model.evaluate(test_data)
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.gen_dataset().unbatch().take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
model.export(export_dir='.')
model.export(export_dir='.', export_format=ExportFormat.LABEL)
model.evaluate_tflite('model.tflite', test_data)
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)
inception_v3_spec = image_classifier.ModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
loss, accuracy = model.evaluate(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Trapezoidal rule
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def trapz(f, a, b, N):
"""Integrate the function f(x) over [a,b] using the trapezoidal rule with N subintervals (N+1 sample points)."""
h = (b-a)/N
xvals = np.linspace(a, b, N+1)
yvals = f(xvals)
return 0.5 * np.sum((h*yvals[0], h*yvals[-1], 2*h*np.sum(yvals[1:-1])))
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
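The trapezoidal rule is second-order accurate, so multiplying N by 10 should shrink the error by roughly a factor of 100. A standalone check against the known value of the integral of sin(x) over [0, pi], which is exactly 2:

```python
import numpy as np

def trapz_error(N):
    # composite trapezoidal rule for sin on [0, pi], compared to the exact value 2
    x = np.linspace(0.0, np.pi, N + 1)
    y = np.sin(x)
    h = np.pi / N
    T = h * (0.5 * y[0] + 0.5 * y[-1] + y[1:-1].sum())
    return abs(T - 2.0)

ratio = trapz_error(10) / trapz_error(100)
print(ratio)  # close to 100, consistent with O(h^2) convergence
assert 90 < ratio < 110
```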
# YOUR CODE HERE
iq_f, err_g = integrate.quad(f, 0, 1)
tr_f = trapz(f, 0, 1, 1000)
print(iq_f, err_g)
print(tr_f)
print()
iq_g, err_g = integrate.quad(g, 0, np.pi)
tr_g = trapz(g, 0, np.pi, 1000)
print(iq_g, err_g)
print(tr_g)
assert True # leave this cell to grade the previous one
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert the OSM XML to JSON, extracting three files (nodes, ways, relations) in a single scan.
Step2: Run the conversion.
|
<ASSISTANT_TASK:>
Python Code:
import os
import time
import json
from pprint import *
import lxml
from lxml import etree
import xmltodict, sys, gc
from pymongo import MongoClient
gc.enable() #Enable Garbadge Collection
# 将指定tag的对象提取,写入json文件。
def process_element(elem):
elem_data = etree.tostring(elem)
elem_dict = xmltodict.parse(elem_data,attr_prefix="",cdata_key="")
#print(elem_dict)
if (elem.tag == "node"):
elem_jsonStr = json.dumps(elem_dict["node"])
fnode.write(elem_jsonStr + "\n")
elif (elem.tag == "way"):
elem_jsonStr = json.dumps(elem_dict["way"])
fway.write(elem_jsonStr + "\n")
elif (elem.tag == "relation"):
elem_jsonStr = json.dumps(elem_dict["relation"])
frelation.write(elem_jsonStr + "\n")
# Iterate over all objects and handle each one with process_element.
# Iterative processing; func_element is the handler called for every element.
def fast_iter(context, func_element, maxline):
placement = 0
try:
for event, elem in context:
placement += 1
if (maxline > 0): # cap on converted objects; used for sampling checks when debugging large files
print(etree.tostring(elem))
if (placement >= maxline): break
func_element(elem) # handle each element, i.e. call process_element
elem.clear()
while elem.getprevious() is not None:
del elem.getparent()[0]
except Exception as ex:
print(time.strftime("%Y-%m-%d %X"), ", Error:", ex) # use a literal format: ISOTIMEFORMAT is local to transform()
del context
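The same clear-as-you-go streaming idea works with the standard library's xml.etree.ElementTree when lxml is not available; a minimal self-contained sketch on an in-memory document:

```python
import io
import xml.etree.ElementTree as ET

xml_doc = io.StringIO(
    "<osm>"
    "<node id='1' lat='48.1' lon='11.5'/>"
    "<way id='2'/>"
    "<relation id='3'/>"
    "</osm>")

tags = []
for event, elem in ET.iterparse(xml_doc, events=("end",)):
    if elem.tag in ("node", "way", "relation"):
        tags.append((elem.tag, elem.get("id")))
        elem.clear()  # free each element once it has been handled

print(tags)  # [('node', '1'), ('way', '2'), ('relation', '3')]
```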
#maxline = 0 # sampling/debug limit on converted objects; 0 converts the entire file
def transform(osmfile,maxline = 0):
ISOTIMEFORMAT="%Y-%m-%d %X"
print(time.strftime( ISOTIMEFORMAT),", Process osm XML...",osmfile," =>MaxLine:",maxline)
global fnode
global fway
global frelation
fnode = open(osmfile + "_node.json","w+")
fway = open(osmfile + "_way.json","w+")
frelation = open(osmfile + "_relation.json","w+")
context = etree.iterparse(osmfile,tag=["node","way","relation"])
fast_iter(context, process_element, maxline)
fnode.close()
fway.close()
frelation.close()
print(time.strftime( ISOTIMEFORMAT),", OSM to JSON, Finished.")
# Name of the OSM file to process; change as needed.
osmfile = '../data/osm/muenchen.osm'
transform(osmfile,0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the command again if you get an oauth2client error.
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Create ML dataset using Dataflow </h2>
Step5: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
|
<ASSISTANT_TASK:>
Python Code:
pip install --user apache-beam[gcp]
import apache_beam as beam
print(beam.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
import datetime, os
def to_csv(rowdict):
import hashlib
import copy
# TODO #1:
# Pull columns from BQ and create line(s) of CSV input
CSV_COLUMNS = None
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
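The sha224 key above and the FARM_FINGERPRINT split used in the queries below rest on the same idea: a deterministic hash assigns every record to a stable bucket, so train/eval membership is reproducible across runs. A hypothetical standalone sketch (bucket count and key format chosen purely for illustration):

```python
import hashlib

def bucket(key, n_buckets=4):
    # deterministic bucket from a cryptographic digest of the key
    digest = hashlib.sha224(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

# the same key always lands in the same bucket -> reproducible splits
assert bucket("2005-7") == bucket("2005-7")

rows = ["{}-{}".format(y, m) for y in range(2000, 2010) for m in range(1, 13)]
train = [r for r in rows if bucket(r) < 3]  # roughly 75% of rows
print("{} / {} rows in train".format(len(train), len(rows)))
```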
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
## TODO Task #2: Modify the Apache Beam pipeline such that the first part of the pipe reads the data from BigQuery
| '{}_read'.format(step) >> None
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
# TODO Task #3: Once you have verified that the files produced locally are correct, change in_test_mode to False
# to execute this in Cloud Dataflow
preprocess(in_test_mode = True)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Magpy non-interacting case
Step2: System properties
Step3: Magpy model and simulation
Step4: Simulate an ensemble of 50,000 dimers without interactions.
Step5: System magnetisation shows that the system has relaxed into the local minima (we could relax the system globally but it would take much longer to run since the energy barrier must be overcome).
Step6: Compare to analytic thermal equilibrium
Step7: The joint distribution for both angles is computed analytically and compared with the numerical result.
Step8: We can also compute the marginal distribution (i.e. the equilibrium of just 1 particle). It is easier to see the alignment of the two distributions.
Step9: Magpy interacting case
Step10: The interaction strength is very strong (actually the particles are impossibly close). The following command is identical to above except that interactions=True
Step11: System relaxation
Step12: Thermal equilibrium
Step13: We use the marginal distribution again to check the convergence. We also compare to the interacting case
Step14: Possible sources of error
Step15: Cartesian
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# dipole interaction energy
def dd(t1, t2, p1, p2, nu):
return -nu*(2*np.cos(t1)*np.cos(t2) - np.sin(t1)*np.sin(t2)*np.cos(p1-p2))
# anisotropy energy
def anis(t1, t2, sigma):
return sigma*(np.sin(t1)**2 + np.sin(t2)**2)
# total energy
def tot(t1, t2, p1, p2, nu, sigma):
return dd(t1, t2, p1, p2, nu) + anis(t1, t2, sigma)
# numerator of the Boltzmann distribution (i.e. ignoring the partition function Z)
def p_unorm(t1,t2,p1,p2,nu,sigma):
return np.sin(t1)*np.sin(t2)*np.exp(-tot(t1, t2, p1, p2, nu, sigma))
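When nu = 0 the dipole term vanishes, so the joint Boltzmann weight factorizes into a product of independent single-particle terms sin(t)*exp(-sigma*sin(t)**2); this is why the non-interacting marginals later match the single-particle equilibrium. A standalone numerical check (restating the helpers above so the snippet runs on its own):

```python
import numpy as np

# restating the energy helpers above for a self-contained check
dd = lambda t1, t2, p1, p2, nu: -nu * (2*np.cos(t1)*np.cos(t2)
                                       - np.sin(t1)*np.sin(t2)*np.cos(p1 - p2))
anis = lambda t1, t2, sigma: sigma * (np.sin(t1)**2 + np.sin(t2)**2)
p_unorm = lambda t1, t2, p1, p2, nu, sigma: (
    np.sin(t1) * np.sin(t2) * np.exp(-(dd(t1, t2, p1, p2, nu) + anis(t1, t2, sigma))))

single = lambda t, sigma: np.sin(t) * np.exp(-sigma * np.sin(t)**2)

rng = np.random.RandomState(1)
t1, t2 = rng.uniform(0, np.pi, 200), rng.uniform(0, np.pi, 200)
p1, p2 = rng.uniform(0, 2*np.pi, 200), rng.uniform(0, 2*np.pi, 200)

# nu = 0: the azimuthal coupling drops out and the joint weight factorizes
assert np.allclose(p_unorm(t1, t2, p1, p2, 0.0, 1.0),
                   single(t1, 1.0) * single(t2, 1.0))
```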
# non interacting
from scipy.integrate import nquad
sigma, nu = 1.0, 0.0
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi), (0, np.pi), (0, 2*np.pi), (0, 2*np.pi)]
)
print(Z[0])
Z = nquad(
lambda t1, t2: p_unorm(t1, t2, 0, 0, nu, sigma),
ranges=[(0, np.pi), (0, np.pi)]
)
print(Z[0] * 4 * np.pi**2)
import magpy as mp
K = 1e5
r = 7e-9
T = 330
Ms=400e3
R=9e-9
kdir = [0, 0, 1]
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
direction = np.array([0, 0, 1], dtype=np.float)
alpha = 1.0
base_model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
ensemble = mp.EnsembleModel(50000, base_model)
res = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=8, implicit_solve=True,
interactions=False)
m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0 = np.arccos(m_z0)
theta1 = np.arccos(m_z1)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Non-interacting dimer ensemble magnetisation');
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = 0
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
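The reduced barrier sigma = KV/(k_B*T) can be cross-checked without magpy using the SI value of Boltzmann's constant (this assumes mp.core.get_KB() returns k_B in SI units); with K = 1e5 J/m^3, r = 7 nm and T = 330 K it comes out near 31.5:

```python
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant, J/K (exact SI value)
K, r, T = 1e5, 7e-9, 330.0    # anisotropy (J/m^3), radius (m), temperature (K)

V = 4.0 / 3.0 * np.pi * r**3
sigma_check = K * V / (k_B * T)
print("Sigma: {:.3f}".format(sigma_check))  # ~31.5
```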
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi/2), (0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)]
)
print(Z[0])
Z=Z[0]
ts = np.linspace(min(theta0), max(theta0), 100)
bdist = [[
nquad(lambda p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z, ranges=[(0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
for t2 in ts]
plt.hist2d(theta0, theta1, bins=30, normed=True);
plt.contour(ts, ts, bdist, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
b_marginal = [nquad(
lambda t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z,
ranges=[(0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
plt.hist(theta0, bins=50, normed=True)
plt.plot(ts, np.array(b_marginal))
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = mp.core.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi**2 / R**3 / mp.core.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
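# Sketch (illustrative; hard-coded physical constants rather than the
# mp.core getters): the reduced interaction strength
# nu = mu0*V^2*Ms^2/(2*pi^2*R^3*kB*T) for the parameters above.
import math
kB_nu = 1.380649e-23             # Boltzmann constant in J/K (assumed value)
mu0_nu = 4*math.pi*1e-7          # vacuum permeability in SI units
V_nu = 4.0/3*math.pi*(7e-9)**3   # particle volume for r = 7 nm
nu_check = mu0_nu*V_nu**2*(400e3)**2/(2*math.pi**2*(9e-9)**3*kB_nu*330)  # ~6.3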
res = ensemble.simulate(end_time=1e-9, time_step=1e-13,
max_samples=500, random_state=1001,
n_jobs=8, implicit_solve=False,
interactions=True, renorm=True)
m_z0i = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1i = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0i = np.arccos(m_z0i)
theta1i = np.arccos(m_z1i)
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Interacting dimer ensemble magnetisation');
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = 1.0 * mp.core.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / np.pi / R**3 / mp.core.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi/2), (0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)]
)[0]
ts = np.linspace(min(theta0), max(theta0), 100)
bdist = [[
nquad(lambda p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z, ranges=[(0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
for t2 in ts]
plt.hist2d(theta0i, theta1i, bins=30, normed=True);
# ts = np.linspace(min(theta0), max(theta0), 100)
# b = boltz_2d(ts, nu, sigma)
plt.contour(ts, ts, bdist, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
b_marginal = [nquad(
lambda t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z,
ranges=[(0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
plt.hist(theta0i, bins=50, normed=True, alpha=0.6, label='Magpy + inter.')
plt.hist(theta0, bins=50, normed=True, alpha=0.6, label='Magpy + no inter.')
plt.plot(ts, b_marginal, label='Analytic')
plt.legend();
import pymc3 as pm
with pm.Model() as model:
z1 = pm.Uniform('z1', 0, 1)
theta1 = pm.Deterministic('theta1', np.arccos(z1))
z2 = pm.Uniform('z2', 0, 1)
theta2 = pm.Deterministic('theta2', np.arccos(z2))
phi1 = pm.Uniform('phi1', 0, 2*np.pi)
phi2 = pm.Uniform('phi2', 0, 2*np.pi)
energy = tot(theta1, theta2, phi1, phi2, nu, sigma)
like = pm.Potential('energy', -energy)
with model:
step = pm.NUTS()
trace = pm.sample(500000, step=step)
pm.traceplot(trace)
plt.hist(trace['theta1'], bins=200, normed=True);
plt.plot(ts, b_marginal, label='Analytic')
plt.hist(trace['theta1'], bins=200, normed=True, alpha=0.6);
plt.hist(theta0i, bins=50, normed=True, alpha=0.6, label='Magpy + inter.');
def cart_energy(mx1, my1, mz1, mx2, my2, mz2, nu, sigma):
    # Completed from the unfinished original cell (assumption: same reduced
    # units as the spherical energy above, with the separation axis along z)
    anis = sigma*(1 - mz1**2) + sigma*(1 - mz2**2)  # sin^2(theta) = 1 - mz^2
    dot12 = mx1*mx2 + my1*my2 + mz1*mz2
    inter = -nu*(3*mz1*mz2 - dot12)                 # dipolar term for r_hat = z_hat
    return anis + inter
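# Self-contained sketch (illustrative): a Cartesian form of the dimer energy,
# assuming uniaxial anisotropy along z and the dipolar term for a separation
# axis along z, in the same reduced units sigma and nu as above.
def dimer_energy_cart(m1, m2, nu_, sigma_):
    mx1, my1, mz1 = m1
    mx2, my2, mz2 = m2
    anis = sigma_*(1 - mz1**2) + sigma_*(1 - mz2**2)  # sin^2(theta) = 1 - mz^2
    dot12 = mx1*mx2 + my1*my2 + mz1*mz2
    return anis - nu_*(3*mz1*mz2 - dot12)
# Both moments aligned with the easy axis: zero anisotropy, -2*nu coupling
e_aligned = dimer_energy_cart((0, 0, 1), (0, 0, 1), 1.0, 10.0)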
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: but maybe you don't want that triple pinky swear value available to just anyone to search on
Step2: this document will no longer match the original
Step3: now we need the secret key,
Step4: Appendix - Generating Keys
|
<ASSISTANT_TASK:>
Python Code:
PUBLIC_KEY = Binary('\x8eB\x11\xd5ht\x93\x05\xee\xed\x10\xad\xb4\x90\xb7]\x92\x04\xac\x82\xb5\xa2"v\xf9[\xd6^\x14\x8b\x12\x1d', 0)
sensitive_subdocument = {"sensitive":"triple pinky swear"}
document = {
"_id":1,
"name":"bsonsearch",
"super_secret_data":sensitive_subdocument
}
print document
plaintext_spec = {"super_secret_data.sensitive":"triple pinky swear"}
plaintext_matc = bc.generate_matcher(plaintext_spec)
print "Matches", bc.match(plaintext_matc, document)
secure_sensitive_subdocument = Binary(pysodium.crypto_box_seal(bson.BSON.encode(sensitive_subdocument),
PUBLIC_KEY))
secure_document = {
"_id":1,
"name":"bsonsearch",
"super_secret_data": secure_sensitive_subdocument
}
print secure_document
print "Matches", bc.match(plaintext_matc, secure_document)
SECRET_KEY = Binary('\xe2\x16\x9a,\xb1\x9b\xb4\xf67\xe9\xf8\x83\x0f"_\xa8}t\xd2i:\xbb\xfd\xb5\x8a\x89X.\x1b\x13\x92Z', 0)
spec_decrypt = {"super_secret_data":{"$sealOpen":{"$keys":{"pk":PUBLIC_KEY,
"sk":SECRET_KEY},
"$query":{"sensitive":"triple pinky swear"}
}
}
}
matcher_decrypt = bc.generate_matcher(spec_decrypt)
print "Decrypt with key matches ---->", bc.match(matcher_decrypt, secure_document)
print "Non-decrypt should be false ->", bc.match(plaintext_matc, secure_document)
SECURE_KEYPAIR = pysodium.crypto_box_keypair()
SECURE_KEYPAIR_DICT = {"pk":Binary(SECURE_KEYPAIR[0]),
"sk":Binary(SECURE_KEYPAIR[1])}
SECURE_KEYPAIR_DICT
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps Followed to train
Step1: This will load TensorFlow's default graph with the Inception graph.
Step 2
Step2: This is done for all the images (Train and Test) and they are saved as serialized Numpy arrays.
Step3: This step usually takes some time if the dataset is large (which partially defeats the purpose), so it's best to save the features to disk and make it a one-time process. It reduces the dataset from $None\times96\times96\times3$ to $None\times2048$, a great reduction in size. In other words, each 3-channel $96\times96$ image is efficiently feature-extracted by Inception into a vector of 2048 values, paving the way for a simple neural network to do the classification.
Step 3
Step4: For the evaluation and monitoring of the model's performance throughout training and the final test accuracy, the following method is employed.
Step5: Step 4
Step6: Finally, run the trainer method after loading the saved bottleneck data.
Step7: Trained on 5000 samples, the model achieved a very high train accuracy of 98.68% with a loss of 0.03, and the test accuracy on 8000 samples was 86.83%. Compared with the state-of-the-art semi-supervised result on the STL-10 dataset, 74.3%, the accuracy obtained with a pre-trained network and very little compute is much better, and much faster to reach.
Step8: The script was run with a standard-GPU (Single Tesla K80 GPU) tier on the GCP's Cloud ML Engine API for training. There are 4 convolutional layers with filter window size of 4$\times$4 for every filter in every layer. Below are the montage of images of every layer of encoder for an image that is obtained after training.
Step9: This loads the TensorFlow default graph with our model's graph and weights. Unlike Inception's pool_3, we choose our bottleneck layer to be the last encoder layer, the 'encoder/layer_3/h' which is the ReLU output
Step10: The rest remains the same, i.e., saving bottlenecks, adding the final layer, defining the evaluation step, and finally training the fully connected model on the training examples. Below is the terminal output of training the fully connected model; for clarity, only every 50th epoch is shown
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.python.platform import gfile
import tensorflow as tf
import numpy as np
model='../inception/classify_image_graph_def.pb'
def create_graph():
'''
Function to extract GraphDef of Inception model.
Returns: Extracted GraphDef
'''
with tf.Session() as sess:
with gfile.FastGFile(model,'rb') as f:
graph_def=tf.GraphDef()
graph_def.ParseFromString(f.read())
_=tf.import_graph_def(graph_def,name='')
return sess.graph
def batch_pool3_features(sess,X_input):
'''
Function to extract features for a given batch of images by
passing it through Inception model until pool_3 layer to get bottlenecks
Args: Current Session, Batch of Images of size:batch_sizex96x96x3
Returns: Array of 2048 features extracted for every image by Inception
'''
n_train=X_input.shape[0]
pool3=sess.graph.get_tensor_by_name('pool_3:0')
x_pool3=[]
for i in range(n_train):
print ("Iteration: "+str(i))
features=sess.run(pool3,{'DecodeJpeg:0':X_input[i,:]})
x_pool3.append(np.squeeze(features))
return np.array(x_pool3)
def bottleneck_pool3(sess, X_input, save_path):
    '''
    Helper (assumed; not shown in the original notebook): extract pool_3
    features for a batch of images and serialize them to disk
    '''
    np.save(save_path, batch_pool3_features(sess, X_input))

def bottleneck_data(sess):
    '''
    Function to load STL data (via load_stl_data, defined elsewhere in the
    notebook) and serialize their bottleneck features to disk
    Args: TensorFlow session
    '''
    X_train, Y_train, X_test, Y_test=load_stl_data(one_hot=True)
    bottleneck_pool3(sess, X_train, './X_train.npy')
    bottleneck_pool3(sess, X_test, './X_test.npy')
    np.save('./Y_train.npy', Y_train)
    np.save('./Y_test.npy', Y_test)
BOTTLENECK_TENSOR_NAME='pool_3'
BOTTLENECK_TENSOR_SIZE=2048
def add_final_training_layer(class_count, final_tensor_name,\
ground_truth_tensor_name, learning_rate=1e-3):
'''
Function to define the FC, Softmax classifier model to Classify the serialized
images. Has Gradient Descent Optimizer.
Includes Dropout layers and a 2048-1024-512-10 network
Args: No. of classes, final tensor name of the FC network,
Ground Truth Tensor name, Learning rate for Optimizer
Returns: Train Op and Cost of the model
'''
layers=[1024, 512, 10]
keep_prob=0.75
bottleneck_input=tf.placeholder(tf.float32,\
shape=[None, BOTTLENECK_TENSOR_SIZE], name='BottleneckInput')
currentInput=bottleneck_input
n_input=BOTTLENECK_TENSOR_SIZE
for layer, output_size in enumerate(layers):
with tf.variable_scope('fc/layer{}'.format(layer)):
W=tf.get_variable(name='W', shape=[n_input, output_size], \
initializer=tf.random_normal_initializer(mean=0.0,stddev=0.01))
b=tf.get_variable(name='b',shape=[output_size],\
initializer=tf.constant_initializer([0]))
h=tf.matmul(currentInput,W)+b
n_input=output_size
if output_size!=layers[2]:
h=tf.nn.tanh(h,name='h')
else:
final_tensor=tf.nn.softmax(h, name=final_tensor_name)
h=tf.nn.dropout(h,keep_prob)
currentInput=h
Y=tf.placeholder(tf.float32, shape=[None,class_count],\
name=ground_truth_tensor_name)
cross_entropy=tf.nn.softmax_cross_entropy_with_logits(logits=h, labels=Y)
cost=tf.reduce_mean(cross_entropy)
train_step=tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
return train_step, cost
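# Sketch (illustrative): what softmax_cross_entropy_with_logits computes,
# worked by hand for a single 3-class example.
import math
logits_ex = [2.0, 1.0, 0.1]
label_ex = [1, 0, 0]
exps_ex = [math.exp(l) for l in logits_ex]
probs_ex = [e/sum(exps_ex) for e in exps_ex]
xent_ex = -sum(y*math.log(p) for y, p in zip(label_ex, probs_ex))  # ~0.417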
def ensure_port(tensor_name):
    '''
    Helper (assumed; not shown in the original notebook): append the ':0'
    output-port suffix to a tensor name if it is missing
    '''
    return tensor_name if ':' in tensor_name else tensor_name + ':0'

def evaluation_step(graph, final_tensor_name, ground_truth_tensor_name):
    '''
    Function to evaluate the performance of the model by calculating the
    accuracy of prediction
    Args: Final Tensor and Ground Truth Tensor Name, TensorFlow Graph
    Return: Evaluation Tensor
    '''
    result_tensor=graph.get_tensor_by_name(ensure_port(final_tensor_name))
    Y_tensor=graph.get_tensor_by_name(ensure_port(ground_truth_tensor_name))
    correct_pred=tf.equal(tf.argmax(result_tensor,1),tf.argmax(Y_tensor,1))
    eval_step=tf.reduce_mean(tf.cast(correct_pred,'float'))
    return eval_step
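# Sketch (illustrative): the same argmax-equality accuracy computed with
# plain Python lists instead of TensorFlow ops.
preds_ex = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
truth_ex = [[0, 1], [1, 0], [1, 0]]
argmax_ex = lambda v: max(range(len(v)), key=v.__getitem__)
acc_ex = sum(argmax_ex(p) == argmax_ex(t)
             for p, t in zip(preds_ex, truth_ex))/float(len(preds_ex))  # 2/3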
def trainer(sess, X_input, Y_input, X_test, Y_test):
'''
Function to train the FC model with a Softmax activation for output layer
Args: TensorFlow Session, Bottlenecked Images for training and testing
and corresponding labels
'''
ground_truth_tensor_name='ground_truth'
# Define Batch size
mini_batch_size=250
n_train=X_input.shape[0]
graph=create_graph()
# Get the train op and loss function
train_step,cross_entropy=add_final_training_layer\
(n_classes, final_tensor_name, ground_truth_tensor_name, learning_rate)
# Intiliaze all variables
sess.run(tf.global_variables_initializer())
# Get evaluation tensor
eval_step=evaluation_step(graph, \
'fc/layer2/'+final_tensor_name, ground_truth_tensor_name)
# Get tensors for Input and Output
bottleneck_input=graph.get_tensor_by_name(ensure_port('BottleneckInput'))
Y=graph.get_tensor_by_name(ensure_port(ground_truth_tensor_name))
# Define number of epochs
epochs=3500
# Perform training for number of epochs defined
for epoch in range(epochs):
# Shuffle the examples
shuffle=np.random.permutation(n_train)
shuffle_X=X_input[shuffle,:]
shuffle_Y=Y_input[shuffle]
# Perform batch training
for Xi, Yi in iterate_batches(shuffle_X, shuffle_Y, mini_batch_size):
sess.run(train_step, feed_dict={bottleneck_input:Xi, Y:Yi})
# Print out model's performance after every epoch
train_accuracy, train_cross_entropy=\
sess.run([eval_step,cross_entropy], \
feed_dict={bottleneck_input:X_input, Y:Y_input})
print ("Epoch %d: Train accuracy:%0.2f, Cross Entropy:%0.2f"\
%(epoch,train_accuracy*100,train_cross_entropy))
# Get the test accuracy after training is complete
test_accuracy=sess.run(eval_step, \
feed_dict={bottleneck_input:X_test, Y:Y_test})
print('Final Test Accuracy:%0.2f' %(test_accuracy*100))
def load_bottleneck_data():
    '''
    Helper (assumed; not shown in the original notebook): load the serialized
    bottleneck arrays saved earlier by bottleneck_data()
    '''
    return (np.load('./X_train.npy'), np.load('./Y_train.npy'),
            np.load('./X_test.npy'), np.load('./Y_test.npy'))

n_classes=10
X_train, Y_train, X_test, Y_test=load_bottleneck_data()
final_tensor_name='final_result'
learning_rate=0.001
# Create TensorFlow session and train model
sess=tf.InteractiveSession()
trainer(sess, X_train, Y_train, X_test, Y_test)
'''
Python script which is the trainer task to train the CAE on Google Cloud
Platform.
Requires Google Cloud Platform account, Training data and scripts are to be
placed inside Cloud Storage Bucket
'''
import numpy as np
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from datetime import datetime
import logging
import argparse, os
from StringIO import StringIO
tf.reset_default_graph()
# Batch size to be inputted
batch_size=500
# Filter window size for every layer
filter_size=[4,4,4,4]
def iterate_batches(x_in, batch_size):
'''
Function to randomly shuffle and yield batches for training
Args: Unlabeled images and batch size
Returns: Batch of images, shuffled
'''
new_perm=np.random.permutation(range(len(x_in)))
epoch_images=x_in[new_perm, ...]
current_batch_id=0
while current_batch_id < len(x_in):
end=min(current_batch_id+batch_size,len(x_in))
batch_images={'images': epoch_images[current_batch_id:end]}
current_batch_id+=batch_size
yield batch_images['images']
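# Sketch (illustrative): the same fixed-size batching with plain sequences,
# minus the per-epoch shuffling done above.
def simple_batches(items, size):
    for start in range(0, len(items), size):
        yield items[start:start + size]
batch_sizes = [len(b) for b in simple_batches(range(10), 4)]  # [4, 4, 2]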
def train_model(train_file='../Unlabeled_X.npy', job_dir='./tmp/autoencoder', \
output_dir='../output/', learning_rate=0.001, n_epochs=300, **args):
'''
Function to train the CAE by taking in batches of images. Requires
arguments to be passed while initiating the job on GCP. Saves the model in
the Bucket every 10 epochs
Args: Location of Training data (Cloud Storage Bucket), job-directory to
output logs of the job, learning rate and number of iterations for training
'''
logs_path=job_dir+'/logs/'+datetime.now().isoformat()
output_file=os.path.join(output_dir,'saved-autoencoder-model')
logging.info('_____________________')
logging.info('Using Train File located at {}'.format(train_file))
logging.info('Using Logs_path located at {}'.format(logs_path))
logging.info('_____________________')
file_string=StringIO(file_io.read_file_to_string(train_file))
with tf.Graph().as_default():
sess=tf.InteractiveSession()
X_input=np.load(file_string)
idx=range(len(X_input))
# Shuffle Data
rand_idxs=np.random.permutation(idx)
X_input=X_input[rand_idxs,...]
logging.info('Unlabeled Dataset loaded')
features=X_input.shape[1]
# Number of filters for every layer
n_filters=[64,64,64,64]
# Create placeholder for image tensor
X=tf.placeholder(tf.float32, shape=[None, features], name='X')
X_image_tensor=tf.reshape(X, [-1, 96, 96, 3])
currentInput=X_image_tensor
n_input=currentInput.get_shape().as_list()[3]
Ws=[]
shapes=[]
# Build a 4-layer convolutional encoder model by appending weights
# dimensions for decoder
for layer, output_size in enumerate(n_filters):
with tf.variable_scope("encoder/layer_{}".format(layer)):
shapes.append(currentInput.get_shape().as_list())
W=tf.get_variable(name='W', shape=[filter_size[layer],\
filter_size[layer],\
n_input, output_size],\
initializer=\
tf.random_normal_initializer(mean=0.0,stddev=0.01))
b=tf.get_variable(name='b', shape=[output_size], initializer=\
tf.constant_initializer([0]))
h=(tf.add(tf.nn.conv2d(currentInput, W, strides=[1,2,2,1],\
padding='SAME'),b))
h=tf.nn.relu(h,name='h')
currentInput=h
Ws.append(W)
n_input=output_size
# Reverse weights matrix and shape matrix for decoder
Ws.reverse()
shapes.reverse()
n_filters.reverse()
n_filters=n_filters[1:]+[3]
# Decoder for reconstruction of images
for layer, output_size in enumerate(shapes):
with tf.variable_scope('decoder/layer_{}'.format(layer)):
W=Ws[layer]
b = tf.Variable(tf.zeros([W.get_shape().as_list()[2]]))
output_shape=tf.stack([tf.shape(X)[0], \
output_size[1],output_size[2],output_size[3]])
h=(tf.add(tf.nn.conv2d_transpose(currentInput, W, output_shape=output_shape, \
strides=[1,2,2,1],padding='SAME'),b))
h=tf.nn.relu(h,name='h')
currentInput=h
# Final Placeholder
Y=currentInput
Y=tf.reshape(Y,[-1,96*96*3])
cost=tf.reduce_mean(tf.reduce_mean(tf.squared_difference(X,Y),1))
optimizer=tf.train.AdamOptimizer(float(learning_rate)).minimize(cost)
# Initiate Saver Instance
saver=tf.train.Saver()
# Initialize variables
sess.run(tf.global_variables_initializer())
# Start training
for i in range(int(n_epochs)):
for batch_img in iterate_batches(X_input, batch_size=batch_size):
sess.run(optimizer,feed_dict={X:batch_img})
# Every 10 epochs, report performance and save model graph and weights
if i%10==0:
logging.info('Epoch:{0}, Cost={1}'.format(i, \
sess.run(cost, feed_dict={X: batch_img})))
saver.save(sess, output_file, global_step=0)
logging.info('Model Saved')
if __name__=='__main__':
parser=argparse.ArgumentParser()
parser.add_argument('--train-file', help='GCS or local paths to train data',\
required=True)
parser.add_argument('--job-dir', help='GCS location to write \
checkpoints and export models', required=True)
parser.add_argument('--output_dir', help='GCS location \
to write model', required=True)
parser.add_argument('--learning-rate', help='Learning Rate', required=True)
parser.add_argument('--n-epochs', help='Number of epochs', required=True)
args=parser.parse_args()
arguments=args.__dict__
train_model(**arguments)
def create_graph(sess):
'''
Function to extract Graph and model from the trained CAE.
'''
saver=tf.train.import_meta_graph(model_meta)
saver.restore(sess, model)
def extract_features(sess, X_input):
'''
    Function to extract features for a given batch of images by
    passing them through the CAE model up to the ReLU output of encoder layer 3
    Args: Current Session, Images
    Returns: Array of 2304 features extracted for every image by the CAE encoder
'''
encoder_relu=sess.graph.get_tensor_by_name('encoder/layer_3/h:0')
features=sess.run(encoder_relu, feed_dict={'X:0':X_input})
return features
C:\Users\Sharath\Documents\Machine Learning\Startup.ML\gcloud\transfer learning>python fc_train.py
2017-08-17 10:53:33.952634: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.952990: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.954768: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.955172: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.956779: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.958236: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.959671: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:33.960981: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-17 10:53:35.855224: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:940] Found device 0 with properties:
name: GeForce GT 740M
major: 3 minor: 5 memoryClockRate (GHz) 1.0325
pciBusID 0000:0a:00.0
Total memory: 2.00GiB
Free memory: 1.67GiB
2017-08-17 10:53:35.855834: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:961] DMA: 0
2017-08-17 10:53:35.860241: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: Y
2017-08-17 10:53:35.862274: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 740M, pci bus id: 0000:0a:00.0)
Epoch 0: Train accuracy:13.04, Cross Entropy:2.30
Epoch 50: Train accuracy:35.78, Cross Entropy:1.81
Epoch 100: Train accuracy:41.82, Cross Entropy:1.66
Epoch 150: Train accuracy:59.64, Cross Entropy:1.35
Epoch 200: Train accuracy:68.78, Cross Entropy:1.14
Epoch 250: Train accuracy:63.56, Cross Entropy:1.26
Epoch 300: Train accuracy:82.28, Cross Entropy:0.74
Epoch 350: Train accuracy:77.56, Cross Entropy:0.88
Epoch 400: Train accuracy:91.80, Cross Entropy:0.49
Epoch 450: Train accuracy:85.94, Cross Entropy:0.62
Epoch 500: Train accuracy:92.42, Cross Entropy:0.46
Epoch 550: Train accuracy:97.96, Cross Entropy:0.33
Final Test Accuracy for CAE Transfer Learning:43.63
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running the examples
Step2: Example 2
Step3: Example 3
Step4: His Code
Step5: Creating User Vecs
Step6: Exploring the Data
Step7: candidates are items
Step8: These features are from the images of the previous users. These are the average of the previous images viewed.
Step9: These features are from the users' previously viewed images; they are cluster labels assigned to each image. In a later algorithm, it's easier to compare similarities only between items in the same cluster (rather than every pairwise image comparison).
Step10: Expanded view list does something with the 'expand browses' and uses formar14_pv.
Step11: Same number of goods in candidates list as in the buylist
Step12: He has features for all 9863 goods. Does he have the images for those?
Step13: What are the item views for each person?
Step14: Dates
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle
from IPython.core.debugger import Tracer
import seaborn as sns
%matplotlib inline
import tensorflow as tf
import sklearn
import h5py
import keras
from keras.preprocessing import image
from resnet50 import ResNet50
from imagenet_utils import preprocess_input, decode_predictions
model = ResNet50(weights='imagenet')
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
#model.
img_path = 'img/euro/EUROMODA-U125256-39-5.jpg'
img = image.load_img(img_path, target_size=(224, 224))
img
x = image.img_to_array(img)
x.shape
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x.shape
plt.imshow(x[0,:,:,0])
preds = model.predict(x)
plt.plot(preds.T)
preds.shape
print('Predicted:', decode_predictions(preds))
# print: [[u'n02504458', u'African_elephant']]
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input
model = VGG16(weights='imagenet', include_top=False)
img_path = '1360x.jpeg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x.shape
features = model.predict(x)
print features
features.shape
## save out instead
# from keras.utils import plot_model
# plot_model(model, to_file='model.png')
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
from vgg19 import VGG19
from keras.preprocessing import image
from imagenet_utils import preprocess_input
from keras.models import Model
base_model.input
base_model.get_layer('block5_pool')
# the layers appear to be keras objects.
base_model.get_layer('block5_pool').output
# input and output appear to be tensorflow tensors.
base_model = VGG19(include_top=False, weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('block5_pool').output)
# this Model creates a model based on some input tensor and some output tensor.
# here we've taken the base_model, grabbed its input layer and its 'block5_pool' layer,
# and created a new model containing all the layers in between them.
#img_path = '1360x.jpeg'
img_path = 'img/euro/EUROMODA-U125256-39-5.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
img
block4_pool_features = model.predict(x) # now we predict based on those layers
block4_pool_features.shape # same shape as before.
np.shape(block4_pool_features.tolist()[0][0])
import itertools
flattened_list = list(itertools.chain(*block4_pool_features.tolist()[0][0])) # * will unpack
for item in flattened_list: print item
import numpy as np
from vgg19 import VGG19
from resnet50 import ResNet50
from xception import Xception
from keras.preprocessing import image
from imagenet_utils import preprocess_input
from keras.models import Model
import itertools
def get_middle_layer(img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
block4_pool_features = model.predict(x)
flattened_list = list(itertools.chain(*block4_pool_features.tolist()[0][0]))
return flattened_list
def dot(K, L):
if len(K) != len(L):
return 0
return sum(i[0] * i[1] for i in zip(K, L))
def similarity(item_1, item_2):
return dot(item_1, item_2) / np.sqrt(dot(item_1, item_1) * dot(item_2, item_2))
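# Quick self-contained sanity check of the cosine similarity above
# (illustrative; re-implemented here so it runs on its own).
import math
def cosine_check(a, b):
    num = sum(x*y for x, y in zip(a, b))
    return num/math.sqrt(sum(x*x for x in a)*sum(y*y for y in b))
cos_orth = cosine_check([1, 0], [0, 1])   # orthogonal vectors -> 0
cos_par = cosine_check([1, 2], [2, 4])    # parallel vectors -> 1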
import os
import sys
base_model = ResNet50(include_top=False, weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output)
path = 'img/baiyi'
features = dict()
for filename in os.listdir(path): # loop through all images in the folder
img_path = path + '/' + filename
features[filename] = get_middle_layer(img_path) # get the features from the middle layer
len(features['Baiyixiuzi-B978N340-5.jpg'])
import itertools
similarities = {item: similarity(features[item[0]], features[item[1]]) for item in itertools.product(features,features)}
for key, item in similarities.items():
print key[0] + '|' + key[1] + '|' + str(item)
similarities
import math
import numpy as np
import pandas as pd
import pickle
import sys
sys.path.append('4.personalization/')
import utils.kmeans as kmeans
from collections import Counter
def average(lists):
#Tracer()()
return [np.mean(i) for i in zip(*[l for l in lists])]
def cluster(lists, model):
#Tracer()()
user_cluster = kmeans.predict(np.array([l for l in lists]), model)
user_vec = [0] * model.n_clusters
for i in user_cluster: user_vec[i] += 1
return [elem / float(sum(user_vec)) for elem in user_vec]
user_log = pd.read_pickle('4.personalization/data/viewlist_imagef.pkl')
user_vec = user_log.groupby(['user_id', 'dt'])['features'].apply(lambda x: average(x))
user_log.groupby(['user_id', 'dt'])
user_log[0:100].groupby(['user_id', 'dt'])['features'].apply(lambda x: x)
f = lambda y: y**2
f(7)
#user_log[0:100].groupby(['user_id', 'dt'])['goods_no'].apply(lambda x: x)
model_path='4.personalization/utils/model.pkl'
model = kmeans.load_model(model_path)
type(model)
model.cluster_centers_.shape
user_vec = user_log.groupby(['user_id', 'dt'])['features'].apply(lambda x: cluster(x, model))
len(user_vec[0])
buylist = pd.read_pickle('4.personalization/data/buylist_imagef2.pkl')
buylist.head(20)
candidates = pd.read_pickle('4.personalization/data/candidates.pkl')
candidates.head()
candidates_cluster = pd.read_pickle('4.personalization/data/candidates_cluster.pkl')
candidates_cluster.head()
user_vec_average = pd.read_pickle('4.personalization/data/user_vec_average.pkl')
user_vec_average.head()
user_vec_average_no_pv = pd.read_pickle('4.personalization/data/user_vec_average_no_pv.pkl')
user_vec_average_no_pv.head()
user_vec_cluster = pd.read_pickle('4.personalization/data/user_vec_cluster.pkl')
user_vec_cluster.head()
len(user_vec_cluster['features'][0])
viewlist = pd.read_pickle('4.personalization/data/viewlist_imagef.pkl')
viewlist.head(100)
viewlist_exp = pd.read_pickle('4.personalization/data/viewlist_imagef_exp.pkl')
viewlist_exp.head(100)
print('rows in buylist: {0}').format(str(len(buylist)))
print('rows in viewlsit: {0}').format(str(len(viewlist)))
print('rows in viewlsit expanded: {0}').format(str(len(viewlist_exp)))
print('rows in candidates: {0}').format(str(len(candidates)))
print('rows in candidates cluster: {0}').format(str(len(candidates)))
print('rows in user_vec_average: {0}').format(str(len(user_vec_average)))
print('number of users in buylist: {0}').format(str(len(set(buylist.user_id.unique()))))
print('number of users in viewlist: {0}').format(str(len(set(viewlist.user_id.unique()))))
print('number of users in both sets: {0}').format(str(len(set(buylist.user_id.unique()).intersection(set(viewlist.user_id.unique())))))
print('number of goods in buylist: {0}').format(str(len(set(buylist.goods_no.unique()))))
print('number of goods in viewlist: {0}').format(str(len(set(viewlist.goods_no.unique()))))
print('number of goods in both sets: {0}').format(str(len(set(buylist.goods_no.unique()).intersection(set(viewlist.goods_no.unique())))))
viewlist.features[3000]
for uid in viewlist_exp.user_id.unique():
print('user: {0}').format(uid)
indices = viewlist_exp.loc[viewlist_exp.user_id==uid].index.tolist()
print('places in database: {0}').format(indices)
print('')
#viewlist.dt
uid = 18318014
indices = viewlist_exp.loc[viewlist_exp.user_id==uid].index.tolist()
indexlast = -1
print('single user: {0}').format(uid)
for index in indices:
# find product
if index-indexlast>1:
print('')
#print('new session')
print('row {0}, good number {1} date {2}').format(index,viewlist_exp.loc[index,'goods_no'],viewlist_exp.loc[index,'dt'])
    indexlast = index
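# Sketch (illustrative): the gap rule used above, as a helper that splits a
# sorted list of row indices into sessions wherever consecutive rows are not
# adjacent.
def split_sessions(row_indices):
    sessions = [[row_indices[0]]]
    for prev, nxt in zip(row_indices, row_indices[1:]):
        if nxt - prev > 1:
            sessions.append([])
        sessions[-1].append(nxt)
    return sessions
session_demo = split_sessions([1, 2, 3, 7, 8, 12])  # [[1, 2, 3], [7, 8], [12]]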
dates_per_user = np.array([])
for uid in viewlist_exp.user_id.unique():
dates_per_user = np.append(dates_per_user,len(viewlist_exp.loc[viewlist_exp.user_id==uid,'dt'].unique()))
plt.hist(dates_per_user)
sns.despine()
plt.xlabel('number of dates per user')
dates_per_user
viewlist_exp.dt.unique()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first create a HiddenMarkovModel object, and name it "Rainy-Sunny".
Step2: We then create the two possible states of the model, "rainy" and "sunny". We make them both discrete distributions, with the possibilities of Bob either walking, shopping, or cleaning.
Step3: We then add the transitions probabilities, starting with the probability the model starts as sunny or rainy.
Step4: We then add the transition matrix. We make sure to subtract 0.05 from each transition probability, which goes toward the probability of exiting the HMM.
Step5: Last, we add transitions to mark the end of the model.
Step6: Finally we "bake" the model, finalizing its structure.
Step7: Now let's check on Bob each hour and see what he is doing! In other words, let's create a sequence of observations.
Step8: Now let's check the probability of observing this sequence.
Step9: Then the probability that Bob will be cleaning at step 3 in this sequence.
Step10: The probability of the sequence occurring given it is Sunny at step 4 in the sequence.
Step11: Finally the probable series of states given the above sequence.
|
<ASSISTANT_TASK:>
Python Code:
from pomegranate import *
import random
import math
random.seed(0)
model = HiddenMarkovModel( name="Rainy-Sunny" )
rainy = State( DiscreteDistribution({ 'walk': 0.1, 'shop': 0.4, 'clean': 0.5 }), name='Rainy' )
sunny = State( DiscreteDistribution({ 'walk': 0.6, 'shop': 0.3, 'clean': 0.1 }), name='Sunny' )
model.add_transition( model.start, rainy, 0.6 )
model.add_transition( model.start, sunny, 0.4 )
model.add_transition( rainy, rainy, 0.65 )
model.add_transition( rainy, sunny, 0.25 )
model.add_transition( sunny, rainy, 0.35 )
model.add_transition( sunny, sunny, 0.55 )
model.add_transition( rainy, model.end, 0.1 )
model.add_transition( sunny, model.end, 0.1 )
model.bake( verbose=True )
sequence = [ 'walk', 'shop', 'clean', 'clean', 'clean', 'walk', 'clean' ]
print(math.e**model.forward( sequence )[ len(sequence), model.end_index ])
print(math.e**model.forward_backward( sequence )[1][ 2, model.states.index( rainy ) ])
print(math.e**model.backward( sequence )[ 3, model.states.index( sunny ) ])
print(" ".join( state.name for i, state in model.maximum_a_posteriori( sequence )[1] ))
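For intuition about what `model.forward` computes, here is a tiny hand-rolled forward pass over the same parameters (a sketch only; it ignores the explicit end-state transitions, so its numbers differ slightly from pomegranate's):

```python
# Minimal forward algorithm for the 2-state Rainy/Sunny HMM above.
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.65, 'Sunny': 0.25},
         'Sunny': {'Rainy': 0.35, 'Sunny': 0.55}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

def forward_prob(obs):
    # alpha[s] = probability of seeing obs[:t] and being in state s at t
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in alpha) * emit[s][o]
                 for s in start}
    return sum(alpha.values())

print(forward_prob(['walk', 'shop', 'clean']))
```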
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to know the children classes of Person in the DBpedia ontology. We use rdflib and networkx to find them.
Step2: There are a variety of classes in the ontology, but we do not need as much detail. We build a dictionary to obtain the first child of the Person class that matches each entity.
Step3: In Wikidata, these are the identifiers for the different gender values.
Step4: Recall that we have two extra sources of gender information. Here we use them to query for gender of specific biographies.
Step6: If this notebook has been run multiple times, we cache the Wikidata genders in order to avoid querying the system too much.
Step7: The DBpedia data includes the relationships between editions of each article. This function decides whether such links are from a different language edition, or Wikidata.
Step9: This is the function that creates a CSV file containing biographical meta-data.
Step10: We save this to be able to reuse this data in the future.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, unicode_literals
import pandas as pd
import gzip
import csv
import regex as re
import json
import time
import datetime
import requests
import os
import json
import dbpedia_config
from collections import Counter, defaultdict
from cytoolz import partition_all
from dbpedia_utils import iter_entities_from, get_date
data_folder = dbpedia_config.DATA_FOLDER
target_folder = dbpedia_config.TARGET_FOLDER
your_email = dbpedia_config.YOUR_EMAIL
query_wikidata = dbpedia_config.QUERY_WIKIDATA_GENDER
languages = dbpedia_config.LANGUAGES
import bz2file
import networkx as nx
import rdflib.graph as rdf
with open('{0}/dbpedia.owl'.format(data_folder), 'r') as f:
ontology = rdf.Graph().parse(f)
ontology_graph = nx.DiGraph()
for s, p, o in ontology:
src = str(s)
attr = str(p)
dst = str(o)
#print(s, p, o)
if attr == 'http://www.w3.org/2000/01/rdf-schema#subClassOf':
ontology_graph.add_edge(dst, src)
ontology_graph.number_of_nodes()
person_classes = set(nx.neighbors(ontology_graph, 'http://dbpedia.org/ontology/Person'))
len(person_classes)
person_classes
class_parents = {}
for level_1 in person_classes:
for descendant in nx.descendants(ontology_graph, level_1):
class_parents[descendant] = level_1
class_parents[level_1] = level_1
# to avoid querying another dictionary/set
class_parents['http://dbpedia.org/ontology/Person'] = 'http://dbpedia.org/ontology/Person'
len(class_parents)
value_dict = {6581097: 'male', 6581072: 'female', 1052281: 'transgender female', 2449503: 'transgender male'}
wikidata_api_url = 'http://www.wikidata.org/w/api.php?action=wbgetentities&ids={0}&format=json&props=claims'
headers = {'user-agent': 'gender-research-crawler/0.0.1 (contact: {0})'.format(your_email)}
gender_by_dbpedia_uri = defaultdict(lambda: None)
for ent in iter_entities_from('{0}/wiki.genders.txt'.format(data_folder)):
gender_by_dbpedia_uri[ent['resource']] = ent['gender'].pop()
len(gender_by_dbpedia_uri)
for ent in iter_entities_from('{0}/genders_en.nt.bz2'.format(data_folder)):
if ent['resource'] not in gender_by_dbpedia_uri:
gender_by_dbpedia_uri[ent['resource']] = ent['gender'].pop()
len(gender_by_dbpedia_uri)
wikidata_gender = defaultdict(lambda: None)
if os.path.exists('{0}/wikidata_entity_gender.json'.format(target_folder)):
with open('{0}/wikidata_entity_gender.json'.format(target_folder), 'r') as f:
wikidata_gender.update(json.load(f))
len(wikidata_gender)
# to avoid multiple queries of the same entity
no_gender_available = set()
def get_entity_gender(req_json, entity_id):
    """Given a JSON structure from Wikidata, get the gender of the specified entity."""
try:
ent_value = req_json['entities'][entity_id]['claims']['P21'][0]['mainsnak']['datavalue']['value']['numeric-id']
return value_dict[ent_value]
except KeyError:
return None
dbpedia_url = re.compile(r'http://(.+)\.dbpedia.org/*+')
wikidata_url = re.compile(r'http://www.wikidata.org/entity/(.+)|http://wikidata.org/entity/(.+)')
discarded_editions = {'simple', 'commons', 'wikidata'}
def get_edition(url):
edition = None
wikidata_id = None
if url.startswith('http://dbpedia.org/'):
edition = 'en'
else:
dbp_prefix = dbpedia_url.match(url)
if dbp_prefix:
prefix = dbp_prefix.groups()[0]
if prefix not in discarded_editions:
edition = prefix
else:
wikidata = wikidata_url.match(url)
if wikidata:
if wikidata.groups()[0]:
wikidata_id = wikidata.groups()[0]
else:
wikidata_id = wikidata.groups()[1]
return edition, wikidata_id
get_edition('http://mg.dbpedia.org/resource/Paul_Otlet')
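The URL classification in `get_edition` can be checked with a self-contained re-implementation (this sketch uses the stdlib `re` module rather than `regex`, and mirrors rather than reuses the compiled patterns above):

```python
# Self-contained sketch of the edition/Wikidata URL classification.
import re

DBPEDIA = re.compile(r'http://(.+)\.dbpedia\.org/.*')
WIKIDATA = re.compile(r'http://(?:www\.)?wikidata\.org/entity/(.+)')
DISCARDED = {'simple', 'commons', 'wikidata'}

def classify(url):
    # returns (edition, wikidata_id), at most one of which is set
    if url.startswith('http://dbpedia.org/'):
        return 'en', None
    m = DBPEDIA.match(url)
    if m and m.group(1) not in DISCARDED:
        return m.group(1), None
    m = WIKIDATA.match(url)
    if m:
        return None, m.group(1)
    return None, None

print(classify('http://mg.dbpedia.org/resource/Paul_Otlet'))  # ('mg', None)
print(classify('http://www.wikidata.org/entity/Q1868'))       # (None, 'Q1868')
```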
def generate_person_data(language, query_wikidata=True, skip_labels=False, skip_countries=False, include_without_gender=False):
    """Create a csv file with person data from the specified language edition.

    If query_wikidata is True, entities not found in our gender dictionaries will be queried in Wikidata.
    """
# indexed by URI
person_uris = {}
person_ontologies = {}
person_birth = defaultdict(lambda: None)
person_birth_place = defaultdict(lambda: None)
person_death = defaultdict(lambda: None)
person_death_place = defaultdict(lambda: None)
person_gender = defaultdict(lambda: None)
person_editions = defaultdict(lambda: list([language]))
person_labels = defaultdict(lambda: None)
person_alternate_uri = defaultdict(lambda: None)
country_dict = defaultdict(lambda: None)
countries = set()
if not skip_countries:
try:
print('{0}/countries_{1}.json.gz'.format(target_folder, language))
with gzip.open('{0}/countries_{1}.json.gz'.format(target_folder, language), 'rt') as f:
country_dict.update(json.load(f))
countries.update(country_dict.values())
print('# toponyms', len(country_dict))
except Exception as err:
print('error loading countries', err)
skip_countries = True
def get_country(toponyms):
for t in toponyms:
if t in countries:
return t
if t in country_dict:
return country_dict[t]
return None
instance_types = '{1}/instance_types_{0}.ttl.bz2'.format(language, data_folder)
interlanguage_links = '{1}/interlanguage_links_{0}.ttl.bz2'.format(language, data_folder)
labels = '{1}/labels_{0}.ttl.bz2'.format(language, data_folder)
object_properties = '{1}/mappingbased_objects_{0}.ttl.bz2'.format(language, data_folder)
literal_properties = '{1}/mappingbased_literals_{0}.ttl.bz2'.format(language, data_folder)
for i, ent in enumerate(iter_entities_from(instance_types)):
ent_class = ent['22-rdf-syntax-ns#type'].pop()
if ent_class in class_parents:
person_uris[ent['resource']] = class_parents[ent_class]
print('# persons', len(person_uris))
entity_wikidata = defaultdict(lambda: None)
entity_uri = defaultdict(lambda: None)
without_gender = []
for i, ent in enumerate(iter_entities_from(literal_properties)):
resource = ent['resource']
if resource not in person_uris:
continue
if 'birthDate' in ent:
birth_year = get_date(ent, 'birthDate')
if birth_year is not None:
person_birth[resource] = birth_year.year
if 'deathDate' in ent:
death_year = get_date(ent, 'deathDate')
if death_year is not None:
person_death[resource] = death_year.year
if not skip_countries:
for i, ent in enumerate(iter_entities_from(object_properties)):
resource = ent['resource']
if resource not in person_uris:
continue
if 'birthPlace' in ent:
place = get_country(ent['birthPlace'])
if place is not None:
person_birth_place[resource] = place
if 'deathPlace' in ent:
place = get_country(ent['deathPlace'])
if place is not None:
person_death_place[resource] = place
if not skip_labels:
for i, ent in enumerate(iter_entities_from(labels)):
resource = ent['resource']
if not resource in person_uris:
continue
if ent['rdf-schema#label']:
person_labels[resource] = ent['rdf-schema#label'].pop()
for i, ent in enumerate(iter_entities_from(interlanguage_links)):
resource = ent['resource']
if resource not in person_uris:
continue
this_entity_editions = set()
this_entity_wikidata = None
alt_url = None
for url in ent['owl#sameAs']:
edition, wikidata_id = get_edition(url)
if edition is not None:
this_entity_editions.add(edition)
if edition == 'en':
alt_url = url
elif wikidata_id != None:
this_entity_wikidata = wikidata_id
if alt_url is None:
alt_url = ent['owl#sameAs'].pop()
person_alternate_uri[resource] = alt_url
person_editions[resource].extend(this_entity_editions)
if this_entity_wikidata:
entity_wikidata[resource] = this_entity_wikidata
entity_uri[this_entity_wikidata] = resource
for ent_uri, ent_id in entity_wikidata.items():
if ent_uri in person_gender:
continue
# do we know the URI?
if ent_uri in gender_by_dbpedia_uri:
person_gender[ent_uri] = gender_by_dbpedia_uri[ent_uri]
# perhaps using same as...
if person_alternate_uri[ent_uri] is not None:
alt_uri = person_alternate_uri[ent_uri]
if alt_uri in gender_by_dbpedia_uri:
person_gender[ent_uri] = gender_by_dbpedia_uri[alt_uri]
continue
# have we seen it on wikidata?
if ent_id in wikidata_gender:
person_gender[ent_uri] = wikidata_gender[ent_id]
elif ent_id not in no_gender_available:
without_gender.append(ent_id)
print('without gender', len(without_gender))
if query_wikidata:
for ids in partition_all(50, without_gender):
try:
req = requests.get(wikidata_api_url.format(u'|'.join(ids)), headers=headers)
req_json = req.json()
except Exception as ex:
print(ex)
time.sleep(1)
continue
for i, ent_id in enumerate(ids):
ent_gender = get_entity_gender(req_json, ent_id)
if ent_gender is None:
no_gender_available.add(ent_id)
else:
person_gender[entity_uri[ent_id]] = ent_gender
wikidata_gender[ent_id] = ent_gender
stats = dict(Counter(person_gender.values()))
stats['total_biographies'] = len(person_uris)
stats['language'] = language
stats['wikidata_entities'] = len(entity_wikidata)
stats['with_gender'] = len(person_gender)
with open('{1}/person_stats_{0}.json'.format(language, target_folder), 'w') as f:
json.dump(stats, f)
print(stats)
with gzip.open('{1}/person_data_{0}.csv.gz'.format(language, target_folder), 'wt') as f:
fields = ['uri', 'wikidata_entity', 'class', 'gender', 'edition_count', 'available_english', 'available_editions',
'birth_year', 'death_year', 'birth_place', 'death_place', 'same_as', 'label']
writer = csv.DictWriter(f, fieldnames=fields)
writer.writeheader()
for resource in person_uris.keys():
ent_gender = person_gender[resource]
if ent_gender is None and not include_without_gender:
continue
writer.writerow({
'wikidata_entity': entity_wikidata[resource],
'uri': resource,
'label': person_labels[resource] if person_labels[resource] else None,
'gender': ent_gender,
'available_english': 'en' in person_editions[resource],
'edition_count': len(person_editions[resource]),
'available_editions': u'|'.join(person_editions[resource]),
'birth_year': person_birth[resource],
'death_year': person_death[resource],
'birth_place': person_birth_place[resource],
'death_place': person_death_place[resource],
'class': person_uris[resource],
'same_as': person_alternate_uri[resource] if person_alternate_uri[resource] else None
})
return stats
len(wikidata_gender)
records = []
for lang in languages:
print(lang)
records.append(generate_person_data(lang, query_wikidata=query_wikidata))
records
with open('{0}/wikidata_entity_gender.json'.format(target_folder), 'w') as f:
json.dump(dict(wikidata_gender), f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame object.
Step2: Most often, the first thing to do after reading the data is to look at the first few records. This helps catch data-reading errors (for example, when instead of 10 columns you end up with one whose name contains 9 semicolons). It also lets you get acquainted with the data: at minimum, look at the features and their nature (quantitative, categorical, etc.).
Step3: Arguments
Step4: One effective method of exploratory data analysis is plotting pairwise feature dependencies. A grid of $m \times m$ plots is created (m is the number of features), with histograms of the feature distributions on the diagonal and scatter plots of pairwise feature dependencies off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas DataFrame or pairplot from the Seaborn library.
Step5: [3]. Build a figure showing the pairwise dependencies of the features 'Height', 'Weight', and 'BMI'. Use the pairplot method of the Seaborn library.
Step6: Often during exploratory analysis one needs to study how a quantitative feature depends on a categorical one (say, salary on employee gender). Box plots from the Seaborn library help here. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the values of a categorical feature. It also helps track outliers: observations whose value of the feature differs strongly from the rest.
Step7: [5]. Build a scatter plot of height versus weight using the plot method of the Pandas DataFrame with the argument kind='scatter'. Label the figure.
Step8: Task 2. Minimizing the squared error
Step9: So, we are solving the problem
Step10: Minimizing the quadratic error function is a relatively simple task, because the function is convex. Many optimization methods exist for such problems. Let's see how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.
Step11: Now let's use an optimization method to find the "optimal" slope of the line approximating the height-weight dependence, with the coefficient $w_0 = 50$ fixed.
Step12: When analyzing multidimensional data, one often wants an intuitive picture of the data via visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, one extracts 2 or, respectively, 3 principal components (we will see exactly how this is done later in the course) and displays the data on a plane or in a volume.
Step13: We create objects of type matplotlib.figure.Figure (figure) and matplotlib.axes._subplots.Axes3DSubplot (axes).
Step14: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis «Intercept», the $y$ axis «Slope», and the $z$ axis «Error».
Step15: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for the parameter $w_0$ in the range [-100, 100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the found optimal values of the parameters $w_0$ and $w_1$. Label the axes and the plot.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('weights_heights.csv', index_col='Index')
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
data.head(5)
data.plot(y='Weight', kind='hist',
color='green', title='Weight (lb.) distribution')
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
sns.pairplot(data)
def weight_category(weight):
if weight < 120:
return 1
elif weight >= 150:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
bxp = sns.boxplot(x="weight_cat", y="Height", data=data)
bxp.set_xlabel('Weight category')
bxp.set_ylabel('Height')
data.plot(y='Height', x='Weight', kind='scatter', title='Height vs. weight')
def error(w):
return np.sum((data['Height'] - (w[0] + w[1] * data['Weight'])) ** 2)
x = np.linspace(60,180)
y1 = 60 + 0.05 * x
y2 = 50 + 0.16 * x
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Height vs. weight')
plt.xlabel('Weight')
plt.ylabel('Height')
plt.plot(x,y1)
plt.plot(x,y2)
w1 = np.linspace(-10, 10)
w0 = [50] * len(w1)
w = zip(w0, w1)
e = []
for weight in w:
e.append(error(weight))
plt.plot(w1, e)
plt.xlabel('w1')
plt.ylabel('error')
plt.title('Error vs. w1 at w0 = 50')
from scipy.optimize import minimize_scalar
def error50(w1):
return np.sum((data['Height']-(50+w1*data['Weight']))**2)
w1_opt = minimize_scalar(error50, bounds=(-5,5), method='bounded')
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Optimal slope at w0 = 50')
plt.plot(x, 50 + w1_opt.x * x)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# We use the meshgrid method, which builds coordinate matrices from
# coordinate vectors. Then we define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, we use the *plot_surface* method of the
# Axes3DSubplot object. We also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
w0 = np.arange(-100, 100.25)
w1 = np.arange(-5, 5, 0.25)
w0,w1 = np.meshgrid(w0, w1)
def error_arr(w0,w1):
a=w0.shape[0]
b=w0.shape[1]
Z=np.zeros((a,b))
for i in range(a):
for j in range(b):
Z[i,j]=error((w0[i,j],w1[i,j]))
return Z
z = error_arr(w0,w1)
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(w0, w1, z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
from scipy.optimize import minimize
res = minimize(error, [0,0], method='L-BFGS-B', bounds=[(-100,100),(-5,5)])
w_opt = res.x
print("w0 = %s\nw1 = %s" % (w_opt[0], w_opt[1]))
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Optimal line')
plt.plot(x,w_opt[0]+w_opt[1]*x)
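As a cross-check of the optimization approach (on synthetic data, since this sketch does not load weights_heights.csv): least squares has a closed-form solution, so L-BFGS-B should land on essentially the same parameters as `np.polyfit`.

```python
# On synthetic data, the least-squares optimum has a closed form,
# so scipy's L-BFGS-B should agree with np.polyfit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
xs = rng.uniform(100, 200, size=200)   # synthetic "weights"
ys = 60 + 0.05 * xs + rng.randn(200)   # synthetic "heights" with noise

def sq_error(w):
    return np.sum((ys - (w[0] + w[1] * xs)) ** 2)

res = minimize(sq_error, [0, 0], method='L-BFGS-B')
slope, intercept = np.polyfit(xs, ys, 1)   # closed-form fit
print(res.x, (intercept, slope))
```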
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tool functions
Step2: Discrete distributions
Step3: Binomial distribution
Step4: Hypergeometric distribution
Step5: Poisson distribution
Step6: Lambda=2
Step7: Lambda=3
Step8: Lambda=4
Step9: Lambda=5
Step10: Geometric distribution
Step11: Pascal distribution (negative binomial distribution)
Step12: Uniform distribution
Step13: Miscellaneous
Step14: Continuous distribution
Step15: Normal distribution
Step16: Log normal distribution
Step17: Power distribution
Step18: Exponential distribution
Step19: Chi-square distribution
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import theano.tensor as tt
def plot(data, bins=30):
plt.hist(data, bins)
plt.show()
def bernoulli(p=None, size=1):
return np.random.binomial(n=1, p=p, size=size)
bernoulli(p=0.5, size=100)
np.random.binomial(n=10, p=0.5, size=100)
data = np.random.binomial(n=10, p=0.25, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=10, p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=10, p=0.75, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=25, p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
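The binomial mean and variance can be checked against the analytic formulas $np$ and $np(1-p)$ (seeded sketch; exact values vary with the seed):

```python
# Sanity check of the binomial moments: mean = n*p, var = n*p*(1-p).
import numpy as np

rng = np.random.RandomState(0)
sample = rng.binomial(n=10, p=0.25, size=200000)
print(sample.mean())  # close to 2.5
print(sample.var())   # close to 1.875
```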
np.random.hypergeometric(ngood=15, nbad=15, nsample=15, size=100)
data = np.random.hypergeometric(ngood=15, nbad=15, nsample=15, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
np.random.poisson(lam=1, size=100)
data = np.random.poisson(lam=1, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.poisson(lam=2, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.poisson(lam=3, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.poisson(lam=4, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.poisson(lam=5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
np.random.geometric(p=0.5, size=100)
data = np.random.geometric(p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
np.random.negative_binomial(n=1, p=0.1, size=100)
data = np.random.negative_binomial(n=1, p=0.1, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
np.random.choice(range(10), size=100, replace=True, p=None)
data = np.random.choice(range(25), size=100000, replace=True, p=None)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
np.random.choice(range(10), size=10, replace=False, p=None)
np.random.choice([1, 2, 3], size=100, replace=True, p=[0.8, 0.1, 0.1])
np.random.uniform(low=0.0, high=1.0, size=50)
np.random.normal(loc=0.0, scale=1.0, size=50)
data = np.random.normal(loc=0.0, scale=1.0, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.normal(loc=2.0, scale=1.0, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.normal(loc=0.0, scale=1.5, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
np.random.lognormal(mean=0.0, sigma=1.0, size=50)
data = np.random.lognormal(mean=0.0, sigma=1.0, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.lognormal(mean=2.0, sigma=1.0, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.lognormal(mean=0.0, sigma=1.5, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
np.random.power(a=1.0, size=50)
data = np.random.power(a=0.25, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=0.5, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=1.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=2.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=5.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
np.random.exponential(scale=1.0, size=50)
data = np.random.exponential(scale=1.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=2.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=5.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=0.5, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
np.random.chisquare(df=1.0, size=50)
data = np.random.chisquare(df=1.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.chisquare(df=2.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.chisquare(df=5.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
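A quick moment check for the chi-square samples above: with df degrees of freedom the distribution has mean df and variance 2*df (seeded sketch):

```python
# Chi-square with df degrees of freedom: mean = df, variance = 2*df.
import numpy as np

rng = np.random.RandomState(0)
s = rng.chisquare(df=5.0, size=200000)
print(s.mean(), s.var())  # near 5 and 10
```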
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialization
Step2: We can visualize the neural network by mx.viz.plot_network.
Step3: Both argument parameters and auxiliary parameters (e.g. mean/std in batch normalization layers) are stored as a dictionary of string name and ndarray value (see ndarray.ipynb). The arguments contain
Step4: while the auxiliaries contain the mean and std for the batch normalization layers.
Step5: Next we create an executable module (see module.ipynb) on GPU 0. To use a different device, we just need to change the context, e.g. mx.cpu() for CPU and mx.gpu(2) for the 3rd GPU.
Step6: The ResNet is trained with RGB images of size 224 x 224. The training data is fed through the variable data. We bind the module with the input shape and specify that it is only for predicting. The number 1 added before the image shape (3x224x224) means that we will only predict one image at a time. Next we set the loaded parameters. Now the module is ready to run.
Step7: Prepare data
Step8: We next download 1000 images for testing, which were not used for the training.
Step9: Visualize the first 8 images.
Step10: Next we define a function that reads one image at a time and converts it to a format that can be used by the model. Here we use a naive approach that resizes the original image to the desired shape and changes the data layout.
Step11: Finally we define an input data structure accepted by mxnet. The field data holds the input data, which is a list of NDArrays.
Step12: Predict
Step13: When predicting more than one image, we can batch several images together, which potentially improves performance.
Step14: Now we iterate over multiple images to calculate the accuracy
Step15: Extract Features
Step16: Often we want to use the output before the last fully connected layer, which may capture semantic features of the raw images without being overly fit to the labels. In the ResNet case, this is the flatten layer named flatten0 before the last fullc layer. The following code gets a new symbol sym3 that uses the flatten layer as the last output layer, and initializes a new module.
Step17: Now we can do feature extraction using forward as before. Notice that the last convolution layer uses 2048 channels, and we then perform an average pooling, so the output size of the flatten layer is 2048.
|
<ASSISTANT_TASK:>
Python Code:
import os, urllib
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.urlretrieve(url, filename)
def get_model(prefix, epoch):
download(prefix+'-symbol.json')
download(prefix+'-%04d.params' % (epoch,))
get_model('http://data.mxnet.io/models/imagenet/resnet/50-layers/resnet-50', 0)
import mxnet as mx
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
mx.viz.plot_network(sym)
arg_params
aux_params
mod = mx.mod.Module(symbol=sym, label_names=None, context=mx.gpu())
mod.bind(for_training = False,
data_shapes=[('data', (1,3,224,224))])
mod.set_params(arg_params, aux_params, allow_missing=True)
download('http://data.mxnet.io/models/imagenet/resnet/synset.txt')
with open('synset.txt') as f:
synsets = [l.rstrip() for l in f]
import tarfile
download('http://data.mxnet.io/data/val_1000.tar')
tfile = tarfile.open('val_1000.tar')
tfile.extractall()
with open('val_1000/label') as f:
val_label = [int(l.split('\t')[0]) for l in f]
%matplotlib inline
import matplotlib
matplotlib.rc("savefig", dpi=100)
import matplotlib.pyplot as plt
import cv2
for i in range(0,8):
img = cv2.cvtColor(cv2.imread('val_1000/%d.jpg' % (i,)), cv2.COLOR_BGR2RGB)
plt.subplot(2,4,i+1)
plt.imshow(img)
plt.axis('off')
label = synsets[val_label[i]]
label = ' '.join(label.split(',')[0].split(' ')[1:])
plt.title(label)
import numpy as np
import cv2
def get_image(filename):
img = cv2.imread(filename) # read image in b,g,r order
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # change to r,g,b order
img = cv2.resize(img, (224, 224)) # resize to 224*224 to fit model
img = np.swapaxes(img, 0, 2)
img = np.swapaxes(img, 1, 2) # change to (channel, height, width)
    img = img[np.newaxis, :] # extend to (example, channel, height, width)
return img
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])
img = get_image('val_1000/0.jpg')
mod.forward(Batch([mx.nd.array(img)]))
prob = mod.get_outputs()[0].asnumpy()
y = np.argsort(np.squeeze(prob))[::-1]
print('truth label %d; top-1 predict label %d' % (val_label[0], y[0]))
batch_size = 32
mod2 = mx.mod.Module(symbol=sym, label_names=None, context=mx.gpu())
mod2.bind(for_training=False, data_shapes=[('data', (batch_size,3,224,224))])
mod2.set_params(arg_params, aux_params, allow_missing=True)
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
import time
acc = 0.0
total = 0.0
for i in range(0, 200/batch_size):
tic = time.time()
idx = range(i*batch_size, (i+1)*batch_size)
img = np.concatenate([get_image('val_1000/%d.jpg'%(j)) for j in idx])
mod2.forward(Batch([mx.nd.array(img)]))
prob = mod2.get_outputs()[0].asnumpy()
pred = np.argsort(prob, axis=1)
top1 = pred[:,-1]
acc += sum(top1 == np.array([val_label[j] for j in idx]))
total += len(idx)
print('batch %d, time %f sec'%(i, time.time()-tic))
assert acc/total > 0.66, "Low top-1 accuracy."
print('top-1 accuracy %f'%(acc/total))
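The top-1 accuracy bookkeeping above can be sketched framework-free on a toy (batch, classes) probability matrix (hypothetical values):

```python
# Framework-free sketch of the top-1 accuracy computation above.
import numpy as np

prob = np.array([[0.1, 0.7, 0.2],
                 [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
top1 = np.argsort(prob, axis=1)[:, -1]   # index of largest prob per row
acc = (top1 == labels).mean()
print(top1, acc)  # [1 0] 0.5
```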
all_layers = sym.get_internals()
all_layers.list_outputs()[-10:-1]
all_layers = sym.get_internals()
sym3 = all_layers['flatten0_output']
mod3 = mx.mod.Module(symbol=sym3, label_names=None, context=mx.gpu())
mod3.bind(for_training=False, data_shapes=[('data', (1,3,224,224))])
mod3.set_params(arg_params, aux_params)
img = get_image('val_1000/0.jpg')
mod3.forward(Batch([mx.nd.array(img)]))
out = mod3.get_outputs()[0].asnumpy()
print(out.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's construct the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance.
Step2: Now, we can build the model with PyMC3. I'll make 2
Step3: Now we can sample the models to get our parameter distributions
Step4: Load trace
|
<ASSISTANT_TASK:>
Python Code:
def H(tau):
g1 = 1; tau1 = 0.03; sd1 = 0.5;
g2 = 7; tau2 = 10; sd2 = 0.5;
term1 = g1/np.sqrt(2*sd1**2*np.pi) * np.exp(-(np.log10(tau/tau1)**2)/(2*sd1**2))
term2 = g2/np.sqrt(2*sd2**2*np.pi) * np.exp(-(np.log10(tau/tau2)**2)/(2*sd2**2))
return term1 + term2
Nfreq = 50
Nmodes = 30
w = np.logspace(-4,4,Nfreq).reshape((1,Nfreq))
tau = np.logspace(-np.log10(w.max()),-np.log10(w.min()),Nmodes).reshape((Nmodes,1))
# get equivalent discrete spectrum
delta_log_tau = np.log10(tau[1]/tau[0])
g_true = (H(tau) * delta_log_tau).reshape((1,Nmodes))
plt.loglog(tau,H(tau), label='Continuous spectrum')
plt.plot(tau.ravel(),g_true.ravel(), 'or', label='Equivalent discrete spectrum')
plt.legend(loc=4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'$H(\tau)$ or $g$')
wt = tau*w
Kp = wt**2/(1+wt**2)
Kpp = wt/(1+wt**2)
noise_level = 0.02
Gp_true = np.dot(g_true,Kp)
Gp_noise = Gp_true + Gp_true*noise_level*np.random.randn(Nfreq)
Gpp_true = np.dot(g_true,Kpp)
Gpp_noise = Gpp_true + Gpp_true*noise_level*np.random.randn(Nfreq)
plt.loglog(w.ravel(),Gp_true.ravel(),label="True G'")
plt.plot(w.ravel(),Gpp_true.ravel(), label='True G"')
plt.plot(w.ravel(),Gp_noise.ravel(),'xr',label="Noisy G'")
plt.plot(w.ravel(),Gpp_noise.ravel(),'+r',label='Noisy G"')
plt.xlabel(r'$\omega$')
plt.ylabel("Moduli")
plt.legend(loc=4)
noisyModel = pm.Model()
with noisyModel:
g = pm.Uniform('g', lower=Gp_noise.min()/1e4, upper=Gp_noise.max(),
shape=g_true.shape)
sd1 = pm.HalfNormal('sd1',tau=1)
sd2 = pm.HalfNormal('sd2',tau=1)
# we'll log-weight the moduli as in other fitting methods
logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)),
sd=sd1, observed=np.log(Gp_noise))
logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)),
sd=sd2, observed=np.log(Gpp_noise))
trueModel = pm.Model()
with trueModel:
g = pm.Uniform('g', lower=Gp_true.min()/1e4, upper=Gp_true.max(),
shape=g_true.shape)
sd1 = pm.HalfNormal('sd1',tau=1)
sd2 = pm.HalfNormal('sd2',tau=1)
# we'll log-weight the moduli as in other fitting methods
logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)),
sd=sd1, observed=np.log(Gp_true))
logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)),
sd=sd2, observed=np.log(Gpp_true))
Nsamples = 5000
trueMapEstimate = pm.find_MAP(model=trueModel)
with trueModel:
trueTrace = pm.sample(Nsamples, start=trueMapEstimate)
pm.backends.text.dump('./Double_Maxwell_true', trueTrace)
noisyMapEstimate = pm.find_MAP(model=noisyModel)
with noisyModel:
noisyTrace = pm.sample(Nsamples, start=noisyMapEstimate)
pm.backends.text.dump('./Double_Maxwell_noisy', noisyTrace)
noisyTrace = pm.backends.text.load('./Double_Maxwell_noisy',model=noisyModel)
trueTrace = pm.backends.text.load('./Double_Maxwell_true', model=trueModel)
burn = 500
trueQ = pm.quantiles(trueTrace[burn:])
noisyQ = pm.quantiles(noisyTrace[burn:])
def plot_quantiles(Q,ax):
ax.fill_between(tau.ravel(), y1=Q['g'][2.5], y2=Q['g'][97.5], color='c',
alpha=0.25)
ax.fill_between(tau.ravel(), y1=Q['g'][25], y2=Q['g'][75], color='c',
alpha=0.5)
ax.plot(tau.ravel(), Q['g'][50], 'b-')
# sampling localization lines:
ax.axvline(x=np.exp(np.pi/2)/w.max(), color='k', linestyle='--')
ax.axvline(x=(np.exp(np.pi/2)*w.min())**-1, color='k', linestyle='--')
fig,ax = plt.subplots(nrows=2, sharex=True,
subplot_kw={'xscale':'log','yscale':'log',
'ylabel':'$g_i$'})
plot_quantiles(trueQ,ax[0])
plot_quantiles(noisyQ,ax[1])
# true spectrum
trueSpectrumline0 = ax[0].plot(tau.ravel(), g_true.ravel(),'xr',
label='True Spectrum')
trueSpectrumline1 = ax[1].plot(tau.ravel(), g_true.ravel(),'xr',
label='True Spectrum')
ax[0].legend(loc=4)
ax[0].set_title('Using True Moduli')
ax[1].set_xlabel(r'$\tau$')
ax[1].legend(loc=4)
ax[1].set_title('Using Noisy Moduli')
fig.set_size_inches(5,8)
fig.savefig('True,Noisy_moduli_uniform_prior.png',dpi=500)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
words = set(text)
words_to_key = {w: i for i, w in enumerate(words)}
key_to_words = {i: w for i, w in enumerate(words)}
return words_to_key, key_to_words
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.':'||PERIOD||',
',':'||COMMA||',
'"':'||QUOTATION_MARK||',
';':'||SEMICOLON||',
'!':'||EXCLAMATION_MARK||',
'?':'||QUESTION_MARK||',
'(':'||LEFT_PARENTHESES||',
')':'||RIGHT_PARENTHESES||',
'--':'||DASH||',
'\n':'||NEWLINE||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return input, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm_layer_count = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layer_count)
init_state = cell.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, name='initial_state')
return cell, init_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
random = tf.Variable(tf.random_normal((vocab_size, embed_dim)))
return tf.nn.embedding_lookup(random, input_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
return output, tf.identity(final_state, name='final_state')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, rnn_size)
output, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
batch_count = int(len(int_text) / (batch_size * seq_length))
x_data = np.array(int_text[: batch_count * batch_size * seq_length])
x_batches = np.split(x_data.reshape(batch_size, -1), batch_count, 1)
y_data = np.array(int_text[1: batch_count * batch_size * seq_length + 1])
y_batches = np.split(y_data.reshape(batch_size, -1), batch_count, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
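As a quick sanity check of the layout `get_batches` produces (a sketch with toy values, not part of the original notebook — the real run uses the script's word ids):

```python
import numpy as np

# Same batching logic as above, shown on a tiny toy sequence of 13 "word ids".
def toy_get_batches(int_text, batch_size, seq_length):
    batch_count = len(int_text) // (batch_size * seq_length)
    x = np.array(int_text[: batch_count * batch_size * seq_length])
    y = np.array(int_text[1 : batch_count * batch_size * seq_length + 1])
    x_batches = np.split(x.reshape(batch_size, -1), batch_count, 1)
    y_batches = np.split(y.reshape(batch_size, -1), batch_count, 1)
    return np.array(list(zip(x_batches, y_batches)))

batches = toy_get_batches(list(range(13)), batch_size=2, seq_length=3)
print(batches.shape)  # (2, 2, 2, 3): 2 batches, each an (inputs, targets) pair
```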
# Number of Epochs
num_epochs = 180
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probabilities = loaded_graph.get_tensor_by_name('probs:0')
return input, initial_state, final_state, probabilities
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
probabilities = [(i,p) for i, p in enumerate(probabilities)]
probabilities.sort(key=lambda x: x[1], reverse=True)
choice = np.random.choice([i[0] for i in probabilities[:10]])
return int_to_vocab[choice]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
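The same top-N sampling idea can be checked standalone (a sketch using the stdlib `random` module in place of `np.random.choice`, with a hypothetical 4-word vocabulary):

```python
import random

def toy_pick_word(probabilities, int_to_vocab, top_n=10):
    # Rank word ids by probability, then pick uniformly among the top_n,
    # mirroring the pick_word implementation above.
    ranked = sorted(enumerate(probabilities), key=lambda x: x[1], reverse=True)
    choice = random.choice([i for i, _ in ranked[:top_n]])
    return int_to_vocab[choice]

vocab = {0: 'moe', 1: 'homer', 2: 'bart', 3: 'lisa'}
word = toy_pick_word([0.1, 0.6, 0.2, 0.1], vocab, top_n=2)
print(word)  # 'homer' or 'bart' -- the two highest-probability words
```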
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: API overview
Step2: Let's save the TensorBoard logs and model checkpoints generated during training
Step3: Here, we will load our data from Keras directly. In general, it's best practice
Step4: The TensorFlow Cloud API provides the
Step5: Let's save the model in GCS after the training is complete.
Step6: We can also use this storage bucket for Docker image building, instead of your local
Step7: After training the model, we can load the saved model and view our TensorBoard logs
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q tensorflow_cloud
import tensorflow as tf
import tensorflow_cloud as tfc
from tensorflow import keras
from tensorflow.keras import layers
def create_model():
model = keras.Sequential(
[
keras.Input(shape=(28, 28)),
layers.experimental.preprocessing.Rescaling(1.0 / 255),
layers.Reshape(target_shape=(28, 28, 1)),
layers.Conv2D(32, 3, activation="relu"),
layers.MaxPooling2D(2),
layers.Conv2D(32, 3, activation="relu"),
layers.MaxPooling2D(2),
layers.Conv2D(32, 3, activation="relu"),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dense(10),
]
)
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=keras.metrics.SparseCategoricalAccuracy(),
)
return model
import datetime
import os
# Note: Please change the gcp_bucket to your bucket name.
gcp_bucket = "keras-examples"
checkpoint_path = os.path.join("gs://", gcp_bucket, "mnist_example", "save_at_{epoch}")
tensorboard_path = os.path.join( # Timestamp included to enable timeseries graphs
"gs://", gcp_bucket, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
callbacks = [
# TensorBoard will store logs for each epoch and graph performance for us.
keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
# ModelCheckpoint will save models after each epoch for retrieval later.
keras.callbacks.ModelCheckpoint(checkpoint_path),
# EarlyStopping will terminate training when val_loss ceases to improve.
keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]
model = create_model()
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
if tfc.remote():
epochs = 100
callbacks = callbacks
batch_size = 128
else:
epochs = 5
batch_size = 64
callbacks = None
model.fit(x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size)
save_path = os.path.join("gs://", gcp_bucket, "mnist_example")
if tfc.remote():
model.save(save_path)
# docs_infra: no_execute
tfc.run(docker_image_bucket_name=gcp_bucket)
# docs_infra: no_execute
model = keras.models.load_model(save_path)
# docs_infra: no_execute
!tensorboard dev upload --logdir "gs://keras-examples-jonah/logs/fit" --name "Guide MNIST"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic classification
Step2: Import the Fashion MNIST dataset
Step3: Loading the dataset returns four NumPy arrays
Step4: Explore the data
Step5: Likewise, there are 60,000 labels in the training set
Step6: Each label is an integer between 0 and 9
Step7: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels
Step8: And the test set contains 10,000 images labels
Step9: Preprocess the data
Step10: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the training set and the testing set be preprocessed in the same way
Step11: To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the training set and display the class name below each image.
Step12: Build the model
Step13: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
Step14: Train the model
Step15: Feed the model
Step16: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
Step17: Evaluate accuracy
Step18: It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting is when a machine learning model performs worse on new, previously unseen inputs than on the training data. An overfitted model "memorizes" the training data—with less accuracy on testing data. For more information, see the following
Step19: Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction
Step20: A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value
Step21: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct
Step22: Graph this to look at the full set of 10 class predictions.
Step23: Verify predictions
Step24: Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
Step25: Use the trained model
Step26: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list
Step27: Now predict the correct label for this image
Step28: keras.Model.predict returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch
Step29: Visually check prediction on a single image
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import tensorflow as tf
print('Using tensorflow version: {} ...'.format(tf.__version__))
print('Visible devices for tensorflow: {} ...'.format(tf.config.list_physical_devices()))
from __future__ import absolute_import, division, print_function, unicode_literals
# tf.keras
import tensorflow.keras as keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print('Using keras version: {} ...'.format(keras.__version__))
WEIGHTS_DIR = './weights/'
WEIGHTS_FILENAME = 'fashionMNIST_classification.h5'
WEIGHTS_FILE ='{}/{}'.format(WEIGHTS_DIR, WEIGHTS_FILENAME)
(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
import os
if not os.path.exists(WEIGHTS_FILE):
! mkdir {WEIGHTS_DIR}
! wget https://github.com/rahulremanan/python_tutorial/blob/master/Machine_Vision/00_Install_Instructions/notebook/weights/fashionMNIST_classification.h5?raw=true -O {WEIGHTS_FILE}
if os.path.exists(WEIGHTS_FILE):
model.load_weights(WEIGHTS_FILE)
print ('Loaded pre-trained weights ...')
model.fit(train_images, train_labels, epochs=10, batch_size=256)
model.save_weights(WEIGHTS_FILE)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
test_labels[0]
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
# Grab an image from the test dataset.
i = 1
img = test_images[i]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
np.argmax(predictions_single[0])
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions_single, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup/create a table
Step2: Read data using pandas and store them in sqlite
Step3: Summarizing Queries
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3
conn = sqlite3.connect('election_tweets.sqlite')
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS Tweets")
cur.execute("CREATE TABLE Tweets(state VARCHAR(10), party VARCHAR(20), sentiment INT2)")
conn.commit()
import pandas as pd
reader = pd.read_table('http://vahidmirjalili.com/election-2016/opFromNLP-2.txt',
sep='|', header=None, chunksize=100)
sentiment={'Neutral':0,
'Positive':1,
'Negative':-1}
for chunk in reader:
for i in range(chunk.shape[0]):
line = chunk.iloc[[i]].values[0]
cur.execute("INSERT INTO Tweets (state, party, sentiment) \
VALUES (?,?,?)",
(line[0], line[1], sentiment[line[2]]))
conn.commit()
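For reference, the row-by-row inserts above can also be batched with `executemany` (a sketch against an in-memory database with toy rows, assuming the same schema):

```python
import sqlite3

# In-memory database stands in for election_tweets.sqlite.
conn2 = sqlite3.connect(':memory:')
cur2 = conn2.cursor()
cur2.execute("CREATE TABLE Tweets(state VARCHAR(10), party VARCHAR(20), sentiment INT2)")
rows = [('MI', 'Democrat', 1), ('OH', 'Republican', -1), ('MI', 'Republican', 0)]
cur2.executemany("INSERT INTO Tweets (state, party, sentiment) VALUES (?,?,?)", rows)
conn2.commit()
cur2.execute("SELECT COUNT(*) FROM Tweets")
print(cur2.fetchone()[0])  # 3
```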
cur.execute('SELECT count(*) FROM Tweets')
num_tweets = cur.fetchall()
print('Total number of tweets: %d'%(num_tweets[0]))
cur.execute('SELECT sentiment,COUNT(*) FROM Tweets GROUP BY sentiment')
results = cur.fetchall()
for res in results:
print("Count of %d tweets: %d"%(res[0], res[1]))
import seaborn as sns
import matplotlib
import numpy as np
import pandas as pd
results = pd.DataFrame(results)
results.columns = ['Mood', 'Freq']
results['Freq'] = results['Freq']/np.sum(results['Freq'])
%matplotlib inline
ax = sns.barplot(x="Mood", y="Freq", data=results)
cur.execute('SELECT state,SUM(sentiment),count(*) \
FROM Tweets WHERE party="Democrat" GROUP BY state')
dem_results = cur.fetchall()
cur.execute('SELECT state,SUM(sentiment),count(*) \
FROM Tweets WHERE party="Republican" GROUP BY state')
rep_results = cur.fetchall()
for dem_res,rep_res in zip(dem_results,rep_results):
if(len(dem_res[0]) == 2):
print("%s\tDemocrat: %6.2f\tRepublican: %6.2f"%(
dem_res[0], dem_res[1]/dem_res[2], rep_res[1]/rep_res[2]))
dem_df = pd.DataFrame(dem_results)
rep_df = pd.DataFrame(rep_results)
df = pd.DataFrame({'state':dem_df[0], 'dem':dem_df[2], 'rep':rep_df[2], 'tot':dem_df[2]+rep_df[2]})
df.to_csv('/tmp/res', sep=' ')
ax = sns.barplot(x="state", y="tot", data=df)
for dem_res,rep_res in zip(dem_results,rep_results):
if(len(dem_res[0]) == 2):
if (dem_res[1]/dem_res[2] > rep_res[1]/rep_res[2]):
print("%s\tDemocrat \t%.3f"%(
dem_res[0], dem_res[1]/dem_res[2] -rep_res[1]/rep_res[2]))
else:
print("%s\tRepublican\t%.3f"%(
rep_res[0], rep_res[1]/rep_res[2] - dem_res[1]/dem_res[2]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: pandas.read_csv
Step3: Expect pandas to take >3 minutes on EC2, longer locally
Step4: Are they equal?
Step5: Concept for exercise
Step6: Are they equal?
|
<ASSISTANT_TASK:>
Python Code:
import modin.pandas as pd
import pandas
import time
import modin.config as cfg
cfg.StorageFormat.put("omnisci")
# We download data locally because currently `OmnisciOnNative` doesn't support read files from s3 storage.
# Note that this may take a few minutes to download.
import urllib.request
url_path = "https://modin-test.s3.us-west-1.amazonaws.com/yellow_tripdata_2015-01.csv"
urllib.request.urlretrieve(url_path, "taxi.csv")
path = "taxi.csv"
start = time.time()
pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"])
end = time.time()
pandas_duration = end - start
print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"])
end = time.time()
modin_duration = end - start
print("Time to read with Modin: {} seconds".format(round(modin_duration, 3)))
print("Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2)))
modin_df
pandas_df
start = time.time()
pandas_groupby = pandas_df.groupby(by="total_amount").count()
end = time.time()
pandas_duration = end - start
print("Time to groupby with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_groupby = modin_df.groupby(by="total_amount").count()
end = time.time()
modin_duration = end - start
print("Time to groupby with Modin: {} seconds".format(round(modin_duration, 3)))
print("Modin is {}x faster than pandas at `groupby`!".format(round(pandas_duration / modin_duration, 2)))
pandas_groupby
modin_groupby
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Dice problem
Step2: Here's what the update looks like
Step3: And here's what it looks like after more data
Step4: The train problem
Step5: But there are many more hypotheses
Step6: Here's what the posterior looks like
Step7: And here's how we can compute the posterior mean
Step8: Or we can just use the method
Step10: Sensitivity to the prior
Step11: Let's run it with the same dataset and several uniform priors
Step12: The results are quite sensitive to the prior, even with several observations.
Step13: Here's what a power law prior looks like, compared to a uniform prior
Step14: Now let's see what the posteriors look like after observing one train.
Step15: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Step16: Credible intervals
Step17: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Step18: Cdf also provides Percentile
Step19: Exercises
Step24: Exercise
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import thinkplot
from thinkbayes2 import Hist, Pmf, Suite, Cdf
class Dice(Suite):
    def Likelihood(self, data, hypo):
        if hypo < data:
            return 0
        else:
            return 1/hypo
suite = Dice([4, 6, 8, 12, 20])
suite.Update(6)
suite.Print()
for roll in [6, 8, 7, 7, 5, 4]:
    suite.Update(roll)
suite.Print()
class Train(Suite):
    def Likelihood(self, data, hypo):
        if hypo < data:
            return 0
        else:
            return 1/hypo
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
thinkplot.Pdf(suite)
def Mean(suite):
    total = 0
    for hypo, prob in suite.Items():
        total += hypo * prob
    return total
Mean(suite)
suite.Mean()
def MakePosterior(high, dataset, constructor=Train):
    """Solves the train problem.

    high: int maximum number of trains
    dataset: sequence of observed train numbers
    constructor: function used to construct the Train object

    returns: Train object representing the posterior suite
    """
    hypos = range(1, high+1)
    suite = constructor(hypos)
    for data in dataset:
        suite.Update(data)
    return suite
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
    suite = MakePosterior(high, dataset)
    print(high, suite.Mean())
class Train2(Train):
    def __init__(self, hypos, alpha=1.0):
        Pmf.__init__(self)
        for hypo in hypos:
            self[hypo] = hypo**(-alpha)
        self.Normalize()
high = 100
hypos = range(1, high+1)
suite1 = Train(hypos)
suite2 = Train2(hypos)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
dataset = [60]
high = 1000
thinkplot.PrePlot(num=2)
constructors = [Train, Train2]
labels = ['uniform', 'power law']
for constructor, label in zip(constructors, labels):
    suite = MakePosterior(high, dataset, constructor)
    suite.label = label
    thinkplot.Pmf(suite)
thinkplot.Config(xlabel='Number of trains',
ylabel='Probability')
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
    suite = MakePosterior(high, dataset, Train2)
    print(high, suite.Mean())
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
suite.Percentile(5), suite.Percentile(95)
cdf = Cdf(suite)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Number of trains',
ylabel='Cumulative Probability',
legend=False)
cdf.Percentile(5), cdf.Percentile(95)
# Solution
# Suppose Company A has N trains and all other companies have M.
# The chance that we would observe one of Company A's trains is $N/(N+M)$.
# Given that we observe one of Company A's trains, the chance that we
# observe number 60 is $1/N$ for $N \ge 60$.
# The product of these probabilities is $1/(N+M)$, which is just the
# probability of observing any given train.
# If N<<M, this converges to a constant, which means that all value of $N$
# have the same likelihood, so we learn nothing about how many trains
# Company A has.
# If N>>M, this converges to $1/N$, which is what we saw in the previous
# solution.
# More generally, if M is unknown, we would need a prior distribution for
# M, then we can do a two-dimensional update, and then extract the posterior
# distribution for N.
# We'll see how to do that soon.
# Solution
from scipy.special import binom
class Hyrax(Suite):
    """Represents hypotheses about how many hyraxes there are."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: total population (N)
        data: # tagged (K), # caught (n), # of caught who were tagged (k)
        """
        N = hypo
        K, n, k = data
        if hypo < K + (n - k):
            return 0
        like = binom(N-K, n-k) / binom(N, n)
        return like
# Solution
hypos = range(1, 1000)
suite = Hyrax(hypos)
data = 10, 10, 2
suite.Update(data)
# Solution
thinkplot.Pdf(suite)
thinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)
# Solution
print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
# Solution
from scipy import stats
class Hyrax2(Suite):
    """Represents hypotheses about how many hyraxes there are."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: total population (N)
        data: # tagged (K), # caught (n), # of caught who were tagged (k)
        """
        N = hypo
        K, n, k = data
        if hypo < K + (n - k):
            return 0
        like = stats.hypergeom.pmf(k, N, K, n)
        return like
# Solution
hypos = range(1, 1000)
suite = Hyrax2(hypos)
data = 10, 10, 2
suite.Update(data)
# Solution
print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
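As a quick check that the two solutions agree, note that the binomial-coefficient ratio used in `Hyrax` differs from scipy's hypergeometric pmf used in `Hyrax2` only by the constant factor $C(K, k)$, which cancels when the suite is normalized. A small sketch verifying this for a few values of N:

```python
from scipy import stats
from scipy.special import binom

K, n, k = 10, 10, 2  # same data as above
ratios = []
for N in [50, 100, 500]:
    like_ratio = binom(N - K, n - k) / binom(N, n)  # Hyrax form
    like_pmf = stats.hypergeom.pmf(k, N, K, n)      # Hyrax2 form
    ratios.append(like_pmf / like_ratio)

# Each ratio should equal C(K, k) = C(10, 2) = 45, independent of N,
# so the two likelihoods yield identical posteriors after normalization.
print([round(r, 4) for r in ratios])
```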
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Distributions
Step2: Note that the values on the $P(x)$ axis are probability densities rather than probabilities
Step3: If we pick values at random from $x$ and $y$, we can calculate the joint probability distribution, $P(x,y)$
Step4: In the 1D example above, the total area under the curve was equal to 1 and to find the probabiltiy of $x$ lying between $a$ and $b$ we integrated over that interval. Here the situation is exactly the same, except now we have an extra dimension. This means that the total volume under the surface must be equal to 1, because the values for $x$ and $y$ must lie somewhere in the $x-y$ plane.
Step5: 1.3. Independent and dependent variables
Step6: 1.4. Marginal and conditional distributions
Step7: If we're interested in deriving an equation for the probability density distribution over $x$, rather than the actual probability of $x$ lying within a particular interval, this integral becomes
Step8: Another question we might want to ask is, "Given that a person weighs between 70 and 90 kgs, what is the probability that their height is between 1.6 and 1.8 m?" This is a conditional probability, because we're interested in the probability that height lies in a certain range, conditioned on the fact that weight is between 70 and 90 kgs.
|
<ASSISTANT_TASK:>
Python Code:
def gaussian_function(x, mu=0, sigma=1):
    """Simple example function to return the probability density of a Gaussian with mean, mu,
    and standard deviation, sigma, evaluated at values in x.

    Note that a better function is available in scipy.stats.norm - this version is just
    for illustration.
    """
    return (1/(sigma*(2*np.pi)**0.5))*np.exp(-((x - mu)**2)/(2*sigma**2))
# Pick some example values for mu and sigma
mu = 5
sigma = 3
# Calculate probability densities, P_x, at a range of values of x
x = np.arange(-5, 15, 0.1)
P_x = gaussian_function(x, mu=mu, sigma=sigma)
# Plot
plt.plot(x, P_x, 'k-')
plt.fill_between(x, P_x, where=((8<=x)&(x<=10)), alpha=0.3, color='red') # Explained below
plt.xlabel('$x$', fontsize=20)
plt.ylabel('$P(x)$', fontsize=20)
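Since densities only become probabilities after integration, we can check that the shaded area between 8 and 10 in the plot matches the exact CDF difference. A sketch using scipy.stats with the same mu=5, sigma=3 values chosen above:

```python
import numpy as np
from scipy import stats

# Numerically integrate the density over the shaded interval [8, 10]
xs = np.linspace(8, 10, 1000)
area = np.trapz(stats.norm.pdf(xs, loc=5, scale=3), xs)

# Exact answer via the cumulative distribution function
exact = stats.norm.cdf(10, loc=5, scale=3) - stats.norm.cdf(8, loc=5, scale=3)
print(round(area, 4), round(exact, 4))
```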
# Pick some example values for mu and sigma
mu_x, mu_y = 5, 8
sigma_x, sigma_y = 3, 5
# Calculate densities
x = np.arange(-10, 30, 0.1)
P_x = gaussian_function(x, mu=mu_x, sigma=sigma_x)
P_y = gaussian_function(x, mu=mu_y, sigma=sigma_y)
# Plot
fig, axes = plt.subplots(nrows=1, ncols=2)
data = {'x':P_x, 'y':P_y}
for idx, item in enumerate(['x', 'y']):
    axes[idx].plot(x, data[item], 'k-')
    axes[idx].set_xlabel('$%s$' % item, fontsize=20)
    axes[idx].set_ylabel('$P(%s)$' % item, fontsize=20)
fig.subplots_adjust(wspace=0.8)
import scipy.stats as stats
from mpl_toolkits.mplot3d import Axes3D
x = y = np.arange(-10, 30, 0.1)
# Get a "mesh" for the x and y values
X, Y = np.meshgrid(x, y)
# This time we'll use the "norm" function from scipy.stats, although our gaussian_function
# from above would work just as well
norm_x = stats.norm.pdf(x, loc=mu_x, scale=sigma_x)
norm_y = stats.norm.pdf(y, loc=mu_y, scale=sigma_y)
M = np.dot(norm_y[:, None], norm_x[None, :])
# Plot
fig = plt.figure(figsize=(15,5))
# First subplot in 3D
ax1 = fig.add_subplot(121, projection='3d')
ax1.plot_surface(X, Y, M, cmap=plt.cm.terrain)
ax1.view_init(azim=390)
ax1.set_xlabel('$x$', fontsize=16)
ax1.set_ylabel('$y$', fontsize=16)
ax1.set_zlabel('$P(x,y)$', fontsize=16)
# Second subplot as a contour plot
ax2 = fig.add_subplot(122)
ax2.contour(X, Y, M)
ax2.imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain,
extent=(-10, 30, -10, 30))
ax2.set_xlabel('$x$', fontsize=16)
ax2.set_ylabel('$y$', fontsize=16)
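As a sanity check on the claim that the total volume under the surface equals 1, the grid can be integrated numerically. A sketch rebuilding the same grid (assuming the mu_x=5, sigma_x=3, mu_y=8, sigma_y=5 values used above):

```python
import numpy as np
from scipy import stats

# Rebuild the joint density grid of two independent Gaussians
x = y = np.arange(-10, 30, 0.1)
norm_x = stats.norm.pdf(x, loc=5, scale=3)
norm_y = stats.norm.pdf(y, loc=8, scale=5)
M = np.outer(norm_y, norm_x)

# Integrate over x (columns) first, then over y (rows); the small
# shortfall from 1 is probability mass lying outside the finite grid.
volume = np.trapz(np.trapz(M, x, axis=1), y)
print(round(volume, 3))
```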
from matplotlib.patches import Rectangle
# Pick some example values for a, b, c and d
a, b, c, d = 5, 10, 0, 5
# Contour plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contour(X, Y, M)
ax.imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain, alpha=0.6,
extent=(-10, 30, -10, 30))
ax.add_patch(Rectangle((a, c), b-a, d-c, facecolor='red', alpha=0.5))
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
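Because $x$ and $y$ are independent in this example, the probability mass inside the shaded rectangle factors into two one-dimensional integrals, which the normal CDF gives directly. A sketch using the same assumed values of a, b, c, d and the means and sigmas from above:

```python
from scipy import stats

a, b, c, d = 5, 10, 0, 5              # rectangle bounds from the plot
mu_x, sigma_x, mu_y, sigma_y = 5, 3, 8, 5

# P(a <= x <= b) and P(c <= y <= d) via the 1D CDFs
p_x = stats.norm.cdf(b, mu_x, sigma_x) - stats.norm.cdf(a, mu_x, sigma_x)
p_y = stats.norm.cdf(d, mu_y, sigma_y) - stats.norm.cdf(c, mu_y, sigma_y)

p_rect = p_x * p_y                    # joint probability of the rectangle
print(round(p_rect, 3))
```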
from scipy.stats import multivariate_normal
# Use the same mu and sigma values from above. We need new values for sigma_xy and sigma_yx
sigma_xy = sigma_yx = 3
mus = [mu_x, mu_y]
cov = [[sigma_x**2, sigma_yx**2], [sigma_xy**2, sigma_y**2]] # Covariance matrix
# Grid of x and y values
x = y = np.arange(-10, 30, 0.1)
X, Y = np.meshgrid(x, y)
pos = np.empty(X.shape + (2,))
pos[:, :, 0] = X
pos[:, :, 1] = Y
# Create a "frozen" distribution
rv = multivariate_normal(mus, cov)
# Sample from the distribution
M = rv.pdf(pos)
# Contour plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contour(X, Y, M)
ax.imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain, alpha=0.6,
extent=(-10, 30, -10, 30))
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
# Pick some example values for a, b, c and d
a, b, c, d = 5, 10, 0, 5
# Set up for plotting
fig, axes = plt.subplots(nrows=1, ncols=2)
# Integrate out y
axes[0].contour(X, Y, M)
axes[0].imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain, alpha=0.6,
extent=(-10, 30, -10, 30))
axes[0].axvspan(a, b, alpha=0.5, color='red')
axes[0].set_xlabel('$x$', fontsize=16)
axes[0].set_ylabel('$y$', fontsize=16)
axes[0].set_title('Integrating out $y$')
# Integrate out x
axes[1].contour(X, Y, M)
axes[1].imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain, alpha=0.6,
extent=(-10, 30, -10, 30))
axes[1].axhspan(c, d, alpha=0.5, color='red')
axes[1].set_xlabel('$x$', fontsize=16)
axes[1].set_ylabel('$y$', fontsize=16)
axes[1].set_title('Integrating out $x$')
plt.subplots_adjust(wspace=0.8)
# This time we'll use the handy jointplot function in Seaborn to plot
# the same multivariate normal distribution as above
# Rather than evaluating the densities over a regular grid, for simplicity
# we'll just choose 1000 values at random
data = np.random.multivariate_normal(mus, cov, size=1000)
data = pd.DataFrame(data, columns=['$x$', '$y$'])
sn.jointplot('$x$', '$y$', data, kind='kde', stat_func=None)
# Pick some example values for a, b, c and d
a, b, c, d = 5, 10, 0, 5
# Contour plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contour(X, Y, M)
ax.imshow(M, interpolation='none', origin='lower',
cmap=plt.cm.terrain, alpha=0.6,
extent=(-10, 30, -10, 30))
ax.axvspan(a, b, alpha=0.5, color='red')
ax.add_patch(Rectangle((a, c), b-a, d-c, color='black'))
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: <a href="https
Step4: We've successfully loaded our data, but there are still a couple preprocessing steps to go through first. Specifically, we're going to
Step5: Now that we have our features and labels set, it's time to start modeling!
Step6: The following blocks of code will generate 2 candidate classifiers for labeling the datapoints as either label 0 (low obesity rate) or label 1 (high obesity rate).
Step7: 1A) Which model do you think is better, Classifier 1 or Classifier 2? Explain your reasoning.
Step8: 2A) In light of all the new datapoints, now which classifier do you think is better, Classifer 1 or Classifier 2? Explain your reasoning.
Step9: 2.1) Accuracy
Step10: 2.2) Train/Test Splits
Step11: 2.2.3) Training vs. Test Accuracy
Step12: 2.3A) Play around with the code box above to find a good value of $k$. What happens if $k$ is very large or very small?
Step13: 3A) Run the code boxes above and select which model you would choose to deploy. Justify your answer.
|
<ASSISTANT_TASK:>
Python Code:
Install Data Commons API
We need to install the Data Commons API, since they don't ship natively with
most python installations.
In Colab, we'll be installing the Data Commons python and pandas APIs
through pip.
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
Imports
This is where we'll load all the libraries we need for this assignment.
# Data Commons Python and Pandas APIs
import datacommons
import datacommons_pandas
# For manipulating data
import numpy as np
import pandas as pd
# For implementing models and evaluation methods
from sklearn import linear_model, svm, tree
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
# For plotting
from matplotlib import pyplot as plt
from mlxtend.plotting import plot_decision_regions, category_scatter
Loading the Data
We'll query data using the Data Commons API, storing it in a Pandas data frame
city_dcids = datacommons.get_property_values(["CDC500_City"],
"member",
limit=500)["CDC500_City"]
# We've compiled a list of some nice Data Commons Statistical Variables
# to use as features for you
stat_vars_to_query = [
"Count_Person",
"Median_Income_Person",
"Count_Person_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"dc/e9gftzl2hm8h9", # Commute Time, this has a weird DCID
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Percent_Person_Obesity"
]
# Query Data Commons for the data and display the data
raw_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,stat_vars_to_query)
display(raw_features_df)
# Make Row Names More Readable
# --- First, we'll copy the dcids into their own column
# --- Next, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
# --- Finally, we'll set this column as the new index
df = raw_features_df.copy(deep=True)
df['DCID'] = df.index
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
df['City'] = pd.Series(city_name_dict)
df.set_index('City', inplace=True)
# Rename column "dc/e9gftzl2jm8h9" to "Commute_Time"
df.rename(columns={"dc/e9gftzl2hm8h9":"Commute_Time"}, inplace=True)
# Convert commute_time value
avg_commute_time = df["Commute_Time"]/df["Count_Person"]
df["Commute_Time"] = avg_commute_time
# Convert Count of No Health Insurance to Percentage
percent_noHealthInsurance = df["Count_Person_NoHealthInsurance"]/df["Count_Person"]
df["Percent_NoHealthInsurance"] = percent_noHealthInsurance
# Create labels based on the Obesity rate of each city
# --- Percent_Person_Obesity < 30 will be Label 0
# --- Percent_Person_Obesity >= 30 will be label 1
df["Label"] = df['Percent_Person_Obesity'] >= 30.0
df["Label"] = df["Label"].astype(int)
# Display results
display(df)
# For ease of visualization, we'll focus on just a few cities
subset_city_dcids = ["geoId/0667000", # San Francisco, CA
"geoId/3651000", # NYC, NY
"geoId/1304000", # Atlanta, GA
"geoId/2404000", # Baltimore, MD
"geoId/3050200", # Missoula, MT
"geoId/4835000", # Houston, TX
"geoId/2622000", # Detroit, MI
"geoId/5363000", # Seattle, WA
"geoId/2938000", # Kansas City, MO
"geoId/4752006" # Nashville, TN
]
# Create a subset data frame with just those cities
subset_df = df.loc[df['DCID'].isin(subset_city_dcids)]
# We'll just use 2 features for ease of visualization
X = subset_df[["Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours"]]
Y = subset_df[['Label']]
# Visualize the data
colors = ['#1f77b4', '#ff7f0e']
markers = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X.shape[0]):
    ax.scatter(X["Percent_Person_PhysicalInactivity"][i],
               X["Percent_Person_SleepLessThan7Hours"][i],
               c=colors[Y['Label'][i]],
               marker=markers[Y['Label'][i]],
               )
ax.legend([0, 1])
plt.show()
# Classifier 1
classifier1 = svm.SVC()
classifier1.fit(X, Y["Label"])
fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(),
Y["Label"].to_numpy(),
clf=classifier1,
legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier1.score(X,Y["Label"]))
# Classifier 2
classifier2 = tree.DecisionTreeClassifier(random_state=0)
classifier2.fit(X, Y["Label"])
fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(),
Y["Label"].to_numpy(),
clf=classifier2,
legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier2.score(X,Y["Label"]))
# Original Data
X_full = df[["Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours"]]
Y_full = df[['Label']]
# Visualize the data
cCycle = ['#1f77b4', '#ff7f0e']
mCycle = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X_full.shape[0]):
    ax.scatter(X_full["Percent_Person_PhysicalInactivity"][i],
               X_full["Percent_Person_SleepLessThan7Hours"][i],
               c=cCycle[Y_full['Label'][i]],
               marker=mCycle[Y_full['Label'][i]],
               )
ax.legend([0, 1])
plt.show()
# Classifier 1
fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(),
Y_full["Label"].to_numpy(),
clf=classifier1,
legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier1.score(X_full,Y_full["Label"]))
# Classifier 2
fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(),
Y_full["Label"].to_numpy(),
clf=classifier2,
legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier2.score(X_full,Y_full["Label"]))
# Use all features that aren't obesity
X_large = df.dropna()[[
"Median_Income_Person",
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Commute_Time"
]]
Y_large = df.dropna()["Label"]
# Standardize the data
scaler = StandardScaler().fit(X_large)
X_large = scaler.transform(X_large)
# Create a model
large_model = linear_model.Perceptron()
large_model.fit(X_large, Y_large)
print('Accuracy of the large model is: %.2f' % large_model.score(X_large,Y_large))
'''
Try a variety of different splits by changing the test_size variable, which
represents the ratio of points to use in the test set.
For example, for a 75% Training, 25% Test split, use test_size=0.25
'''
test_size = 0.25 # Change me! Enter a value between 0 and 1
print(f'{np.round((1-test_size)*100)}% Training, {(test_size)*100}% Test Split' )
# Randomly split data into Train and Test Sets
x_train, x_test, y_train, y_test = train_test_split(X_large, Y_large, test_size=test_size)
# Fit a model on the training set
large_model.fit(x_train, y_train)
print('The TRAINING accuracy is: %.2f' % large_model.score(x_train, y_train))
# Evaluate on the test Set
print('The TEST accuracy is: %.2f' % large_model.score(x_test, y_test))
'''
Set the number of folds by changing k.
'''
k = 5 # Enter an integer >=2. Number of folds.
print(f'Test accuracies for {k} splits:')
scores = cross_val_score(large_model, X_large, Y_large, cv=k)
for i in range(k):
    print('\tFold %d: %.2f' % (i+1, scores[i]))
print('Average score across all folds: %.2f' % np.mean(scores))
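To see directly what happens as $k$ grows or shrinks (question 2.3A above), one can sweep several fold counts on the same model and data. A sketch on synthetic data standing in for `X_large` and `Y_large`:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data with the same shape of problem as above
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=0)
model = Perceptron()

# Larger k means more training data per fold but smaller, noisier test sets
for k in [2, 5, 10, 50]:
    scores = cross_val_score(model, X_demo, y_demo, cv=k)
    print('k=%2d  mean=%.2f  std=%.2f' % (k, scores.mean(), scores.std()))
```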
# Classifier A
x_A = df[["Count_Person",
"Median_Income_Person"]]
y_A = df["Label"]
classifierA = linear_model.Perceptron()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)
print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number of Features:', x_A.shape[1])
print('Classification Accuracy: %.2f' % classifierA.score(x_A, y_A))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
# Classifier A
x_A = df[["Percent_Person_PhysicalInactivity",
"Median_Income_Person"]]
y_A = df["Label"]
classifierA = svm.SVC()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)
print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number of Features:', x_A.shape[1])
print('Training Classification Accuracy: %.2f' % classifierA.score(x_A, y_A))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
# Classifier B
x_B = df.dropna()[[
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol"
]]
y_B = df.dropna()["Label"]
classifierB = tree.DecisionTreeClassifier()
classifierB.fit(x_B, y_B)
scores = cross_val_score(classifierB, x_B, y_B, cv=5)
print('Classifier B')
print('-------------')
print('Number of Data Points:', x_B.shape[0])
print('Number of Features:', x_B.shape[1])
print('Training Classification Accuracy: %.2f' % classifierB.score(x_B, y_B))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
# Classifier C
x_C = df.dropna()[[
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol"
]]
y_C = df.dropna()["Label"]
classifierC = linear_model.Perceptron()
classifierC.fit(x_C, y_C)
scores = cross_val_score(classifierC, x_C, y_C, cv=5)
print('Classifier C')
print('-------------')
print('Number of Data Points:', x_C.shape[0])
print('Number of Features:', x_C.shape[1])
print('Training Classification Accuracy: %.2f' % classifierC.score(x_C, y_C))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
your_local_dcid = "geoId/0649670" # Replace with your own!
# Get your local data from data commons
local_data = datacommons_pandas.build_multivariate_dataframe(your_local_dcid,stat_vars_to_query)
# Cleaning and Preprocessing
local_data['DCID'] = local_data.index
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
local_data['City'] = pd.Series(city_name_dict)
local_data.set_index('City', inplace=True)
local_data.rename(columns={"dc/e9gftzl2hm8h9":"Commute_Time"}, inplace=True)
avg_commute_time = local_data["Commute_Time"]/local_data["Count_Person"]
local_data["Commute_Time"] = avg_commute_time
percent_noHealthInsurance = local_data["Count_Person_NoHealthInsurance"]/local_data["Count_Person"]
local_data["Percent_NoHealthInsurance"] = percent_noHealthInsurance
local_data["Label"] = local_data['Percent_Person_Obesity'] >= 30.0
local_data["Label"] = local_data["Label"].astype(int)
# Build data to feed into model
x_local = local_data[[
"Median_Income_Person",
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Commute_Time"
]]
x_local = scaler.transform(x_local)
y_local = local_data["Label"]
# Make Prediction
prediction = large_model.predict(x_local)
# Report Results
print(f'Prediction for {local_data.index[0]}:')
print(f'\tThe predicted label was {prediction[0]}')
print(f'\tThe actual label was {y_local[0]}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defaults can be set for nodes and edges.
Step2: Nodes, edges and subgraphs are added and deleted through the respective add_*() and del_*() methods.
|
<ASSISTANT_TASK:>
Python Code:
graph.set_fontsize('12')
graph.get_fontsize()
graph.set_node_defaults(fillcolor='blue', style='filled')
graph.get_node_defaults()
node1 = pydot.Node(name='node1', label='My first node', shape='box')
node2 = pydot.Node(name='node2', label='My second node', color='red')
edge = pydot.Edge(src=node1, dst=node2, label='My Edge', style='dotted')
graph.add_node(node1)
graph.add_node(node2)
graph.add_edge(edge)
graph.write('pydot.dot')
graph.write_png('pydot.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, the usos_suelo variable already has all of our variables of interest computed; what we need now is, for each row of our GeoDataFrame, to know which polygons are its neighbors.
Step2: The first thing we did was import the PySal library. Next, we computed the weights matrix w using Queen contiguity (the PySal documentation describes the different neighborhood types and the data sources you can use).
Step3: And then we plot it
Step4: Intensity
Step5: Now we iterate over the list of neighbors and compute the intensity for each element
Step6: What we are doing may look very complicated; however, once again, if we look at it carefully it is relatively simple
Step7: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import geopandas as gpd
from preproceso import preprocesa
pd.options.mode.chained_assignment = None
denue = gpd.read_file("datos/DENUE_INEGI_09_.shp")
agebs = gpd.read_file("datos/ageb_urb.shp")
usos_suelo = preprocesa(denue, agebs)
usos_suelo.reset_index(drop=True, inplace=True)
usos_suelo.head()
import pysal
w = pysal.weights.Queen.from_dataframe(usos_suelo)
print(w.n)
print(w.weights[0])
print(w.neighbors[0])
print(w.neighbors[5])
print(w.histogram)
freqs = pd.DataFrame(w.histogram, columns=['vecinos', 'cuenta'])
freqs.head()
%matplotlib inline
import seaborn as sns
sns.barplot(x='vecinos', y='cuenta', data=freqs)
usos_suelo = usos_suelo.drop(usos_suelo.index[[1224]])
usos_suelo.reset_index(drop=True, inplace=True)
w = pysal.weights.Queen.from_dataframe(usos_suelo)
usos_suelo.iloc[[0]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
import numpy as np
intensidad = []
for i in range(0, w.n):
    vecinos = w.neighbors[i]
    total = 0.0
    suma = np.zeros((3), dtype=np.float)
    for j in vecinos:  # iterate over the neighbor ids themselves, not range(len(vecinos))
        data = usos_suelo.iloc[[j]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
        suma = suma + data
        total += sum(data)
    intensidad.append((i, sum(total)))
print(intensidad[0:10])
intensidad_df = pd.DataFrame(intensidad, columns=['gid', 'intensidad'])
datos_intensidad = usos_suelo.merge(intensidad_df, left_index=True, right_on='gid', how='inner')
datos_intensidad.head()
entropia = []
for i in range(0, w.n):
    vecinos = w.neighbors[i]
    total = 0.0
    suma = np.zeros((3), dtype=np.float)
    for j in vecinos:  # iterate over the neighbor ids themselves, not range(len(vecinos))
        data = usos_suelo.iloc[[j]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
        suma = suma + data
        total += np.sum(data)
    p = np.nan_to_num(suma/total)
    lp = np.select([p == 0, p > 0], [p, np.log(p)])
    entropia.append((i, np.sum(p*lp)))
print(entropia[0:10])
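The per-polygon neighbor sums computed by the loops above are a spatial lag, which can also be expressed as one matrix product instead of nested Python loops. A minimal numpy sketch on a hypothetical 4-polygon contiguity matrix (the real matrix could come from `w.full()`):

```python
import numpy as np

# Hypothetical binary contiguity matrix for 4 polygons (1 = neighbors)
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# One row per polygon, one column per land-use class
counts = np.array([[3., 1., 0.],
                   [2., 2., 1.],
                   [0., 4., 1.],
                   [5., 0., 2.]])

neighbor_sums = W @ counts               # class totals over each polygon's neighbors
intensity = neighbor_sums.sum(axis=1)    # same quantity as the loop's running total
print(intensity)
```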
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the raw data
Step2: Use the lxml parser to analyze the article structure
Step3: Extract the list of image src values
Step4: Count the image src occurrences
Step5: Use reduceByKey and sortBy to compute the img src ranking
|
<ASSISTANT_TASK:>
Python Code:
def parseRaw(json_map):
    url = json_map['url']
    content = json_map['html']
    return (url, content)
import json
import pprint
pp = pprint.PrettyPrinter(indent=2)
path = "./pixnet.txt"
all_content = sc.textFile(path).map(json.loads).map(parseRaw)
def parseImgSrc(x):
    try:
        urls = list()
        import lxml.html
        from urlparse import urlparse
        root = lxml.html.fromstring(x)
        t = root.getroottree()
        for src in root.xpath('//img/@src'):
            try:
                host = urlparse(src).netloc
                if '.' not in host: continue
                if host.count('.') == 1:
                    pass
                else:
                    host = host[host.index('.')+1:]
                urls.append('imgsrc_' + host)
            except:
                print "Error Parse At:", src
        for src in root.xpath('//input[@src]/@src'):
            try:
                host = urlparse(src).netloc
                if '.' not in host: continue
                if host.count('.') == 1:
                    pass
                else:
                    host = host[host.index('.')+1:]
                urls.append('imgsrc_' + host)
            except:
                print "Error parseImgSrc At:", src
    except:
        pass
    return urls
image_list = all_content.map(lambda x :parseImgSrc(x[1]))
pp.pprint(image_list.first()[:10])
img_src_count = all_content.map(
lambda x :parseImgSrc(x[1])).flatMap(
lambda x: x).countByValue()
for i in img_src_count:
    print i, ':', img_src_count[i]
from operator import add
all_content.map(
lambda x :parseImgSrc(x[1])).flatMap(
lambda x: x).map(
lambda x: (x,1)).reduceByKey(add).sortBy(lambda x:x[1] ,ascending =False).collect()
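For readers without a Spark context handy, the `reduceByKey(add)` plus `sortBy` pattern above is the same computation as a counter followed by a sort. A minimal pure-Python sketch on hypothetical host strings:

```python
from collections import Counter
from operator import itemgetter

# Hypothetical src hosts, already normalized the way parseImgSrc does
srcs = ['imgsrc_pixnet.net', 'imgsrc_facebook.com', 'imgsrc_pixnet.net']

# Count each key, then sort by count descending -- the img src ranking
ranking = sorted(Counter(srcs).items(), key=itemgetter(1), reverse=True)
print(ranking)
```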
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's a quick way to see all the keys in the DustSpectrum object.
Step2: Play with WD01 dust distributions
Step3: The <code>DustSpectrum</code> objects also contain an <code>integrate_dust_mass()</code> function for calculating the total mass column of dust (g cm^-2).
|
<ASSISTANT_TASK:>
Python Code:
mrn_test1 = distlib.MRN_dist(0.005, 0.3, 3.5)
mrn_test2 = distlib.MRN_dist(0.005, 0.25, 3.5)
mrn_test3 = distlib.MRN_dist(0.005, 0.3, 4.0)
mrn_test4 = distlib.MRN_dist(0.005, 0.3, 3.5, na=10, log=True)
print(type(mrn_test1))
print(mrn_test1.__dict__.keys())
plt.plot(mrn_test1.a, mrn_test1.nd, label='amax=%.3f, p=%.2f' % (mrn_test1.a[-1], 3.5))
plt.plot(mrn_test4.a, mrn_test4.nd, 'ko', label='')
plt.plot(mrn_test2.a, mrn_test2.nd, label='amax=%.3f, p=%.2f' % (mrn_test2.a[-1], 3.5))
plt.plot(mrn_test3.a, mrn_test3.nd, label='amax=%.3f, p=%.2f' % (mrn_test3.a[-1], 4.0))
plt.legend(loc='upper right', frameon=False)
plt.loglog()
plt.xlabel('grain radius (micron)')
plt.ylabel(r'$dn/da$ (cm$^{-2}$ micron$^{-1}$)')
plt.xlim(0.005,0.3)
wd_MW_gra = distlib.make_WD01_DustSpectrum(type='Graphite', verbose=False)
wd_MW_sil = distlib.make_WD01_DustSpectrum(type='Silicate', verbose=False)
wd_MW_gra_bc6 = distlib.make_WD01_DustSpectrum(type='Graphite', bc=6.0, verbose=False)
wd_MW_sil_bc6 = distlib.make_WD01_DustSpectrum(type='Silicate', bc=6.0, verbose=False)
plt.plot(wd_MW_gra.a, wd_MW_gra.nd*wd_MW_gra.a**4, label='Graphite MW dust')
plt.plot(wd_MW_sil.a, wd_MW_sil.nd*wd_MW_sil.a**4, label='Silicate MW dust')
plt.plot(wd_MW_gra_bc6.a, wd_MW_gra_bc6.nd*wd_MW_gra_bc6.a**4, 'b--', label='bc=6')
plt.plot(wd_MW_sil_bc6.a, wd_MW_sil_bc6.nd*wd_MW_sil_bc6.a**4, 'g--', label='bc=6')
plt.xlabel('grain radius (micron)')
plt.ylabel(r'$a^4 dn/da$ (cm$^{-2}$ um$^{3}$)')
plt.legend(loc='lower left', frameon=False)
plt.loglog()
plt.xlim(0.005, 1)
plt.ylim(1.e-20,1.e-15)
print("Graphite dust mass = %.3e" %(wd_MW_gra.integrate_dust_mass()))
print("Silicate dust mass = %.3e" %(wd_MW_sil.integrate_dust_mass()))
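integrate_dust_mass() integrates the grain mass (4/3)πρa³ against the size distribution dn/da. A rough sketch of that quadrature for an MRN-like power law, with an assumed grain density of 3 g cm⁻³ and radii in microns (the unit conversion is illustrative, not DustSpectrum's exact internals):

```python
import math

def trapezoid(xs, ys):
    # Simple trapezoidal rule over tabulated (x, y) samples
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))

def dust_mass_column(a_um, nd, rho=3.0):
    # Mass per grain in grams (radius converted from microns to cm),
    # integrated against the number distribution dn/da
    mass = [4.0 / 3.0 * math.pi * rho * (a * 1e-4) ** 3 for a in a_um]
    return trapezoid(a_um, [m * n for m, n in zip(mass, nd)])

# MRN-like dn/da ~ a^-3.5 on a dense grid from 0.005 to 0.3 micron
a_grid = [0.005 + i * (0.3 - 0.005) / 1999 for i in range(2000)]
nd_grid = [a ** -3.5 for a in a_grid]
mass_col = dust_mass_column(a_grid, nd_grid)
```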
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the dataset from the csv file
Step2: Process the data
Step3: Prepare the individual data axis
Step4: Plot the data in 2D
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import csv
%run 'preprocessor.ipynb' #our own preprocessor functions
with open('/Users/timothy/Desktop/Files/data_new/merged.csv', 'r') as f:
reader = csv.reader(f)
data = list(reader)
matrix = obtain_data_matrix(data)
samples = len(matrix)
print("Number of samples: " + str(samples))
filament = matrix[:,[8]]
time = matrix[:,[9]]
satisfaction = matrix[:,[10]]
result = matrix[:,[11]]
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.figure(1,figsize=(12,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Combined)')
plt.figure(2,figsize=(12,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Failed Focus)')
plt.figure(3,figsize=(10,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Success Only)')
for data in matrix:
filament = data[0, 8]
time = data[0, 9]
satisfaction = data[0, 10]
result = data[0, 11]
if result == 1:
plt.figure(1)
plt.scatter(time, filament, c="blue", alpha=0.5,)
plt.figure(2)
plt.scatter(time, filament, c="blue", alpha=0.05,)
plt.figure(3)
plt.scatter(time, filament, c="blue", alpha=1,)
else:
plt.figure(1)
plt.scatter(time, filament, c="red", alpha=0.5,)
plt.figure(2)
plt.scatter(time, filament, c="red", alpha=1,)
plt.tight_layout()
plt.show()
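The per-row loop above works, but the same success/failure split can be done once with NumPy boolean masks, avoiding per-row comparisons; a sketch with invented toy columns:

```python
import numpy as np

# Toy stand-ins for the time, filament and result columns (values invented)
time = np.array([35.0, 60.0, 20.0])
filament = np.array([4.2, 7.5, 2.0])
result = np.array([1, 0, 1])

mask = result == 1                           # True for successful prints
ok_t, ok_f = time[mask], filament[mask]      # successes
bad_t, bad_f = time[~mask], filament[~mask]  # failures
# plt.scatter(ok_t, ok_f, c="blue"); plt.scatter(bad_t, bad_f, c="red")
```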
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explaining simpler models
Step2: Train a Scikit-learn linear regression model on the data and print the learned coefficients
Step3: Feature attributions with SHAP
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import tensorflow as tf
!gsutil cp gs://ml-design-patterns/auto-mpg.csv .
data = pd.read_csv('auto-mpg.csv', na_values='?')
data = data.dropna()
data = data.drop(columns=['car name'])
data = pd.get_dummies(data, columns=['origin'])
data.head()
labels = data['mpg']
data = data.drop(columns=['mpg', 'cylinders'])
x,y = data,labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = LinearRegression().fit(x_train, y_train)
coefficients = model.coef_
coefdf = pd.DataFrame(coefficients, index=data.columns.tolist(), columns=['Learned coefficients'])
coefdf
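The learned coefficients of a linear regression are just the least-squares solution; a self-contained sketch on synthetic data with known weights (all values invented) shows the recovery:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.01, size=200)

# Ordinary least squares, the same solution LinearRegression.fit computes
# (here without an intercept term)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```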
model = tf.keras.Sequential([
tf.keras.layers.Dense(16, activation='relu', input_shape=[len(x_train.iloc[0])]),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
model.fit(x_train, y_train, epochs=1000)
!pip install shap
import shap
# Create an explainer object and get feature attributions for the first 10 examples in our test dataset
explainer = shap.DeepExplainer(model, x_train[:200])
shap_values = explainer.shap_values(x_test.values[:10])
# Print the feature attributions for the first example in our test set
shap_values[0][0]
# This is the baseline value shap is using
explainer.expected_value.numpy()
shap.initjs()
shap.force_plot(explainer.expected_value[0].numpy(), shap_values[0][0,:], x_test.iloc[0,:])
shap.summary_plot(shap_values, feature_names=data.columns.tolist(), class_names=['MPG'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute sensitivity maps
Step2: Show gain matrix a.k.a. leadfield matrix with sensitivity map
Step3: Compare sensitivity map with distribution of source depths
Step4: Sensitivity is likely to co-vary with the distance between sources and sensors
|
<ASSISTANT_TASK:>
Python Code:
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.source_space import compute_distance_to_sensors
from mne.source_estimate import SourceEstimate
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname)
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
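A crude intuition for a sensitivity map: each column of the leadfield describes how strongly one source drives the sensors, so per-source column norms give a sensitivity-like quantity. This is only a sketch of the idea on a random matrix, not MNE's exact 'fixed'-mode computation:

```python
import numpy as np

rng = np.random.default_rng(1)
toy_leadfield = rng.normal(size=(8, 5))       # 8 sensors x 5 sources
sens = np.linalg.norm(toy_leadfield, axis=0)  # one value per source
sens /= sens.max()                            # normalize to [0, 1]
```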
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
fig.colorbar(im, ax=ax)
fig_2, ax = plt.subplots()
ax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
fig_2.legend()
ax.set(title='Normal orientation sensitivity',
xlabel='sensitivity', ylabel='count')
brain_sens = grad_map.plot(
subjects_dir=subjects_dir, clim=dict(lims=[0, 50, 100]), figure=1)
brain_sens.add_text(0.1, 0.9, 'Gradiometer sensitivity', 'title', font_size=16)
# source space with vertices
src = fwd['src']
# Compute minimum Euclidean distances between vertices and MEG sensors
depths = compute_distance_to_sensors(src=src, info=fwd['info'],
picks=picks_meg).min(axis=1)
maxdep = depths.max() # for scaling
vertices = [src[0]['vertno'], src[1]['vertno']]
depths_map = SourceEstimate(data=depths, vertices=vertices, tmin=0.,
tstep=1.)
brain_dep = depths_map.plot(
subject='sample', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[0, maxdep / 2., maxdep]), figure=2)
brain_dep.add_text(0.1, 0.9, 'Source depth (m)', 'title', font_size=16)
corr = np.corrcoef(depths, grad_map.data[:, 0])[0, 1]
print('Correlation between source depth and gradiometer sensitivity values: %f.'
% corr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch data
Step2: Clean data
Step3: Export to CSV
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import csv
from sklearn.datasets import fetch_20newsgroups
newsgroups = fetch_20newsgroups(subset='all')
df = pd.DataFrame(newsgroups.data, columns=['text'])
df['categories'] = [newsgroups.target_names[index] for index in newsgroups.target]
df.head()
# Convert multiple whitespace characters into a space
df['text'] = df['text'].str.replace(r'\s+', ' ', regex=True)
# Change newsgroup titles to use underscores rather than periods
df['categories'] = df['categories'].str.replace('.', '_', regex=False)
# Trim leading and tailing whitespace
df['text'] = df['text'].str.strip()
# Truncate all fields to the maximum field length of 128kB
df['text'] = df['text'].str.slice(0,131072)
# Remove any rows with empty fields
df = df.replace('', np.NaN).dropna()
# Drop duplicates
df = df.drop_duplicates(subset='text')
# Limit rows to maximum of 100,000
df = df.sample(min(100000, len(df)))
df.head()
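The same normalization steps (collapse whitespace, trim, truncate, drop empties and duplicates) can be sketched with the standard library alone; the sample strings are invented:

```python
import re

def clean(text, max_len=131072):
    text = re.sub(r'\s+', ' ', text)  # collapse runs of whitespace
    return text.strip()[:max_len]     # trim, then truncate to 128 KiB

docs = ["  Hello\t\nworld  ", "same text", "same text", ""]
cleaned = [clean(d) for d in docs]
# dict.fromkeys keeps the first occurrence and preserves order
deduped = list(dict.fromkeys(c for c in cleaned if c))
```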
csv_str = df.to_csv(index=False, header=False)
with open("20-newsgroups-dataset.csv", "w") as text_file:
print(csv_str, file=text_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: In general, all weights are trainable weights. The only built-in layer that has
Step3: Layers & models also feature a boolean attribute trainable. Its value can be changed.
Step4: When a trainable weight becomes non-trainable, its value is no longer updated during training.
Step5: Do not confuse the layer.trainable attribute with the argument training in layer.__call__()
Step6: The typical transfer-learning workflow
Step7: These are the first 9 images in the training dataset -- as you can see, they're all different sizes.
Step8: We can also see that label 1 is "dog" and label 0 is "cat".
Step9: Besides, let's batch the data and use caching & prefetching to optimize loading speed.
Step10: Using random data augmentation
Step11: Let's visualize what the first image of the first batch looks like after various random transformations.
Step12: Build a model
Step13: Train the top layer
Step14: Do a round of fine-tuning of the entire model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
layer = keras.layers.BatchNormalization()
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
layer.trainable = False # Freeze the layer
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
# Make a model with 2 layers
layer1 = keras.layers.Dense(3, activation="relu")
layer2 = keras.layers.Dense(3, activation="sigmoid")
model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])
# Freeze the first layer
layer1.trainable = False
# Keep a copy of the weights of layer1 for later reference
initial_layer1_weights_values = layer1.get_weights()
# Train the model
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# Check that the weights of layer1 have not changed during training
final_layer1_weights_values = layer1.get_weights()
np.testing.assert_allclose(
initial_layer1_weights_values[0], final_layer1_weights_values[0]
)
np.testing.assert_allclose(
initial_layer1_weights_values[1], final_layer1_weights_values[1]
)
inner_model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Dense(3, activation="relu"),
keras.layers.Dense(3, activation="relu"),
]
)
model = keras.Sequential(
[keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"),]
)
model.trainable = False # Freeze the outer model
assert inner_model.trainable == False # All layers in `model` are now frozen
assert inner_model.layers[0].trainable == False # `trainable` is propagated recursively
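The effect of freezing is simply that frozen weights are excluded from the update step; that can be mimicked in plain NumPy with a trainable mask (toy numbers, not Keras internals):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])
grad = np.array([0.5, 0.5, 0.5])
trainable = np.array([True, False, True])  # middle weight is "frozen"

lr = 0.1
w_new = w - lr * grad * trainable  # frozen entry receives no update
```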
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
train_ds, validation_ds, test_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation and 10% for test
split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
as_supervised=True, # Include labels
)
print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds))
print(
"Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds)
)
print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds))
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
size = (150, 150)
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))
test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))
batch_size = 32
train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)
validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)
test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)
from tensorflow import keras
from tensorflow.keras import layers
data_augmentation = keras.Sequential(
[layers.RandomFlip("horizontal"), layers.RandomRotation(0.1),]
)
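At its core, RandomFlip("horizontal") mirrors the width axis; the underlying array operation is just a left-right flip, sketched on a tiny invented 2x3 "image":

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6]])
flipped = img[:, ::-1]  # horizontal (left-right) flip
```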
import numpy as np
for images, labels in train_ds.take(1):
plt.figure(figsize=(10, 10))
first_image = images[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(
tf.expand_dims(first_image, 0), training=True
)
plt.imshow(augmented_image[0].numpy().astype("int32"))
plt.title(int(labels[0]))
plt.axis("off")
base_model = keras.applications.Xception(
weights="imagenet", # Load weights pre-trained on ImageNet.
input_shape=(150, 150, 3),
include_top=False,
) # Do not include the ImageNet classifier at the top.
# Freeze the base_model
base_model.trainable = False
# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs) # Apply random data augmentation
# Pre-trained Xception weights requires that input be scaled
# from (0, 255) to a range of (-1., +1.), the rescaling layer
# outputs: `(inputs * scale) + offset`
scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(x)
# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x) # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(1e-5), # Low learning rate
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 10
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.3 | Dataframes
Step2: 3.1.2 | Leisure Activity Data
Step3: 3.1.3 | Age Level Activity Data
Step4: 3.1.4 | Geography Level Activity Data
Step5: 3.2 | Cleaning and Organizing Data
Step6: 4 | Visualizing the Data
Step7: From the visualization above, we see that the majority of Americans (both genders) tend, on average, to spend the most time on the following activities
Step8: We observe from the yearly trends that -
Step9: 4.4 | Does age play any role in the time spent on activities?
Step10: 4.5 | Deep-Diving and visualizing data at sub-activity level for the top 4 main activities
Step11: We see that the majority of time is spent either sleeping or working, followed by attending class (the target student crowd) or socializing, relaxing, and leisure.
Step12: 4.7 | Visualizing time spent on Sports & Leisure activities at a geographic level
Step13: 4.8 | Ad-Hoc Analysis
|
<ASSISTANT_TASK:>
Python Code:
# Importing all the required libraries
%matplotlib inline
import sys
import pandas as pd # data manipulation package
import datetime as dt # date tools, used to note current date
import matplotlib.pyplot as plt # graphics package
import matplotlib as mpl # graphics package
import plotly as pl # graphics package
import urllib.request # To import data from Dropbox
# New Libraries
import os # operating system tools (check files)
import requests, io # internet and input tools
import zipfile as zf # zip file tools
import shutil # file management tools
import numpy as np # scientific computing
# Geographical Views (Plotly Authentication)
import plotly.plotly as py
import plotly.graph_objs as go
py.sign_in('Vinee03', '0hNA8NplYEePfVAdDtUa')
# System Details
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
# To maintain color uniformity throughout all of the visualizations
colors = {1: "royalblue",
2: "bisque",
3: "navy",
4: "silver",
5: "darkmagenta",
6: "pink",
7: "chocolate",
8: "orangered",
9: "lime",
10:"orange",
11:"darkorchid",
12:"black",
13:"yellow"}
# Fetching data from ATUS Website
dls = "https://www.bls.gov/tus/a1_all_years.xlsx"
resp = requests.get(dls)
with open('a1_all_years.xlsx', 'wb') as output:
c = output.write(resp.content)
# Creating a dictionary of dataframes for each year of analysis
df_names = ['df_2011','df_2012','df_2013','df_2014','df_2015']
# Defining a function to extract main activity data for each year ranging from 2011 to 2015
def extract_main(no_of_years):
i= 0
year = 2011
year_max = year + no_of_years
while year < year_max:
year = str(year)
df = pd.read_excel('a1_all_years.xlsx', sheet_name=year, header=None)
df_extract = df.loc[[5,11,14,25,37,46,52,58,62,73,85,91],[0,8,9]]
df_names[i] = df_extract
df_names[i].columns = ['Main_Activity', 'AvgHrsMen_'+ year, 'AvgHrsWomen_' + year]
df_names[i] = df_names[i].set_index(['Main_Activity'])
year = int(year)
year=year+1
i = i+1
extract_main(5)
# Merging the year-level dataframes to obtain a consolidated main activity dataframe
df_main_final = pd.concat([df_names[0],df_names[1],df_names[2],df_names[3],df_names[4]], axis = 1)
df_main_final
# Fetching data from ATUS Website
dls = "https://www.bls.gov/tus/a1_all_years.xlsx"
resp = requests.get(dls)
with open('a1_all_years.xlsx', 'wb') as output:
c = output.write(resp.content)
# Creating a dictionary of dataframes for each year of analysis
df_names = ['df_2011','df_2012','df_2013','df_2014','df_2015']
# Defining a function to extract sub-activity data for each year ranging from 2011 to 2015
def extract_sub(no_of_years):
i= 0
year = 2011
year_max = year + no_of_years
while year < year_max:
year = str(year)
df = pd.read_excel('a1_all_years.xlsx', sheet_name=year, header=None)
df_extract = df.loc[[6,7,8,9,10,12,13,15,16,17,18,19,20,21,22,23,24,26,28,32,35,38,42,45,47,48,51,53,54,55,56,57,59,60,61,
63,64,72,74,81,84,86,87,90],[0,8,9]]
df_names[i] = df_extract
df_names[i].columns = ['Sub_Activity', 'AvgHrsMen_'+ year, 'AvgHrsWomen_' + year]
df_names[i] = df_names[i].set_index(['Sub_Activity'])
year = int(year)
year=year+1
i = i+1
extract_sub(5)
# Merging the year-level dataframes to obtain a consolidated sub-activity dataframe
df_sub_final = pd.concat([df_names[0],df_names[1],df_names[2],df_names[3],df_names[4]], axis = 1)
df_sub_final
# Fetching Data for Leisure activity (will be used further down in the analysis)
dls = "https://www.bls.gov/tus/charts/chart9.txt"
resp = requests.get(dls)
with open('chart9.txt', 'wb') as output:
c = output.write(resp.content)
# Extracting the text file data
lines = open("chart9.txt").readlines()
open('newfile.txt', 'w').writelines(lines[2:-4])
f_leisure = open("newfile.txt","r")
data_leisure = f_leisure.read()
# Replace the target string to fix the delimiters
data_leisure = data_leisure.replace("\t", '|').replace("||||", '|').replace("|||", '|').replace("||", '|')
# Write the file out again
with open('newfile.txt', 'w') as file:
file.write(data_leisure)
f_leisure = open("newfile.txt","r")
data_leisure = f_leisure.read()
# Extracting/Cleaning the data and renaming certain columns
df_leisure_all = pd.read_csv(open("newfile.txt","r"), delimiter="|")
df_leisure_all = df_leisure_all.rename(columns={'Unnamed: 0': 'Activity_Leisure_SubActivity'})
df_leisure_all = df_leisure_all.drop(df_leisure_all.index[[7]])
df_leisure_all.iloc[0, df_leisure_all.columns.get_loc('Minutes')] = 167
df_leisure_all
df_leisure_all = df_leisure_all.set_index(['Activity_Leisure_SubActivity'])
df_leisure_all= df_leisure_all.astype(float)
df_leisure_all
# Fetching Students Data (will be used further down in the analysis)
dls = "https://www.bls.gov/tus/charts/chart6.txt"
resp = requests.get(dls)
with open('chart6.txt', 'wb') as output:
c = output.write(resp.content)
# Extracting the text file data
lines = open("chart6.txt").readlines()
open('newfile.txt', 'w').writelines(lines[1:-5])
f_s = open("newfile.txt","r")
data_s = f_s.read()
# Replace the target string to fix the delimiters
data_s = data_s.replace("\t", '|').replace("||||", '|').replace("|||", '|').replace("||", '|').replace("activities", 'Working').replace("Educational Working", 'Household/Educational Activities')
# Write the file out again
with open('newfile.txt', 'w') as file:
file.write(data_s)
f_s = open("newfile.txt","r")
data_s = f_s.read()
# Extracting the data and renaming certain columns
df_students = pd.read_csv(open("newfile.txt","r"), delimiter="|")
df_students = df_students.drop(df_students.index[[3,9,5,6,7,8]])
df_students = df_students.rename(columns={'Unnamed: 0': 'Main_Activity', 'Hours': 'Ages 15-49'})
df_students['Main_Activity'] = df_students['Main_Activity'].str.strip()
df_students = df_students.set_index(['Main_Activity'])
df_students
# Fetching Elders Data (will be used further down in the analysis)
dls = "https://www.bls.gov/tus/charts/chart4.txt"
resp = requests.get(dls)
with open('chart4.txt', 'wb') as output:
c = output.write(resp.content)
# Extracting the text file data
lines = open("chart4.txt").readlines()
open('newfile.txt', 'w').writelines(lines[4:-4])
f_e = open("newfile.txt","r")
data_e = f_e.read()
# Replace the target string to fix the delimiters
data_e = data_e.replace("\t", '|').replace("||||", '|').replace("|||", '|').replace("||", '|').replace("Household activities", 'Household/Educational Activities')
# Write the file out again
with open('newfile.txt', 'w') as file:
file.write(data_e)
f_e = open("newfile.txt","r")
data_e = f_e.read()
# Extracting the data and renaming certain columns
df_elderly = pd.read_csv(open("newfile.txt","r"), delimiter="|")
df_elderly.drop('Unnamed: 4', axis = 1, inplace = True)
df_elderly = df_elderly.rename(columns={'Unnamed: 0': 'Main_Activity'})
df_elderly['Main_Activity'] = df_elderly['Main_Activity'].str.strip()
df_elderly = df_elderly.set_index(['Main_Activity'])
df_elderly
# Extracting Geographical Distribution of Sports Data from Dropbox
url = "https://www.dropbox.com/s/c5ahh0ffb7tc3yv/ATUS_Geography_Data_Sports.csv?dl=1"
u_s = urllib.request.urlopen(url)
data_s = u_s.read()
u_s.close()
with open("ATUS_Geography_Data_Sports.csv", "wb") as f :
f.write(data_s)
df_geo_sports = pd.read_csv(open("ATUS_Geography_Data_Sports.csv","r"),delimiter=",")
df_geo_sports
# Extracting Geographical distribution of Organizational, Civic, and Religious Data from Dropbox
url = "https://www.dropbox.com/s/36gnok6gtn3u5gw/ATUS_Geography_Data_Religion.csv?dl=1"
u_r = urllib.request.urlopen(url)
data_r = u_r.read()
u_r.close()
with open("ATUS_Geography_Data_Religion.csv", "wb") as f :
f.write(data_r)
df_geo_religion = pd.read_csv(open("ATUS_Geography_Data_Religion.csv","r"),delimiter=",")
df_geo_religion
# Cleaning the data and computing average time spent on activities at gender level
# Main Activity
df_main_final= df_main_final.apply(pd.to_numeric, errors='ignore')
df_main_final["AvgHrsMenMain"]= df_main_final[['AvgHrsMen_2011','AvgHrsMen_2012','AvgHrsMen_2013','AvgHrsMen_2014','AvgHrsMen_2015']].mean(axis = 1)
df_main_final["AvgHrsWomenMain"]= df_main_final[['AvgHrsWomen_2011','AvgHrsWomen_2012','AvgHrsWomen_2013','AvgHrsWomen_2014','AvgHrsWomen_2015']].mean(axis =1)
df_main_final = df_main_final.round(2)
df_main_final
# Cleaning the data and computing average time spent on activities at gender level
# Sub-Activity
df_sub_final=df_sub_final.replace('\(','',regex=True).replace('\)','',regex=True)
df_sub_final= df_sub_final.apply(pd.to_numeric, errors='ignore')
df_sub_final["AvgHrsMenSub"]=df_sub_final[['AvgHrsMen_2011','AvgHrsMen_2012','AvgHrsMen_2013','AvgHrsMen_2014','AvgHrsMen_2015']].mean(axis =1)
df_sub_final["AvgHrsWomenSub"]=df_sub_final[['AvgHrsWomen_2011','AvgHrsWomen_2012','AvgHrsWomen_2013','AvgHrsWomen_2014','AvgHrsWomen_2015']].mean(axis =1)
df_sub_final = df_sub_final.round(2)
df_sub_final
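Each AvgHrs column above is a plain row-wise mean over the five yearly columns; the same computation for one activity, sketched with the standard library (the hour values are invented):

```python
from statistics import mean

yearly_hours = {'2011': 8.7, '2012': 8.8, '2013': 8.7, '2014': 8.9, '2015': 8.8}
avg_hours = round(mean(yearly_hours.values()), 2)
```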
# Sorting the dataframes as per the hours spent
df_main_final = df_main_final.sort_values('AvgHrsMenMain', ascending=True)
# Re-setting the Index
df_main_final = df_main_final.reset_index()
df_sub_final = df_sub_final.reset_index()
# Trimming the data to format the dataframe
df_main_final['Main_Activity'] = df_main_final['Main_Activity'].str.strip()
df_sub_final['Sub_Activity'] = df_sub_final['Sub_Activity'].str.strip()
# Combining the Students and Elderly dataframes to obtain a consolidated dataframe
df_age_data = pd.concat([df_students,df_elderly], axis = 1)
df_age_data = df_age_data.reset_index()
df_age_data = df_age_data.rename(columns={'index': 'Main_Activity'})
df_age_data = df_age_data.set_index('Main_Activity')
df_age_data
# Plotting the Main Activity data as per the Average time
colortemp = [colors[x] for x in list(range(2,4))]
df_main_final = df_main_final.set_index(['Main_Activity'])
ax = df_main_final[['AvgHrsMenMain','AvgHrsWomenMain']].plot(kind='barh', title ="AVERAGE TIME SPENT ON MAIN ACTIVITIES (2011-2015) AT GENDER LEVEL", figsize=(8,8), legend=True, fontsize=10, color = colortemp )
ax.set_xlabel("AVERAGE HOURS FROM 2011 TO 2015", fontsize=12)
ax.set_ylabel("MAIN ACTIVITIES", fontsize=12)
L=plt.legend(loc = 'lower right')
L.get_texts()[0].set_text('Men')
L.get_texts()[1].set_text('Women')
plt.show()
# Plotting the view for main activities at yearly level
df_men = df_main_final.filter(regex="Men+",axis=1)
df_women = df_main_final.filter(regex="Women+",axis=1)
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(8,8))
# Creating a loop so that yearly data can be mapped
i = 0
for Activity in list(df_men.index):
ax[0].plot(df_men.loc[Activity].tolist()[:-1],"ko",markersize=5)
ax[0].plot(df_men.loc[Activity].tolist()[:-1],linestyle='dashed',label=Activity,color=colors[i+1])
ax[1].plot(df_women.loc[Activity].tolist()[:-1],"ko",markersize=5)
ax[1].plot(df_women.loc[Activity].tolist()[:-1],linestyle='dashed',label=Activity,color=colors[i+1])
if i>=9:
i=0
else:
i+=1
ax[0].legend(loc="upper left", fontsize=10,framealpha=0.2,bbox_to_anchor=(2.5, 1))
fig.subplots_adjust(wspace=0.3, hspace=0)
ax[0].set_title("Year level Main Activity trends (Men)")
ax[1].set_title("Year level Main Activity trends (Women)")
ax[0].set_xticks(list(range(0,5)))
ax[0].set_xticklabels(list(range(2011,2016)))
ax[1].set_xticks(list(range(0,5)))
ax[1].set_xticklabels(list(range(2011,2016)))
xaxis1 = ax[0].get_xaxis()
xaxis2 = ax[1].get_xaxis()
for ticks1,ticks2 in zip(xaxis1.get_ticklabels(),xaxis2.get_ticklabels()):
ticks1.set_rotation(45)
ticks1.set_color('k')
ticks2.set_rotation(45)
ticks2.set_color('k')
plt.show()
# Creating the visualization to distinctly identify important activies for each gender
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(6,6))
locx = list(range(0,10))
barwidth = 0.35
for loc, Activity in zip(range(len(df_men)),list(df_men.index)):
bar1, = ax[0].barh(loc,round(df_men.loc[Activity]["AvgHrsMenMain"],2),label=Activity,color = colors[loc+1])
ax[0].text(bar1.get_width()+1.5,bar1.get_y()+barwidth/2,bar1.get_width(),ha='left', va='bottom')
bar2, = ax[1].barh(loc,round(df_women.loc[Activity]["AvgHrsWomenMain"],2),label=Activity,color = colors[loc+1])
ax[1].text(bar2.get_width()*1.05,bar2.get_y()+barwidth/2,bar2.get_width())
ax[0].set_xticks([])
ax[0].set_yticks([])
ax[1].set_xticks([])
ax[1].set_yticks([])
ax[0].spines['top'].set_visible(False)
ax[0].spines['left'].set_visible(False)
ax[0].spines['bottom'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[1].spines['top'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[1].spines['bottom'].set_visible(False)
ax[0].invert_xaxis() #this invert provides the opposite comparison graphs between genders
ax[0].set_title("Average Hours for Men")
ax[1].set_title("Average Hours for Women")
box1 = ax[0].get_position()
box2 = ax[1].get_position()
fig.subplots_adjust(wspace=0.01, hspace=0)
l = ax[1].legend(loc="lower right", fontsize=10,framealpha=0.6, markerscale=5,labelspacing=0.1,borderpad=0.1
,bbox_to_anchor=(2.2, -.1))
plt.show()
# Plotting the stacked bar chart for each age bracket
df_age_data = df_age_data.sort_values('Ages 15-49', ascending=False)
colortemp = [colors[x] for x in list(range(2,6))]
ax = df_age_data[['Ages 15-49','Ages 55-64', 'Ages 65-74', 'Ages 75+']].plot(kind='bar', stacked = True, color = colortemp,title ="AVERAGE TIME SPENT IN ACTIVITIES BY AGE GROUP (2011-2015)", figsize=(8, 8), legend=True, fontsize=10,rot=30)
ax.set_xlabel("MAIN ACTIVITIES", fontsize=12)
ax.set_ylabel("AVERAGE HOURS FROM 2011 TO 2015", fontsize=12)
plt.show()
# Filtering out the sub-activities of the Top 4 main activities
top_list = ['Sleeping', 'Grooming', 'Health-related self care', 'Personal activities', 'Travel related to personal care', 'Working',
'Work-related activities', 'Other income-generating activities', 'Job search and interviewing', 'Travel related to work',
'Attending class', 'Homework and research', 'Travel related to education', 'Socializing, relaxing, and leisure',
'Sports, exercise, and recreation', 'Travel related to leisure and sports']
df_sub_final_u = df_sub_final[df_sub_final['Sub_Activity'].isin(top_list)]
df_sub_final_u = df_sub_final_u.reset_index()
# Merging the sub-activity data with main activity to create categories/segements
data = {'Main_Activity': ['Personal care activities', 'Personal care activities', 'Personal care activities','Personal care activities','Personal care activities', 'Working and work-related activities', 'Working and work-related activities','Working and work-related activities', 'Working and work-related activities', 'Working and work-related activities',
'Educational activities', 'Educational activities', 'Educational activities', 'Leisure and sports', 'Leisure and sports', 'Leisure and sports']}
df_main_act = pd.DataFrame(data)
df_sub_final_q = pd.merge(df_sub_final_u, df_main_act, left_index=True, right_index=True)
df_sub_final_q.drop('index', axis=1, inplace=True)
# Cleaning and sorting the data
df_sub_final_Personal_Care = df_sub_final_q.groupby(['Main_Activity']).get_group('Personal care activities')
df_sub_final_Personal_Care = df_sub_final_Personal_Care.set_index(['Sub_Activity'])
df_sub_final_Personal_Care = df_sub_final_Personal_Care.sort_values('AvgHrsMenSub', ascending=False)
df_sub_final_Educational = df_sub_final_q.groupby(['Main_Activity']).get_group('Educational activities')
df_sub_final_Educational = df_sub_final_Educational.set_index(['Sub_Activity'])
df_sub_final_Educational = df_sub_final_Educational.sort_values('AvgHrsMenSub', ascending=False)
df_sub_final_Leisure_Sports = df_sub_final_q.groupby(['Main_Activity']).get_group('Leisure and sports')
df_sub_final_Leisure_Sports = df_sub_final_Leisure_Sports.set_index(['Sub_Activity'])
df_sub_final_Leisure_Sports = df_sub_final_Leisure_Sports.sort_values('AvgHrsMenSub', ascending=False)
df_sub_final_Work = df_sub_final_q.groupby(['Main_Activity']).get_group('Working and work-related activities')
df_sub_final_Work = df_sub_final_Work.set_index(['Sub_Activity'])
df_sub_final_Work = df_sub_final_Work.sort_values('AvgHrsMenSub', ascending=False)
# Plotting the graph at sub-activity level
fig, ax = plt.subplots(nrows=2, ncols=2)
colortemp = [colors[x] for x in list(range(4,6))]
df_sub_final_Personal_Care[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[0,0], color = colortemp, width=0.5, title ="AVERAGE TIME SPENT ON PERSONAL CARE ACTIVITIES", figsize=(15, 10), legend=True,fontsize=10, rot = 30)
df_sub_final_Work[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[0,1], color = colortemp, width=0.5, title ="AVERAGE TIME SPENT ON WORK OR WORK RELATED ACTIVITIES", figsize=(15, 10), legend=True, fontsize=10,sharey=ax[0,0],rot = 30 )
df_sub_final_Educational[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[1,0],color = colortemp, width=0.5, title ="AVERAGE TIME SPENT ON EDUCATIONAL ACTIVITIES", figsize=(15, 10), legend=True, fontsize=10,rot = 30 )
df_sub_final_Leisure_Sports[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[1,1],color = colortemp, width=0.5, title ="AVERAGE TIME SPENT ON LEISURE & SPORTS ACTIVITIES", figsize=(15, 10), legend=True, fontsize=10,sharey=ax[1,0],rot = 30)
# Setting the Legend and Axis Labels
L1 = ax[0,0].legend(loc = 'upper right')
L1.get_texts()[0].set_text('Men')
L1.get_texts()[1].set_text('Women')
L2 = ax[0,1].legend(loc = 'upper right')
L2.get_texts()[0].set_text('Men')
L2.get_texts()[1].set_text('Women')
L3 = ax[1,0].legend(loc = 'upper right')
L3.get_texts()[0].set_text('Men')
L3.get_texts()[1].set_text('Women')
L4 = ax[1,1].legend(loc = 'upper right')
L4.get_texts()[0].set_text('Men')
L4.get_texts()[1].set_text('Women')
ax[0,0].set_ylabel('Average Hours (2011-2015)', fontsize=12)
ax[0,1].set_ylabel('Average Hours (2011-2015)', fontsize=12)
ax[1,0].set_ylabel('Average Hours (2011-2015)', fontsize=12)
ax[1,1].set_ylabel('Average Hours (2011-2015)', fontsize=12)
ax[0,0].set_xlabel('Sub-Activity', fontsize=12)
ax[0,1].set_xlabel('Sub-Activity', fontsize=12)
ax[1,0].set_xlabel('Sub-Activity', fontsize=12)
ax[1,1].set_xlabel('Sub-Activity', fontsize=12)
fig.tight_layout() # aligns all 4 charts within subplots
fig.subplots_adjust(wspace=0.05, hspace=0.7)
# Creating the exploded pie chart
fig, ax = plt.subplots(figsize=(8, 8))
explode = [0,0.1,0.2,.4,0.3,0.5,0.6]
color = [colors[x] for x in list(range(1,len(df_leisure_all["Minutes"])+1))]
p,t = ax.pie(list(df_leisure_all["Minutes"]), explode=explode,shadow=True, startangle=90, radius=1.3,colors=color)
labels = ['{0} - {1:1.2f}'.format(i,j) for i,j in zip(list(df_leisure_all.index), list(df_leisure_all["Minutes"]))]
box1 = ax.get_position()
ax.set_position([box1.x0, box1.y0, box1.width * 0.9, box1.height])
l = ax.legend(p,labels , loc="upper right", fontsize=12,framealpha=0.2,bbox_to_anchor=(1.8, 1))
l.get_title().set_position((30, 0))
# Equal aspect ratio ensures that pie is drawn as a circle
ax.axis('equal')
ax.set_title("AVERAGE TIME SPENT BY AMERICANS IN LEISURE ACTIVITY (IN MINUTES)",fontsize=15)
plt.show()
# Geographic Distribution of Sports and Leisure Activities
for col in df_geo_sports.columns:
df_geo_sports[col] = df_geo_sports[col].astype(str)
#shows color gradient as hours increases
scl = [[0.0, 'rgb(242,240,247)'],[0.2, 'rgb(218,218,235)'],[0.4, 'rgb(188,189,220)'],\
[0.6, 'rgb(158,154,200)'],[0.8, 'rgb(117,107,177)'],[1.0, 'rgb(84,39,143)']]
df_geo_sports['text'] = df_geo_sports['State']
layout = dict(
title = 'Average Time spent on Sports & Leisure (in hours) <br>(Hover for breakdown)',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'),
)
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = df_geo_sports['code'], # picks 2 digit state code from csv
z = df_geo_sports['Hours'].astype(float), # picks activity hrs as float
locationmode = 'USA-states',
text = df_geo_sports['text'],
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
) ),
colorbar = dict(
title = "Time in Hours")
) ]
fig = dict( data=data, layout=layout )
py.iplot( fig, filename='d3-cloropleth-map' )
# Geographic Distribution of Religious and Civic Organization Activities
for col in df_geo_religion.columns:
df_geo_religion[col] = df_geo_religion[col].astype(str)
#shows color gradient as hours increases
scl = [[0.0, 'rgb(243,205,174)'],[0.3, 'rgb(237,184,140)'],[0.6, 'rgb(227,142,72)'],\
[0.9, 'rgb(72,39,11)'],[1.0, 'rgb(38,20,6)']]
df_geo_religion['text'] = df_geo_religion['State']
layout = dict(
title = 'Average time spent on Organizational, Civic & Religious Activities (in hours) <br>(Hover for breakdown)',
geo = dict(
scope='usa',
projection=dict( type='albers usa' ),
showlakes = True,
lakecolor = 'rgb(255, 255, 255)'),
)
data = [ dict(
type='choropleth',
colorscale = scl,
autocolorscale = False,
locations = df_geo_religion['code'], # picks the 2-digit state code from the csv
z = df_geo_religion['Hours'].astype(float), # picks activity hrs as float
locationmode = 'USA-states',
text = df_geo_religion['text'],
marker = dict(
line = dict (
color = 'rgb(255,255,255)',
width = 2
) ),
colorbar = dict(
title = "Time in Hours")
) ]
fig = dict( data=data, layout=layout )
py.iplot( fig, filename='d3-cloropleth-map' )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: cs = requests.session()
Step2: Time series generator
Step3: Extract data
Step4: Sync Sqlite db w/ Redis
Step6: Accumulate and extract data
Step7: %lprun -f pd.read_json accum(stdate='2017-03-01')
Step8: Analysis of nulls
|
<ASSISTANT_TASK:>
Python Code:
# Sometimes Jan calendar will have counts from days
# at beginning of Feb or end or previous Dec.
# Just checking that they agree w/ numbers in
# those months' calendars before dropping dupe
# dates
poldf = pollen_data(yrmths)
check_one2one(poldf, 'Date', 'Count')
poldf = poldf.drop_duplicates('Date').reset_index(drop=1)
poldf.loc[poldf.Count == -1, 'Count'] = np.nan
poldf = poldf.dropna(axis=0)
poldf.Date = pd.to_datetime(poldf.Date)
poldf = poldf.assign(
Yr =lambda x: x.Date.dt.year,
M =lambda x: x.Date.dt.month,
D =lambda x: x.Date.dt.day,
Doy=lambda x: x.Date.dt.dayofyear,
)
poldf['Dayi'] = (poldf.Date - poldf.Date.min()).dt.days
poldf['Prev_cnt'] = poldf.Count.shift(1).fillna(method='ffill').fillna(0) #.interpolate()
poldf['Prev_cnt_null'] = poldf.Dayi.shift() != (poldf.Dayi - 1)
from pandas.util.testing import assert_frame_equal
No = len(poldf2)
assert_frame_equal(poldf.reset_index(drop=1)[:No], poldf2.reset_index(drop=1))
feather.write_dataframe(poldf, 'cache/pollen.fth')
poldf2 = feather.read_dataframe('cache/pollen.fth')
def mkds_url(sdate):
loc = '33.7490,-84.3880'
url_temp = 'https://api.darksky.net/forecast/{key}/{loc},{time}?exclude=flags'
return url_temp.format(key=key, loc=loc, time=mktime(sdate))
with open('KEY.txt', 'r') as f:
key = f.read().strip()
def date_rng_gen(start):
if isinstance(start, str):
start = dt.datetime.strptime(start, '%Y-%m-%d')
while 1:
# print(start)
rng = gen_yr_rng(start, backward=True)
# print(rng[:5])
start = rng[-1]
for d in rng[:-1]:
yield d
def gen_yr_rng(start, backward=True):
start = start
yr = start.year
dprev = start.replace(year=yr - 1)
rng = pd.date_range(start=dprev, end=start)
if backward:
return rng[::-1]
return rng
def test_date_rng_gen():
ts = list(it.islice(date_rng_gen('2017-03-01'), 400))
a, b = ts[0], ts[-1]
assert (pd.date_range(b, a)[::-1] == ts).all()
s = pd.Series(ts)
assert s.value_counts(normalize=0).max() == 1
def mktime(s, hour=12):
# tm
try:
d = dt.datetime.strptime(s, "%Y-%d-%m")
except TypeError:
d = s
# if isinstance(s, str):
# else:
# d = s
if hour:
d = d.replace(hour=hour)
f = time.mktime(d.timetuple())
return int(f)
test_date_rng_gen()
def camel2score(s):
return s[0].upper() + ''.join([('_' + c.lower()) if c.isupper() else c for c in s[1:]])
def parse_data(j):
hrs = j.pop('hourly')['data']
# global hrdf
js = json.dumps(hrs)
hrdf = pd.read_json(js) #.rename(columns=camel2score)
_dl = j.pop('daily')
[[dl]] = _dl.values()
cr = j.pop('currently')
# if hrdf.shape != (24,15):
# print(hrdf.shape)
# assert hrdf.shape == (24,15), 'Hr shape: {}'.format(hrdf.shape)
assert sorted(j) == ['latitude', 'longitude', 'offset', 'timezone']
return hrdf, dl, cr, j
# hrdf, dl, cr, j = parse_data(r.json())
today = '2017-04-05'
# Pull new
for d in date_rng_gen(today):
if d.month == d.day == 1:
print(d.date())
u = mkds_url(d)
if (u in rd) or ss.cache.has_url(u):
print('.', end='')
continue
r = ss.get(u)
if r.status_code == 403:
print('Forbidden')
break
if r.status_code != 200:
print(d, r.status_code)
r = ss.get(u)
r.url
def sync_rdd(ss, rd, stdate='2017-03-01'):
n = 0
for d in date_rng_gen(stdate):
if d.month == d.day == 1:
print(d.date(), end=' ')
u = mkds_url(d)
if u in rd:
continue
if not ss.cache.has_url(u):
break
r = ss.get(u)
assert u == r.url
rd[r.url] = r.json()
print('.', end='')
n += 1
return n
sync_rdd(ss, rd, stdate=today)
def accum(stdate='2017-03-01'):
"""Roll through the dates, pull out cached
requests, and add parsed data to list."""
dat = []
for i, d in enumerate(date_rng_gen(stdate)):
# if i > 20:
# break
if d.month == d.day == 1:
print(d.date())
u = mkds_url(d)
# if ss.cache.has_url(u):
# r = ss.get(u)
# dat.append(parse_data(r.json()))
if u in rd:
j = rd[u]
parsed = parse_data(j)
dat.append(parsed)
# print('.', end='')
else:
break
return dat
dat = accum(stdate='2017-04-04')
hrdfs, dls, crs, _ = zip(*dat)
dailydf = pd.read_json(json.dumps(dls)).rename(columns=camel2score)
def concat(dfs):
all_cols = {c for df in dfs for c in df}
col_dat = defaultdict(list)
for cname in all_cols:
for df in dfs:
l = len(df)
col_dat[cname].extend(df.get(cname, [None] * l))
return DataFrame(col_dat)
%time hr_df = concat(hrdfs).rename(columns=camel2score)
feather.write_dataframe(dailydf, 'cache/dark_day.fth')
# feather.write_dataframe(hr_df, 'cache/dark_hr.fth')
hr_dat[:2]
def rep_with_dummies_(df, col):
df = df.copy()
newcs = pd.get_dummies(df[col])
for c in newcs:
df[c] = newcs[c]
return df.drop(col, axis=1)
def rep_with_dummies(df, cols):
for c in cols:
df = rep_with_dummies_(df, c)
return df
hr_dat2 = rep_with_dummies(hr_dat, ['Icon', 'Summary', 'Precip_type'])
ns = (~(hr_dat == hr_dat))
ns.sum()
ncols = ns.sum()[ns.sum() > 0]
cn = hr_dat.eval('Cloud_cover != Cloud_cover').astype(int)
hr_dat2.corrwith(cn).sort_values(ascending=True)
hr_dat2[:3]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: 1. Flux integral approach
Step6: GISS-E2-R model
Step7: The OHC and HFDS totals match the corresponding global timeseries.
Step8: CSIRO-Mk3-6-0 model
Step9: NorESM1-M model
Step10: The HFDS totals match the corresponding global timeseries, but it seems regridding has wrecked the OHC values (the amplitude has dropped appreciably).
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
import iris
import numpy
import iris.plot as iplt
import matplotlib.pyplot as plt
import seaborn
seaborn.set_context('talk')
cwd = os.getcwd()
repo_dir = '/'
for directory in cwd.split('/')[1:]:
repo_dir = os.path.join(repo_dir, directory)
if directory == 'ocean-analysis':
break
modules_dir = os.path.join(repo_dir, 'modules')
sys.path.append(modules_dir)
import general_io as gio
import timeseries
import convenient_universal as uconv
%matplotlib inline
def calc_anomaly(cube):
"""Calculate the anomaly."""
anomaly = cube.copy()
anomaly.data = anomaly.data - anomaly.data[0]
return anomaly
def get_data(model):
"""Get the hfds, ohc and hfbasin data."""
rndt_file = '/g/data/r87/dbi599/DRSv2/CMIP5/%s/historical/yr/atmos/r1i1p1/rndt/latest/dedrifted/rndt-zonal-sum_Ayr_%s_historical_r1i1p1_cumsum-all.nc' %(model, model)
hfds_file = '/g/data/r87/dbi599/DRSv2/CMIP5/%s/historical/yr/ocean/r1i1p1/hfds/latest/dedrifted/hfds-zonal-sum_Oyr_%s_historical_r1i1p1_cumsum-all.nc' %(model, model)
ohc_file = '/g/data/r87/dbi599/DRSv2/CMIP5/%s/historical/yr/ocean/r1i1p1/ohc/latest/dedrifted/ohc-zonal-sum_Oyr_%s_historical_r1i1p1_all.nc' %(model, model)
hfbasin_file = '/g/data/r87/dbi599/DRSv2/CMIP5/%s/historical/yr/ocean/r1i1p1/hfbasin/latest/dedrifted/hfbasin-global_Oyr_%s_historical_r1i1p1_cumsum-all.nc' %(model, model)
time_constraint = gio.get_time_constraint(['1861-01-01', '2005-12-31'])
rndt_cube = iris.load_cube(rndt_file, 'TOA Incoming Net Radiation' & time_constraint)
hfds_cube = iris.load_cube(hfds_file, 'surface_downward_heat_flux_in_sea_water' & time_constraint)
ohc_cube = iris.load_cube(ohc_file, 'ocean heat content' & time_constraint)
hfbasin_cube = iris.load_cube(hfbasin_file, 'northward_ocean_heat_transport' & time_constraint)
rndt_anomaly = calc_anomaly(rndt_cube)
hfds_anomaly = calc_anomaly(hfds_cube)
ohc_anomaly = calc_anomaly(ohc_cube)
hfbasin_anomaly = calc_anomaly(hfbasin_cube)
ocean_convergence = ohc_anomaly[-1, ::] - hfds_anomaly[-1, ::]
hfbasin_inferred = ocean_convergence.copy()
hfbasin_inferred.data = numpy.ma.cumsum(-1 * ocean_convergence.data)
atmos_convergence = hfds_anomaly[-1, ::] - rndt_anomaly[-1, ::]
hfatmos_inferred = atmos_convergence.copy()
hfatmos_inferred.data = numpy.ma.cumsum(-1 * atmos_convergence.data)
return rndt_anomaly, hfds_anomaly, ohc_anomaly, hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred
def plot_uptake_storage(ohc_anomaly, hfds_anomaly, rndt_anomaly):
"""Plot the ocean heat uptake and storage."""
fig, ax = plt.subplots(figsize=[12, 6])
iplt.plot(ohc_anomaly[-1, ::], color='blue', label='OHC')
iplt.plot(hfds_anomaly[-1, ::], color='orange', label='hfds')
iplt.plot(rndt_anomaly[-1, ::], color='red', label='rndt')
plt.xlabel('latitude')
plt.ylabel('J')
plt.xlim(-90, 90)
plt.axhline(y=0, color='0.5', linestyle='--')
plt.legend()
plt.show()
def plot_transport(hfbasin_anomaly_data, hfbasin_anomaly_inferred, hfatmos_anomaly_inferred):
"""Plot the northward ocean heat transport."""
fig, ax = plt.subplots(figsize=[12, 6])
iplt.plot(hfbasin_anomaly_data[-1, ::], color='purple', label='northward OHT')
iplt.plot(hfbasin_anomaly_inferred, color='purple', linestyle='--', label='inferred northward OHT')
iplt.plot(hfatmos_anomaly_inferred, color='green', linestyle='--', label='inferred northward AHT')
plt.xlabel('latitude')
plt.ylabel('J')
plt.xlim(-90, 90)
plt.axhline(y=0, color='0.5', linestyle='--')
plt.legend()
plt.show()
rndt_anomaly, hfds_anomaly, ohc_anomaly, hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred = get_data('GISS-E2-R')
plot_uptake_storage(ohc_anomaly, hfds_anomaly, rndt_anomaly)
ohc_anomaly[-1, ::].data.sum()
hfds_anomaly[-1, ::].data.sum()
plot_transport(hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred)
hfbasin_anomaly[-1, ::].data.sum()
hfbasin_inferred.data.sum()
rndt_anomaly, hfds_anomaly, ohc_anomaly, hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred = get_data('CSIRO-Mk3-6-0')
plot_uptake_storage(ohc_anomaly, hfds_anomaly, rndt_anomaly)
ohc_anomaly[-1, ::].data.sum()
hfds_anomaly[-1, ::].data.sum()
hfds_anomaly.coord('latitude')
data_lat_bounds = hfds_anomaly.coord('latitude').bounds
data_lat_diffs = numpy.apply_along_axis(lambda x: x[1] - x[0], 1, data_lat_bounds)
data_lat_diffs
hfds_scaled = hfds_anomaly / data_lat_diffs
new_points = [('latitude', numpy.arange(-89.25, 90, 1.5))]
hfds_new_grid = hfds_scaled.interpolate(new_points, iris.analysis.Linear())
hfds_new = hfds_new_grid * 1.5
iplt.plot(hfds_new[-1, ::], color='orange', label='hfds')
plt.xlabel('latitude')
plt.ylabel('J')
plt.xlim(-90, 90)
plt.show()
hfds_new[-1, ::].data.sum()
plot_transport(hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred)
rndt_anomaly, hfds_anomaly, ohc_anomaly, hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred = get_data('NorESM1-M')
test = False == hfds_anomaly.coord('latitude')
test
plot_uptake_storage(ohc_anomaly, hfds_anomaly, rndt_anomaly)
ohc_anomaly[-1, ::].data.sum()
hfds_anomaly[-1, ::].data.sum()
rndt_anomaly[-1, ::].data.sum()
plot_transport(hfbasin_anomaly, hfbasin_inferred, hfatmos_inferred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Launch the standalone spark clusters using sparkhpc
Step2: Create a SparkContext and start computing
Step3: Teardown
|
<ASSISTANT_TASK:>
Python Code:
import findspark; findspark.init()
import sparkhpc
sj = sparkhpc.sparkjob.LSFSparkJob(ncores=4)
sj.wait_to_start()
sj
sj2 = sparkhpc.sparkjob.LSFSparkJob(ncores=10)
sj2.submit()
sj.show_clusters()
from pyspark import SparkContext
sc = SparkContext(master=sj.master_url)
sc.parallelize(range(100)).count()
sj.stop()
sj2.stop()
sj.show_clusters()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You might be tempted to encode this data with a straightforward numerical mapping
Step2: It turns out that this is not generally a useful approach in Scikit-Learn
Step3: Notice that the 'neighborhood' column has been expanded into three separate columns, representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood.
Step4: There is one clear disadvantage of this approach
Step5: Many (though not yet all) of the Scikit-Learn estimators accept such sparse inputs when fitting and evaluating models. sklearn.preprocessing.OneHotEncoder and sklearn.feature_extraction.FeatureHasher are two additional tools that Scikit-Learn includes to support this type of encoding.
Step6: For a vectorization of this data based on word count, we could construct a column representing the word "problem," the word "evil," the word "horizon," and so on.
Step7: The result is a sparse matrix recording the number of times each word appears; it is easier to inspect if we convert this to a DataFrame with labeled columns
Step8: There are some issues with this approach, however
Step9: For an example of using TF-IDF in a classification problem, see In Depth
Step10: Still, we can fit a line to the data using LinearRegression and get the optimal result
Step11: It's clear that we need a more sophisticated model to describe the relationship between $x$ and $y$.
Step12: The derived feature matrix has one column representing $x$, and a second column representing $x^2$, and a third column representing $x^3$.
Step13: This idea of improving a model not by changing the model, but by transforming the inputs, is fundamental to many of the more powerful machine learning methods.
Step14: When applying a typical machine learning model to such data, we will need to first replace such missing data with some appropriate fill value.
Step15: We see that in the resulting data, the two missing values have been replaced with the mean of the remaining values in the column. This imputed data can then be fed directly into, for example, a LinearRegression estimator
Step16: Feature Pipelines
Step17: This pipeline looks and acts like a standard Scikit-Learn object, and will apply all the specified steps to any input data.
|
<ASSISTANT_TASK:>
Python Code:
data = [
{'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},
{'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},
{'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},
{'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}
]
{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int)
vec.fit_transform(data)
vec.get_feature_names()
vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data)
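The description above also mentions `sklearn.preprocessing.OneHotEncoder`, which one-hot encodes a categorical column of an array directly rather than a list of dicts. A minimal sketch with made-up neighborhood data (it returns a sparse matrix by default):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# A single categorical column of (hypothetical) neighborhood labels
neighborhoods = np.array([['Queen Anne'], ['Fremont'], ['Wallingford'], ['Fremont']])

enc = OneHotEncoder(dtype=int)
X = enc.fit_transform(neighborhoods)   # sparse matrix by default
X_dense = X.toarray()                  # densify just for inspection
print(X_dense)
```

Each row contains exactly one 1, in the column for that row's neighborhood (columns are sorted category values).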
sample = ['problem of evil',
'evil queen',
'horizon problem']
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
X
import pandas as pd
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y = np.array([4, 2, 1, 3, 7])
plt.scatter(x, y);
from sklearn.linear_model import LinearRegression
X = x[:, np.newaxis]
model = LinearRegression().fit(X, y)
yfit = model.predict(X)
plt.scatter(x, y)
plt.plot(x, yfit);
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3, include_bias=False)
X2 = poly.fit_transform(X)
print(X2)
model = LinearRegression().fit(X2, y)
yfit = model.predict(X2)
plt.scatter(x, y)
plt.plot(x, yfit);
from numpy import nan
X = np.array([[ nan, 0, 3 ],
[ 3, 7, 9 ],
[ 3, 5, 2 ],
[ 4, nan, 6 ],
[ 8, 8, 1 ]])
y = np.array([14, 16, -1, 8, -5])
from sklearn.impute import SimpleImputer
imp = SimpleImputer(strategy='mean')
X2 = imp.fit_transform(X)
X2
model = LinearRegression().fit(X2, y)
model.predict(X2)
from sklearn.pipeline import make_pipeline
model = make_pipeline(SimpleImputer(strategy='mean'),
PolynomialFeatures(degree=2),
LinearRegression())
model.fit(X, y) # X with missing values, from above
print(y)
print(model.predict(X))
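A pipeline also exposes its steps' hyperparameters for tuning, named as the lowercased step name plus a double underscore. A sketch (synthetic data, not from the text above) using GridSearchCV to choose the polynomial degree:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
x = 10 * rng.rand(60)
y = 0.5 * x ** 2 - x + rng.randn(60)      # quadratic signal plus noise
X = x[:, np.newaxis]

model = make_pipeline(PolynomialFeatures(include_bias=False), LinearRegression())
# 'polynomialfeatures__degree' = step name (lowercased class name) + '__' + parameter
grid = GridSearchCV(model, {'polynomialfeatures__degree': [1, 2, 3, 4]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```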
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Class methods
Step2: In its initializer, this NPC class defines the three attributes an NPC owns: name, weapon, and blood; name must be supplied when an instance is created.
Step3: A subclass can override methods of its parent class.
Step4: Look again at the show_properties() method; it feels a bit clumsy, because object-oriented or not, code should not be duplicated this much. The NPC class has a show_properties() method that displays the three standard attributes, while the same-named method in the Soldier class displays four. If an attribute is added to the NPC class, the method has to be changed in both classes.
Step5: The __init__() constructor method and the __del__() destructor method
Step6: Object-oriented programming (OOP) is a programming paradigm built around the concept of objects, and also an approach to program development. An object may hold data, attributes, code, and methods; an object is an instance of a class. OOP treats objects as the basic unit of a program, encapsulating code and data inside them to improve the reusability, flexibility, and extensibility of software; the code in an object can access, and often modify, the data associated with that object. In object-oriented programming, a program is designed as a set of objects that interact with one another.
|
<ASSISTANT_TASK:>
Python Code:
# Declare a class named MyData
class MyData(object):
pass
# Instantiate MyData; the instance is named obj_math
obj_math = MyData()
obj_math.x = 4
print(obj_math.x)
class MyData(object):
# Define a SayHello method; think of self as a parameter that must be passed
def SayHello(self):
print('Hello!')
# Instantiate
obj_math = MyData()
# Call the method
obj_math.SayHello()
# Calling the class method "directly" still instantiates the class
MyData().SayHello()
# A slightly more involved example, building on the above
class MyData(object):
# The initializer method, with double underscores before and after the name
# When instantiating, pass the parameters after self
def __init__(self, x, y):
self.x = x
self.y = y
# Define a SayHello method; think of self as a parameter that must be passed
def SayHello(self):
print('Hello!')
def Add(self):
print(self.x + self.y)
# Instantiate
obj_math = MyData(3,4)
# Call the methods
obj_math.SayHello()
obj_math.x = 5
obj_math.Add()
o1 = MyData(1,3)
o1.Add()
# More involved again: create multiple instances
class MyData(object):
# The initializer method, with double underscores before and after the name
# When instantiating, pass the parameters after self
def __init__(self, x, y):
self.x = x
self.y = y
# Define a SayHello method; think of self as a parameter that must be passed
def SayHello(self):
print('Hello!')
def Add(self):
print(self.x + self.y)
# Instantiate
obj_math = MyData(3,4)
# Call the methods
obj_math.SayHello()
obj_math.Add()
# Create another instance
obj_math2 = MyData(5,6)
obj_math2.Add()
# Let's build a fun little game-like class to illustrate object-oriented concepts
# v1.0.0
# The NPC class
class NPC(object):
# Initialize the NPC's attributes
def __init__(self, name):
self.name = name
self.weapon = 'gun'
self.blood = 1000
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Define a method - generic attack
def fight_common(self):
print('Fight Common')
n1 = NPC('AA1')
n1.show_properties()
n1.fight_common()
# The Soldier (warrior) class, inheriting from the NPC class
class Soldier(NPC):
# Do nothing for now
pass
# Create a Soldier, a subclass of NPC
n1 = Soldier('AA2')
# Call the methods; since Soldier has no methods of its own yet, the parent class's methods are called automatically
n1.show_properties()
n1.fight_common()
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
print('soldier_level:', self.soldier_level)
# Create a Soldier, a subclass of NPC
n1 = Soldier('AA2')
n1.show_properties()
n1.fight_common()
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# Create a Soldier, a subclass of NPC
n1 = Soldier('AA2')
n1.show_properties()
n1.fight_common()
# First, tidy up the code above
# v1.0.1
# The NPC class
class NPC(object):
# Initialize the NPC's attributes
def __init__(self, name):
self.name = name
self.weapon = 'gun'
self.blood = 1000
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Define a method - generic attack
def fight_common(self):
print('Fight Common')
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# Create two soldiers
n1 = Soldier('AA1')
n1.show_properties()
n1.fight_common()
print()
n2 = Soldier('AA2')
n2.show_properties()
n2.fight_common()
# Create several Soldier instances in a row
# and store them in a list
s = []
for i in range(3):
n = Soldier('AA' + str(i))
n.show_properties()
n.fight_common()
s.append(n)
# Take a look at the list holding the instances
print(s)
# Instances in the list can be accessed like any other list elements
for i in s:
print(i.name)
print(len(s))
# An instance can be removed
s.pop(1)
# Show the instances remaining in the list
for i in s:
print(i.name)
print(len(s))
# Add a Wizard class
# The Wizard class
class Wizard(NPC):
# Set up the Wizard's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The wizard's own initialization
self.wizard_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Wizard, self).show_properties()
print('wizard_level', self.wizard_level)
# Define a wizard-only combat method
def wizard_fight_magic(self):
print('Wizard Magic!')
# Create a wizard
c1 = Wizard('CC1')
c1.show_properties()
c1.fight_common()
c1.wizard_fight_magic()
# Create a more varied cast of NPCs: 3 wizards and 3 soldiers
# Create several Soldier instances
s = []
for i in range(3):
n = Soldier('AA' + str(i))
n.show_properties()
s.append(n)
for i in range(3):
n = Wizard('CC' + str(i))
n.show_properties()
s.append(n)
for i in s:
print(i.name)
print('--')
# Show the class's methods
print(dir(Soldier))
# Show the class's methods
print(dir(Wizard))
# The code so far
# v1.0.2
# The NPC class
class NPC(object):
# Initialize the NPC's attributes
def __init__(self, name):
self.name = name
self.weapon = 'gun'
self.blood = 1000
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Define a method - generic attack
def fight_common(self):
print('Fight Common')
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# The Wizard class
class Wizard(NPC):
# Set up the Wizard's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The wizard's own initialization
self.wizard_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Wizard, self).show_properties()
print('wizard_level', self.wizard_level)
# Define a wizard-only combat method
def fight_magic(self):
print('Wizard Magic!')
# In NPC's __init__(), show which kind of character is being created
# v1.0.3
# The NPC class
class NPC(object):
# Initialize the NPC's attributes
def __init__(self, name):
self.name = name
self.weapon = 'gun'
self.blood = 1000
# Just a simple message for now
print('')
print('NPC created!')
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Define a method - generic attack
def fight_common(self):
print('Fight Common')
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# The Wizard class
class Wizard(NPC):
# Set up the Wizard's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The wizard's own initialization
self.wizard_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Wizard, self).show_properties()
print('wizard_level', self.wizard_level)
# Define a wizard-only combat method
def fight_magic(self):
print('Wizard Magic!')
s = []
for i in range(2):
n = Soldier('AA' + str(i))
n.show_properties()
s.append(n)
for i in range(2):
n = Wizard('CC' + str(i))
n.show_properties()
s.append(n)
# But the parent NPC class did not show the specific subclass name,
# so we use the following to display the subclass name:
# type(self).__name__ gives access to the class name
# v1.0.4
# The NPC class
class NPC(object):
# Initialize the NPC's attributes
def __init__(self, name):
self.name = name
self.weapon = 'gun'
self.blood = 1000
print('')
print(type(self).__name__, 'NPC created!')
# Define a method - show the NPC's attributes
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Define a method - generic attack
def fight_common(self):
print('Fight Common')
# The Soldier (warrior) class
class Soldier(NPC):
# Set up the soldier's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The soldier's own initialization
self.soldier_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# The Wizard class
class Wizard(NPC):
# Set up the Wizard's initialization
def __init__(self, name):
# Call the parent class NPC's initializer
NPC.__init__(self, name)
# The wizard's own initialization
self.wizard_level = 1
# Define a method - show the NPC's attributes
def show_properties(self):
# Call the parent class's method via super
super(Wizard, self).show_properties()
print('wizard_level', self.wizard_level)
# Define a wizard-only combat method
def fight_magic(self):
print('Wizard Magic!')
s = []
for i in range(2):
n = Soldier('AA' + str(i))
n.show_properties()
s.append(n)
for i in range(2):
n = Wizard('CC' + str(i))
n.show_properties()
s.append(n)
# Further refinement: set blood and weapon according to the NPC type
# Keeping this logic in one place has its benefits
# v1.0.5
# NPC class
class NPC(object):
# Initialize NPC attributes
def __init__(self, name):
self.name = name
self.npc_type = type(self).__name__
print('')
print(self.npc_type, 'NPC created!')
if self.npc_type == 'Soldier':
self.weapon = 'sword'
self.blood = 1000
if self.npc_type == 'Wizard':
self.weapon = 'staff'
self.blood = 2000
# Method: display NPC properties
def show_properties(self):
print('name:', self.name)
print('weapon:', self.weapon)
print('blood:', self.blood)
# Method: common attack
def fight_common(self):
print('Fight Common')
# Soldier class
class Soldier(NPC):
# Soldier initialization
def __init__(self, name):
# Call the parent NPC initializer
NPC.__init__(self, name)
# Soldier's own initialization
self.soldier_level = 1
# Method: display NPC properties
def show_properties(self):
# Call the parent method via super
super(Soldier, self).show_properties()
print('soldier_level', self.soldier_level)
# Wizard class
class Wizard(NPC):
# Wizard initialization
def __init__(self, name):
# Call the parent NPC initializer
NPC.__init__(self, name)
# Wizard's own initialization
self.wizard_level = 1
# Method: display NPC properties
def show_properties(self):
# Call the parent method via super
super(Wizard, self).show_properties()
print('wizard_level', self.wizard_level)
# Wizard-specific fight method
def fight_magic(self):
print('Wizard Magic!')
s = []
for i in range(2):
n = Soldier('AA' + str(i))
n.show_properties()
s.append(n)
for i in range(2):
n = Wizard('CC' + str(i))
n.show_properties()
s.append(n)
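The inheritance pattern above can be sketched more compactly with Python 3's zero-argument `super()`. This is a standalone illustration — the class and attribute names only mirror the notebook's:

```python
# Python 3 style: super() needs no arguments, and the subclass
# extends (rather than replaces) the parent's behavior.
class NPC:
    def __init__(self, name):
        self.name = name
        self.blood = 1000

    def show_properties(self):
        return {'name': self.name, 'blood': self.blood}

class Wizard(NPC):
    def __init__(self, name):
        super().__init__(name)          # no (Wizard, self) needed in Python 3
        self.wizard_level = 1

    def show_properties(self):
        props = super().show_properties()   # reuse the parent's data...
        props['wizard_level'] = self.wizard_level  # ...and extend it
        return props

w = Wizard('Merlin')
print(w.show_properties())
```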
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using dynamic attributes to access JSON-like data
Step2: In FrozenJSON, trying to fetch any other attribute triggers the interpreter's call to __getattr__. This method first checks whether self.__data has an attribute (not a key!) with that name, which lets FrozenJSON instances handle every dict method — for example, delegating items to self.__data.items(). If self.__data has no attribute with that name, __getattr__ uses the name as a key to fetch an item from self.__data and passes it to FrozenJSON.build. This is how we navigate the nested JSON structure: the build class method turns each level of nesting into a FrozenJSON instance. We do not cache or transform the original data; nested structures are converted to FrozenJSON objects on the fly as we iterate — which is fine, because the dataset is small
Step3: Handling invalid attribute names
Step4: A similar problem occurs when a key in the JSON object is not a valid Python identifier
Step5: Using __new__ instead of the build method:
Step6: The OSCON JSON feed has one obvious shortcoming
Step7: Python has at least two classes similar to Record; we write our own here to illustrate an important practice: updating the instance __dict__ inside __init__
Step8: LineItem version 1: a class representing a line item in an order
Step9: The class is too bare-bones and has several problems, for example:
Step10: Seeing that the weight can even become negative, we could change LineItem's interface to manage the weight attribute with getter and setter methods. But letting users set an item's weight attribute directly feels more natural — besides, the system may already be in production with other parts accessing item.weight directly. The Pythonic fix is to replace the data attribute with a property
Step11: Now users can no longer set weight to a negative or zero value — but staff could still mistakenly enter a negative price. To guard against that, we could turn the price attribute into a property as well, but then our Python code would contain some repetition
Step12: In some situations this classic form is better than the decorator syntax; the property factory function discussed shortly is one example. But when a class has many methods, the decorators make it easy to see at a glance which are getters and which are setters
Step13: Next, we try to override the prop attribute of the obj instance
Step14: Next, we add a new property to the Class class, overriding an instance attribute
Step15: The main point of this section: an expression like obj.attr does not start the search for attr in obj but in obj.__class__, and only if the class has no property named attr does Python look in the obj instance itself. This rule applies not only to properties but to a whole category of descriptors — overriding descriptors, discussed further in the next chapter
Step16: With the essentials of properties covered, we return to the earlier problem: protecting the weight and price attributes of LineItem so only values greater than 0 are accepted, without defining two nearly identical pairs of getter and setter methods
Step17: Note how the factory-built properties work: the weight property overrides the weight instance attribute, so every reference to self.weight or nutmeg.weight is handled by the property functions — only direct access to the __dict__ attribute bypasses the property logic
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlopen
import warnings
import os
import json
URL = 'http://www.oreilly.com/pub/sc/osconfeed'
JSON = '/home/kaka/osconfeed.json'
def load():
if not os.path.exists(JSON):
msg = 'downloading {} to {}'.format(URL, JSON)
warnings.warn(msg)
with urlopen(URL) as remote, open(JSON, 'wb') as local: # the with statement uses two context managers: one to read the remote file, one to save it locally
local.write(remote.read())
with open(JSON) as fp: # parse the JSON file, returning native Python objects; here the data uses these types: dict, list, str, int
return json.load(fp)
load()
feed = load()
sorted(feed['Schedule'].keys())
for key, value in sorted(feed['Schedule'].items()):
print('{:3} {}'.format(len(value), key)) # show the record count for each collection
feed['Schedule']['speakers'][-1]['name'] # dig into the nested dicts and lists to get the last speaker's name
feed['Schedule']['speakers'][-1]['serial'] # speaker serial number
feed['Schedule']['events'][40]['name']
feed['Schedule']['events'][40]['speakers'] # every event has a 'speakers' field listing zero or more speaker serial numbers
from collections import abc
class FrozenJSON:
def __init__(self, mapping):
# this serves two purposes: it ensures we got a dict, and it makes a copy
self.__data = dict(mapping)
def __getattr__(self, name): # called only when there is no attribute called name (i.e. when normal attribute lookup fails)
if hasattr(self.__data, name):
return getattr(self.__data, name) # if name matches an attribute of __data, return it — e.g. the dict's own keys method
else:
# otherwise fetch the item with the key name from self.__data,
# and return the result of calling FrozenJSON.build() on it
return FrozenJSON.build(self.__data[name])
@classmethod
def build(cls, obj): # an alternate constructor — a common use of the classmethod decorator
if isinstance(obj, abc.Mapping): # if obj is a mapping, build a FrozenJSON with it
return cls(obj)
# if it is a MutableSequence it must be a list, so recursively pass
# every item to .build() and build a list
elif isinstance(obj, abc.MutableSequence):
return [cls.build(item) for item in obj]
else:
# neither a dict nor a list: return the item unchanged
return obj
raw_feed = load()
feed = FrozenJSON(raw_feed)
len(feed.Schedule.speakers)
sorted(feed.Schedule.keys())
for key, value in sorted(feed.Schedule.items()):
print('{:3} {}'.format(len(value), key))
feed.Schedule.speakers[-1].name
talk = feed.Schedule.events[40]
type(talk)
talk.name
talk.speakers
talk.flavor # fetching a missing attribute raises KeyError, not the usual AttributeError
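The `__getattr__` fallback that FrozenJSON relies on can be seen in isolation. A minimal sketch (`DictWrapper` is an invented name for illustration) — it also shows why leaking `KeyError`, as FrozenJSON does, is surprising and how the conventional `AttributeError` would look:

```python
# __getattr__ runs ONLY when normal attribute lookup fails,
# which is what makes dict keys reachable as attributes.
class DictWrapper:
    def __init__(self, data):
        self._data = dict(data)

    def __getattr__(self, name):
        # reached only for names not found by normal lookup
        try:
            return self._data[name]
        except KeyError:
            # FrozenJSON lets KeyError escape here; AttributeError
            # is the conventional signal for a missing attribute
            raise AttributeError(name)

w = DictWrapper({'speaker': 'Ada'})
print(w.speaker)   # falls through to the dict lookup
print(w._data)     # found by normal lookup; __getattr__ never runs
```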
from collections import abc
import keyword
class FrozenJSON:
def __init__(self, mapping):
self.__data = {}
for key, value in mapping.items():
if keyword.iskeyword(key): # check whether the key is a Python keyword
key += '_'
self.__data[key] = value
def __getattr__(self, name):
if hasattr(self.__data, name):
return getattr(self.__data, name)
else:
return FrozenJSON.build(self.__data[name])
@classmethod
def build(cls, obj):
if isinstance(obj, abc.Mapping):
return cls(obj)
elif isinstance(obj, abc.MutableSequence):
return [cls.build(item) for item in obj]
else:
return obj
grad = FrozenJSON({'name': 'Jim Bo', 'class': 1982})
grad.class_
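The keyword-collision fix boils down to a single standard-library check. A small sketch (`safe_key` is a hypothetical helper name); `str.isidentifier` would similarly catch keys that are not valid identifiers at all:

```python
import keyword

def safe_key(key):
    """Append '_' to keys that collide with Python keywords (FrozenJSON's fix)."""
    return key + '_' if keyword.iskeyword(key) else key

print(safe_key('class'))           # 'class_'
print(safe_key('name'))            # 'name'
print('2be'.isidentifier())        # False: not even a valid identifier
```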
def object_maker(the_class, some_arg):
new_object = the_class.__new__(the_class, some_arg) # pseudo-code: __new__ receives the class itself as its first argument
if isinstance(new_object, the_class):
the_class.__init__(new_object, some_arg)
return new_object
# The next two statements do roughly the same thing
# x = Foo('bar')
# x = object_maker(Foo, 'bar')
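The subtlety this pseudo-code captures is that `__init__` only runs when `__new__` returns an instance of the class — the behavior the `__new__`-based FrozenJSON exploits to return plain lists and scalars from its constructor. A minimal sketch (`Passthrough` is an invented example class):

```python
# If __new__ returns something that is NOT an instance of cls,
# Python skips calling __init__ entirely.
class Passthrough:
    def __new__(cls, arg):
        if isinstance(arg, list):
            # returns a list, not a Passthrough: __init__ is skipped
            return [cls(item) for item in arg]
        return super().__new__(cls)

    def __init__(self, arg):
        self.arg = arg

p = Passthrough(7)
print(type(p).__name__, p.arg)    # Passthrough 7
lst = Passthrough([1, 2])
print(type(lst).__name__)         # list -- __init__ never ran on it
```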
from collections import abc
import keyword
class FrozenJSON:
def __new__(cls, arg): # __new__ is a class method: its first argument is the class itself; the rest match __init__ (minus self)
if isinstance(arg, abc.Mapping):
# the default behavior is to delegate to the superclass __new__; here that is object.__new__, with FrozenJSON as the only argument
return super().__new__(cls)
elif isinstance(arg, abc.MutableSequence):
return [cls(item) for item in arg]
else:
return arg
def __init__(self, mapping):
self.__data = {}
for key, value in mapping.items():
if keyword.iskeyword(key): # check whether the key is a Python keyword
key += '_'
self.__data[key] = value
def __getattr__(self, name):
if hasattr(self.__data, name):
return getattr(self.__data, name)
else:
return FrozenJSON(self.__data[name]) # previously this called build; now it calls the constructor directly
import warnings
DB_NAME = '/home/kaka/schedule1_db'
CONFERENCE = 'conference.115'
class Record:
def __init__(self, **kwargs):
self.__dict__.update(kwargs) # a shortcut for building instance attributes from keyword arguments
def load_db(db):
raw_data = load()
warnings.warn('loading ' + DB_NAME)
for collection, rec_list in raw_data['Schedule'].items():
record_type = collection[:-1] # strip the trailing 's', e.g. events becomes event
for record in rec_list:
key = '{}.{}'.format(record_type, record['serial'])
record['serial'] = key
db[key] = Record(**record)
import shelve
db = shelve.open(DB_NAME)
if CONFERENCE not in db:
load_db(db)
speaker = db['speaker.3471']
type(speaker)
speaker.name, speaker.twitter
db.close()
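The shelve round trip above can be reproduced in a few lines against a throwaway database. A sketch using a temporary directory (the path and key below are invented):

```python
import os
import shelve
import tempfile

class Record:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)   # the same trick as in the notebook

db_path = os.path.join(tempfile.mkdtemp(), 'demo_db')

# shelve pickles values transparently, keyed by strings
with shelve.open(db_path) as db:
    db['speaker.1'] = Record(name='Grace', serial='speaker.1')

# reopening restores fully-formed Record instances
with shelve.open(db_path) as db:
    rec = db['speaker.1']
    print(rec.name)    # Grace
```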
import warnings
import inspect # used in the load_db function
DB_NAME = '/home/kaka/schedule2_db' # instances of several different classes will be stored, so use a different database file
CONFERENCE = 'conference.115'
class Record:
def __init__(self, **kwargs):
self.__dict__.update(kwargs) # add attributes dynamically
def __eq__(self, other): # useful for testing
if isinstance(other, Record):
return self.__dict__ == other.__dict__
else:
return NotImplemented
class MissingDatabaseError(RuntimeError):
'''Raised when a database is required but was not set''' # custom exceptions are usually marker classes with no body
class DbRecord(Record):
__db = None # holds a reference to the opened shelve.Shelf database
@staticmethod # a static method, to emphasize the effect is always the same no matter how it is invoked
def set_db(db):
DbRecord.__db = db
@staticmethod
def get_db():
return DbRecord.__db
@classmethod
def fetch(cls, ident): # a class method, so its behavior is easy to customize in subclasses
db = cls.get_db()
try:
return db[ident]
except TypeError:
if db is None:
msg = 'database not set; call "{}.set_db(my_db)"'
raise MissingDatabaseError(msg.format(cls.__name__))
else:
raise # the database is open, so we don't know how to handle this exception; re-raise it
def __repr__(self):
if hasattr(self, 'serial'):
cls_name = self.__class__.__name__
return '<{} serial = {!r}>'.format(cls_name, self.serial)
else:
return super().__repr__()
class Event(DbRecord):
@property # can be accessed like an attribute
def venue(self):
key = 'venue.{}'.format(self.venue_serial)
# pass the key to the fetch class method of DbRecord;
# we avoid self.fetch(key) to guard against an event record having a
# 'fetch' key, which would return that field's value instead of the
# fetch class method Event inherits from DbRecord
return self.__class__.fetch(key)
@property
def speakers(self):
if not hasattr(self, '_speaker_objs'):
# with no _speaker_objs attribute, get the speakers value straight from
# the instance __dict__ to avoid infinite recursion, since this
# property's public name is also speakers
spkr_serials = self.__dict__['speakers']
fetch = self.__class__.fetch # get a reference to the fetch class method
self._speaker_objs = [fetch('speaker.{}'.format(key))
for key in spkr_serials]
return self._speaker_objs
def __repr__(self):
if hasattr(self, 'name'): # if the record has a name attribute, use it in the repr
cls_name = self.__class__.__name__
return '<{} {!r}>'.format(cls_name, self.name)
else:
return super().__repr__()
def load_db(db):
raw_data = load()
warnings.warn('loading ' + DB_NAME)
for collection, rec_list in raw_data['Schedule'].items():
record_type = collection[:-1] # strip the trailing 's', e.g. events becomes event
cls_name = record_type.capitalize() # capitalize the record_type variable
cls = globals().get(cls_name, DbRecord) # get the object with that name from the module global scope; fall back to DbRecord
if inspect.isclass(cls) and issubclass(cls, DbRecord): # if the object is a class, and a subclass of DbRecord
factory = cls
else:
factory = DbRecord
for record in rec_list:
key = '{}.{}'.format(record_type, record['serial'])
record['serial'] = key
# the object stored in the database is built by factory, which may be
# the DbRecord class or a subclass chosen according to record_type
db[key] = factory(**record)
DbRecord.set_db(db) # db is /home/kaka/schedule2_db
event = DbRecord.fetch('event.33950')
event
event.venue
event.venue.name
for spkr in event.speakers:
print('{0.serial}: {0.name}'.format(spkr))
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
raisins = LineItem('Golden raisins', 10, 6.95)
raisins.subtotal()
raisins.weight = -20
raisins.subtotal()
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
@property # implements the getter
def weight(self):
return self.__weight # the actual value is stored in the private attribute __weight
@weight.setter # this attribute of the property is also a decorator; it binds the getter and setter together
def weight(self, value):
if value > 0: # reject non-positive values
self.__weight = value
else:
raise ValueError('value must be > 0')
walnuts = LineItem('walnuts', 0, 10.00) # a zero or negative weight is now rejected (raises ValueError)
class LineItem:
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
def get_weight(self):
return self.__weight # the actual value is stored in the private attribute __weight
def set_weight(self, value):
if value > 0: # reject non-positive values
self.__weight = value
else:
raise ValueError('value must be > 0')
weight = property(get_weight, set_weight) # build the property object and assign it to a public class attribute
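Both spellings — the decorator and the classic `property(getter, setter)` call — produce the same kind of property object. A side-by-side sketch with simplified classes (`A` and `B` are invented names):

```python
# A uses the decorator form; B uses the classic property() call.
class A:
    @property
    def w(self):
        return self._w
    @w.setter
    def w(self, value):
        if value <= 0:
            raise ValueError('value must be > 0')
        self._w = value

class B:
    def get_w(self):
        return self._w
    def set_w(self, value):
        if value <= 0:
            raise ValueError('value must be > 0')
        self._w = value
    w = property(get_w, set_w)

for cls in (A, B):
    obj = cls()
    obj.w = 5
    print(cls.__name__, obj.w, type(cls.w).__name__)   # both report 'property'
```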
class Class:
data = 'the class data attr'
@property
def prop(self):
return 'the prop value'
obj = Class()
vars(obj) # returns obj.__dict__, showing there are no instance attributes
obj.data
obj.data = 'bar'
vars(obj)
obj.data # the instance attribute shadows the class attribute
Class.data
Class.prop
obj.prop
obj.prop = 'foo'
obj.__dict__['prop'] = 'foo'
vars(obj)
obj.prop # still runs the property getter; the property is not shadowed by the instance attribute
Class.prop = 'baz' # overwrite the Class.prop property, destroying the property object
obj.prop # now the instance attribute is retrieved; Class.prop is no longer a property, so it does not override obj.prop
obj.data
Class.data
Class.data = property(lambda self: 'the "data" prop value') # overwrite Class.data with a new property
obj.data
del Class.data
obj.data
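The precedence rule demonstrated above — a property in the class wins over an entry planted in the instance `__dict__`, while a plain class attribute loses to an instance attribute — condenses into one self-contained sketch:

```python
# Attribute lookup starts at the class: a property there wins over an
# instance __dict__ entry, but a plain class attribute is shadowed.
class C:
    plain = 'class value'
    @property
    def managed(self):
        return 'property value'

c = C()
c.plain = 'instance value'                 # shadows the plain class attribute
c.__dict__['managed'] = 'instance value'   # plant an entry behind the property's back

print(c.plain)     # 'instance value' -- the plain class attribute is shadowed
print(c.managed)   # 'property value' -- the property in the class still wins
```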
class Foo:
@property
def bar(self):
'''The bar attribute'''
return self.__dict__['bar']
@bar.setter
def bar(self, value):
self.__dict__['bar'] = value
help(Foo)
help(Foo.bar)
def quantity(storage_name):
def qty_getter(instance):
return instance.__dict__[storage_name]
def qty_setter(instance, value):
if value > 0:
instance.__dict__[storage_name] = value
else:
raise ValueError('value must be > 0')
return property(qty_getter, qty_setter)
class LineItem:
weight = quantity('weight') # use the factory function to define the custom property weight as a class attribute
price = quantity('price') # remember: the right side of an assignment runs first, so when quantity is called the price attribute does not exist yet
def __init__(self, description, weight, price):
self.description = description
self.weight = weight # the property is already active here, ensuring weight cannot be 0 or negative
self.price = price
def subtotal(self):
return self.weight * self.price # the properties are used here too, retrieving the values stored in the instance
nutmeg = LineItem('Moluccan nutmeg', 8, 13.95)
nutmeg.weight, nutmeg.price # read weight and price through the properties, which shadow the like-named instance attributes
sorted(vars(nutmeg).items()) # inspect the instance attributes that actually hold the values
class BlackKnight:
def __init__(self):
self.members = ['an arm', 'another arm',
'a leg', 'another leg']
self.phrases = ["'Tis but a scratch.",
"It's just a flesh wound.",
"I'm invincible!",
"All right, we'll call it a draw."]
@property
def member(self):
print('next member is:')
return self.members[0]
@member.deleter
def member(self):
text = 'BLACK KNIGHT (loses {})\n -- {}'
print(text.format(self.members.pop(0), self.phrases.pop(0)))
knight = BlackKnight()
knight.member
del knight.member
del knight.member
del knight.member
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again, using the ODEsys convenience class from notebook "35"
Step2: that is still the benchmark to beat. Subclassing ODEsys to have it render, compile and import the code
Step3: Now we are getting close to optimal speed. There is no interaction with the Python interpreter during integration.
Step4: Just to see that everything looks alright
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
from scipy2017codegen.odesys import ODEsys
from scipy2017codegen.chem import mk_rsys
watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))
watrad = mk_rsys(ODEsys, **watrad_data)
tout = np.logspace(-6, 3, 200) # close to one hour of operation
c0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}
y0 = [c0.get(symb.name, 0) for symb in watrad.y]
%timeit yout, info = watrad.integrate_odeint(tout, y0)
# %load ../scipy2017codegen/odesys_cvode.py
import os
import sys
import uuid
import sympy as sym
import setuptools
import numpy as np
import setuptools
import pyximport
from scipy2017codegen import templates
from scipy2017codegen.odesys import ODEsys
pyximport.install()
kw = {
'sources': [],
'include_dirs': [os.getcwd(), np.get_include()],
'libraries': ['sundials_cvode', 'sundials_nvecserial'],
'library_dirs': [],
'extra_compile_args': [],
'extra_link_args': []
}
osx = sys.platform.lower() == 'darwin'
win = os.name == 'nt'
posix = os.name == 'posix'
if not win:
kw['libraries'] += ['m']
if posix:
kw['libraries'] += ['openblas']
class ODEcvode(ODEsys):
default_integrator = 'cvode'
def setup(self):
self.uid = uuid.uuid4().hex[:10]
self.mod_name = 'ode_c_%s' % self.uid
idxs = list(range(len(self.f)))
subs = {s: sym.Symbol('y[%d]' % i) for i, s in enumerate(self.y)}
f_exprs = ['out[%d] = %s;' % (i, sym.ccode(self.f[i].xreplace(subs)))
for i in idxs]
j_col_defs = ['realtype * const col_%d = DENSE_COL(J, %d);' % (ci, ci)
for ci in idxs]
j_exprs = ['col_%d[%d] = %s;' % (ci, ri, self.j[ri, ci].xreplace(subs))
for ci in idxs for ri in idxs if self.j[ri, ci] != 0]
ctx = dict(
func = '\n '.join(f_exprs + ['return 0;']),
dense_jac = '\n '.join(j_col_defs + j_exprs + ['return 0;']),
band_jac = 'return -1;'
)
open('integrate_serial_%s.c' % self.uid, 'wt').write(templates.sundials['integrate_serial.c'] % ctx)
open('%s.pyx' % self.mod_name, 'wt').write(templates.sundials['_integrate_serial.pyx'] % {'uid': self.uid})
open('%s.pyxbld' % self.mod_name, 'wt').write(templates.pyxbld % kw)
self.mod = __import__(self.mod_name)
self.integrate_odeint = None
def integrate_cvode(self, tout, y0, params=(), rtol=1e-8, atol=1e-8, **kwargs):
return self.mod._integrate(np.asarray(tout, dtype=np.float64),
np.asarray(y0, dtype=np.float64),
np.atleast_1d(np.asarray(params, dtype=np.float64)),
abstol=np.atleast_1d(np.asarray(atol, dtype=np.float64)),
reltol=rtol,
**kwargs)
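Setting the SUNDIALS/Cython toolchain aside, the generate-source → write-to-disk → import loop that `setup()` performs can be sketched with a generated pure-Python module and `importlib` instead of compiled C. The toy right-hand-side expressions below are invented:

```python
import importlib.util
import os
import tempfile
import uuid

# Toy "generated" RHS expressions, standing in for sympy.ccode output
rhs_exprs = ['out[0] = -0.5 * y[0]', 'out[1] = 0.5 * y[0] - 0.2 * y[1]']

mod_name = 'ode_py_%s' % uuid.uuid4().hex[:10]   # unique name, like the notebook's uid
src = 'def rhs(y):\n    out = [0.0] * %d\n' % len(rhs_exprs)
for expr in rhs_exprs:
    src += '    %s\n' % expr
src += '    return out\n'

# write the generated module to disk, then import it by file location
path = os.path.join(tempfile.mkdtemp(), mod_name + '.py')
with open(path, 'w') as fh:
    fh.write(src)

spec = importlib.util.spec_from_file_location(mod_name, path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
print(mod.rhs([2.0, 0.0]))   # [-1.0, 1.0]
```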
cvode_sys = mk_rsys(ODEcvode, **watrad_data)
%timeit cvode_sys.integrate(tout, y0)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
cvode_sys.plot_result(tout, *cvode_sys.integrate(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a CSV file for Binary classification
Step2: Create PyTorch GPU tensors
Step3: Define the NN model
Step4: The cross-entropy loss function
Step5: Start training in Batches
Step6: Cross validation, metrics, ROC_AUC etc
|
<ASSISTANT_TASK:>
Python Code:
# !pip install pycuda
%reset -f
import numpy
import numpy as np
from __future__ import print_function
from __future__ import division
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import os
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
from torchvision import transforms
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from sklearn.preprocessing import MultiLabelBinarizer
import time
%matplotlib inline
from pylab import rcParams
rcParams['figure.figsize'] = (6, 6) # setting default size of plots
import tensorflow as tf
print("tensorflow:" + tf.__version__)
!set "KERAS_BACKEND=tensorflow"
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
# !pip install http://download.pytorch.org/whl/cu75/torch-0.2.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
# !pip install torchvision
# ! pip install cv2
# import cv2
% reset -f
# ! pip install tables
import torch
from torch.autograd import Variable
import numpy as np
import pandas
import numpy as np
import pandas as pd
from sklearn import cross_validation
from sklearn import metrics
from sklearn.metrics import roc_auc_score, log_loss, roc_curve, auc
import matplotlib.pyplot as plt
from sklearn.cross_validation import StratifiedKFold, ShuffleSplit, cross_val_score, train_test_split
import logging
handler=logging.basicConfig(level=logging.INFO)
lgr = logging.getLogger(__name__)
%matplotlib inline
F_NAME_TRAIN= 'data-03-diabetes.csv'
# F_NAME_TRAIN='numerai/numerai_training_data.csv'
# X_df_train= pd.read_csv(F_NAME_TRAIN)
X_df_train= pd.read_csv(F_NAME_TRAIN,header=None, dtype=np.float32)
X_df_train_SINGLE=X_df_train.copy(deep=True)
# X_df_train_SINGLE.drop('id', axis=1, inplace=True)
# X_df_train_SINGLE.drop('era', axis=1, inplace=True)
# X_df_train_SINGLE.drop('data_type', axis=1, inplace=True)
# drop the header
# X_df_train_SINGLE.to_csv('numerai/numerai_training_data_clean.csv', header=False)
# X_df_train_SINGLE= pd.read_csv('numerai/numerai_training_data_clean.csv', header=None, dtype=np.float32)
# X_df_train_SINGLE=X_df_train_SINGLE.dropna()
answers_1_SINGLE = list (X_df_train_SINGLE[X_df_train_SINGLE.columns[-1]].values)
answers_1_SINGLE = list(map(int, answers_1_SINGLE)) # list() needed: map() returns a lazy iterator in Python 3
X_df_train_SINGLE = X_df_train_SINGLE.drop(X_df_train_SINGLE.columns[-1], axis=1)
# X_df_train_SINGLE=X_df_train_SINGLE.apply(lambda x: pandas.to_numeric(x, errors='ignore'))
print(X_df_train_SINGLE.shape)
X_df_train_SINGLE.head(5)
# (np.where(np.isnan(X_df_train_SINGLE)))
# (np.where(np.isinf(X_df_train_SINGLE)))
X_df_train_SINGLE.info()
use_cuda = False
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor
# fix seed
seed=17*19
np.random.seed(seed)
torch.manual_seed(seed)
if use_cuda:
torch.cuda.manual_seed(seed)
# sk learn
trainX, testX, trainY, testY = train_test_split(X_df_train_SINGLE, answers_1_SINGLE, test_size=.33, random_state=999)
# Train data
x_data_np = np.array(trainX.values, dtype=np.float32)
y_data_np = np.array(trainY, dtype=np.float32)
y_data_np=y_data_np.reshape((y_data_np.shape[0],1)) # Must be reshaped for PyTorch!
print(x_data_np.shape, y_data_np.shape)
print(type(x_data_np), type(y_data_np))
if use_cuda:
lgr.info ("Using the GPU")
X = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch
Y = Variable(torch.from_numpy(y_data_np).cuda())
else:
lgr.info ("Using the CPU")
X = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch
Y = Variable(torch.from_numpy(y_data_np))
print(type(X.data), type(Y.data)) # should be 'torch.cuda.FloatTensor'
print(type(X.data), type(Y.data)) # should be 'torch.cuda.FloatTensor'
keep_prob=0.85
# p is the probability of being dropped in PyTorch
dropout = torch.nn.Dropout(p=1 - keep_prob)
# hiddenLayer1Size=32
# hiddenLayer2Size=16
# # # Hypothesis using sigmoid
# linear1=torch.nn.Linear(x_data_np.shape[1], hiddenLayer1Size, bias=True) # size mismatch, m1: [5373 x 344], m2: [8 x 1] at /pytorch/torch/lib/TH/generic/THTensorMath.c:1293
# # xavier initializer
# torch.nn.init.xavier_uniform(linear1.weight)
# linear2=torch.nn.Linear(hiddenLayer1Size, hiddenLayer2Size)
# # xavier initializer
# torch.nn.init.xavier_uniform(linear2.weight)
# linear3=torch.nn.Linear(hiddenLayer2Size, 1)
# # xavier initializer
# torch.nn.init.xavier_uniform(linear3.weight)
# sigmoid = torch.nn.Sigmoid()
# tanh=torch.nn.Tanh()
# model = torch.nn.Sequential(linear1,dropout, tanh, linear2,dropout, tanh, linear3,dropout, sigmoid)
#Hypothesis using sigmoid
linear1=torch.nn.Linear(x_data_np.shape[1], 1, bias=True)
# xavier initializer
torch.nn.init.xavier_uniform(linear1.weight)
sigmoid = torch.nn.Sigmoid()
# model = torch.nn.Sequential(linear1,dropout, sigmoid)
model = torch.nn.Sequential(linear1, sigmoid)
if use_cuda:
lgr.info ("Using the GPU")
model = model.cuda() # On GPU
else:
lgr.info ("Using the CPU")
lgr.info('Model {}'.format(model))
# see https://github.com/facebookresearch/SentEval/blob/master/senteval/tools/classifier.py
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-1,momentum=0.9, weight_decay=1e-4)
# optimizer = torch.optim.Adam(model.parameters())
lgr.info('Optimizer {}'.format(optimizer))
import sympy as sp
sp.interactive.printing.init_printing(use_latex=True)
from IPython.display import display, Math, Latex
maths = lambda s: display(Math(s))
latex = lambda s: display(Latex(s))
#the loss function is as follows:
maths("\mathbf{Loss Function:} J(x, z) = -\sum_k^d[x_k \log z_k + (1-x_k)log(1-z_k)]")
import time
start_time = time.time()
epochs=20000
all_losses = []
for step in range(epochs):
optimizer.zero_grad()
hypothesis = model(X)
# cost/loss function
cost = -(Y * torch.log(hypothesis) + (1 - Y)
* torch.log(1 - hypothesis)).mean()
cost.backward()
optimizer.step()
# Keep loss
if step % 150 == 0:
loss = cost.data[0]
all_losses.append(loss)
if step % 4000 == 0:
print(step, cost.data.cpu().numpy())
# RuntimeError: can't convert CUDA tensor to numpy (it doesn't support GPU arrays).
# Use .cpu() to move the tensor to host memory first.
predicted = (model(X).data > 0.5).float()
# predicted = (model(X).data ).float() # This is like predict proba
predictions=predicted.cpu().numpy()
accuracy = (predicted == Y.data).float().mean()
print('TRAINNING Accuracy:' + str(accuracy))
# print ('TRAINING LOG_LOSS=' + str(log_loss(trainY, predictions)))
# R_SCORE=roc_auc_score(Y.data.cpu().numpy(),predictions )
# print ('TRAINING ROC AUC:' + str(R_SCORE))
end_time = time.time()
print ('{} {:6.3f} seconds'.format('GPU:', end_time-start_time))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(all_losses)
plt.show()
model.eval()
# Validation data
x_data_np_val = np.array(testX.values, dtype=np.float32)
y_data_np_val = np.array(testY, dtype=np.float32)
y_data_np_val=y_data_np_val.reshape((y_data_np_val.shape[0],1)) # Must be reshaped for PyTorch!
print(x_data_np_val.shape, y_data_np_val.shape)
print(type(x_data_np_val), type(y_data_np_val))
if use_cuda:
lgr.info ("Using the GPU")
X_val = Variable(torch.from_numpy(x_data_np_val).cuda()) # Note the conversion for pytorch
Y_val = Variable(torch.from_numpy(y_data_np_val).cuda())
else:
lgr.info ("Using the CPU")
X_val = Variable(torch.from_numpy(x_data_np_val)) # Note the conversion for pytorch
Y_val = Variable(torch.from_numpy(y_data_np_val))
# VALIDATION
predicted_val = (model(X_val).data).float() # keep the raw probabilities for ROC AUC below
predictions_val = predicted_val.cpu().numpy()
accuracy_val = ((predicted_val > 0.5).float() == Y_val.data).float().mean() # threshold at 0.5 for accuracy
R_SCORE_VAL=roc_auc_score(Y_val.data.cpu().numpy(),predictions_val)
print ('VALIDATION ROC AUC:' + str(R_SCORE_VAL))
false_positive_rate, true_positive_rate, thresholds = roc_curve(testY, predictions_val)
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.title('LOG_LOSS=' + str(log_loss(testY, predictions_val)))
plt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([-0.1, 1.2])
plt.ylim([-0.1, 1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
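The AUC reported by `sklearn.metrics.auc` is just the trapezoid-rule area under the (fpr, tpr) points. A minimal sketch:

```python
def trapezoid_auc(fpr, tpr):
    """Area under a ROC curve given points sorted by increasing fpr."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2.0
    return area

# Perfect classifier's ROC: (0,0) -> (0,1) -> (1,1)
print(trapezoid_auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))   # 1.0
# Chance diagonal: (0,0) -> (1,1)
print(trapezoid_auc([0.0, 1.0], [0.0, 1.0]))             # 0.5
```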
# Lab 5 Logistic Regression Classifier
import torch
from torch.autograd import Variable
import numpy as np
torch.manual_seed(777) # for reproducibility
xy = np.loadtxt('data-03-diabetes.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
# Make sure the shape and data are OK
print(x_data.shape, y_data.shape)
X = Variable(torch.from_numpy(x_data))
Y = Variable(torch.from_numpy(y_data))
# Hypothesis using sigmoid
linear = torch.nn.Linear(8, 1, bias=True)
sigmoid = torch.nn.Sigmoid()
model = torch.nn.Sequential(linear, sigmoid)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(10001):
optimizer.zero_grad()
hypothesis = model(X)
# cost/loss function
cost = -(Y * torch.log(hypothesis) + (1 - Y)
* torch.log(1 - hypothesis)).mean()
cost.backward()
optimizer.step()
if step % 200 == 0:
print(step, cost.data.numpy())
# Accuracy computation
predicted = (model(X).data > 0.5).float()
accuracy = (predicted == Y.data).float().mean()
print("\nHypothesis: ", hypothesis.data.numpy(), "\nCorrect (Y): ", predicted.numpy(), "\nAccuracy: ", accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
Step2: 2.
Step3: 3.
Step4: 4.
Step5: 5.
Step6: 6.
Step7: 7.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.set_option("display.max_rows", 5)
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.summary_functions_and_maps import *
print("Setup complete.")
reviews.head()
median_points = ____
# Check your answer
q1.check()
#%%RM_IF(PROD)%%
median_points = reviews.points.median
q1.assert_check_failed()
#%%RM_IF(PROD)%%
median_points = reviews.points.mean()
q1.assert_check_failed()
#%%RM_IF(PROD)%%
median_points = reviews.points.median()
q1.assert_check_passed()
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
countries = ____
# Check your answer
q2.check()
#%%RM_IF(PROD)%%
countries = reviews.country.unique()
q2.assert_check_passed()
#%%RM_IF(PROD)%%
countries = reviews.country
q2.assert_check_failed()
#%%RM_IF(PROD)%%
countries = set(reviews.country)
q2.assert_check_passed()
#%%RM_IF(PROD)%%
countries = [str(c).upper() for c in set(reviews.country)]
q2.assert_check_failed()
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
reviews_per_country = ____
# Check your answer
q3.check()
#%%RM_IF(PROD)%%
reviews_per_country = reviews.country.value_counts()
q3.assert_check_passed()
#%%RM_IF(PROD)%%
reviews_per_country = reviews.groupby('country').size()
# TODO: This check probably should pass, since this series meets the requirements of the question.
# To make this happen, we'd need to implement a comparison that weakens `Series.equals` by ignoring order.
q3.check()
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
centered_price = ____
# Check your answer
q4.check()
#%%RM_IF(PROD)%%
centered_price = reviews.price - reviews.price.mean()
q4.assert_check_passed()
#_COMMENT_IF(PROD)_
q4.hint()
#_COMMENT_IF(PROD)_
q4.solution()
bargain_wine = ____
# Check your answer
q5.check()
#%%RM_IF(PROD)%%
bargain_wine = reviews.loc[(reviews.points / reviews.price).idxmax(), 'title']
q5.assert_check_passed()
#%%RM_IF(PROD)%%
def bargainness(row):
return row.points / row.price
bargain_scores = reviews.apply(bargainness, axis='columns')
bargain_index = bargain_scores.idxmax()
bargain_wine = reviews.loc[bargain_index, "title"]
q5.assert_check_passed()
#%%RM_IF(PROD)%%
# Alternative correct answer.
bargain_index = bargain_scores.iloc[::-1].idxmax()
bargain_wine = reviews.loc[bargain_index, "title"]
q5.assert_check_passed()
#%%RM_IF(PROD)%%
bargain_wine = 'Chateau Plonk'
q5.assert_check_failed()
#%%RM_IF(PROD)%%
bargain_wine = bargain_index
q5.assert_check_failed()
#_COMMENT_IF(PROD)_
q5.hint()
#_COMMENT_IF(PROD)_
q5.solution()
descriptor_counts = ____
# Check your answer
q6.check()
#%%RM_IF(PROD)%%
n_trop = reviews.description.map(lambda desc: "tropical" in desc).sum()
n_fruity = reviews.description.map(lambda desc: "fruity" in desc).sum()
descriptor_counts = pd.Series([n_trop, n_fruity], index=['tropical', 'fruity'])
q6.assert_check_passed()
#%%RM_IF(PROD)%%
n_trop = reviews.description.str.contains("tropical").sum()
n_fruity = reviews.description.str.contains("fruity").sum()
descriptor_counts = pd.Series([n_trop, n_fruity], index=['tropical', 'fruity'])
q6.assert_check_passed()
#_COMMENT_IF(PROD)_
q6.hint()
#_COMMENT_IF(PROD)_
q6.solution()
star_ratings = ____
# Check your answer
q7.check()
#%%RM_IF(PROD)%%
def stars(row):
if row.country == 'Canada':
return 3
elif row.points >= 95:
return 3
elif row.points >= 85:
return 2
else:
return 1
star_ratings = reviews.apply(stars, axis='columns')
q7.assert_check_passed()
#_COMMENT_IF(PROD)_
q7.hint()
#_COMMENT_IF(PROD)_
q7.solution()
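The tiered logic in `stars` can be exercised without a DataFrame by feeding it row-like objects. A sketch using `SimpleNamespace` stand-ins (the sample rows are invented):

```python
from types import SimpleNamespace

def stars(row):
    """Tiered rating from the exercise: Canada or 95+ -> 3, 85+ -> 2, else 1."""
    if row.country == 'Canada':
        return 3
    elif row.points >= 95:
        return 3
    elif row.points >= 85:
        return 2
    else:
        return 1

rows = [
    SimpleNamespace(country='Canada', points=80),   # Canada always gets 3
    SimpleNamespace(country='Italy', points=96),
    SimpleNamespace(country='France', points=88),
    SimpleNamespace(country='Chile', points=80),
]
print([stars(r) for r in rows])   # [3, 3, 2, 1]
```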
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br><br><br><br>
Step2: <br><br><br><br>
Step3: <br><br><br><br>
Step4: <br><br><br><br><br>
Step5: Submission code for Logistic Regression
Step6: Random Forest Classifier
Step7: Calibrated Random Forest Classifier
Step8: Support Vector Machine Classifier
Step9: <br><br><br><br>
Step10: LogisticRegression
Step11: DecisionTreeClassifier
Step12: RandomForestClassifier
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import sklearn
data_dir = '../data/raw/'
data_filename = 'blood_train.csv'
df_blood = pd.read_csv(data_dir+data_filename)
df_blood.head(10)
X = df_blood.iloc[:,1:5].as_matrix()
y = list(df_blood["Made Donation in March 2007"])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test= sklearn.model_selection.train_test_split(
X, y,
test_size=0.5,
random_state=0)
print("No. Rows in training set:\t", len(X_train))
print("No. Rows in testing set:\t" , len(X_test))
# Split data into 4 partitions
# - training set
# - validation set
# - combined training & validation set
# - testing set
# nrows_total = df_blood.count()[1]
# nrows_train = int(nrows_total/3)
# nrows_valid = int(nrows_total*2/3)
# X_train, y_train = X[:nrows_train] , y[:nrows_train]
# X_valid, y_valid = X[nrows_train:nrows_valid], y[nrows_train:nrows_valid]
# X_test , y_test = X[nrows_valid:] , y[nrows_valid:]
# X_train_valid, y_train_valid = X[:nrows_valid] , y[:nrows_valid]
# print("Total number of rows:\t", nrows_total)
# print("Training rows:\t\t" , 0 ,"-", nrows_train)
# print("Validation rows:\t" , nrows_train,"-", nrows_valid)
# print("Testing rows:\t\t" ,nrows_valid ,"-" , nrows_total)
from sklearn.linear_model import LogisticRegression
clf = sklearn.linear_model.LogisticRegression()
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
score = sklearn.metrics.log_loss(y_test, clf_probs)
print("Log-loss score:\t", score)
from sklearn.linear_model import LogisticRegression
# Load Test Data
data_filename = 'blood_test.csv'
df_test = pd.read_csv(data_dir+data_filename)
# Transform data
# - dropped the ID column
# - converted to matrix array for input to `predict_proba`
Z = df_test.iloc[:, 1:5].to_numpy()
# Predict data
clf_probs = clf.predict_proba(Z)
# Add predictions back into test data frame
df_test['Made Donation in March 2007'] = clf_probs[:,1]
df_test.head()
# Setup save filename and directory
submit_dir = '../data/processed/'
submit_filename = 'submit-logistic_regression.csv'
# Save to CSV-file using only the subject-id, and predition columns
df_test.to_csv(submit_dir+submit_filename,
columns=('Unnamed: 0', 'Made Donation in March 2007'),
index=False)
from sklearn.ensemble import RandomForestClassifier
# Train uncalibrated random forest classifier
# on whole train and validation data
# and evaluate on test data
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
# Get probabilities
clf_probs = clf.predict_proba(X_test)
# Test/Evaluate the the model
from sklearn.metrics import log_loss
score = log_loss(y_test, clf_probs)
print("Log-loss score:\t", score)
from sklearn.ensemble import RandomForestClassifier
# Train random forest classifier
# - calibrate on validation data
# - evaluate test data
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
from sklearn.calibration import CalibratedClassifierCV
# Pass the RandomForestClassifier into the CalibrationClassifier
sig_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
sig_clf.fit(X_train, y_train)
# Get prediction probabilities from model
sig_clf_probs = sig_clf.predict_proba(X_test)
# Test quality of predictions using `log_loss` function
from sklearn.metrics import log_loss
sig_score = log_loss(y_test, sig_clf_probs)
print("Log-loss score:\t", sig_score)
from sklearn import svm
clf = []
clf_probs = []
# clf = svm.SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
# decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
# max_iter=-1, probability=True, random_state=None, shrinking=True,
# tol=0.001, verbose=False)
# clf = svm.SVC(kernel='rbf',degree=2, probability=True)
# Scale features; fit the scaler on the training data only, then apply it to both sets
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler().fit(X_train)
X_trainNorm = scaler.transform(X_train)
X_testNorm = scaler.transform(X_test)
clf = svm.SVC(kernel='linear', cache_size=7000, degree=2, probability=True)
clf.fit(X_trainNorm, y_train)  # fit on the same scaled features used for prediction
# Get prediction probabilities from model
clf_probs = clf.predict_proba(X_testNorm)
# Test quality of predictions using `log_loss` function
from sklearn.metrics import log_loss
score = log_loss(y_test, clf_probs)
print("Log-loss score:\t", score)
X = df_blood.iloc[:, 1:5].to_numpy()
y = list(df_blood["Made Donation in March 2007"])
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
clf = sklearn.linear_model.LogisticRegression()
score = sklearn.model_selection.cross_val_score(
clf,
X, y,
scoring="neg_log_loss")
print(score)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
clf = sklearn.tree.DecisionTreeClassifier()
score = sklearn.model_selection.cross_val_score(
clf,
X, y,
scoring="neg_log_loss")
print(score)
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.model_selection.cross_val_score(
clf,
X, y,
scoring="neg_log_loss")
print(score)
<END_TASK>
|
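Every model in the task above is scored with log-loss. As a side sketch (not part of the original notebook), the binary case of sklearn's `log_loss` can be written in a few lines of NumPy; the clipping constant `eps` is an assumption used to keep the logarithms finite:

```python
import numpy as np

def binary_log_loss(y_true, p_pred, eps=1e-15):
    # Clip probabilities away from 0 and 1 so the logs stay finite
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    # Mean negative log-likelihood of the true labels under the predicted probabilities
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

print(binary_log_loss([1, 0, 1], [0.9, 0.1, 0.8]))
```

Lower is better: a perfectly confident, correct prediction contributes almost nothing, while a confident wrong one is penalized heavily.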
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
Step2: <a id="covariance"></a>
Step3: <a id="stddev"></a>
Step4: <a id="variance"></a>
Step5: <a id="median"></a>
Step6: <a id="cume"></a>
Step7: <a id="rank"></a>
Step8: <a id="disc"></a>
Step9: <a id="cont"></a>
Step10: <a id="width"></a>
Step11: We can plot this information by adding some more details to the bucket output.
Step12: And here is a plot of the data to make sense of the histogram.
|
<ASSISTANT_TASK:>
Python Code:
%run db2.ipynb
%sql -sampledata
%%sql
SELECT COVARIANCE_SAMP(SALARY, BONUS)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
%%sql
SELECT STDDEV_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
%%sql
SELECT VARIANCE_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
%%sql
SELECT MEDIAN(SALARY) AS MEDIAN, AVG(SALARY) AS AVERAGE
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
%%sql
SELECT CUME_DIST(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
%%sql
SELECT PERCENT_RANK(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
%%sql
SELECT PERCENTILE_DISC(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
%%sql
SELECT PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21'
%%sql
SELECT EMPNO, SALARY, WIDTH_BUCKET(SALARY, 35000, 100000, 13)
FROM EMPLOYEE
ORDER BY EMPNO
%%sql -a
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC
%%sql -pb
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC
<END_TASK>
|
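The `WIDTH_BUCKET` histogram above relies on Db2's equal-width bucketing convention: bucket 0 for values below the range and bucket n + 1 for values at or above it. A rough Python sketch of that convention — an illustration, not Db2 code, and ignoring edge cases like null or reversed ranges — looks like:

```python
def width_bucket(value, low, high, n):
    # Sketch of SQL WIDTH_BUCKET: n equal-width buckets over [low, high),
    # bucket 0 for underflow and bucket n + 1 for overflow
    if value < low:
        return 0
    if value >= high:
        return n + 1
    return int((value - low) * n / (high - low)) + 1

print(width_bucket(35000, 35000, 100000, 13))   # lowest salary lands in bucket 1
print(width_bucket(110000, 35000, 100000, 13))  # overflow lands in bucket 14
```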
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Euler's method
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation $y_{n+1} = y_n + h f\left(y_n + \frac{h}{2} f(y_n, x_n),\; x_n + \frac{h}{2}\right)$
Step6: You are now going to solve the following differential equation: $dy/dx = x + 2y$ with $y(0) = 0$, whose exact solution is $y(x) = \frac{1}{4}e^{2x} - \frac{x}{2} - \frac{1}{4}$
Step7: In the following cell you are going to solve the above ODE using four different algorithms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.
    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
Ytemp = np.zeros_like(x)
Ytemp[0] = y0
h = x[1]-x[0]
for n in range(0,len(x)-1):
Ytemp[n+1] = Ytemp[n] + h*derivs(Ytemp[n],x[n])
return Ytemp
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.
    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
Ytemp = np.zeros_like(x)
Ytemp[0] = y0
h = x[1]-x[0]
for n in range(0,len(x)-1):
Ytemp[n+1] = Ytemp[n] + h*derivs(Ytemp[n]+(h)/2*derivs(Ytemp[n],x[n]),x[n]+h/2)
return Ytemp
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y with y(0) = 0.
    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.
    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
    y = 0.25*np.exp(2*x) - 0.5*x - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
x = np.linspace(0,1,11)
y0 = 0
def derivs(y,x):
dy = x + 2*y
return np.array(dy)
euler = solve_euler(derivs,y0,x)
midpoint = solve_midpoint(derivs,y0,x)
exact = solve_exact(x)
odein = odeint(derivs,y0,x)
eulerdiff = abs(euler-exact)
midpointdiff = abs(midpoint-exact)
# Flatten odeint's (n, 1) output without shadowing the imported odeint function
odein_flat = np.ravel(odein)
odeintdiff = abs(odein_flat - exact)
plt.figure(figsize=(20,6))
plt.subplot(1,2,1)
plt.plot(x,euler,color='r',label='Euler')
plt.plot(x,midpoint,color='b',label='Midpoint')
plt.plot(x,exact,color='g',label='Exact')
plt.plot(x,odein,color='k',label='Odeint',alpha=.4)
plt.legend(bbox_to_anchor=(0.8,1.0),fontsize=16)
plt.xlabel('x',fontsize=16)
plt.ylabel('y(x)',fontsize=16)
plt.title('Solutions vs. x',fontsize=20)
plt.subplot(1,2,2)
plt.plot(x,eulerdiff,color='r',label='Euler')
plt.plot(x,midpointdiff,color='b',label='Midpoint')
plt.plot(x,odeintdiff,color='k',label='Odeint')
plt.legend(bbox_to_anchor=(0.7,.95),fontsize=16)
plt.ylim(-.01,.3)
plt.xlabel('x',fontsize=16)
plt.ylabel('error from exact',fontsize=16)
plt.title('Error from exact vs. x',fontsize=20)
plt.tight_layout()
assert True # leave this for grading the plots
<END_TASK>
|
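A quick way to check the claim above that the midpoint method is more accurate than Euler's method is to halve the step size and watch the error at x = 1 shrink roughly linearly for Euler and quadratically for midpoint. This standalone sketch reuses the same test problem dy/dx = x + 2y:

```python
import numpy as np

def euler_step_solve(f, y0, x):
    # Forward Euler: y[n+1] = y[n] + h * f(y[n], x[n])
    y = np.zeros_like(x)
    y[0] = y0
    h = x[1] - x[0]
    for n in range(len(x) - 1):
        y[n + 1] = y[n] + h * f(y[n], x[n])
    return y

def midpoint_step_solve(f, y0, x):
    # Midpoint: evaluate the slope at (y + h/2 * f(y, x), x + h/2)
    y = np.zeros_like(x)
    y[0] = y0
    h = x[1] - x[0]
    for n in range(len(x) - 1):
        y_mid = y[n] + 0.5 * h * f(y[n], x[n])
        y[n + 1] = y[n] + h * f(y_mid, x[n] + 0.5 * h)
    return y

f = lambda y, x: x + 2 * y
exact_at_1 = 0.25 * np.exp(2.0) - 0.75  # exact y(1)

for npts in (11, 21, 41):
    x = np.linspace(0, 1, npts)
    e_err = abs(euler_step_solve(f, 0.0, x)[-1] - exact_at_1)
    m_err = abs(midpoint_step_solve(f, 0.0, x)[-1] - exact_at_1)
    print(npts, e_err, m_err)
```

Halving h roughly halves the Euler error and quarters the midpoint error, matching their first- and second-order convergence.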
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: To merge these two lists together we will use the zip function.
Step3: We are basically done creating the data set. We will now use the pandas library to export this data set into a csv file.
Step4: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.
Step5: The only parameters we will use are index and header. Setting these parameters to False prevents the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
Step6: Get Data
Step7: Even though this function has many parameters, we will simply pass it the location of the text file.
Step8: Notice the r before the string. Since backslashes are special characters in string literals, prefixing the string with an r marks it as a raw string, so the backslashes in the path are not interpreted as escape sequences.
Step9: This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
Step10: If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.
Step11: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates.
Step12: Prepare Data
Step13: As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.
Step14: Present Data
|
<ASSISTANT_TASK:>
Python Code:
# Import all libraries needed for the tutorial
# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv
# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number
# Enable inline plotting
%matplotlib inline
print 'Python version ' + sys.version
print 'Pandas version ' + pd.__version__
# The inital set of baby names and bith rates
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
zip?
BabyDataSet = zip(names,births)
BabyDataSet
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
df.to_csv?
df.to_csv('births1880.csv',index=False,header=False)
read_csv?
Location = r'C:\Users\david\notebooks\pandas\births1880.csv'
df = pd.read_csv(Location)
df
df = pd.read_csv(Location, header=None)
df
df = pd.read_csv(Location, names=['Names','Births'])
df
import os
os.remove(Location)
# Check data type of the columns
df.dtypes
# Check data type of Births column
df.Births.dtype
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)
Sorted.head(1)
# Method 2:
df['Births'].max()
# Create graph
df['Births'].plot()
# Maximum value in the data set
MaxValue = df['Births'].max()
# Name associated with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values
# Text to display on graph
Text = str(MaxValue) + " - " + MaxName
# Add text to graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
print "The most popular name"
df[df['Births'] == df['Births'].max()]
#Sorted.head(1) can also be used
<END_TASK>
|
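As a side note to the two "most popular name" methods above, the same lookup can be done in one step with idxmax; the data frame here is rebuilt inline so the snippet stands alone:

```python
import pandas as pd

df = pd.DataFrame({'Names': ['Bob', 'Jessica', 'Mary', 'John', 'Mel'],
                   'Births': [968, 155, 77, 578, 973]})

# idxmax returns the index label of the maximum value; loc retrieves that row
top = df.loc[df['Births'].idxmax()]
print(top['Names'], top['Births'])
```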
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model, and keep only data for the prediction region/sex/year
Step2: The easiest way to get these predictions into a csv file is to use the Python Pandas package
Step3: To aggregate this into pre-specified age categories, you need to specify the age weights and groups
Step4: You can extract an age-standardized point and interval estimate from the 1000 draws from the posterior distribution stored in age_std as follows
Step5: For groups, just do the same thing group by group
|
<ASSISTANT_TASK:>
Python Code:
!wget http://ghdx.healthdata.org/sites/default/files/record-attached-files/IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
!unzip IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
# This Python code will export predictions
# for the following region/sex/year:
predict_region = 'USA'
predict_sex = 'male'
predict_year = 2005
# import dismod code
import dismod_mr
model_path = 'hcv_replication/'
dm = dismod_mr.data.load(model_path)
if predict_year == 2005:
dm.keep(areas=[predict_region], sexes=['total', predict_sex], start_year=1997)
elif predict_year == 1990:
dm.keep(areas=[predict_region], sexes=['total', predict_sex], end_year=1997)
else:
    raise ValueError('predict_year must equal 1990 or 2005')
# Fit model using the data subset (faster, but no borrowing strength)
dm.vars += dismod_mr.model.process.age_specific_rate(dm, 'p', predict_region, predict_sex, predict_year)
%time dismod_mr.fit.asr(dm, 'p', iter=2000, burn=1000, thin=1)
# Make posterior predictions
pred = dismod_mr.model.covariates.predict_for(
dm, dm.parameters['p'],
predict_region, predict_sex, predict_year,
predict_region, predict_sex, predict_year, True, dm.vars['p'], 0, 1)
import pandas as pd
# This generates a csv with 1000 rows,
# one for each draw from the posterior distribution
# Each column corresponds to a one-year age group,
# e.g. column 10 is prevalence at age 10
pd.DataFrame(pred).to_csv(
model_path + '%s-%s-%s.csv'%(predict_region, predict_sex, predict_year))
!ls -hal hcv_replication/$predict_region-*.csv
weights = [1, 8, 8, 9, 9, 10, 10, 10, 10, 10,
10, 10, 10, 10, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 8, 8, 8, 8, 8, 8, 8, 8,
8, 7, 7, 7, 7, 7, 7, 7, 7, 7,
6, 6, 6, 6, 6, 6, 5, 5, 5, 5,
5, 5, 4, 4, 4, 4, 4, 4, 4, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# 1000 samples from the posterior distribution for age-standardized prevalence
import numpy as np, matplotlib.pyplot as plt
age_std = np.dot(pred, weights) / np.sum(weights)
plt.hist(age_std, color='#cccccc', density=True)
plt.xlabel('Age-standardized Prevalence')
plt.ylabel('Posterior Probability');
import pymc as mc
print('age_std prev mean:', age_std.mean())
print('age_std prev 95% UI:', mc.utils.hpd(age_std, .05))
group_cutpoints = [0, 1, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 100]
results = []
for a0, a1 in zip(group_cutpoints[:-1], group_cutpoints[1:]):
age_grp = np.dot(pred[:, a0:a1], weights[a0:a1]) / np.sum(weights[a0:a1])
results.append(dict(a0=a0,a1=a1,mu=age_grp.mean(),std=age_grp.std()))
results = pd.DataFrame(results)
print(np.round(results.head(), 2))
plt.errorbar(.5*(results.a0+results.a1), results.mu,
xerr=.5*(results.a1-results.a0),
yerr=1.96*results['std'],
fmt='ks', capsize=0, mec='w')
plt.axis(ymin=0, xmax=100);
!rm IHME_GBD_HEP_C_RESEARCH_ARCHIVE_Y2013M04D12.ZIP
!rm -r hcv_replication/
!date
<END_TASK>
|
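The age-standardization step above is, at its core, a weighted average of the posterior draws over one-year age groups. Stripped of the model, the computation reduces to a single dot product; the draws and weights below are made-up toy values, not GBD data:

```python
import numpy as np

def age_standardize(draws, weights):
    # Collapse (n_draws, n_ages) posterior draws into one weighted mean per draw
    w = np.asarray(weights, dtype=float)
    return np.dot(np.asarray(draws, dtype=float), w) / w.sum()

# 3 toy posterior draws over 4 age groups, with made-up standard weights
draws = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.1, 0.1, 0.1, 0.1],
                  [0.0, 0.5, 0.5, 0.0]])
weights = [1, 2, 2, 1]
print(age_standardize(draws, weights))
```

Each output value is one draw's age-standardized prevalence, exactly as `np.dot(pred, weights) / np.sum(weights)` does in the notebook.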
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Although we are making an 2-bit adder,
Step3: To generate a Circuit from a Generator, we can directly call the generate static method.
Step4: Let's inspect the generated code
Step5: We can instantiate a Generator using the standard object syntax, which will implicitly call the generate method based on the parameters, and return an instance of the generated Circuit. By default, this logic will cache definitions based on the generator parameters.
Step6: Here's an example of using the convenience add function which handles the Generator instantiation for us
|
<ASSISTANT_TASK:>
Python Code:
import ast_tools
from ast_tools.transformers.loop_unroller import unroll_for_loops
from ast_tools.passes import begin_rewrite, end_rewrite, loop_unroll
@m.circuit.combinational
def full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit):
return A ^ B ^ C, A & B | B & C | C & A # sum, carry
@m.circuit.combinational
@end_rewrite()
@loop_unroll()
@begin_rewrite()
def _add(I0: m.Bits[2], I1: m.Bits[2], CIN: m.Bit) -> (m.Bits[2], m.Bit):
O = []
COUT = io.CIN
for i in ast_tools.macros.unroll(range(2)):
Oi, COUT = full_adder(io.I0[i], io.I1[i], COUT)
O.append(Oi)
return m.uint(O), COUT
print(repr(_add.circuit_definition))
class Add(m.Generator):
@staticmethod
def generate(width: int):
T = m.UInt[width]
@m.circuit.combinational
@end_rewrite()
@loop_unroll()
@begin_rewrite()
def _add(I0: T, I1: T, CIN: m.Bit) -> (T, m.Bit):
O = []
COUT = io.CIN
for i in ast_tools.macros.unroll(range(width)):
Oi, COUT = full_adder(io.I0[i], io.I1[i], COUT)
O.append(Oi)
return m.uint(O), COUT
return _add
def add(i0, i1, cin):
    """We define a convenience function that instantiates the
    add generator for us based on the width of the inputs."""
if len(i0) != len(i1):
raise TypeError("add arguments must have same length")
if not isinstance(cin, m.Bit):
raise TypeError("add cin must be a Bit")
    if not isinstance(i0, m.UInt) or not isinstance(i1, m.UInt):
        raise TypeError("add expects UInt inputs")
return Add(len(i0))(i0, i1, cin)
from fault import PythonTester
Add2 = Add.generate(2)
add2 = PythonTester(Add2)
print(add2(1,2,0)[0] == 3)
assert add2(1, 2, 0) == (3, 0), "Failed"
print("Success!")
m.compile("build/Add2", Add2, inline=True)
%cat build/Add2.v
!coreir -i build/Add2.json -p instancecount
class Main(m.Circuit):
io = m.IO(I0=m.In(m.UInt[3]), I1=m.In(m.UInt[3]), CIN=m.In(m.Bit),
O=m.Out(m.UInt[3]), COUT=m.Out(m.Bit))
O, COUT = Add(3)(io.I0, io.I1, io.CIN)
io.O @= O
io.COUT @= COUT
print(repr(Main))
class Main(m.Circuit):
io = m.IO(I0=m.In(m.UInt[3]), I1=m.In(m.UInt[3]), CIN=m.In(m.Bit),
O=m.Out(m.UInt[3]), COUT=m.Out(m.Bit))
O, COUT = add(io.I0, io.I1, io.CIN)
io.O @= O
io.COUT @= COUT
print(repr(Main))
<END_TASK>
|
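The unrolled ripple-carry structure that the Add generator emits can be mirrored in plain Python. This software model — an illustration, not magma code — follows the same LSB-first loop over a one-bit full adder:

```python
def full_adder_bit(a, b, c):
    # One-bit full adder: (sum, carry), mirroring the combinational version above
    return a ^ b ^ c, (a & b) | (b & c) | (c & a)

def ripple_add(i0, i1, cin, width):
    # LSB-first ripple carry: the same loop the generator unrolls into hardware
    out, carry = 0, cin
    for i in range(width):
        s, carry = full_adder_bit((i0 >> i) & 1, (i1 >> i) & 1, carry)
        out |= s << i
    return out, carry

print(ripple_add(1, 2, 0, 2))  # the 2-bit case checked with PythonTester above
```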
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> Read dataset </h2>
Step3: <h2> Clean up </h2>
Step6: Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
Step7: <h2> Predict with the model </h2>
Step8: Now retrieve the model
Step9: <h2> Examine the model behavior </h2>
Step10: <h2> Evaluate model </h2>
|
<ASSISTANT_TASK:>
Python Code:
BUCKET='cs358-bucket' # CHANGE ME
import os
os.environ['BUCKET'] = BUCKET
# Create spark session
from __future__ import print_function
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print(spark)
print(sc)
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th; you may have to change this to find a shard that has training data
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
trainquery = """
SELECT
  f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
  t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print(traindata.head(2)) # if this is empty, try changing the shard you are using.
traindata.describe().show()
trainquery = """
SELECT
  DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
  t.is_train_day == 'True' AND
  f.dep_delay IS NOT NULL AND
  f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
  DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
  t.is_train_day == 'True' AND
  f.CANCELLED == '0.00' AND
  f.DIVERTED == '0.00'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print(lrmodel.weights,lrmodel.intercept)
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.clearThreshold()
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print('{} saved'.format(MODEL_FILE))
lrmodel = 0
print(lrmodel)
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print(lrmodel.predict([36.0,12.0,594.0]))
print(lrmodel.predict([8.0,4.0,594.0]))
lrmodel.clearThreshold() # to make the model produce probabilities
print(lrmodel.predict([20, 10, 500]))
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # you may have to change this to find a shard that has test data
flights = spark.read\
.schema(schema)\
.csv(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print(testquery)
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
'''
data = (label, pred)
data[0] = label
data[1] = pred
'''
cancel = labelpred.filter(lambda data: data[1] < 0.7)
nocancel = labelpred.filter(lambda data: data[1] >= 0.7)
corr_cancel = cancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print('All flights:')
print(eval(labelpred))
# keep only those examples near the decision threshold
print('Flights near decision threshold:')
labelpred = labelpred.filter(lambda data: data[1] > 0.65 and data[1] < 0.75)
print(eval(labelpred))
<END_TASK>
|
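The setThreshold(0.7) behavior above is just a cutoff on the logistic probability: cancel the flight-dependent meeting when P(ontime) < 0.7. This plain-Python sketch shows the decision rule; the weights and intercept are hypothetical stand-ins, not the trained model's coefficients:

```python
import math

def logistic_prob(features, weights, intercept):
    # P(ontime): sigmoid of the linear score, as in logistic regression
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def predict_cancel(features, weights, intercept, threshold=0.7):
    # Mirror of lrmodel.setThreshold(0.7): cancel when P(ontime) < threshold
    return logistic_prob(features, weights, intercept) < threshold

# Hypothetical coefficients for [DEP_DELAY, TAXI_OUT, DISTANCE] -- illustration only
w, b = [-0.15, -0.10, 0.0005], 3.0
print(predict_cancel([36.0, 12.0, 594.0], w, b))  # large departure delay
print(predict_cancel([6.0, 12.0, 594.0], w, b))   # small departure delay
```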
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.0 Basic parameters
Step2: 1.0 Run Region-to-region information transfer mapping
Step3: 2.1 Visualize Information transfer mapping matrices (Threshold and Unthresholded)
Step4: 2.2 Compute the regions with the most information transfers TO and FROM
Step5: 3.0 Compute FWE-corrected results (as opposed to FDR)
Step6: 3.1 Visualize information transfer mapping matrices (FWE-Threshold and Unthresholded)
Step7: 3.2 Compute the regions with the most information transfers TO and FROM
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import sys
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import nibabel as nib
import os
import permutationTesting as pt
from matplotlib.colors import Normalize
from matplotlib.colors import LogNorm
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
class MidpointNormalizeLog(LogNorm):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
class MidpointNormalize2(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
t1 = (self.midpoint - self.vmin)/2.0
t2 = (self.vmax - self.midpoint)/30.0 + self.midpoint
x, y = [self.vmin, t1, self.midpoint, t2, self.vmax], [0, 0.25, .5, .75, 1.0]
return np.ma.masked_array(np.interp(value, x, y))
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
glasserparcels = lg.loadGlasserParcels()
networkdef = lg.loadGlasserNetworks()
# Define the main networks (in main manuscript)
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1
# aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
# networkdef[aud2_ind] = networkmappings['aud1']
# Merge aud1 and aud2
# networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
nParcels = 360
# Import network reordering
networkorder = np.asarray(sorted(range(len(networkdef)), key=lambda k: networkdef[k]))
order = networkorder
order.shape = (len(networkorder),1)
# Construct xticklabels and xticks for plotting figures
networks = networkmappings.keys()
xticks = {}
reorderednetworkaffil = networkdef[order]
for net in networks:
netNum = networkmappings[net]
netind = np.where(reorderednetworkaffil==netNum)[0]
tick = np.max(netind)
xticks[tick] = net
# Load in Glasser parcels in their native format (vertex formula)
glasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'
glasser2 = nib.load('/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii')
glasser2 = np.squeeze(glasser2.get_data())
## Load in NM3 Data
ruledims = ['logic','sensory','motor']
datadir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
# Load in RSA matrices
rsaMats = {}
df_stats = {}
for ruledim in ruledims:
rsaMats[ruledim] = np.zeros((nParcels,nParcels,len(subjNums)))
df_stats[ruledim] = {}
scount = 0
for subj in subjNums:
filename = datadir +subj+'_' + ruledim + '_RegionToRegionActFlowGlasserParcels.csv'
rsaMats[ruledim][:,:,scount] = np.loadtxt(filename, delimiter=',')
scount += 1
## Compute Group Stats
for ruledim in ruledims:
## Compute group statistics
# Compute average across subjects
df_stats[ruledim]['avgrho'] = np.mean(rsaMats[ruledim],axis=2)
# Compute t-test for each pairwise connection
t = np.zeros((nParcels,nParcels))
p = np.zeros((nParcels,nParcels))
for i in range(nParcels):
for j in range(nParcels):
t[i,j], p[i,j] = stats.ttest_1samp(rsaMats[ruledim][i,j,:], 0)
# One-sided t-test so...
if t[i,j] > 0:
p[i,j] = p[i,j]/2.0
else:
p[i,j] = 1.0-(p[i,j]/2.0)
df_stats[ruledim]['t'] = t
df_stats[ruledim]['p'] = p
## Run multiple corrections
triu_ind = np.triu_indices(nParcels,k=1)
tril_ind = np.tril_indices(nParcels,k=-1)
tmpq = []
tmpq.extend(df_stats[ruledim]['p'][triu_ind])
tmpq.extend(df_stats[ruledim]['p'][tril_ind])
# only run FDR correction on non-NaN values
ind_nans = np.isnan(tmpq)
ind_nonnan = np.where(ind_nans==False)[0]
tmpq = np.asarray(tmpq)
tmpq2 = mc.fdrcorrection0(tmpq[ind_nonnan])[1]
tmpq[ind_nonnan] = tmpq2
qmat = np.zeros((nParcels,nParcels))
qmat[triu_ind] = tmpq[0:len(triu_ind[0])]
qmat[tril_ind] = tmpq[len(tril_ind[0]):]
df_stats[ruledim]['q'] = qmat
np.fill_diagonal(df_stats[ruledim]['q'],1)
# Visualize Unthresholded and thresholded side-by-side
order = networkorder
order.shape = (len(networkorder),1)
for ruledim in ruledims:
# Unthresholded t-stat map
plt.figure(figsize=(12,10))
plt.subplot(121)
# First visualize unthresholded
mat = df_stats[ruledim]['t'][order,order.T]
ind = np.isnan(mat)
mat[ind] = 0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',vmin=0, norm=norm, interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('Unthresholded T-stat Map\nInformation Transfer Estimates\n' + ruledim,
fontsize=16,y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
# plt.tight_layout()
# Thresholded T-stat map
plt.subplot(122)
# First visualize unthresholded
mat = df_stats[ruledim]['t']
thresh = df_stats[ruledim]['q'] < 0.05
mat = np.multiply(mat,thresh)
mat = mat[order,order.T]
ind = np.isnan(mat)
mat[ind]=0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',norm=norm,vmin = 0,interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('FDR-Thresholded T-stat Map\nInformation Transfer Estimates\n ' + ruledim,
fontsize=16, y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Fig4b_Connectome_ActFlowRSA_TstatMap_MatchVMismatch_' + ruledim + '.pdf')
networks = networkmappings.keys()
regions_actflowTO = {}
regions_actflowFROM = {}
for ruledim in ruledims:
thresh = df_stats[ruledim]['q'] < 0.05
regions_actflowFROM[ruledim] = np.nanmean(thresh,axis=1)*100.0
regions_actflowTO[ruledim] = np.nanmean(thresh,axis=0)*100.0
# # Convert to dataframe
# plt.figure()
# plt.bar(np.arange(nParcels),regions_actflow[ruledim],align='center')
# plt.title('Percent of Significant ActFlow FROM each region', fontsize=16)
# plt.ylabel('Percent of Significant ActFlow\nTo Other Regions', fontsize=12)
# plt.xlabel('Regions', fontsize=12)
# plt.tight_layout()
# Save these arrays to a file
savearrayTO = np.zeros((len(glasser2),len(ruledims)+1))
savearrayFROM = np.zeros((len(glasser2),len(ruledims)+1))
rulecount = 0
for ruledim in ruledims:
for roi in range(1,nParcels+1):
parcel_ind = np.where(glasser2==roi)[0]
# Compute map of all rule dimension for rule general actflow
if rulecount < 3:
savearrayTO[parcel_ind,rulecount] = regions_actflowTO[ruledim][roi-1].astype('double')
savearrayFROM[parcel_ind,rulecount] = regions_actflowFROM[ruledim][roi-1].astype('double')
rulecount += 1
to_avg = savearrayTO[:,0:3] > 0
# Create conjunction map
to_avg = np.mean(to_avg,axis=1)
to_avg = (to_avg == 1)
savearrayTO[:,3] = to_avg
from_avg = savearrayFROM[:,0:3] > 0
from_avg = np.mean(from_avg,axis=1)
from_avg = (from_avg == 1)
savearrayFROM[:,3] = from_avg
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowFROM_FDR.csv'
np.savetxt(outdir + filename,savearrayFROM,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowFROM_FDR.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowTO_FDR.csv'
np.savetxt(outdir + filename,savearrayTO,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowTO_FDR.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
## Load in NM3 Data
ruledims = ['logic','sensory','motor']
datadir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
# Load in RSA matrices
iteMats = {}
df_stats = {}
for ruledim in ruledims:
iteMats[ruledim] = np.zeros((nParcels,nParcels,len(subjNums)))
df_stats[ruledim] = {}
scount = 0
for subj in subjNums:
filename = datadir +subj+'_' + ruledim + '_RegionToRegionActFlowGlasserParcels.csv'
iteMats[ruledim][:,:,scount] = np.loadtxt(filename, delimiter=',')
scount += 1
pt = reload(pt)
fwe_Ts = np.zeros((nParcels,nParcels,len(ruledims)))
fwe_Ps = np.zeros((nParcels,nParcels,len(ruledims)))
# Obtain indices for multiple comparisons
indices = np.ones((nParcels,nParcels))
np.fill_diagonal(indices,0)
notnan_ind = np.isnan(iteMats['logic'][:,:,0])==False
indices = np.multiply(indices,notnan_ind)
flatten_ind = np.where(indices==1)
rulecount = 0
for ruledim in ruledims:
# tmpcor = np.arctanh(corrMats[ruledim][flatten_ind[0],flatten_ind[1],:])
# tmperr = np.arctanh(errMats[ruledim][flatten_ind[0],flatten_ind[1],:])
t, p = pt.permutationFWE(iteMats[ruledim][flatten_ind[0],flatten_ind[1],:], permutations=1000, nproc=15)
fwe_Ts[flatten_ind[0],flatten_ind[1],rulecount] = t
fwe_Ps[flatten_ind[0],flatten_ind[1],rulecount] = 1.0 - p
rulecount += 1
pthresh = .05
# Visualize FWER-corrected T-statistic map
order = networkorder
order.shape = (len(networkorder),1)
rulecount = 0
for ruledim in ruledims:
# Thresholded T-stat map
plt.figure()
# First visualize unthresholded
mat = fwe_Ts[:,:,rulecount]
thresh = fwe_Ps[:,:,rulecount] < pthresh
mat = np.multiply(mat,thresh)
mat = mat[order,order.T]
ind = np.isnan(mat)
mat[ind]=0
pos = mat > 0
mat = np.multiply(pos,mat)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',norm=norm,vmin = 0,interpolation='none',cmap='seismic')
plt.colorbar(fraction=0.046)
plt.title('FWE-corrected T-statistic Map\nInformation Transfer Estimates\n ' + ruledim,
fontsize=16, y=1.04)
plt.xlabel('Target Regions',fontsize=12)
plt.ylabel('Source Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Fig6_RegionITE_TstatMap' + ruledim + '_FWER.pdf')
rulecount += 1
networks = networkmappings.keys()
regions_actflowTO = {}
regions_actflowFROM = {}
rulecount = 0
for ruledim in ruledims:
thresh = fwe_Ps[:,:,rulecount] > pthresh
regions_actflowFROM[ruledim] = np.nanmean(thresh,axis=1)*100.0
regions_actflowTO[ruledim] = np.nanmean(thresh,axis=0)*100.0
rulecount += 1
# Save these arrays to a file
savearrayTO = np.zeros((len(glasser2),len(ruledims)+1))
savearrayFROM = np.zeros((len(glasser2),len(ruledims)+1))
rulecount = 0
for ruledim in ruledims:
for roi in range(1,nParcels+1):
parcel_ind = np.where(glasser2==roi)[0]
# Compute map of all rule dimension for rule general actflow
if rulecount < 3:
savearrayTO[parcel_ind,rulecount] = regions_actflowTO[ruledim][roi-1].astype('double')
savearrayFROM[parcel_ind,rulecount] = regions_actflowFROM[ruledim][roi-1].astype('double')
rulecount += 1
to_avg = savearrayTO[:,0:3] > 0
# Create conjunction map
to_avg = np.mean(to_avg,axis=1)
to_avg = (to_avg == 1)
savearrayTO[:,3] = to_avg
from_avg = savearrayFROM[:,0:3] > 0
from_avg = np.mean(from_avg,axis=1)
from_avg = (from_avg == 1)
savearrayFROM[:,3] = from_avg
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowFROM_FWER.csv'
np.savetxt(outdir + filename,savearrayFROM,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowFROM_FWER.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript6andS2and7_RegionToRegionITE/'
filename = 'PercentOfRegionsSignificantActFlowTO_FWER.csv'
np.savetxt(outdir + filename,savearrayTO,fmt='%s')
wb_file = 'PercentOfRegionsSignificantActFlowTO_FWER.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Abstract
Step2: Preprocessing
Step3: The selected features
Step4: The features we feed into this sklearn algorithm have to be numerical, so we need to convert the categorical features to numeric. Ordered categorical features get adjacent numbers, and unordered features are given the best order we can construct during conversion (for instance geographical). These transformations could be done with sklearn's LabelEncoder, but we want to keep any order present in the data, which might help the prediction. With more features this would not be manageable, but with this number of features it is still doable. There can also be no nan/inf/-inf values, so these are set to 0. For this algorithm we also have to scale and normalize the features.
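As a minimal sketch of this kind of ordered encoding (plain Python; the dictionary mirrors the kind of mapping applied below to grade and emp_length, and the helper function is illustrative, not part of the notebook):

```python
# Ordered categories get adjacent integers so the natural ranking survives
# the conversion to numeric (a plain-Python sketch of the idea).
grade_dict = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}

def encode_ordered(values, mapping, default=0):
    # Unknown categories (e.g. 'n/a') fall back to a default instead of failing.
    return [mapping.get(v, default) for v in values]

print(encode_ordered(['B', 'A', 'G', 'n/a'], grade_dict))  # [2, 1, 7, 0]
```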
Step5: Classification
Step6: Performance metrics
Step7: Lending Club assigns grades (A-G) to its loans so potential investors can see which loans are less risky (A) and which are riskier (G). To make it still worthwhile to invest in the riskier loans, investors supposedly receive more interest on them. From the figure below we see that the interest rate is indeed higher for riskier grades, with only a few exceptions.
Step8: Apart from grade and interest rate there are of course other characteristics of the loans, a few of which are shown below. Loan amounts range from almost 0 to 35,000, so Lending Club loans seem to be an alternative to personal loans and credit cards rather than mortgages. The term is either 36 months (3 years) or 60 months (5 years), mostly 3 years. The purpose is mostly debt consolidation and credit card, so borrowers appear to be mostly people who already have debts. Annual income was cut off at 200,000 and lies mostly between 25,000 and 100,000.
Step9: We can speculate that some loan characteristics influence whether a loan ends up 'charged off'. A first logical check is whether the Lending Club 'grade' is already visually a factor in this status. As the figure below shows, the 'grade' correlates very well with the charged-off proportion of the loans; only between F and G is the difference smaller. Hence Lending Club has built a pretty good algorithm to predict the 'charged off' status. Higher-interest loans also end up 'charged off' more often, as expected. The influence of purpose is not clearly visible, but the difference in dti (debt-to-income) is significant: the more debt a person has relative to their income, the higher the chance of the loan ending up 'charged off'. Lastly, home ownership status shows a visible difference, also in numbers: 'rent' has the highest charged-off proportion, followed by 'own' and then 'mortgage'.
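The per-grade charged-off proportion behind the first figure reduces to a simple grouped ratio; a plain-Python sketch on toy data (the rows here are made up and stand in for the real closed_loans table):

```python
from collections import Counter

# Toy (grade, loan_status) pairs standing in for rows of the loan table.
rows = [('A', 'Fully Paid'), ('A', 'Fully Paid'), ('A', 'Charged Off'),
        ('G', 'Charged Off'), ('G', 'Charged Off'), ('G', 'Fully Paid')]
counts = Counter(rows)

def charged_off_proportion(grade):
    # Ratio of charged-off loans to all closed loans within one grade.
    off = counts[(grade, 'Charged Off')]
    paid = counts[(grade, 'Fully Paid')]
    return off / (off + paid)

print(round(charged_off_proportion('A'), 2))  # 0.33
print(round(charged_off_proportion('G'), 2))  # 0.67
```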
Step10: Another interesting question is whether it is profitable to invest in Lending Club loans and whether the 'grade' has an influence on profitability. For this purpose we show the return on investment (ROI) overall and per grade. As seen below, the loans yield an average profit of only 1.4%, and per grade only A-C are profitable on average. Loans that end up 'charged off' are on average very bad for profits, since you will likely lose part of the principal as well. Loans in the A-C categories end up 'charged off' less often and are therefore on average more profitable, even though loans in the riskier categories deliver higher interest returns. The higher interest (more than 20% in the riskiest grades) does not compensate enough for the high charged-off ratio, which is around 40% in the riskiest grades as we saw before.
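The ROI figures follow the roi column defined at the top of the notebook; a sketch of the same formula with made-up numbers:

```python
def roi(total_rec_int, total_rec_prncp, total_rec_late_fee, recoveries, funded_amnt):
    # Same formula as the notebook's 'roi' column: everything received back
    # (interest, principal, late fees, recoveries) over the funded amount, minus 1.
    received = total_rec_int + total_rec_prncp + total_rec_late_fee + recoveries
    return received / funded_amnt - 1

# A fully paid loan returns the principal plus interest -> positive ROI.
print(round(roi(1500, 10000, 0, 0, 10000), 2))  # 0.15
# A charged-off loan often returns only part of the principal -> negative ROI.
print(round(roi(800, 4000, 0, 200, 10000), 2))  # -0.5
```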
Step11: Prediction
Step12: We trained both classifiers on 'grade' only and on all features. With Logistic Regression we also trained a model on the top-5 features as selected by SelectKBest from sklearn, because Logistic Regression sometimes performs better with fewer features. All F1-scores are around 0.75: using all features instead of only grade gives only a marginal increase of around 1%, and using 5 features gives no increase. The best-performing algorithm by F1-score is Logistic Regression with all features, but the differences are very small. The confusion matrices make clear that all algorithms mostly predict 'Fully Paid'; since this is the dominant class (82%), accuracy scores look good even though the algorithms are actually not that great. The F1-score was chosen because it deals better with unequal classes, but even it reports 0.74 when Random Forest predicts all loans to be 'Fully Paid'. AUC is a better metric in this case, since even with uneven classes random performance remains 0.5. The algorithms with only grade give an AUC of 0.66, while Logistic Regression with all features gives 0.71 and Random Forest 0.7; the top-5 features algorithm sits in between at 0.68. Hence adding all features again gives slightly better performance (an AUC gain of 0.04-0.05) and Logistic Regression with all features performs best, as the ROC plot also shows.
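Why accuracy looks deceptively good here can be shown with a trivial baseline (a sketch using the roughly 82/18 class balance of the closed loans):

```python
# A classifier that always predicts the majority class scores ~0.82 accuracy
# on this data while learning nothing about 'Charged Off' loans.
y_true = ['Fully Paid'] * 82 + ['Charged Off'] * 18
y_pred = ['Fully Paid'] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.82

# AUC, by contrast, stays at 0.5 for any constant (or random) scorer
# regardless of class balance, which is why it is the more honest metric here.
```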
Step13: So adding features does lead to slightly better performance, and it is interesting to see which features drive this increase. For Logistic Regression, the important features can be found by inspecting the coefficients: the bigger a feature's (absolute) coefficient, the more the model relies on it for prediction. For our best-performing model, Logistic Regression with all features, the top-5 features with the biggest coefficients are
Step14: Re-creating grade
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_curve, auc, accuracy_score, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from scipy.stats import ttest_ind
import matplotlib.dates as mdates
from pandas_confusion import ConfusionMatrix
import statsmodels.api as sm
sns.set_style('white')
loans = pd.read_csv('../data/loan.csv')
loans['roi'] = ((loans['total_rec_int'] + loans['total_rec_prncp']
+ loans['total_rec_late_fee'] + loans['recoveries']) / loans['funded_amnt']) - 1
print('loans:',loans.shape)
print(loans['loan_status'].unique())
print('percentage charged off in all loans:',
round(sum(loans['loan_status']=='Charged Off')/len(loans['loan_status'])*100), '\n')
# selecting loans that went to full term
closed_loans = loans[loans['loan_status'].isin(['Fully Paid', 'Charged Off'])]
print('closed_loans:',closed_loans.shape)
print('percentage closed loans of total loans:', round(closed_loans.shape[0] / loans.shape[0] * 100, 1))
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off')/len(closed_loans['loan_status'])*100))
include = ['term', 'int_rate', 'installment', 'grade', 'sub_grade', 'emp_length', 'home_ownership',
'annual_inc', 'purpose', 'zip_code', 'addr_state', 'delinq_2yrs', 'earliest_cr_line', 'inq_last_6mths',
'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util', 'total_acc',
'mths_since_last_major_derog', 'acc_now_delinq', 'loan_amnt', 'open_il_6m', 'open_il_12m',
'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'dti', 'open_acc_6m', 'tot_cur_bal',
'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl',
'inq_last_12m', 'issue_d', 'loan_status', 'roi']
# exclude the one joint application
closed_loans = closed_loans[closed_loans['application_type'] == 'INDIVIDUAL']
# make id index
closed_loans.index = closed_loans.id
# include only the features above
closed_loans = closed_loans[include]
# exclude features with more than 10% missing values
columns_not_missing = (closed_loans.isnull().apply(sum, 0) / len(closed_loans)) < 0.1
closed_loans = closed_loans.loc[:,columns_not_missing[columns_not_missing].index]
# delete rows with NANs
closed_loans = closed_loans.dropna()
# calculate nr of days between earliest creditline and issue date of the loan
# delete the two original features
closed_loans['earliest_cr_line'] = pd.to_datetime(closed_loans['earliest_cr_line'])
closed_loans['issue_d'] = pd.to_datetime(closed_loans['issue_d'])
closed_loans['days_since_first_credit_line'] = closed_loans['issue_d'] - closed_loans['earliest_cr_line']
closed_loans['days_since_first_credit_line'] = closed_loans['days_since_first_credit_line'] / np.timedelta64(1, 'D')
closed_loans = closed_loans.drop(['earliest_cr_line', 'issue_d'], axis=1)
# round-up annual_inc and cut-off outliers annual_inc at 200.000
closed_loans['annual_inc'] = np.ceil(closed_loans['annual_inc'] / 1000)
closed_loans.loc[closed_loans['annual_inc'] > 200, 'annual_inc'] = 200
print(closed_loans.shape)
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off') / len(closed_loans['loan_status']) * 100))
closed_loans.columns
# features that are not float or int, so not to be converted:
# ordered:
# sub_grade, emp_length, zip_code, term
# unordered:
# home_ownership, purpose, addr_state (ordered geographically)
closed_loans_predict = closed_loans.copy()
# term
closed_loans_predict['term'] = closed_loans_predict['term'].apply(lambda x: int(x.split(' ')[1]))
# grade
closed_loans_predict['grade'] = closed_loans_predict['grade'].astype('category')
grade_dict = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}
closed_loans_predict['grade'] = closed_loans_predict['grade'].apply(lambda x: grade_dict[x])
# emp_length
emp_length_dict = {'n/a':0,
'< 1 year':0,
'1 year':1,
'2 years':2,
'3 years':3,
'4 years':4,
'5 years':5,
'6 years':6,
'7 years':7,
'8 years':8,
'9 years':9,
'10+ years':10}
closed_loans_predict['emp_length'] = closed_loans_predict['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
closed_loans_predict['zip_code'] = closed_loans_predict['zip_code'].apply(lambda x: int(x[0:3]))
# subgrade
closed_loans_predict['sub_grade'] = (closed_loans_predict['grade']
+ closed_loans_predict['sub_grade'].apply(lambda x: float(list(x)[1])/10))
# house
house_dict = {'NONE': 0, 'OTHER': 0, 'ANY': 0, 'RENT': 1, 'MORTGAGE': 2, 'OWN': 3}
closed_loans_predict['home_ownership'] = closed_loans_predict['home_ownership'].apply(lambda x: house_dict[x])
# purpose
purpose_dict = {'other': 0, 'small_business': 1, 'renewable_energy': 2, 'home_improvement': 3,
'house': 4, 'educational': 5, 'medical': 6, 'moving': 7, 'car': 8,
'major_purchase': 9, 'wedding': 10, 'vacation': 11, 'credit_card': 12,
'debt_consolidation': 13}
closed_loans_predict['purpose'] = closed_loans_predict['purpose'].apply(lambda x: purpose_dict[x])
# states
state_dict = {'AK': 0, 'WA': 1, 'ID': 2, 'MT': 3, 'ND': 4, 'MN': 5,
'OR': 6, 'WY': 7, 'SD': 8, 'WI': 9, 'MI': 10, 'NY': 11,
'VT': 12, 'NH': 13, 'MA': 14, 'CT': 15, 'RI': 16, 'ME': 17,
'CA': 18, 'NV': 19, 'UT': 20, 'CO': 21, 'NE': 22, 'IA': 23,
'KS': 24, 'MO': 25, 'IL': 26, 'IN': 27, 'OH': 28, 'PA': 29,
'NJ': 30, 'KY': 31, 'WV': 32, 'VA': 33, 'DC': 34, 'MD': 35,
'DE': 36, 'AZ': 37, 'NM': 38, 'OK': 39, 'AR': 40, 'TN': 41,
'NC': 42, 'TX': 43, 'LA': 44, 'MS': 45, 'AL': 46, 'GA': 47,
'SC': 48, 'FL': 49, 'HI': 50}
closed_loans_predict['addr_state'] = closed_loans_predict['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
closed_loans_predict = closed_loans_predict.fillna(0)
closed_loans_predict = closed_loans_predict.replace([np.inf, -np.inf], 0)
# split data in train (70%) and test set (30%)
X_train, X_test, y_train, y_test = train_test_split(closed_loans_predict.drop(['loan_status', 'roi'], axis=1),
closed_loans_predict['loan_status'],
test_size=0.3, random_state=123)
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
print('percentage charged off in all loans:',
round(sum(loans['loan_status']=='Charged Off')/len(loans['loan_status'])*100))
print('percentage charged off in closed loans:',
round(sum(closed_loans['loan_status']=='Charged Off') / len(closed_loans['loan_status']) * 100))
closed_loans['grade'] = closed_loans['grade'].astype('category', ordered=True)
sns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')
sns.distplot(closed_loans['loan_amnt'], kde=False, bins=50)
plt.show()
sns.countplot(closed_loans['term'], color='turquoise')
plt.show()
sns.countplot(closed_loans['purpose'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
ax = sns.distplot(closed_loans['annual_inc'], bins=100, kde=False)
plt.xlim([0,200])
ax.set(xlabel='annual income (x 1000)')
plt.show()
grade_status = closed_loans.reset_index().groupby(['grade', 'loan_status'])['id'].count()
risk_grades = dict.fromkeys(closed_loans['grade'].unique())
for g in risk_grades.keys():
risk_grades[g] = grade_status.loc[(g, 'Charged Off')] / (grade_status.loc[(g, 'Charged Off')] + grade_status.loc[(g, 'Fully Paid')])
risk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans'])
sns.stripplot(data=risk_grades, color='darkgray', size=15)
plt.show()
sns.distplot(closed_loans[closed_loans['loan_status']=='Charged Off']['int_rate'])
sns.distplot(closed_loans[closed_loans['loan_status']=='Fully Paid']['int_rate'])
plt.show()
purpose_paid = closed_loans.reset_index().groupby(['purpose', 'loan_status'])['id'].count()
sns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='loan_status')
plt.xticks(rotation=90)
plt.show()
sns.boxplot(data=closed_loans, x='loan_status', y='dti')
plt.show()
print(ttest_ind(closed_loans[closed_loans['loan_status']=='Fully Paid']['dti'],
closed_loans[closed_loans['loan_status']=='Charged Off']['dti']))
print((closed_loans[closed_loans['loan_status']=='Fully Paid']['dti']).mean())
print((closed_loans[closed_loans['loan_status']=='Charged Off']['dti']).mean())
home_paid = closed_loans.reset_index().groupby(['home_ownership', 'loan_status'])['id'].count()
sns.barplot(data=pd.DataFrame(home_paid).reset_index(), x='home_ownership', y='id', hue='loan_status')
plt.xticks(rotation=90)
plt.show()
print(home_paid)
print('mortgage:', home_paid['MORTGAGE'][0] / (home_paid['MORTGAGE'][0] + home_paid['MORTGAGE'][1]))
print('own:', home_paid['OWN'][0] / (home_paid['OWN'][0] + home_paid['OWN'][1]))
print('rent:', home_paid['RENT'][0] / (home_paid['RENT'][0] + home_paid['RENT'][1]))
roi = closed_loans.groupby('grade')['roi'].mean()
print(roi)
print(closed_loans['roi'].mean())
sns.barplot(data=roi.reset_index(), x='grade', y='roi', color='gray')
plt.show()
roi = closed_loans.groupby(['grade', 'loan_status'])['roi'].mean()
sns.barplot(data=roi.reset_index(), x='roi', y='grade', hue='loan_status', orient='h')
plt.show()
sns.countplot(data=closed_loans, x='grade', hue='loan_status')
plt.show()
# parameter tuning Logistic Regression
dict_Cs = {'C': [0.001, 0.1, 1, 10, 100]}
clf = GridSearchCV(LogisticRegression(penalty='l1'), dict_Cs, 'f1_weighted', cv=10)
clf.fit(X_train_scaled, y_train)
print(clf.best_params_)
print(clf.best_score_)
# Logistic Regression only grade
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled.loc[:,['grade']], y_train)
prediction = clf.predict(X_test_scaled.loc[:,['grade']])
# F1-score
print('f1_score:', f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled.loc[:,['grade']])
fpr1, tpr1, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc1 = round(auc(fpr1, tpr1), 2)
print('auc:', auc1)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Logistic Regression all features
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
prediction = clf.predict(X_test_scaled)
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled)
fpr2, tpr2, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc2 = round(auc(fpr2, tpr2), 2)
print('auc:', auc2)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Logistic Regression top-5 features selected with Select-K-Best
new_X = (SelectKBest(mutual_info_classif, k=5)
.fit_transform(X_train_scaled, y_train))
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(new_X, y_train)
prediction = clf.predict(X_test_scaled.loc[:, ['term', 'int_rate', 'installment', 'grade', 'sub_grade']])
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test_scaled.loc[:, ['term', 'int_rate', 'installment', 'grade', 'sub_grade']])
fpr3, tpr3, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc3 = round(auc(fpr3, tpr3), 2)
print('auc:', auc3)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
# Random Forest only grade
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train.loc[:,['grade']], y_train)
prediction = clf.predict(X_test.loc[:,['grade']])
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test.loc[:,['grade']])
fpr4, tpr4, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc4 = round(auc(fpr4, tpr4), 2)
print('auc:', auc4)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
print(confusion_matrix)
confusion_matrix.plot()
# Random Forest all features
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
# F1-score
print(f1_score(y_test, prediction, average='weighted'))
# AUC
y_score = clf.predict_proba(X_test)
fpr5, tpr5, thresholds = roc_curve(np.array(y_test), y_score[:,0], pos_label='Charged Off')
auc5 = round(auc(fpr5, tpr5), 2)
print('auc:', auc5)
# Confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test), prediction)
print(confusion_matrix)
confusion_matrix.plot()
# ROC-plot with AUC scores.
plt.plot(fpr1, tpr1, label='Logreg grade (auc = %0.2f)' % auc1, linewidth=4)
plt.plot(fpr2, tpr2, label='Logreg all (auc = %0.2f)' % auc2, linewidth=4)
plt.plot(fpr3, tpr3, label='Logreg top-5 (auc = %0.2f)' % auc3, linewidth=4)
plt.plot(fpr4, tpr4, label='RF grade (auc = %0.2f)' % auc4, linewidth=4)
plt.plot(fpr5, tpr5, label='RF all (auc = %0.2f)' % auc5, linewidth=4)
plt.legend(loc="lower right")
plt.show()
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
coefs = clf.coef_
# find index of top 5 highest coefficients, aka most used features for prediction
positions = abs(coefs[0]).argsort()[-5:][::-1]
features = list(X_train_scaled.columns[positions])
print(features)
print(coefs[0][positions])
print(clf.classes_)
# use statsmodels logistic regression to get p-values for the top-5 most used features
logit = sm.Logit(y_train == 'Charged Off', np.array(X_train_scaled.loc[:, features]))
result = logit.fit()
print(result.summary())
# split data in train (70%) and test set (30%) stratify by loan_status
X_train, X_test, y_train, y_test = train_test_split(closed_loans_predict.drop(['grade', 'sub_grade', 'int_rate', 'roi', 'loan_status']
, axis=1),
closed_loans['grade'], test_size=0.3,
random_state=123, stratify=closed_loans['loan_status'])
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
# binarize the labels for multiclass onevsall prediction
lb = LabelBinarizer()
grades = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
lb.fit(grades)
y_train_2 = lb.transform(y_train)
# Logistic Regression predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(LogisticRegression(penalty='l1'))
predict_y = clf.fit(X_train_scaled, y_train_2).predict(X_test_scaled)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# Random Forest predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100))
predict_y = clf.fit(X_train, y_train_2).predict(X_test)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# important features
features = []
for i,j in enumerate(grades):
print('\n',j)
feat_imp = clf.estimators_[i].feature_importances_
positions = abs(feat_imp).argsort()[-5:][::-1]
features.extend(list(X_train.columns[positions]))
print(X_train.columns[positions])
print(feat_imp[positions])
print(pd.Series(features).value_counts())
# Excluding loans with grade A
# split data in train (70%) and test set (30%) stratify by loan_status
no_A_loans = closed_loans_predict[closed_loans['grade']!='A']
X_train, X_test, y_train, y_test = train_test_split(no_A_loans.drop(['grade', 'sub_grade', 'int_rate', 'roi', 'loan_status']
, axis=1),
closed_loans[closed_loans['grade']!='A']['grade'], test_size=0.3,
random_state=123, stratify=closed_loans[closed_loans['grade']!='A']['loan_status'])
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
# scale test set with scaling used in train set
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
# binarize the labels for multiclass onevsall prediction
lb = LabelBinarizer()
grades = ['B', 'C', 'D', 'E', 'F', 'G']
lb.fit(grades)
y_train_2 = lb.transform(y_train)
# Excluding loans with grade A
# Random Forest predicting grade from the other features (excluding interest rate and subgrade)
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100))
predict_y = clf.fit(X_train, y_train_2).predict(X_test)
predict_y = lb.inverse_transform(predict_y)
# confusion matrix
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# important features
features = []
for i,j in enumerate(grades):
print('\n',j)
feat_imp = clf.estimators_[i].feature_importances_
positions = abs(feat_imp).argsort()[-5:][::-1]
features.extend(list(X_train.columns[positions]))
print(X_train.columns[positions])
print(feat_imp[positions])
print(pd.Series(features).value_counts())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step one
Step2: Let us assume that qubits qr[0] and qr[1] belong to Alice and Bob respectively.
Step3: Qubits qr[0] and qr[1] are now entangled.
Step two
Step4: Suppose Alice and Bob want to generate a secret key using $N$ singlet states prepared by Charlie.
Step5: The participants must choose the directions onto which they will measure the spin projections of their qubits.
Step6: Now we combine Charlie's device and Alice's and Bob's detectors into one circuit (singlet + Alice's measurement + Bob's measurement).
Step7: Let us look at the name of one of the prepared circuits.
Step8: It tells us about the number of the singlet state received from Charlie, and the measurements applied by Alice and Bob.
Step three
Step9: Look at the output of the execution of the first circuit.
Step10: It consists of four digits.
Step11: Using these patterns, we can find particular results in the outputs and fill the strings $a$ and $a^{'}$ with the results of Alice's and Bob's measurements.
Step12: Step four
Step13: The keys $k$ and $k'$ are now stored in the aliceKey and bobKey lists, respectively.
Step14: Note that since the strings $k$ and $k^{'}$ are secret, Alice and Bob have no information about mismatches in the bits of their keys.
Step five
Step15: Output
Step16: Finally, Alice and Bob have the secret keys $k$ and $k^{'}$ (aliceKey and bobKey)!
Step17: Like Alice and Bob, Eve must choose the directions onto which she will measure the spin projections of the qubits.
Step18: Like we did before, now we create the circuits with singlet states and detectors of Eve, Alice and Bob.
Step19: Now we execute all the prepared circuits on the simulator.
Step20: Let us look at the name of the first circuit and the output after it is executed.
Step21: We can see onto which directions Eve, Alice and Bob measured the spin projections and the results obtained.
Step22: Now Eve, Alice and Bob record the results of their measurements.
Step23: As before, Alice, Bob and Eve create the secret keys using the results obtained after measuring the observables $W \otimes W$ and $Z \otimes Z$.
Step24: To find out the number of mismatching bits in the keys of Alice, Bob and Eve we compare the lists aliceKey, bobKey and eveKeys.
Step25: It is also good to know what percentage of the keys is known to Eve.
Step26: Using the chsh_corr function defined above we calculate the CHSH correlation value.
Step27: And now we print all the results.
|
<ASSISTANT_TASK:>
Python Code:
# useful additional packages
import numpy as np
import random
# regular expressions module
import re
# importing the QISKit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
# import basic plot tools
from qiskit.tools.visualization import circuit_drawer, plot_histogram
# Creating registers
qr = QuantumRegister(2, name="qr")
cr = ClassicalRegister(4, name="cr")
singlet = QuantumCircuit(qr, cr, name='singlet')
singlet.x(qr[0])
singlet.x(qr[1])
singlet.h(qr[0])
singlet.cx(qr[0],qr[1])
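Before wiring this circuit into measurements, we can check the gate sequence on paper. The following numpy-only sketch (independent of Qiskit; the basis ordering index = 2*q0 + q1 with q0 = Alice's qubit is an assumption of this sketch) applies the same X, X, H, CX sequence to |00> and verifies it yields the singlet (|01> - |10>)/sqrt(2):

```python
import numpy as np

# Single-qubit gates and a CX with control q0, target q1.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])  # swaps |10> and |11>

state = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
state = np.kron(X, I) @ state            # X on q0
state = np.kron(I, X) @ state            # X on q1
state = np.kron(H, I) @ state            # H on q0
state = CX @ state                       # CX(q0 -> q1)

singlet_vec = np.array([0, 1, -1, 0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)
print(state)
```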
## Alice's measurement circuits
# measure the spin projection of Alice's qubit onto the a_1 direction (X basis)
measureA1 = QuantumCircuit(qr, cr, name='measureA1')
measureA1.h(qr[0])
measureA1.measure(qr[0],cr[0])
# measure the spin projection of Alice's qubit onto the a_2 direction (W basis)
measureA2 = QuantumCircuit(qr, cr, name='measureA2')
measureA2.s(qr[0])
measureA2.h(qr[0])
measureA2.t(qr[0])
measureA2.h(qr[0])
measureA2.measure(qr[0],cr[0])
# measure the spin projection of Alice's qubit onto the a_3 direction (standard Z basis)
measureA3 = QuantumCircuit(qr, cr, name='measureA3')
measureA3.measure(qr[0],cr[0])
## Bob's measurement circuits
# measure the spin projection of Bob's qubit onto the b_1 direction (W basis)
measureB1 = QuantumCircuit(qr, cr, name='measureB1')
measureB1.s(qr[1])
measureB1.h(qr[1])
measureB1.t(qr[1])
measureB1.h(qr[1])
measureB1.measure(qr[1],cr[1])
# measure the spin projection of Bob's qubit onto the b_2 direction (standard Z basis)
measureB2 = QuantumCircuit(qr, cr, name='measureB2')
measureB2.measure(qr[1],cr[1])
# measure the spin projection of Bob's qubit onto the b_3 direction (V basis)
measureB3 = QuantumCircuit(qr, cr, name='measureB3')
measureB3.s(qr[1])
measureB3.h(qr[1])
measureB3.tdg(qr[1])
measureB3.h(qr[1])
measureB3.measure(qr[1],cr[1])
## Lists of measurement circuits
aliceMeasurements = [measureA1, measureA2, measureA3]
bobMeasurements = [measureB1, measureB2, measureB3]
# Define the number of singlets N
numberOfSinglets = 500
aliceMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b of Alice
bobMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b' of Bob
circuits = [] # the list in which the created circuits will be stored
for i in range(numberOfSinglets):
# create the name of the i-th circuit depending on Alice's and Bob's measurement choices
circuitName = str(i) + ':A' + str(aliceMeasurementChoices[i]) + '_B' + str(bobMeasurementChoices[i])
# create the joint measurement circuit
# add Alice's and Bob's measurement circuits to the singlet state circuit
# singlet state circuit # measurement circuit of Alice # measurement circuit of Bob
jointCircuit = singlet + aliceMeasurements[aliceMeasurementChoices[i]-1] + bobMeasurements[bobMeasurementChoices[i]-1]
# attach the descriptive name and add the created circuit to the circuits list
jointCircuit.name = circuitName
circuits.append(jointCircuit)
print(circuits[0].name)
backend=Aer.get_backend('qasm_simulator')
result = execute(circuits, backend=backend, shots=1).result()
print(result)
result.get_counts(circuits[0])
plot_histogram(result.get_counts(circuits[0]))
abPatterns = [
re.compile('..00$'), # search for the '..00' output (Alice obtained -1 and Bob obtained -1)
re.compile('..01$'), # search for the '..01' output
re.compile('..10$'), # search for the '..10' output (Alice obtained -1 and Bob obtained 1)
re.compile('..11$') # search for the '..11' output
]
aliceResults = [] # Alice's results (string a)
bobResults = [] # Bob's results (string a')
for i in range(numberOfSinglets):
res = list(result.get_counts(circuits[i]).keys())[0] # extract the key from the dict and transform it to str; execution result of the i-th circuit
if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(-1) # Bob got the result -1
if abPatterns[1].search(res):
aliceResults.append(1)
bobResults.append(-1)
if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(1) # Bob got the result 1
if abPatterns[3].search(res):
aliceResults.append(1)
bobResults.append(1)
aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'
# comparing the stings with measurement choices
for i in range(numberOfSinglets):
# if Alice and Bob have measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
if (aliceMeasurementChoices[i] == 2 and bobMeasurementChoices[i] == 1) or (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 2):
aliceKey.append(aliceResults[i]) # record the i-th result obtained by Alice as a bit of the secret key k
bobKey.append(- bobResults[i]) # record the i-th result obtained by Bob, multiplied by -1, as a bit of the secret key k'
keyLength = len(aliceKey) # length of the secret key
abKeyMismatches = 0 # number of mismatching bits in Alice's and Bob's keys
for j in range(keyLength):
if aliceKey[j] != bobKey[j]:
abKeyMismatches += 1
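The mismatch count computed above is what a real implementation would turn into a quantum bit error rate (QBER). A toy illustration with made-up keys, not the keys from the run above:

```python
# Count positions where the two demo keys disagree, then express the
# disagreement as a rate over the key length.
aliceKey_demo = [1, -1, 1, 1, -1, 1]
bobKey_demo = [1, 1, 1, -1, -1, 1]

mismatches = sum(1 for a, b in zip(aliceKey_demo, bobKey_demo) if a != b)
qber = mismatches / len(aliceKey_demo)
print(mismatches, qber)
```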
# function that calculates CHSH correlation value
def chsh_corr(result):
# lists with the counts of measurement results
# each element represents the number of (-1,-1), (-1,1), (1,-1) and (1,1) results respectively
countA1B1 = [0, 0, 0, 0] # XW observable
countA1B3 = [0, 0, 0, 0] # XV observable
countA3B1 = [0, 0, 0, 0] # ZW observable
countA3B3 = [0, 0, 0, 0] # ZV observable
for i in range(numberOfSinglets):
res = list(result.get_counts(circuits[i]).keys())[0]
# if the spins of the qubits of the i-th singlet were projected onto the a_1/b_1 directions
if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 1):
for j in range(4):
if abPatterns[j].search(res):
countA1B1[j] += 1
if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 3):
for j in range(4):
if abPatterns[j].search(res):
countA1B3[j] += 1
if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 1):
for j in range(4):
if abPatterns[j].search(res):
countA3B1[j] += 1
# if the spins of the qubits of the i-th singlet were projected onto the a_3/b_3 directions
if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 3):
for j in range(4):
if abPatterns[j].search(res):
countA3B3[j] += 1
# number of the results obtained from the measurements in a particular basis
total11 = sum(countA1B1)
total13 = sum(countA1B3)
total31 = sum(countA3B1)
total33 = sum(countA3B3)
# expectation values of XW, XV, ZW and ZV observables (2)
expect11 = (countA1B1[0] - countA1B1[1] - countA1B1[2] + countA1B1[3])/total11 # -1/sqrt(2)
expect13 = (countA1B3[0] - countA1B3[1] - countA1B3[2] + countA1B3[3])/total13 # 1/sqrt(2)
expect31 = (countA3B1[0] - countA3B1[1] - countA3B1[2] + countA3B1[3])/total31 # -1/sqrt(2)
expect33 = (countA3B3[0] - countA3B3[1] - countA3B3[2] + countA3B3[3])/total33 # -1/sqrt(2)
corr = expect11 - expect13 + expect31 + expect33 # calculate the CHSH correlation value (3)
return corr
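As a sanity check of the formula in chsh_corr, we can plug in the ideal singlet expectation values quoted in the comments above; the result is -2*sqrt(2), which is about -2.83 and violates the classical CHSH bound |C| <= 2:

```python
import math

# Ideal quantum expectation values for the singlet, as noted in the
# comments of chsh_corr above.
expect11 = -1 / math.sqrt(2)  # <XW>
expect13 = 1 / math.sqrt(2)   # <XV>
expect31 = -1 / math.sqrt(2)  # <ZW>
expect33 = -1 / math.sqrt(2)  # <ZV>

# Same combination as in chsh_corr: C = <XW> - <XV> + <ZW> + <ZV>.
corr_ideal = expect11 - expect13 + expect31 + expect33
print(corr_ideal)
```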
corr = chsh_corr(result) # CHSH correlation value
# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)))
# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')
# measurement of the spin projection of Alice's qubit onto the a_2 direction (W basis)
measureEA2 = QuantumCircuit(qr, cr, name='measureEA2')
measureEA2.s(qr[0])
measureEA2.h(qr[0])
measureEA2.t(qr[0])
measureEA2.h(qr[0])
measureEA2.measure(qr[0],cr[2])
# measurement of the spin projection of Allice's qubit onto the a_3 direction (standard Z basis)
measureEA3 = QuantumCircuit(qr, cr, name='measureEA3')
measureEA3.measure(qr[0],cr[2])
# measurement of the spin projection of Bob's qubit onto the b_1 direction (W basis)
measureEB1 = QuantumCircuit(qr, cr, name='measureEB1')
measureEB1.s(qr[1])
measureEB1.h(qr[1])
measureEB1.t(qr[1])
measureEB1.h(qr[1])
measureEB1.measure(qr[1],cr[3])
# measurement of the spin projection of Bob's qubit onto the b_2 direction (standard Z measurement)
measureEB2 = QuantumCircuit(qr, cr, name='measureEB2')
measureEB2.measure(qr[1],cr[3])
# lists of measurement circuits
eveMeasurements = [measureEA2, measureEA3, measureEB1, measureEB2]
# list of Eve's measurement choices
# the first and the second elements of each row represent the measurement of Alice's and Bob's qubits by Eve respectively
eveMeasurementChoices = []
for j in range(numberOfSinglets):
if random.uniform(0, 1) <= 0.5: # in 50% of cases perform the WW measurement
eveMeasurementChoices.append([0, 2])
else: # in 50% of cases perform the ZZ measurement
eveMeasurementChoices.append([1, 3])
circuits = [] # the list in which the created circuits will be stored
for j in range(numberOfSinglets):
# create the name of the j-th circuit depending on Alice's, Bob's and Eve's choices of measurement
circuitName = str(j) + ':A' + str(aliceMeasurementChoices[j]) + '_B' + str(bobMeasurementChoices[j] + 2) + '_E' + str(eveMeasurementChoices[j][0]) + str(eveMeasurementChoices[j][1] - 1)
# create the joint measurement circuit
# add Eve's, Alice's and Bob's measurement circuits to the singlet state circuit
# singlet state circuit # Eve's measurement circuit of Alice's qubit # Eve's measurement circuit of Bob's qubit # measurement circuit of Alice # measurement circuit of Bob
# (Eve's measurement choices are already 0-based indices into eveMeasurements, so no -1 offset is applied)
jointCircuit = singlet + eveMeasurements[eveMeasurementChoices[j][0]] + eveMeasurements[eveMeasurementChoices[j][1]] + aliceMeasurements[aliceMeasurementChoices[j]-1] + bobMeasurements[bobMeasurementChoices[j]-1]
# attach the descriptive name and add the created circuit to the circuits list
jointCircuit.name = circuitName
circuits.append(jointCircuit)
backend=Aer.get_backend('qasm_simulator')
result = execute(circuits, backend=backend, shots=1).result()
print(result)
print(str(circuits[0].name) + '\t' + str(result.get_counts(circuits[0])))
plot_histogram(result.get_counts(circuits[0]))
ePatterns = [
re.compile('00..$'), # search for the '00..' result (Eve obtained the results -1 and -1 for Alice's and Bob's qubits)
re.compile('01..$'), # search for the '01..' result (Eve obtained the results 1 and -1 for Alice's and Bob's qubits)
re.compile('10..$'),
re.compile('11..$')
]
aliceResults = [] # Alice's results (string a)
bobResults = [] # Bob's results (string a')
# list of Eve's measurement results
# the elements in the 1-st column are the results obtaned from the measurements of Alice's qubits
# the elements in the 2-nd column are the results obtaned from the measurements of Bob's qubits
eveResults = []
# recording the measurement results
for j in range(numberOfSinglets):
res = list(result.get_counts(circuits[j]).keys())[0] # extract a key from the dict and transform it to str
# Alice and Bob
if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(-1) # Bob got the result -1
if abPatterns[1].search(res):
aliceResults.append(1)
bobResults.append(-1)
if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(1) # Bob got the result 1
if abPatterns[3].search(res):
aliceResults.append(1)
bobResults.append(1)
# Eve
if ePatterns[0].search(res): # check if the key is '00..'
eveResults.append([-1, -1]) # results of the measurement of Alice's and Bob's qubits are -1,-1
if ePatterns[1].search(res):
eveResults.append([1, -1])
if ePatterns[2].search(res):
eveResults.append([-1, 1])
if ePatterns[3].search(res):
eveResults.append([1, 1])
aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'
eveKeys = [] # Eve's keys; the 1-st column is her copy of Alice's key, and the 2-nd is her copy of Bob's key
# comparing the strings with measurement choices (b and b')
for j in range(numberOfSinglets):
# if Alice and Bob measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
if (aliceMeasurementChoices[j] == 2 and bobMeasurementChoices[j] == 1) or (aliceMeasurementChoices[j] == 3 and bobMeasurementChoices[j] == 2):
aliceKey.append(aliceResults[j]) # record the j-th result obtained by Alice as a bit of the secret key k
bobKey.append(-bobResults[j]) # record the j-th result obtained by Bob, multiplied by -1, as a bit of the secret key k'
eveKeys.append([eveResults[j][0], -eveResults[j][1]]) # record the j-th bits of Eve's keys
keyLength = len(aliceKey) # length of the secret key
abKeyMismatches = 0 # number of mismatching bits in the keys of Alice and Bob
eaKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Alice
ebKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Bob
for j in range(keyLength):
if aliceKey[j] != bobKey[j]:
abKeyMismatches += 1
if eveKeys[j][0] != aliceKey[j]:
eaKeyMismatches += 1
if eveKeys[j][1] != bobKey[j]:
ebKeyMismatches += 1
eaKnowledge = (keyLength - eaKeyMismatches)/keyLength # Eve's knowledge of Alice's key
ebKnowledge = (keyLength - ebKeyMismatches)/keyLength # Eve's knowledge of Bob's key
corr = chsh_corr(result)
# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)) + '\n')
# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')
print('Eve\'s knowledge of Alice\'s key: ' + str(round(eaKnowledge * 100, 2)) + ' %')
print('Eve\'s knowledge of Bob\'s key: ' + str(round(ebKnowledge * 100, 2)) + ' %')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare Bokeh
Step2: Variables
Step5: Functions
Step6: Execution
|
<ASSISTANT_TASK:>
Python Code:
#Import all required libraries
import numpy as np
import pandas as pd
from bokeh.plotting import figure, show, output_file
from bokeh.models import HoverTool, ColumnDataSource
from bokeh.io import output_notebook
import glob
output_notebook()
#Set the variables.
FOLDER = "../results/"
FILE_WILDCARD = "*_results.csv"
COLUMN_CLASSIFICATION = "Assignment"
COLUMN_LOAD = "Mapped # Reads"
METHOD = "Genome Detective"
#Make sure to adjust these values to a size that fits your data.
HEATMAP_TITLE = "Viruses classified by %s" % METHOD
HEATMAP_WIDTH = 700
HEATMAP_HEIGHT = 500
HEATMAP_TITLE_FONT_SIZE = "16pt"
HEATMAP_AXIS_FONT_SIZE = "12pt"
COLOUR = ["#6b2d18"] #Selected from coffee beans: http://s.eatthis-cdn.com/media/images/ext/851818315/coffee-beans.jpg
#This is where you can save the resulting html file.
#Write the directory path + file name you want to use,
# and the title you want the file to have.
#This title will appear in the top bar of your browser window.
OUTPUT_FILE = "../results/test_heatmap.html"
OUTPUT_TITLE = "Heatmap of viruses classified by %s" % METHOD
#Define the required functions
def create_concatenated_dataframe(file_wildcard, folder):
"""
Input: a filename with wildcard, and a folder name
Output: one concatenated dataframe of all the input files
"""
#Step 1: create a list of the files using glob
file_list = glob.glob("%s%s" % (folder, file_wildcard))
file_list = sorted(file_list)
file_suffix = file_wildcard.lstrip("*")
#Step 2: open the files as dataframe, remove "Contigs" column and add sample IDs
df_list = []
for results_file in file_list:
results_df = pd.read_csv(results_file)
results_df = results_df.drop("Contigs", axis = 1) #remove unnecessary (and long!) column
sample_id = results_file[:results_file.index(file_suffix)].split('/')[-1]
results_df["sample"] = sample_id
df_list.append(results_df)
#Step 3: concatenate the dataframes
super_df = pd.concat(df_list, ignore_index=True)
return(super_df)
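The sample-ID extraction in step 2 above can be checked in isolation. The path below is hypothetical, purely for illustration:

```python
# Mirror the extraction logic from step 2: cut the filename before the
# suffix, then keep only the part after the last slash.
results_file = "../results/sampleX_results.csv"  # hypothetical path
file_suffix = "_results.csv"  # what "*_results.csv".lstrip("*") yields
sample_id = results_file[:results_file.index(file_suffix)].split('/')[-1]
print(sample_id)
```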
def create_and_show_heatmap(dataframe, column_classification, column_load):
"""
Input: a dataframe, and the names of the 'classification' and 'load' columns
Output: a heatmap (shown in the notebook and exported to the html file defined as a variable above)
"""
samples = dataframe["sample"]
assignments = dataframe[column_classification]
loads = dataframe[column_load]
#a fix for numbers with tildes, e.g. '~80'
loads_new = []
for value in loads:
if isinstance(value, str):
value = value.strip('~')
loads_new.append(int(value))
colors = len(loads) * COLOUR #multiply to make an equally long list
max_load = max(loads_new)
alphas = [ min( x / float(max_load), 0.9) + 0.1 for x in loads_new ]
source = ColumnDataSource(
data = dict(samples=samples, assignments=assignments, colors=colors, loads=loads, alphas=alphas)
)
TOOLS = "hover, save, pan, box_zoom, wheel_zoom, reset"
p = figure(title = HEATMAP_TITLE,
x_range = list(sorted(set(samples))),
y_range = list(reversed(sorted(set(assignments)))), #reverse to order 'from top to bottom'
x_axis_location = "above",
toolbar_location="right",
tools = TOOLS)
p.plot_width = HEATMAP_WIDTH
p.plot_height = HEATMAP_HEIGHT
p.grid.grid_line_color = None
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = HEATMAP_AXIS_FONT_SIZE
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = np.pi/4
p.title.text_color = COLOUR[0]
p.title.text_font_size = HEATMAP_TITLE_FONT_SIZE
p.rect("samples", "assignments", 1, 1, source=source,
color="colors", alpha="alphas", line_color=None)
p.select_one(HoverTool).tooltips = [
('Sample', "@samples"),
('Taxon' , "@assignments"),
('Number of reads', "@loads"),
]
output_file(OUTPUT_FILE, title=OUTPUT_TITLE)
show(p)
df = create_concatenated_dataframe(file_wildcard = FILE_WILDCARD, folder = FOLDER)
create_and_show_heatmap(dataframe=df,
column_classification=COLUMN_CLASSIFICATION,
column_load=COLUMN_LOAD)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normality of the residuals
Step2: Omni test
Step3: Influence tests
Step4: Explore other options by typing dir(influence_test)
Step5: Other plotting options can be found on the Graphics page.
Step6: Heteroskedasticity tests
Step7: Goldfeld-Quandt test
Step8: Linearity
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lzip
import statsmodels
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.stats.api as sms
import matplotlib.pyplot as plt
# Load data
url = 'http://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv'
dat = pd.read_csv(url)
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit()
# Inspect the results
print(results.summary())
name = ['Jarque-Bera', 'Chi^2 two-tail prob.', 'Skew', 'Kurtosis']
test = sms.jarque_bera(results.resid)
lzip(name, test)
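The Jarque-Bera statistic combines sample skewness S and kurtosis K as JB = n/6 * (S^2 + (K - 3)^2 / 4). A hand-rolled sketch on a small made-up sample (not the regression residuals above):

```python
import numpy as np

# Compute skewness, kurtosis and the JB statistic from central moments.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # symmetric toy sample -> skew 0
n = len(x)
m = x.mean()
s2 = ((x - m) ** 2).mean()                 # variance (biased)
skew = ((x - m) ** 3).mean() / s2 ** 1.5
kurt = ((x - m) ** 4).mean() / s2 ** 2
jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
print(skew, kurt, jb)
```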
name = ['Chi^2', 'Two-tail probability']
test = sms.omni_normtest(results.resid)
lzip(name, test)
from statsmodels.stats.outliers_influence import OLSInfluence
test_class = OLSInfluence(results)
test_class.dfbetas[:5,:]
from statsmodels.graphics.regressionplots import plot_leverage_resid2
fig, ax = plt.subplots(figsize=(8,6))
fig = plot_leverage_resid2(results, ax = ax)
np.linalg.cond(results.model.exog)
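To build intuition for the condition number, compare a well-conditioned matrix with a nearly collinear one (toy matrices, unrelated to the design matrix above):

```python
import numpy as np

# An identity matrix is perfectly conditioned (cond = 1); a matrix with
# almost collinear columns has a huge condition number.
well = np.eye(3)
near_singular = np.array([[1.0, 1.0],
                          [1.0, 1.0001]])
print(np.linalg.cond(well))
print(np.linalg.cond(near_singular))
```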
name = ['Lagrange multiplier statistic', 'p-value',
'f-value', 'f p-value']
test = sms.het_breushpagan(results.resid, results.model.exog)
lzip(name, test)
name = ['F statistic', 'p-value']
test = sms.het_goldfeldquandt(results.resid, results.model.exog)
lzip(name, test)
name = ['t value', 'p value']
test = sms.linear_harvey_collier(results)
lzip(name, test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Numpy exercises
Step2: If you want to apply a transformation over each element of this list you have to build a loop and operate over each element.
Step3: The output should be
Step4: Note
Step5: Exercise 2
Step6: The output should be
Step7: However, numpy provides us with built-in functions that guarantee that any errors and exceptions are handled properly
Step8: Whether you use the basic operators or the built-in functions will depend on the situation.
Step9: Another key difference with Matlab is that the "*" operator won't give us matrix multiplication. It will instead compute an element-wise multiplication. Again, numpy has a built-in function for this purpose that will guarantee proper handling of errors
Step10: Numpy also gives us functions to perform matrix multiplications and dot products.
Step11: Note that the np.dot() function is very powerful and can perform a number of different operations depending on the nature of the input arguments. For example, if we give it a pair of matrices of adequate dimensions, it will perform the same operation as np.matmul().
Step12: We can also compute typical numerical operations, which will be applied element-wise
Step13: Element-wise division between matrices is performed using the ''/'' operator or the divide() built-in function
Step14: We can use the "**" operator or the power() built-in function to raise elements from a matrix to a given power, or to raise elements of one matrix to positionally-corresponding powers in another matrix
Step15: Finally, we can transpose a matrix by using the numpy.transpose() function, the ndarray.transpose() method or its abbreviated version, ndarray.T. We usually use the abbreviated version, but the other forms have their place in certain contexts. Check their documentations to see what options they offer
Step16: Exercise 3
Step17: 2.2. N-D numpy arrays
Step18: The answer is no. We can easily check this by examining their shapes and dimensions
Step19: Effectively, array1 is a 1D array, whereas array2 is a 2D array. There are some methods that will let you modify the dimensions of an array. To go from a 2-D to 1-D array we have the methods flatten(), ravel() and reshape(). Check the result of the following code (you can use the help function to check the functionalities of each method).
Step20: Note
Step21: We might also need to remove empty or unused dimensions. For this we have np.squeeze()
Step22: Exercise 4
Step23: The output should be
Step24: Other numpy methods where you can specify the axis along with a certain operation should be carried out are
Step25: The output should be
Step26: The output should be
Step27: Exercise 7
Step28: The output should be
Step29: Exercise 8
|
<ASSISTANT_TASK:>
Python Code:
# Import numpy library
import numpy as np
x = [5, 4, 3, 4]
print(type(x[0]))
# Create a list of floats containing the same elements as in x
x_f = []
for element in x:
# <FILL IN>
print(x_f)
print(type(x_f[0]))
# Numpy arrays can be created from numeric lists or using different numpy methods
y = np.arange(8) + 1
x = np.array(x_f)
z = np.array([[1, 2], [2, 1]])
# The arange() function generates vectors of equally spaced numbers. We can
# specify start and stop positions as well as the step length (the steps don't
# need to be integers!):
print('A vector that goes from 2 to 8 in steps of 2: ', np.arange(2, 9, 2))
# Numpy also has a linspace() function that works exactly like its Matlab
# counterpart:
print('\nA vector of length 5 that spans from 0 to 1 in constant increments:\n',
np.linspace(0, 1, 5))
# Check the different data types involved
print('\nThe type of variable x_f is ', type(x_f))
print('The type of variable x is ', type(x))
# Print the shapes of the numpy arrays
print('\nThe variable x has shape ', x.shape)
print('The variable y has shape ', y.shape)
print('The variable z has shape ', z.shape)
my_array = np.arange(9).reshape((3, 3))
print(my_array)
print('the type is: ', type(my_array))
# Convert my_array to list
my_array_list = my_array.tolist()
print('\n', my_array_list)
print('the type is: ',type(my_array_list))
# 1. Define a new 3x2 array named my_array with [1, 2, 3] in the first row and
# [4, 5, 6] in the second. Check the dimension of the array.
# my_array = <FILL IN>
print(my_array)
print('Its shape is: ', np.shape(my_array))
#2. Define a new 3x4 array named my_zeros with all its elements to zero
# my_zeros = <FILL IN>
print('A 3x4 vector of zeros:')
print(my_zeros)
#3. Define a new 4x2 array named my_ones with all its elements to one
# my_ones = <FILL IN>
print('A 4x2 vector of ones:')
print(my_ones)
#4. Modify the dimensions of my_ones to a 2x4 array using command np.reshape()
# my_ones2 = <FILL IN>
print('A 2x4 vector of ones:')
print(my_ones2)
#5. Define a new 4x4 identity array named my_eye
# my_eye = <FILL IN>
print('A 4x4 identity matrix:')
print(my_eye)
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3))
result = x1 + x2
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nAddition of x1 and x2 using the + operator:\n', result)
# We can add two arrays:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3))
result = np.add(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nAddition of x1 and x2 using built-in functions:\n', result)
# Or compute the difference:
result = np.subtract(x1, x2)
print('\nSubtraction of x1 and x2 using built-in functions:\n', result)
# We can add or subtract row and column vectors:
row_vect = np.ones((1, 3))
col_vect = np.ones((3, 1))
result = np.add(x1, row_vect)
print('x1:\n', x1, '\n\nrow_vect:\n', row_vect, '\n\ncol_vect:\n', col_vect)
print('\nAddition of a row vector:\n', result)
result = np.add(x1, col_vect)
print('\nAddition of a column vector:\n', result)
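The additions above work because of numpy broadcasting: shapes are compared from the right, and any dimension of size 1 is stretched to match. A quick shape check with throwaway arrays:

```python
import numpy as np

# A 1x3 row and a 3x1 column both broadcast against a 3x3 array, and
# even against each other, always producing a 3x3 result.
a = np.zeros((3, 3))
row = np.zeros((1, 3))
col = np.zeros((3, 1))

print((a + row).shape)    # the single row is repeated down
print((a + col).shape)    # the single column is repeated across
print((col + row).shape)  # both operands are stretched
```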
# We can perform element-wise multiplication by using the * operator:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = x1 * x2
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nElement-wise multiplication of x1 and x2 using the * operator:\n', result)
# or by using the built-in numpy function:
result = np.multiply(x1, x2)
print('\nElement-wise multiplication of x1 and x2 using built-in functions:\n', result)
# We can perform matrix multiplication:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = np.matmul(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nProduct of x1 and x2:\n', result)
# Or the dot product between vectors:
v1 = np.arange(4)
v2 = np.arange(3, 7)
result = np.dot(v1, v2)
print('\nv1:\n', v1, '\n\nv2:\n', v2)
print('\nDot product of v1 and v2:\n', result)
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = np.matmul(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nProduct of x1 and x2 using np.matmul():\n', result)
result = np.dot(x1, x2)
print('\nProduct of x1 and x2 using np.dot():\n', result, '\n')
# Read the np.dot() documentation for more information:
help(np.dot)
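Unlike broadcasting, matmul is strict about inner dimensions: (2, 3) times (2, 3) is invalid, while (2, 3) times (3, 2) works. A small demonstration with throwaway matrices:

```python
import numpy as np

a = np.ones((2, 3))
b = np.ones((2, 3))

# The inner dimensions (3 and 2) do not match, so matmul raises ValueError.
try:
    np.matmul(a, b)
    ok = False
except ValueError:
    ok = True
print(ok)

# Transposing b turns it into a 3x2 matrix, so the product is defined.
print(np.matmul(a, b.T).shape)
```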
# Examples of element-wise numerical operations:
x1 = np.arange(9).reshape((3, 3)) + 1
print('x1:\n', x1)
print('\nExponentiation of x1:\n', np.exp(x1))
print('\nLogarithm of x1:\n', np.log(x1))
print('\nSquare root of x1:\n', np.sqrt(x1))
# Element-wise division of two matrices:
x1 = np.arange(9).reshape((3, 3)) * 3
x2 = np.ones((3, 3)) * 3
result = np.divide(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nElement-wise division of x1 and x2:\n', result)
# Performing power operations with the ** operator:
x1 = np.arange(9).reshape((3, 3))
result = x1**2
print('x1:\n', x1)
print('\nRaising all elements in x1 to the power of 2 using the ** operator:\n', result)
result = x1**x1
print('\nRaising all elements in x1 to themselves using the ** operator:\n', result)
# Performing power operations with the power() function:
result = np.power(x1, 2)
print('\nRaising all elements in x1 to the power of 2 using the power() function:\n', result)
result = np.power(x1, x1)
print('\nRaising all elements in x1 to themselves using the power() function:\n', result)
# Three different ways of transposing a matrix:
x1 = np.arange(9).reshape((3, 3))
print('x1:\n', x1)
print('\nTranspose of x1 using the numpy function:\n', np.transpose(x1))
print('\nTranspose of x1 using the ndarray method:\n', x1.transpose())
print('\nTranspose of x1 using the abbreviated form:\n', x1.T)
print('\nOddly enough, the three methods produce the same result!')
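Two handy facts about .T worth verifying: transposing twice returns the original array, and .T is a no-op on 1-D arrays (there is no second axis to swap):

```python
import numpy as np

m = np.arange(6).reshape((2, 3))
v = np.arange(3)

# Double transpose restores the original matrix.
print(np.array_equal(m.T.T, m))
# .T leaves a 1-D array unchanged.
print(v.T.shape)
```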
# Complete the following exercises. Print the partial results to visualize them.
# Create a 3x4 array called `y`. It's up to you to decide what it contains.
#y = <FILL IN>
# Create a column vector of length 3 called `x_col`.
#x_col = <FILL IN>
# Multiply the 2-D array `y` by 2
#y_by2 = <FILL IN>
# Multiply each of the columns in `y` by the column vector `x_col`
#z = <FILL IN>
# Obtain the matrix product of the transpose of x_col and y
#x_by_y = <FILL IN>
# Compute the sine of a vector that spans from -5 to 5 in increments of 0.5
#x = <FILL IN>
#x_sin = <FILL IN>
array1 = np.array([1,1,1])
print('array1:\n', array1)
array2 = np.ones((3,1))
print('\narray2:\n', array2)
print('Shape of array1 :',array1.shape)
print('Number of dimensions of array1 :',array1.ndim)
print('Shape of array2 :',array2.shape)
print('Number of dimensions of array2 :',array2.ndim)
x1 = np.arange(9).reshape((3, 3))
print('x1:\n', x1)
print('Its shape is: ', x1.shape)
print('\n Use the method flaten:')
print('x1.flatten(): ', x1.flatten())
print('Its shape is: ', x1.flatten().shape)
print('\n Use the method ravel:')
print('x1.ravel(): ', x1.ravel())
print('Its shape is:', x1.ravel().shape)
print('\n Use the method shape:')
print('x1.reshape(-1): ', x1.reshape(-1))
print('Its shape is: ', x1.reshape(-1).shape)
# Note that here the method shape is used to reorganize the array into a 1-D
# array. A more common use of reshape() is to simply redimension an array from
# shape (i, j) to shape (i', j') satisfying the condition i*j = i'*j'.
# For example:
print('\n A more common use of reshape():')
x1 = np.arange(12).reshape((4, 3))
print('x1:\n', x1)
print('Its shape is: ', x1.shape)
print('\nx1.reshape((2,6)):\n', x1.reshape((2,6)))
print('Its shape is: ', x1.reshape((2,6)).shape)
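reshape() can also infer one dimension for you: pass -1 and numpy works it out from the total number of elements, so only the other dimension needs to be fixed:

```python
import numpy as np

x = np.arange(12)

# 12 elements with 3 rows forces 4 columns; 6 columns forces 2 rows.
print(x.reshape((3, -1)).shape)
print(x.reshape((-1, 6)).shape)
```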
# Let's start with a 1-D array:
array1 = np.array([1,1,1])
print('1D array:\n',array1)
print('Its shape is: ', array1.shape)
# Let's turn it into a column vector (2-D array with dimension 3x1):
array2 = array1[:,np.newaxis]
print('\n2D array:\n',array2)
print('Its shape is: ', array2.shape)
# Let's turn it into a row vector (2-D array with dimension 1x3):
array3 = array1[np.newaxis,:]
print('\n2D array:\n',array3)
print('Its shape is: ', array3.shape)
array1_1D = np.squeeze(array1)
print('1D array:\n',array1_1D)
print('Its shape is: ', array1_1D.shape)
array2_1D = np.squeeze(array2)
print('\n1D array:\n',array2_1D)
print('Its shape is: ', array2_1D.shape)
array3_1D = np.squeeze(array3)
print('\n1D array:\n',array3_1D)
print('Its shape is: ', array3_1D.shape)
# Given the following matrix and vector:
vect = np.arange(3)[:, np.newaxis]
mat = np.arange(9).reshape((3, 3))
# Apply the necessary transformation to vect so that you can perform the matrix
# multiplication np.matmul(vect, mat)
# vect = <FILL IN>
print(np.matmul(vect, mat))
x1 = np.arange(24).reshape((8, 3))
print(x1.shape)
print(np.mean(x1))
print(np.mean(x1,axis=0))
print(np.mean(x1,axis=1))
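As a supplementary sketch (not in the original notebook), an axis-wise mean combines naturally with broadcasting: subtracting the per-column means centers every column of the array in one line.

```python
import numpy as np

x1 = np.arange(24).reshape((8, 3))
col_means = np.mean(x1, axis=0)   # shape (3,): one mean per column
centered = x1 - col_means         # broadcasting subtracts each column's mean
print(np.mean(centered, axis=0))  # each column now averages to zero
```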
# Given the following list of heights:
heights = [1.60, 1.85, 1.68, 1.90, 1.78, 1.58, 1.62, 1.60, 1.70, 1.56]
# 1. Obtain a 2x5 array, called `h_array`, using the methods you learnt above.
# h_array = <FILL IN>
print('h_array: \n',h_array)
print('Its shape is: \n', h_array.shape)
# 2. Use method mean() to get the mean of each column, and the mean of each row.
# Store them in two vectors, named `mean_column` and `mean_row` respectively.
#mean_column = <FILL IN>
#mean_row = <FILL IN>
print('\nMean of each column: \n',mean_column)
print('Its shape is (it must coincide with number of columns):\n', mean_column.shape)
print('\nMean of each row: \n',mean_row)
print('Its shape is (it must coincide with number of rows):\n', mean_row.shape)
# 3. Obtain a 5x2 array by multiplying the mean vectors. You may need to create a
# new axis. The array name should be `new_array`
#new_array = <FILL IN>
print('\nNew array: \n',new_array)
print('New array shape: \n', new_array.shape)
my_array = np.array([[1, -1, 3, 3],[2, 2, 4, 6]])
print('Array 1:')
print(my_array)
print(my_array.shape)
my_array2 = np.ones((2,3))
print('Array 2:')
print(my_array2)
print(my_array2.shape)
# Vertically stack matrix my_array with itself
#ex1_res = <FILL IN>
print('Vertically stack:')
print(ex1_res)
# Horizontally stack matrix my_array and my_array2
#ex2_res = <FILL IN>
print('Horizontally stack:')
print(ex2_res)
# Transpose the vector `my_array`, and then stack a ones vector
# as the first column. Alternatively, you can stack a row, and then transpose.
# Just make sure that the final shape is (4,3). Name it `expanded`:
#ones_v = <FILL_IN>
ones_v = np.ones((4,1))
#expanded = <FILL IN>
print('Expanded array: \n',expanded)
print('Its shape is: \n', expanded.shape)
# Selecting specific elements from a vector:
vect = np.arange(10)+4
new_vect = vect[[1, 3, 7]] # remember that in python arrays start at 0
print('vect:\n', vect)
print('\nSelecting specific elements:\n', new_vect)
# Selecting a range of elements from a vector:
new_vect = vect[3:7]
print('\nSelecting a range of elements:\n', new_vect)
# Selecting a subarray from an array:
array = np.arange(12).reshape((3, 4))
new_array = array[2, 2:4]
print('\nArray:\n', array)
print('\nSelecting a subarray:\n', new_array)
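A supplementary sketch (not part of the original notebook): besides slices, you can index with integer arrays ("fancy indexing") to pick arbitrary rows, or pair row and column index lists to pick individual elements.

```python
import numpy as np

array = np.arange(12).reshape((3, 4))
rows = array[[0, 2]]                          # rows 0 and 2 -> shape (2, 4)
corners = array[[0, 0, 2, 2], [0, 3, 0, 3]]   # paired (row, col) indices
print(rows)
print(corners)
```

Unlike slicing, fancy indexing returns a copy rather than a view of the original array.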
X = np.arange(0,25).reshape((5,5))
print('X:\n',X)
# 1. Keep last row of matrix X
#X_sub1 = <FILL IN>
print('\nX_sub1: \n',X_sub1)
# 2. Keep first column of the three first rows of X
#X_sub2 = <FILL IN>
print('\nX_sub2: \n',X_sub2)
# 3. Keep first two columns of the three first rows of X
#X_sub3 = <FILL IN>
print('\nX_sub3: \n',X_sub3)
# 4. Invert the order of the rows of X
#X_sub4 = <FILL IN>
print('\nX_sub4: \n',X_sub4)
# 5. Keep odd columns (first, third...) of X
#X_sub5 = <FILL IN>
print('\nX_sub5: \n',X_sub5)
x = np.array([-3,-2,-1,0,1,2,3])
# Create a new vector `y` with the elements of x, but replacing by 0 each number whose
# absolute value is 2 or less.
y = np.copy(x) # CAUTION! Doing y = x will create two pointers to the same array.
condition = np.abs(x)<=2
y[np.where(condition)[0]]=0
# Note that np.where() returns a tuple of index arrays, one per dimension.
# For a 1-D input like this the tuple holds a single array, hence the [0].
# Read the np.where() docstring for more info.
print('Before conditioning: \n',x)
print('\nAfter conditioning: \n',y)
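A small supplementary sketch (not in the original notebook): the same conditional replacement can be written without np.where by using the boolean mask directly as an index.

```python
import numpy as np

x = np.array([-3, -2, -1, 0, 1, 2, 3])
y = np.copy(x)           # copy so the original stays untouched
y[np.abs(x) <= 2] = 0    # boolean mask used directly as an index
print(y)
```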
#
x = np.arange(12).reshape((3, 4))
# Write your code here
# <SOL>
# </SOL>
print(x_new)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup the source spaces
Step2: Plot the positions of each source space
Step3: Compare volume source locations to segmentation file in freeview
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alan Leggitt <alan.leggitt@ucsf.edu>
#
# License: BSD (3-clause)
import numpy as np
from scipy.spatial import ConvexHull
from mayavi import mlab
from mne import setup_source_space, setup_volume_source_space
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
subj = 'sample'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'
# setup a cortical surface source space and extract left hemisphere
surf = setup_source_space(subj, subjects_dir=subjects_dir,
add_dist=False, overwrite=True)
lh_surf = surf[0]
# setup a volume source space of the left cerebellum cortex
volume_label = 'Left-Cerebellum-Cortex'
sphere = (0, 0, 0, 120)
lh_cereb = setup_volume_source_space(subj, mri=aseg_fname, sphere=sphere,
volume_label=volume_label,
subjects_dir=subjects_dir)
# extract left cortical surface vertices, triangle faces, and surface normals
x1, y1, z1 = lh_surf['rr'].T
faces = lh_surf['use_tris']
normals = lh_surf['nn']
# normalize for mayavi
normals /= np.sum(normals * normals, axis=1)[:, np.newaxis]
# extract left cerebellum cortex source positions
x2, y2, z2 = lh_cereb[0]['rr'][lh_cereb[0]['inuse'].astype(bool)].T
# open a 3d figure in mayavi
mlab.figure(1, bgcolor=(0, 0, 0))
# plot the left cortical surface
mesh = mlab.pipeline.triangular_mesh_source(x1, y1, z1, faces)
mesh.data.point_data.normals = normals
mlab.pipeline.surface(mesh, color=3 * (0.7,))
# plot the convex hull bounding the left cerebellum
hull = ConvexHull(np.c_[x2, y2, z2])
mlab.triangular_mesh(x2, y2, z2, hull.simplices, color=3 * (0.5,), opacity=0.3)
# plot the left cerebellum sources
mlab.points3d(x2, y2, z2, color=(1, 1, 0), scale_factor=0.001)
# adjust view parameters
mlab.view(173.78, 101.75, 0.30, np.array([-0.03, -0.01, 0.03]))
mlab.roll(85)
# Export source positions to nift file
nii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii'
# Combine the source spaces
src = surf + lh_cereb
src.export_volume(nii_fname, mri_resolution=True)
# Uncomment the following lines to display source positions in freeview.
'''
# display image in freeview
from mne.utils import run_subprocess
mri_fname = subjects_dir + '/sample/mri/brain.mgz'
run_subprocess(['freeview', '-v', mri_fname, '-v',
'%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',
'%s:colormap=jet:colorscale=0,2' % nii_fname, '-slice',
'157 75 105'])
'''
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths
Step2: Compute example data. For reference see
Step3: Get a SourceMorph object for VolSourceEstimate
Step4: Apply morph to VolSourceEstimate
Step5: Convert morphed VolSourceEstimate into NIfTI
Step6: Plot results
|
<ASSISTANT_TASK:>
Python Code:
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import nibabel as nib
import mne
from mne.datasets import sample, fetch_fsaverage
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain
print(__doc__)
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')
fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')
fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',
'brain.mgz')
fetch_fsaverage(subjects_dir) # ensure fsaverage src exists
fname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif'
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")
# To save time
stc.crop(0.09, 0.09)
src_fs = mne.read_source_spaces(fname_src_fsaverage)
morph = mne.compute_source_morph(
inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir,
niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed
src_to=src_fs, verbose=True)
stc_fsaverage = morph.apply(stc)
# Create mri-resolution volume of results
img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')
# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)
# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)
display = plot_glass_brain(t1_fsaverage,
title='subject results to fsaverage',
draw_cross=False,
annotate=True)
# Add functional data as overlay
display.add_overlay(img_fsaverage, alpha=0.75)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiplication and Division
Step2: A challenge for you!
Step3: The results are different due to the order in which Python runs the operations. Anyway, remember the importance of parentheses in determining 'precedence' – which operations get done first.
Step4: is not the same as
Step5: Exponents
Step6: That's the same as
Step7: So 2**4 is how we get 2-to-the-power-of-4 (the same as multiplying 2 together 4 times). Hopefully this all kind of rings a bell even if you've not studied any maths in a while.
Step8: Floating Point numbers
Step9: Many programming languages (Python used to be one of them) treat whole numbers (1, 8) differently from floating point numbers (1.0, 8.000000000001). In those cases, we get the answer 0 to the first equation because when we use whole numbers in our calculations we got whole numbers back. So 1/8 is obviously not 0, but that's the nearest whole number to 0.125! This is what we mean about computers doing exactly what you tell them
Step10: Just to make this clear
Step11: A challenge for you!
Step12: The answer is 36 as Python would do the multiplication inside the parenthesis first.
Step13: In the code below replace the question marks ??? with the appropriate code to produce a remainder of 6. You may need a calculator to help you. Run the code to check your answer (you can change your answer and run the code multiple times to get the right answer).
Step14: Operator Precedence
Step15: A challenge for you!
Step16: Note
Step17: Notice that last ticky bit
Step18: The reason for this is that Python is implicitly converting (sometimes called casting) the numbers between the two different data types. This conversion doesn't happen with more complex bits of Python code, but for simple numbers Python tries to be helpful and save you the time and effort.
Step19: Now let's work out the length of the hypotenuse of a right-angle triangle with the same side lengths as above
Step20: Note
Step21: Notice the helpful error message after running the code above? We'll come back to how to read errors like these in more detail in the next notebook, but again Python is giving us hints about where we might have made a mistake.
Step22: Let's look at this a little more closely
Step23: If you run the code above, you'll see another error! Also notice that in the two lines of code, in the first the whole line is in one colour (meaning the computer can see that it's all one string), but in the broken example right before this the text changes colour once we get to "to escape the error..." (meaning that the computer doesn't see it all as one string).
Step24: See how that wraps the text on the \n? Also note that the computer is printing exactly what we told it to do
Step25: As you can see from running the code above, it's a bit annoying that they look the same when we print them. But if you run the next lines of code (after thinking about what they might do), you'll see how Python tries to be helpful with its errors
Step27: See how the first line of code prints 2016, but the second line of code (which tries to add together a string "2015" and the number 1) gives you an error that lazily tells you about a problem with str (i.e. string) and int (i.e. integer) 'concatenation'. More on concatenation in a minute.
Step28: Adding with strings (concatenation)
Step29: So just like you would do to add two numbers together, we can add "String1" and "String2" together to get "String1String2". But notice that the + operator doesn't insert whitespace (i.e. a ' ' character) or anything else. It just sticks the two strings together exactly as they are.
Step30: Multiplication
Step31: A challenge for you!
Step32: Now, why do you think the next example below doesn't work? (Read the error output if you're not sure what's going on.)
Step33: What is a variable?
Step34: Hmmmm, nothing printed out this time...
Step35: Can you check the data type of your variable result and switch it to float?
Step36: Check it out! We assigned the outcome of -2 * 10 to a variable called result; then we did something else (printed out a string); and then we printed out the value of the variable and the computer remembered!
Step37: Cool, both variables have the same value! We assigned the result of 1 * 5 to a variable named myFirstVariable and then we assigned this value to a second variable called mySecondVariable.
Step38: Whoa! mySecondVariable didn't change and still remembers what we assigned to in the first time.
Step39: Naming variables
Step41: But this block of code will not
Step42: Notice how the notebook has coloured the text so that the '1' in what we wanted to make the variable name stands out? Again, that's Python trying to help us figure out what is going wrong, but it requires that you look closely at the output of the error message.
Step43: As for many issues related to Python's style, it is good practice to always refer to the official PEP 8 -- Style Guide for Python Code
Step44: Similar to the example above, in the code cell below
Step45: Supported code
Step46: Excercise 3
Step47: Exercise 4
Step48: Broken code
Step49: Code (Applied Geo-example)
|
<ASSISTANT_TASK:>
Python Code:
3 - 2 + 10
2 * 5
10 / 5
print(4 * (2 - 8) + 2)
print(4 * 2 - 8 + 2)
(3 * 2) - 10
3 * (2 - 10)
2 ** 4
2 * 2 * 2 * 2
print(2**8) # 2-to-the-8
print(256**(1.0/8.0)) # 256-to-the-one-eighth
print(1/8)
print(1.0/8.0)
print(9/2)
print(9%2)
8%2
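A supplementary sketch (not in the original notebook): Python's built-in divmod() returns the floor quotient and the remainder in one call, and the two always recombine to the original number.

```python
q, r = divmod(9, 2)     # floor quotient and remainder in one call
print(q, r)             # 4 1
print(9 == q * 2 + r)   # True: quotient * divisor + remainder == original
```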
print( (2*3) ** 2 )
print( (2*3) ** 2 / (12 % 5) )
(2+4) ** 2 % (120 / ???)
(30 + 2 ) / 0
(1345 - 20 ) / ((- 3) ** 2 - ???)
print(7 * 4)
print(7 * 4.0)
print(20 / 5)
print(20.0 / 5)
print(22 / 7)
print(22.0 / 7)
print( int(22.0/7) )
print(6 + 3)
print(6.0 + 3)
(10 * 20)/2
print("I'm a string of text!")
print('Me too! *&($£(£_@~{}|?<>$)')
print("I'm contained within double quotes. I'll be fine.")
print('I'm sourrounded by single-quotes and I contain a single-quote so there may be trouble ahead...')
print('I\'m using the backslash.')
print('I\'m also using the backslash \'\\\' to escape the error normally caused by having single-quotes.')
print('I\'m also using the backslash \'\\' to escape the error normally caused by having single-quotes.')
print("If music be the food of love, play on;\n Give me excess of it, that, surfeiting ,\n The appetite may sicken, and so die.")
print("2016")
print(2016)
2015 + 1
"2015" + 1
print("""Hi there,
this time,
I won't need those annoying newline characters.
I also don't have problems with "quotes" or 'quotes'! """)
print("String1" + "String2")
print("Hey, looks like" + " I'm " + "adding "+ "6" +" strings" + " together")
print("This is code " ??? " camp's notebook is " ??? " number " + "2.")
print("I like Python a lot" + "!" * 3)
print("Foo " * 25)
20 * '5'
print("5" * "2")
result = -2 * 10
result = -2 * 10
print("I'm wasting space...")
print(result)
# First, we need to check the data type of our variable using a function called "type"
type(result)
# Rewrite the existing variable or put it as a new variable
result = float(result) # or result_float = float(result_float)
type(result)
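A small supplementary sketch (not part of the original notebook) showing how values round-trip between int, float, and str using the casting functions seen above. Note that int() cannot parse '-20.0' directly, so the text is passed through float() first.

```python
result = -20                 # an int
as_float = float(result)     # -20.0
as_text = str(as_float)      # '-20.0'
back = int(float(as_text))   # cast to float first, then to int
print(type(result).__name__, type(as_float).__name__, type(as_text).__name__)
print(back)
```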
myFirstVariable = 1 * 5
print(myFirstVariable)
mySecondVariable = myFirstVariable
print(mySecondVariable)
myFirstVariable = 2
print(myFirstVariable)
print(mySecondVariable)
cheers ??? name ??? " is awesome!"
print(cheers)
cheers = name + " is awesome!"
print(cheers)
famous_geographer = "Mercator"
print(famous_geographer)
1st_geographic_projection = """Mercator's most famous Geographic Projection
is a cylindrical map projection
that retains the ability
to represent lines of constant course (loxodromes)"""
print(1st_geographic_projection)
this_var = "Mercator"
This_var = "Galileo"
print(this_var)
print(This_var)
old_satellite = 'Sputnik 1'
old_satellite_description = " was the first artificial Earth satellite, launched from the Soviet Union on the 4th of October, 1957."
print("Здравствуйте! My name is " + old_satellite)
print(old_satellite + old_satellite_description)
new_satellite = 'Landsat'
print(new_satellite)
print("The new satellite is " + new_satellite + " and the old satellite is " + old_satellite)
pr???nt("Hello there " + ???(new_satellite) )
print("Hello there " + new_satellite + " 8" )
diameter_cm = 58
cm2inches = 0.393700787
diameter_inches = diameter_cm ??? cm2inches
print(diameter_inches)
diameter_cm = 58
cm2inches = 0.393700787
diameter_inches = diameter_cm * cm2inches
print(diameter_inches)
import math
PI = math.pi
radius_cm = diameter_cm/2
volume = (4/3) ??? PI ??? (radius_cm ??? 3 )
print(volume)
import math
PI = math.pi
radius_cm = diameter_cm/2
volume = (4/3) * PI * (radius_cm ** 3 )
print(volume)
print(new_satellite + "has a Near Infrared (NI), \
which band captures light in the wavelength from "+
770 +
" to " +
900 +
" nanometers." )
# The error message indicates a type error, as we can only concatenate string
# The code should work by including "" to the numbers
print(new_satellite + " has a Near Infrared (NI), \
which band captures light in the wavelength from "+
"770" +
" to " +
"900" +
" nanometers." )
# King's College coordinates
# What format are they in? Does it seem appropriate?
longitude = -0.11596798896789551
latitude = 51.51130657591914
#cast the floats to strings
??? = str(longitude)
lat = str(???)
# King's College marker
KCL_position = "https://www.openstreetmap.org/?mlat="+lat+"&mlon="+lon+"#map=15/"+lat+"/"+lon
print(KCL_position)
# King's College coordinates
# What format are they in? Does it seem appropriate?
longitude = -0.11596798896789551
latitude = 51.51130657591914
#cast the floats to strings
lon = str(longitude)
lat = str(latitude)
# King's College marker
KCL_position = "https://www.openstreetmap.org/?mlat="+lat+"&mlon="+lon+"#map=15/"+lat+"/"+lon
print(KCL_position)
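As an alternative sketch (not part of the original notebook), str.format() handles the float-to-string conversion for you, so no explicit str() casts are needed when building the URL.

```python
longitude = -0.11596798896789551
latitude = 51.51130657591914

# format() converts the floats to text and lets us reuse the named fields
url = "https://www.openstreetmap.org/?mlat={lat}&mlon={lon}#map=15/{lat}/{lon}".format(
    lat=latitude, lon=longitude)
print(url)
```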
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The default distribution is "uniform", but we can use any(?) of the distributions supported by numpy.
Step2: Class Float
Step3: Class NumpyRandomGenerator
Step4: Class FakerGenerator
Step5: Class Constant
Step6: Class Sequential
Step7: Calling reset() on the generator makes the numbering start from 1 again.
Step8: Note
Step9: If a new Sequential generator is created from an existing one via the _spawn() method then its count will start again from 1.
Step10: Class SelectOne
Step11: It is possible to specify different probabilities for each element to be chosen.
Step12: Class SelectMultiple
Step13: Similarly to SelectOne, one can pass a list of probabilities for the values to be chosen.
Step14: It is also possible to pass a random generator for the argument n. This produces tuples of varying length, where the length of each tuple is determined by the values produced by this generator.
Step15: Class Subsample
Step16: Class CharString
Step17: It is possible to vary the length of generated character strings, and to specify the character set.
Step18: Class DigitString
Step19: Class HashDigest
Step20: Class Geolocation
Step21: Class TimestampNEW
Step22: Note that the generated items are datetime objects (even though they appear as strings when printed above).
Step23: Class GeoJSONGeolocationPair
Step24: Class ExtractAttribute
Step25: Using ExtractAttribute we can produce \"derived\" generators which extract the attributes aaa, bbb from the elements produced by g.
Step26: Class IterateOver
Step27: Using tohu generators as iterators
Step28: ItemList
Step29: Fundamentally an ItemList behaves like a regular list.
|
<ASSISTANT_TASK:>
Python Code:
g = Integer(low=100, high=200)
g.reset(seed=12345); print_generated_sequence(g, num=15)
g.reset(seed=9999); print_generated_sequence(g, num=15)
some_integers = g.generate(5, seed=99999)
for x in some_integers:
print(x)
#g = Integer(low=100, high=200, distribution=None)
g = Float(low=2.71828, high=3.14159)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=9999); print_generated_sequence(g, num=4)
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
g1 = FakerGenerator(method="name")
g2 = FakerGenerator(method="name", locale='hi_IN')
g3 = FakerGenerator(method="phone_number")
g4 = FakerGenerator(method="job")
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=4)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
g4.reset(seed=12345); print_generated_sequence(g4, num=4)
g = Constant("Foobar"); print_generated_sequence(g, num=10)
g = Constant(42); print_generated_sequence(g, num=20)
g = Sequential(prefix='Foo_', digits=3)
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print("-----------------------------")
g.reset()
print_generated_sequence(g, num=5)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
g1 = Sequential(prefix="Quux_", digits=2)
g1.reset(seed=12345)
print_generated_sequence(g1, num=5)
g2 = g1._spawn()
print_generated_sequence(g1, num=5)
print_generated_sequence(g2, num=5)
g = SelectOne(values=['foobar', 42, 'quux', True, 1.2345])
g.reset(seed=12345); print_generated_sequence(g, num=15)
g.reset(seed=9999); print_generated_sequence(g, num=15)
g = SelectOne(values=['aa', 'bb', 'cc'], p=[0.8, 0.15, 0.05])
g.reset(seed=12345); print_generated_sequence(g, num=20)
g = SelectMultiple(values=['foobar', 42, 'quux', True, 1.2345], size=3)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=99999); print_generated_sequence(g, num=4)
g = SelectMultiple(values=['aa', 'bb', 'cc', 'dd', 'ee'], size=3, p=[0.6, 0.1, 0.2, 0.05, 0.05])
g.reset(seed=12345); print_generated_sequence(g, num=4)
rand_nums = Integer(low=2, high=5)
g = SelectMultiple(values=['a', 'b', 'c', 'd', 'e'], size=rand_nums)
g.reset(seed=11111); print_generated_sequence(g, num=10, sep='\n')
values = list(range(50))
g = Subsample(values, p=0.3)
g.reset(seed=12345); print_generated_sequence(g, num=10, sep='\n')
g = CharString(length=15)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
g = CharString(min_length=4, max_length=12, charset="ABCDEFGHIJKLMNOPQRSTUVWXYZ")
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
g = DigitString(length=15)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
g = DigitString(min_length=5, max_length=20)
g.reset(seed=9999); print_generated_sequence(g, num=10, sep='\n')
g = HashDigest(length=8)
g.reset(seed=12345); print_generated_sequence(g, num=9)
g.reset(seed=9999); print_generated_sequence(g, num=9)
g = HashDigest(length=20)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=9999); print_generated_sequence(g, num=4)
g = HashDigest(min_length=6, max_length=20)
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
g = HashDigest(length=16, as_bytes=True)
g.reset(seed=12345); print_generated_sequence(g, num=3, sep='\n')
g = GeolocationPair()
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
from tohu.generators import TimestampNEW
g = TimestampNEW(start='2016-02-14', end='2016-02-18')
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
g = TimestampNEW(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
g.reset(seed=99999); print_generated_sequence(g, num=10, sep='\n')
type(next(g))
import json
from shapely.geometry import MultiPoint
with open('./data/ne_110m_admin_1_states_provinces_shp.geojson', 'r') as f:
geojson = json.load(f)
g = GeoJSONGeolocationPair(geojson)
pts = g.generate(N=200, seed=12345)
list(pts)[:10]
MultiPoint(pts)
class QuuxGenerator(CustomGenerator):
aaa = Integer(0, 100)
bbb = HashDigest(length=6)
g = QuuxGenerator()
h1 = ExtractAttribute(g, 'aaa')
h2 = ExtractAttribute(g, 'bbb')
g.reset(seed=99999); print_generated_sequence(g, num=5, sep='\n')
print_generated_sequence(h1, num=5)
print_generated_sequence(h2, num=5)
seq = ['aa', 'bb', 'cc', 'dd', 'ee']
g = IterateOver(seq)
g.reset(); print(list(g.generate(N=3)))
g.reset(); print(list(g.generate(N=10)))
g.reset(); print(list(g))
int_generator = Integer(low=100, high=500).reset(seed=99999)
for i, x in enumerate(int_generator):
if i > 20:
break
print(x, end=" ")
g = HashDigest(length=6)
item_list = g.generate(N=10, seed=12345)
print(item_list)
print(list(item_list))
item_list.reset(seed=999999)
print(list(item_list.subsample(num=6)))
print(list(item_list.subsample(num=6)))
print(list(item_list.subsample(num=6)))
item_list.reset(seed=99999)
print(list(item_list.subsample(p=0.4)))
print(list(item_list.subsample(p=0.4)))
print(list(item_list.subsample(p=0.4)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once you're in your playground repo, you can look at all the branches and print out a lot of information to the screen.
Step2: All of these branches are nothing but commit-streams in disguise, as can be seen above. It's a very simple model that leads to a lot of interesting version control patterns.
Step3: See what branches we have created.
Step4: Notice that you have created the mybranch1 branch but you're still on the master branch.
Step5: Notice that it is bootstrapped off the master branch and has the same files. You can check that with the ls command.
Step6: Note
Step7: Alright, so we're on our new branch but we haven't added or modified anything yet; there's nothing to commit.
Step8: We add the file to the index, and then commit the files to the local repository on the mybranch1 branch.
Step9: At this point, we have committed a new file (books.md) to our new branch in our local repo. Our remote repo is still not aware of this new file (or branch). In fact, our master branch is still not really aware of this file.
Step10: Fail! Why? Because Git didn't know what to push to on origin (the name of our remote repo) and didn't want to assume we wanted to call the branch mybranch1 on the remote. We need to tell that to Git explicitly (just like it tells us to).
Step11: Aha, now we have both a remote and a local for mybranch1. We can use the convenient arguments to branch in order to see the details of all the branches.
Step12: We make sure we are back on master
Step13: What have we done?
Step14: At this point, we've committed our new feature to our feature branch in our local repo. Presumably it's all tested and everything is working nicely. We'd like to merge it into our master branch now. First, we'll switch to the master branch.
Step15: The master branch doesn't have any idea about our new feature yet! We should merge the feature branch into the master branch.
Step16: Now our master branch is up to date with our feature branch. We can now delete our feature branch since it is no longer relevant.
Step17: Finally, let's push the changes to our remote repo.
|
<ASSISTANT_TASK:>
Python Code:
%%bash
cd /tmp
rm -rf playground #remove if it exists
git clone https://github.com/dsondak/playground.git
%%bash
cd /tmp/playground
git branch -avv
%%bash
cd /tmp/playground
git branch mybranch1
%%bash
cd /tmp/playground
git branch
%%bash
cd /tmp/playground
git checkout mybranch1
git branch
%%bash
cd /tmp/playground
ls
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
echo '# Things I wish G.R.R. Martin would say: Finally updating A Song of Ice and Fire.' > books.md
git status
%%bash
cd /tmp/playground
git add .
git status
%%bash
cd /tmp/playground
git commit -m "Added another test file to demonstrate git features" -a
git status
%%bash
cd /tmp/playground
git push
%%bash
cd /tmp/playground
git push --set-upstream origin mybranch1
%%bash
cd /tmp/playground
git branch -avv
%%bash
cd /tmp/playground
git checkout master
%%bash
cd /tmp/playground
git checkout -b feature-branch
%%bash
cd /tmp/playground
git branch
%%bash
cd /tmp/playground
echo '# The collected works of G.R.R. Martin.' > feature.txt
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
git add feature.txt
git commit -m 'George finished his books!'
%%bash
cd /tmp/playground
git checkout master
ls
%%bash
cd /tmp/playground
git merge feature-branch
%%bash
cd /tmp/playground
git status
ls
%%bash
cd /tmp/playground
git branch -d feature-branch
%%bash
cd /tmp/playground
git push
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. General note
Step2: 3. Compare
Step3: Table is not given in this publication.
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
eta = np.linspace(1., 0.6, 9)
print(eta)
dorogokupets2015_mgo = eos.periclase.Dorogokupets2015()
help(eos.periclase.Dorogokupets2015)
dorogokupets2015_mgo.print_equations()
dorogokupets2015_mgo.print_equations()
dorogokupets2015_mgo.print_parameters()
v0 = 74.698
dorogokupets2015_mgo.three_r
v = v0 * (eta)
temp = 3000.
p = dorogokupets2015_mgo.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f} ".format(eta_i, p_i))
v = dorogokupets2015_mgo.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print((v/v0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build graphs and then display them
Step2: Create a vector representation
Step3: Compute pairwise similarity matrix
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pylab as plt
import networkx as nx
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
from eden.util import display
print display.serialize_graph(G)
from eden.util import display
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label=[0,0,.1])
G.add_node(1, label=[0,.1,0])
G.add_node(2, label=[.1,0,0])
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':2, 'C':3})
G.add_node(1, label={'A':1, 'B':2, 'D':3})
G.add_node(2, label={'A':1, 'D':2, 'E':3})
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_node(3, label='D')
G.add_node(4, label='E')
G.add_node(5, label='F')
G.add_edge(0,1, label='x')
G.add_edge(0,2, label='y')
G.add_edge(1,3, label='z', nesting=True, weight=.5)
G.add_edge(0,3, label='z', nesting=True, weight=.1)
G.add_edge(2,3, label='z', nesting=True, weight=.01)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3, prog='circo')
from eden.graph import Vectorizer
X=Vectorizer(2).transform_single(G)
from eden.util import describe
print describe(X)
print X
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_node(3, label='D')
G.add_node(4, label='E')
G.add_node(5, label='F')
G.add_edge(0,1, label='x')
G.add_edge(0,2, label='y')
G.add_edge(1,3, label='z', nesting=True)
G.add_edge(0,3, label='z', nesting=True)
G.add_edge(2,3, label='z', nesting=True)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
from eden.graph import Vectorizer
X=Vectorizer(2).transform_single(G)
from eden.util import describe
print describe(X)
print X
import networkx as nx
graph_list = []
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='C', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='X', entity='CATEG')
G.add_node(1, label='X', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0,0], entity='VEC')
G.add_node(1, label=[0,1,0], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,1,0], entity='VEC')
G.add_node(1, label=[0,1,1], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0.1,0.2], entity='VEC')
G.add_node(1, label=[0.3,1,0.4], entity='VEC')
G.add_node(2, label=[0.5,0.6,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[0.1,0.2,0.3], entity='VEC')
G.add_node(1, label=[0.4,0.5,0.6], entity='VEC')
G.add_node(2, label=[0.7,0.8,0.9], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
from eden.util import display
for g in graph_list:
display.draw_graph(g, size=5, node_size=800, node_border=1, layout='shell', secondary_vertex_label = 'entity')
%%time
from eden.graph import Vectorizer
vectorizer = Vectorizer(complexity=2, n=4)
vectorizer.fit(graph_list)
X = vectorizer.transform(graph_list)
y=[1]*4+[2]*4+[3]*4
print 'Instances: %d \nFeatures: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
opts={'knn': 3, 'metric': 'rbf', 'k_threshold': 0.7, 'gamma': 1e-2}
from eden.embedding import display_embedding, embedding_quality
print 'Embedding quality [adjusted Rand index]: %.2f data: %s #classes: %d' % (embedding_quality(X, y, opts), X.shape, len(set(y)))
display_embedding(X,y, opts)
from ipy_table import *
def prep_table(K):
header = [' ']
header += [i for i in range(K.shape[0])]
mat = [header]
for id, row in enumerate(K):
new_row = [id]
new_row += list(row)
mat.append(new_row)
return mat
from sklearn import metrics
K=metrics.pairwise.pairwise_kernels(X, metric='linear')
mat=prep_table(K)
make_table(mat)
apply_theme('basic')
set_global_style(float_format = '%0.2f')
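As a sanity check on the similarity table above: the 'linear' metric in pairwise_kernels is just the matrix of dot products, K = X X^T. A toy dense-matrix version of that identity (illustrative values, not the sparse vectorizer output):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.5, 0.5, 0.0]])
# Equivalent to metrics.pairwise.pairwise_kernels(A, metric='linear')
K_manual = A @ A.T
print(K_manual)
```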
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open a file which is located on the web. The type of "f" does not need to be specified.
Step2: Loop over the TTree called "events" in the file. It is accessed with the dot operator.
|
<ASSISTANT_TASK:>
Python Code:
import ROOT
f = ROOT.TFile.Open("http://indico.cern.ch/event/395198/material/0/0.root")
maxPt=-1
for event in f.events:
maxPt=-1
for track in event.tracks:
pt = track.Pt()
if pt > maxPt: maxPt = pt
if event.evtNum % 100 == 0:
print "Processing event number %i" %event.evtNum
print "Max pt is %f" %maxPt
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2、打印如下9*9 乘法口诀表,注意每列左侧竖向对齐。
Step2: 3、写函数,可检查一个数(2-100000之间整数)能不能表示成两个质数之和,如果能,则打印这两个质数。主程序用18及93887分别做测试。
Step3: 4、有一个列表:[1, 2, 3, 4…n],n=20;请编写代码打印如下规律的输出(不允许直接拷贝并直接打印作为答案,想象n可能是变量):
|
<ASSISTANT_TASK:>
Python Code:
ch=input('Please enter a character: ')
n=int(input('Please enter the number of rows: '))
for i in range(1,n+1):
print(' '*(n-i)+(ch+' ')*i)
for i in range(1,10):
for j in range(1,i+1):
print('{}*{}={:<2}'.format(i,j,i*j),end=' ')
print()
def zhishu(x):
flag=1
for i in range(2, x//2 + 1):
if x%i==0:
flag=0
return flag
def test(n):
flag=0
for i in range(2, n//2 + 1):
j=n-i
if zhishu(i)+zhishu(j)==2:
print(n,'can be written as the sum of two primes:')
print(n,'=',i,'+',j)
flag+=1
if flag==0:
print(n,'cannot be written as the sum of two primes.')
test(18)
test(933887)
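A compact alternative sketch of the same check (not the required answer above): trial division up to the square root, plus a helper that returns the first prime decomposition it finds.

```python
def is_prime(x):
    # Trial division up to sqrt(x).
    if x < 2:
        return False
    for d in range(2, int(x ** 0.5) + 1):
        if x % d == 0:
            return False
    return True

def goldbach_pair(n):
    # Return the first (i, n - i) with both parts prime, or None.
    for i in range(2, n // 2 + 1):
        if is_prime(i) and is_prime(n - i):
            return i, n - i
    return None

print(goldbach_pair(18))  # (5, 13)
```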
list=[1,2,3,4,5]
for i in range (1,20):
if i<=3:
for j in range(3):
print(i,list)
elif i<=20-2:
list[0]=i-2
for m in range(1,5):
list[m]=list[m-1]+1
print(list)
else:
print(list)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then run the following cell and send some values from pypeoutgoing.ipynb running in another window. They will be sent over the "pype". Watch them being printed below as they are received
Step2: Once you have finished experimenting, you can close the pype
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
sys.path.append(os.path.abspath('../../main/python'))
import thalesians.tsa.pypes as pypes
pype = pypes.Pype(pypes.Direction.INCOMING, name='EXAMPLE', port=5758); pype
for x in pype: print(x)
pype.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that this is the original default CCMC/FCIQMC HANDE columns/key mapping in preparator.
Step2: Now we execute our executor, preparator and analyser.
Step3: In this analysis model, all replicas [3] are analysed individually.
Step4: The data has additional columns with this data
Step5: Not everything was analysed long enough to be analysed.
Step6: This was unsuccessful
Step7: Lower initial population, lower shoulder [4]?
Step8: There seems to be a trend.
|
<ASSISTANT_TASK:>
Python Code:
from pyhande.data_preparing.hande_ccmc_fciqmc import PrepHandeCcmcFciqmc
from pyhande.extracting.extractor import Extractor
from pyhande.error_analysing.blocker import Blocker
from pyhande.results_viewer.get_results import analyse_data
extra = Extractor() # Keep the defaults, merge using UUIDs.
prep = PrepHandeCcmcFciqmc()
ana = Blocker.inst_hande_ccmc_fciqmc(start_its=[20000, 20000]) # Keep the defaults, find starting iterations automatically, using 'blocking' finder.
prep._observables_init
results = analyse_data(["data/replica_complex_fciqmc_init_pop_10.out.xz", "data/replica_complex_fciqmc_init_pop_100.out.xz", "data/replica_complex_fciqmc_init_pop_10_part2.out.xz"], extra, prep, ana)
prep.observables
prep.data
results.summary_pretty
results.analyser.no_opt_block
results.plot_shoulder()
results.add_shoulder()
results.add_metadata(['qmc:D0_population'])
results.compare_obs(['shoulder height', 'D0_population'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can convert a motif to several formats.
Step2: Some other useful tidbits.
Step3: To convert a motif to an image, use to_img(). Supported formats are png, ps and pdf.
Step4: Motif scanning
Step5: This return a dictionary with the sequence names as keys. The value is a list with positions where the motif matches. Here, as the AP1 motif is a palindrome, you see matches on both forward and reverse strand. This is more clear when we use pwm_scan_all() that returns position, score and strand for every match.
Step6: The number of matches to return is set to 50 by default, you can control this by setting the nreport argument. Use scan_rc=False to only scan the forward orientation.
Step7: While this functionality works, it is not very efficient. To scan many motifs in potentially many sequences, use the functionality in the scanner module. If you only want the best match per sequence, there is a utility function called scan_to_best_match, otherwise, use the Scanner class.
Step8: The matches are in the same order as the sequences in the original file.
Step9: Now let’s get the best score for the CTCF motif for each sequence.
Step10: In many cases you’ll want to set a threshold. In this example we’ll use a 1% FPR threshold, based on scanning randomly selected sequences from the hg38 genome. The first time you run this, it will take a while. However, the tresholds will be cached. This means that for the same combination of motifs and genome, the previously generated threshold will be used.
Step11: Finding de novo motifs
Step12: This will basically run the same pipeline as the gimme motifs command. All output files will be stored in outdir and gimme_motifs returns a list of Motif instances. If you only need the motifs but not the graphical report, you can decide to skip it by setting create_report to False. Additionally, you can choose to skip clustering (cluster=False) or to skip calculation of significance (filter_significant=False). For instance, the following command will only predict motifs and cluster them.
Step13: All parameters for motif finding are set by the params argument
Step14: Motif statistics
Step15: A lot of statistics are generated and you will not always need all of them. You can choose one or more specific metrics with the additional stats argument.
Step16: Motif comparison
Step17: Compare two motifs
Step18: Find closest match in a motif database
|
<ASSISTANT_TASK:>
Python Code:
with open("MA0099.3.jaspar") as f:
motifs = read_motifs(f, fmt="jaspar")
print(motifs[0])
with open("example.pfm") as f:
motifs = read_motifs(f)
# pwm
print(motifs[0].to_pwm())
# pfm
print(motifs[0].to_pfm())
# consensus sequence
print(motifs[0].to_consensus())
# TRANSFAC
print(motifs[0].to_transfac())
# MEME
print(motifs[0].to_meme())
m = motif_from_consensus("NTGASTCAN")
print(len(m))
# Trim by information content
m.trim(0.5)
print(m.to_consensus(), len(m))
# Slices
print(m[:3].to_consensus())
# Shuffle
random_motif = motif_from_consensus("NTGASTCAN").randomize()
print(random_motif)
m = motif_from_consensus("NTGASTCAN")
m.to_img("ap1.png", fmt="png")
from IPython.display import Image
Image("ap1.png")
from gimmemotifs.motif import motif_from_consensus
from gimmemotifs.fasta import Fasta
f = Fasta("test.small.fa")
m = motif_from_consensus("TGAsTCA")
m.pwm_scan(f)
m.pwm_scan_all(f)
m.pwm_scan_all(f, nreport=1, scan_rc=False)
from gimmemotifs.motif import motif_from_consensus
from gimmemotifs.scanner import scan_to_best_match
m1 = motif_from_consensus("TGAsTCA")
m1.id = "AP1"
m2 = motif_from_consensus("CGCG")
m2.id = "CG"
motifs = [m1, m2]
print("motif\tpos\tscore")
result = scan_to_best_match("test.small.fa", motifs)
for motif, matches in result.items():
for match in matches:
print("{}\t{}\t{}".format(motif, match[1], match[0]))
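Under the hood, a PWM scan scores each sequence window by summing per-position log-odds. A minimal numpy sketch with a hypothetical 2-position motif (toy values, not the gimmemotifs internals):

```python
import numpy as np

# Hypothetical 2-position PWM over A, C, G, T (rows), expressed as
# log-odds against a uniform 0.25 background.
probs = np.array([[0.7, 0.1],   # A
                  [0.1, 0.1],   # C
                  [0.1, 0.7],   # G
                  [0.1, 0.1]])  # T
pwm = np.log2(probs / 0.25)
idx = {"A": 0, "C": 1, "G": 2, "T": 3}

def best_match(seq, width=2):
    # Score every window, return the position and score of the best one.
    scores = [sum(pwm[idx[b], j] for j, b in enumerate(seq[i:i + width]))
              for i in range(len(seq) - width + 1)]
    best = int(np.argmax(scores))
    return best, scores[best]

pos, score = best_match("TTAGTT")
print(pos, score)  # the "AG" window at position 2 scores highest
```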
from gimmemotifs.motif import default_motifs
from gimmemotifs.scanner import Scanner
from gimmemotifs.fasta import Fasta
import numpy as np
# Input file
fname = "Gm12878.CTCF.top500.w200.fa"
# Select the CTCF motif from the default motif database
motifs = [m for m in default_motifs() if "CTCF" in m.factors['direct']][:1]
# Initialize the scanner
s = Scanner()
s.set_motifs(motifs)
scores = [r[0] for r in s.best_score("Gm12878.CTCF.top500.w200.fa")]
print("{}\t{:.2f}\t{:.2f}\t{:.2f}".format(
len(scores),
np.mean(scores),
np.min(scores),
np.max(scores)
))
# Set a 1% FPR threshold based on random hg38 sequence
s.set_genome("hg38")
s.set_threshold(fpr=0.01)
# get the number of sequences with at least one match
counts = [n[0] for n in s.count("Gm12878.CTCF.top500.w200.fa", nreport=1)]
print(counts[:10])
# or the grand total of number of sequences with 1 match
print(s.total_count("Gm12878.CTCF.top500.w200.fa", nreport=1))
# Scanner.scan() just gives all information
seqs = Fasta("Gm12878.CTCF.top500.w200.fa")[:10]
for i,result in enumerate(s.scan(seqs)):
seqname = seqs.ids[i]
for m,matches in enumerate(result):
motif = motifs[m]
for score, pos, strand in matches:
print(seqname, motif, score, pos, strand)
from gimmemotifs.denovo import gimme_motifs
peaks = "Gm12878.CTCF.top500.w200.fa"
outdir = "CTCF.gimme"
params = {
"tools": "Homer,BioProspector",
"genome": "hg38",
}
motifs = gimme_motifs(peaks, outdir, params=params)
motifs = gimme_motifs(peaks, outdir,
params=params, filter_significant=False, create_report=False)
from gimmemotifs.tools import get_tool
from gimmemotifs.background import MatchedGcFasta
m = get_tool("homer") # tool name is case-insensitive
# Create a background fasta file with a similar GC%
fa = MatchedGcFasta("TAp73alpha.fa", number=1000)
fa.writefasta("bg.fa")
# Run motif prediction
params = {
"background": "bg.fa",
"width": "20",
"number": 5,
}
motifs, stdout, stderr = m.run("TAp73alpha.fa", params=params)
print(motifs[0].to_consensus())
from gimmemotifs.background import MatchedGcFasta
from gimmemotifs.fasta import Fasta
from gimmemotifs.stats import calc_stats
from gimmemotifs.motif import default_motifs
sample = "TAp73alpha.fa"
bg = MatchedGcFasta(sample, genome="hg19", number=1000)
motifs = [m for m in default_motifs() if any(f in m.factors['direct'] for f in ["TP53", "TP63", "TP73"])]
stats = calc_stats(motifs, sample, bg)
print("Stats for", motifs[0])
for k, v in stats[str(motifs[0])].items():
print(k,v)
print()
best_motif = sorted(motifs, key=lambda x: stats[str(x)]["recall_at_fdr"])[-1]
print("Best motif (recall at 10% FDR):", best_motif)
metrics = ["roc_auc", "recall_at_fdr"]
stats = calc_stats(motifs, sample, bg, stats=metrics)
for metric in metrics:
for motif in motifs:
print("{}\t{}\t{:.2f}".format(
motif.id, metric, stats[str(motif)][metric]
))
from gimmemotifs.comparison import MotifComparer
from gimmemotifs.motif import motif_from_consensus
from gimmemotifs.motif import read_motifs
m1 = motif_from_consensus("RRRCATGYYY")
m2 = motif_from_consensus("TCRTGT")
mc = MotifComparer()
score, pos, orient = mc.compare_motifs(m1, m2)
if orient == -1:
m2 = m2.rc()
pad1, pad2 = "", ""
if pos < 0:
pad1 = " " * -pos
elif pos > 0:
pad2 =" " * pos
print(pad1 + m1.to_consensus())
print(pad2 + m2.to_consensus())
motifs = [
motif_from_consensus("GATA"),
motif_from_consensus("NTATAWA"),
motif_from_consensus("ACGCG"),
]
mc = MotifComparer()
results = mc.get_closest_match(motifs, dbmotifs=read_motifs("HOMER"), metric="seqcor")
# Load motifs
db = read_motifs("HOMER", as_dict=True)
for motif in motifs:
match, scores = results[motif.id]
print("{}: {} - {:.3f}".format(motif.id, match, scores[0]))
dbmotif = db[match]
orient = scores[2]
if orient == -1:
dbmotif = dbmotif.rc()
padm, padd = 0, 0
if scores[1] < 0:
padm = -scores[1]
elif scores[1] > 0:
padd = scores[1]
print(" " * padm + motif.to_consensus())
print(" " * padd + dbmotif.to_consensus())
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tutorial
Step2: Load data into global variable y. Each entry is an offset in units of kpc.
Step3: Check out a quick histogram of the data.
Step5: 1. Pre-registering a Test Statistic
Step6: Compute the test statistic of the real data to verify that it works.
Step16: Setting up a Computational Framework
Step23: 2. Evaluating a Simple Model
Step24: Now try instantiating a model and drawing a dozen samples from its prior as a test
Step25: Test out the log-posterior function to make sure it's not obviously buggy.
Step26: Similarly the mock-data producing function (with an arbitrary $a_1$ value).
Step27: Finally, test the sampling distribution function.
Step28: 2b. Fit the model to the data
Step29: It will be useful for later to know the mean of the posterior
Step30: 2c. Visually compare the posterior predictions with the data.
Step31: This kind of plot should be familiar
Step32: This plot is nice because it is comparing apples with apples
Step34: Based on these visual checks, would you say the model does a good job of predicting the observed data?
Step36: We can now do the following
Step38: Here's a function that plots the distribution of T, and reports the p-value
Step40: Does this result agree with your visual evaluation of the model fitness from the last section? If not, perhaps the test statistic you chose doesn't reflect the agreement you're looking for when inspecting the posterior predictions. If you'd like to re-define your test statistic, do so now and repeat this check.
Step42: Does your value of $p_D$ make intuitive sense?
Step43: Roughly how precisely do we need to know the log Evidence, to be able to compare models? Run log_evidence with different values of N (the number of prior samples in the average) to until you're satisfied that you're getting a usefully accurate result.
|
<ASSISTANT_TASK:>
Python Code:
class SolutionMissingError(Exception):
def __init__(self):
Exception.__init__(self,"You need to complete the solution for this code to work!")
def REPLACE_WITH_YOUR_SOLUTION():
raise SolutionMissingError
REMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION
import numpy as np
import scipy.stats as st
from scipy.special import logsumexp
import emcee
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 16});
y = np.loadtxt('data/model_comparison.dat')
plt.rcParams['figure.figsize'] = (8.0, 6.0)
bins = np.linspace(0,1000,20)
plt.hist(y, bins=bins, color="skyblue");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency");
try:
exec(open('solutions/teststatistic.py').read())
except IOError:
REMOVE_THIS_LINE()
def T(yy):
Argument: a data vector (either the real data or a simulated data set)
Returns: a scalar test statistic computed from the argument
REPLACE_WITH_YOUR_SOLUTION()
T_true = T(y)
print("The test statistic of the real data is T =", T_true)
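For concreteness, here is one possible choice — an illustrative assumption only, since the point of pre-registering is to pick your own statistic before looking at model fits. The sample standard deviation is sensitive to the tail of the offset distribution:

```python
import numpy as np

def T_example(yy):
    # Example test statistic: scatter of the measured offsets.
    return np.std(yy)

print(T_example(np.array([1.0, 2.0, 3.0])))  # ~0.816
```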
# This is something we can throw to discourage direct instantiation of the base class
class VirtualClassError(Exception):
def __init__(self):
Exception.__init__(self,"Do not directly instantiate the base Model class!")
class Model:
Base class for inference and model evaluation in a simple cluster mis-centering analysis.
In all these functions, `args' is the ordered list of model parameters.
def __init__(self):
Note: derived classes should have their own __init__ function which ends by calling this one
# Sometimes it will be convenient to compute many log_likelihood values at once:
self.vectorized_log_likelihood = np.vectorize(self.log_likelihood)
self.samples = None
self.Nsamples = 0
def log_prior(self, *args):
Evaluate the log prior PDF P(args|H)
raise VirtualClassError # to be overriden by child classes
def draw_samples_from_prior(self, N):
Return N samples from the prior PDF P(args|H)
raise VirtualClassError # to be overriden by child classes
def log_likelihood(self, *args):
Evaluate the log of the likelihood function L(args) = P(y|args,H)
raise VirtualClassError # to be overriden by child classes
def sampling_distribution(self, yy, *args):
Evaluate the sampling distribution P(yy|args,H) at a point in data space yy given parameter(s) args
We expect a vector input yy, and return the corresponding probabilities.
Note: This is useful for making plots of "the model" overlaid on the histogram of the data
raise VirtualClassError # to be overriden by child classes
def generate_replica_dataset(self, *args):
Draw a replica dataset y_rep from the sampling distribution P(y_rep|args,H).
y_rep should have the same length as the true data set.
raise VirtualClassError # to be overriden by child classes
def log_posterior(self, *args):
Evaluate the log of the (unnormalized) posterior PDF P(args|y,H)
Note: We'll use this with an MCMC sampler, so it should call the non-vectorized likelihood.
lnp = self.log_prior(*args)
if lnp != -np.inf:
lnp += self.log_likelihood(*args)
return lnp
def draw_samples_from_posterior(self, guess=None, nwalkers=None, nsteps=None, burn=None, thinby=None):
Use emcee to draw samples from P(args|y,H)
# Deal with unset inputs:
if guess is None: print("You need to specify a starting point in parameter space with the `guess=` kwarg...")
if nwalkers is None: print("You need to specify the `nwalkers=` kwarg...")
if nsteps is None: print("You need to specify the chain length `nsteps=` kwarg...")
if burn is None: print("You need to specify the length of burnin `burn=` kwarg...")
if thinby is None: print("You need to specify the thinning factor `thinby=` kwarg...")
# The density to sample is this model's own posterior PDF
lnprob = self.log_posterior
npars = len(guess)
self.sampler = emcee.EnsembleSampler(nwalkers, npars, lnprob)
# You could add e.g. threads=4 to speed things up with multiprocessing
# Generate an ensemble of walkers within +/-1% of the guess:
theta_0 = np.array([guess*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
# Note that the initial parameter array theta_0 should have dimensions nwalkers × npars
# Evolve the ensemble:
self.sampler.run_mcmc(theta_0, nsteps)
# Plot the raw samples:
plt.rcParams['figure.figsize'] = (12.0, 6.0)
plt.subplot(211)
for j in range(nwalkers):
plt.plot(self.sampler.chain[j,:,0], 'o', alpha=0.2)
plt.title("Raw Markov chains")
# Extract the chain, remove burnin, merge, and thin:
samples = self.sampler.chain[:, burn:, :].reshape((-1, npars))
samples = samples[range(0,samples.shape[0],thinby),:]
# Keep the samples with the model for future use!
self.samples = samples
self.Nsamples = len(samples)
# Plot the thinned chain
plt.subplot(212)
plt.plot(samples[:,0], 'o')
plt.title("Thinned, post-burnin chains");
try:
exec(open('solutions/exponentialmodel.py').read())
except IOError:
REMOVE_THIS_LINE()
class ExponentialModel(Model):
Simple exponential model for mis-centering.
def __init__(self):
# Define any hyperparameters for the a1 prior here.
# E.g., for uniform, something like "self.min_a1 = value" and "self.max_a1 = value"
# More sophisticatedly, you could make these values arguments of __init__.
REPLACE_WITH_YOUR_SOLUTION()
# The next line finishes initialization by calling the parent class' __init__
Model.__init__(self)
def log_prior(self, a1):
Evaluate the log prior PDF P(a1|H)
REPLACE_WITH_YOUR_SOLUTION()
def draw_samples_from_prior(self, N):
Return N samples of a1 from the prior PDF P(a1|H)
REPLACE_WITH_YOUR_SOLUTION()
def log_likelihood(self, a1):
Evaluate the log of the likelihood function L(a1) = P(y|a1,H)
Argument a1 is scalar.
return np.sum(st.expon.logpdf(y, scale=a1))
def sampling_distribution(self, yy, a1):
Evaluate the sampling distribution P(yy|a,H) at a point in data space yy given parameter value a1
We expect a vector input yy, and return the corresponding probabilities.
Note: This is useful for making plots of "the model" overlaid on the histogram of the data
return st.expon.pdf(yy, scale=a1)
def generate_replica_dataset(self, a1):
Draw a replica data set y_rep from the sampling distribution P(y_rep|a1,H).
y_rep should have the same length as the true data set.
Argument a1 is a scalar.
REPLACE_WITH_YOUR_SOLUTION()
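One way the placeholders above could be filled in — an illustrative sketch assuming a uniform prior on a1 over [a_min, a_max]; the bounds are assumed values, not part of the exercise:

```python
import numpy as np

a_min, a_max = 1.0, 2000.0  # assumed prior bounds for a1

def log_prior_sketch(a1):
    # Uniform prior: constant density inside the bounds, -inf outside.
    if a_min <= a1 <= a_max:
        return -np.log(a_max - a_min)
    return -np.inf

def draw_prior_samples_sketch(N):
    return np.random.uniform(a_min, a_max, size=N)

def replica_sketch(a1, ndata):
    # Exponential sampling distribution with mean a1.
    return np.random.exponential(scale=a1, size=ndata)
```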
Model1 = ExponentialModel()
print("12 sample values drawn from the prior of Model1: ", Model1.draw_samples_from_prior(12))
for a1 in [1.0, 10.0, 100.0, -3.14]:
print('Log-posterior for a1=', a1, ' = ', Model1.log_posterior(a1))
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.hist(Model1.generate_replica_dataset(500.), bins=bins, color="lightgray");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency (replica)");
plt.plot(bins, Model1.sampling_distribution(bins, 500.));
plt.xlabel("Measured distance $y$");
plt.ylabel("$p(y|a_1)$");
try:
exec(open('solutions/fit.py').read())
except IOError:
# This will execute out of the box, but will not work well. The arguments should be fiddled with.
Model1.draw_samples_from_posterior(guess=[1000.0], nwalkers=8, nsteps=10, burn=0, thinby=1)
Model1.post_mean = np.mean(Model1.samples, axis=0)
print("Posterior mean value of a1 = ", Model1.post_mean)
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Now overlay a curve following the sampling distribution conditioned on the posterior mean value of a1:
pp = Model1.sampling_distribution(bins, Model1.post_mean)
plt.plot(bins, pp, linestyle="dashed", color="red", label="Posterior mean model")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Choose a posterior sample at random and generate a replica dataset, and show its histogram
j = np.random.randint(0, len(Model1.samples))
mock = Model1.generate_replica_dataset(Model1.samples[j])
plt.hist(mock, bins=bins, alpha=1.0, histtype="step", color="red", density=True, label="Sample posterior prediction")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
def visual_check(Model, Nreps=None):
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Compute the posterior mean parameter (vector)
pm = np.mean(Model.samples, axis=0)
# Make a large number of replica datasets, and overlay histograms of them all
if Nreps is None: Nreps = len(Model.samples)
alpha = 5.0 / Nreps
for jj in np.round(np.linspace(0, len(Model.samples), num=Nreps, endpoint=False)):
j = int(jj)
if j==0:
# Plot a dataset drawn using a = the posterior mean a, to give a good legend
mock = Model.generate_replica_dataset(pm)
plt.hist(mock, bins=bins, histtype="step", alpha=1.0, color="red", density=True, label="Sample posterior predictions")
else:
# Take the next posterior sample a and generate a replica dataset
mock = Model.generate_replica_dataset(Model.samples[j])
plt.hist(mock, bins=bins, histtype="step", alpha=alpha, color="red", density=True)
# Include the posterior mean model for comparison
pp = Model.sampling_distribution(bins, pm)
plt.plot(bins, pp, linestyle="dashed", color="red", label="Posterior mean model")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
visual_check(Model1, Nreps=100)
def distribution_of_T(Model):
Compute T(yrep) for each yrep drawn from the posterior predictive distribution,
using parameter samples stored in Model.
return np.array([T(Model.generate_replica_dataset(a)) for a in Model.samples])
try:
exec(open('solutions/pvalue.py').read())
except IOError:
REMOVE_THIS_LINE()
def pvalue(Model):
Compute the posterior predictive p-value, P(T > T(y)|y,H):
REPLACE_WITH_YOUR_SOLUTION()
def posterior_predictive_check(Model, nbins=25):
Compute the posterior predictive distribution of the test statistic T(y_rep), and compare with T(y_obs)
# Compute distribution of T(yrep):
TT = distribution_of_T(Model)
# Plot:
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.hist(TT, bins=nbins, histtype="step", color="red", label="$P(T(y_{\\rm rep})|y)$")
# Overlay T(y_obs):
plt.axvline(x=T(y), color="gray", linestyle="dashed", label="$T(y_{\\rm observed})$")
plt.xlabel("Test statistic T(y)")
plt.ylabel("Posterior predictive probability density")
plt.legend();
# Compute p-value:
p = pvalue(Model)
print("p-value =", p)
return p
p1 = posterior_predictive_check(Model1)
try:
exec(open('solutions/dic.py').read())
except IOError:
REMOVE_THIS_LINE()
def DIC(Model):
Compute the Deviance Information Criterion for the given model
# Compute the deviance D for each sample, using the vectorized code.
D = -2.0*Model.vectorized_log_likelihood(Model.samples)
pD = REPLACE_WITH_YOUR_SOLUTION()
DIC = REPLACE_WITH_YOUR_SOLUTION()
return DIC, pD
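The two missing quantities follow the standard definitions — assuming here Spiegelhalter's convention pD = Dbar − D(theta_bar) and DIC = Dbar + pD. A toy numeric sketch:

```python
import numpy as np

def dic_sketch(D_samples, D_at_post_mean):
    Dbar = np.mean(D_samples)     # mean deviance over posterior samples
    pD = Dbar - D_at_post_mean    # effective number of fitted parameters
    return Dbar + pD, pD

dic, pd_eff = dic_sketch([10.0, 12.0, 14.0], 10.0)
print(dic, pd_eff)  # 14.0 2.0
```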
DIC1, pD1 = DIC(Model1)
print("Effective number of fitted parameters =", pD1)
print("DIC =", DIC1)
try:
exec(open('solutions/evidence.py').read())
except IOError:
REMOVE_THIS_LINE()
def log_evidence(Model, N=1000):
Compute the log evidence for the model using N samples from the prior
REPLACE_WITH_YOUR_SOLUTION()
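The simple Monte Carlo estimator averages the likelihood over prior draws; doing the average in log space keeps it numerically stable. A sketch, assuming the log-likelihood values at the prior samples are already in hand:

```python
import numpy as np

def log_evidence_sketch(log_like_at_prior_samples):
    # log( mean( exp(ll) ) ), computed stably by factoring out the max.
    ll = np.asarray(log_like_at_prior_samples, dtype=float)
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

print(log_evidence_sketch([0.0, 0.0, 0.0]))  # 0.0
```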
for Nevidence in [1, 10, 100]: # You *will* want to change these values
%time logE1 = log_evidence(Model1, N=Nevidence)
print("From", Nevidence, "samples, the log-evidence is", logE1, "\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='inputs_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1*alpha, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
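The `tf.maximum(h1*alpha, h1)` line implements leaky ReLU without a dedicated op: for any 0 &lt; alpha &lt; 1, positive inputs satisfy x &gt; alpha*x so x wins, while negative inputs leak through as alpha*x. A quick NumPy check of the same trick:

```python
import numpy as np

alpha = 0.01
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
leaky = np.maximum(x * alpha, x)   # same trick as tf.maximum(h1*alpha, h1)
print(leaky)   # negatives scaled by alpha, positives passed through unchanged
```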
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1*alpha, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real)*(1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
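`sigmoid_cross_entropy_with_logits` computes -[t*log(sigmoid(x)) + (1-t)*log(1-sigmoid(x))]; label smoothing just replaces the real-image target t = 1 with 1 - smooth = 0.9. A NumPy version of the naive formula (TF itself uses an equivalent numerically stabilised form) shows how smoothing raises the loss floor:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_xent(logits, targets):
    # Naive -[t*log(p) + (1-t)*log(1-p)]; fine for illustration at moderate logits
    p = sigmoid(logits)
    return -(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))

logits = np.array([2.0, -1.0])
hard = sigmoid_xent(logits, np.ones(2))            # targets = 1
smoothed = sigmoid_xent(logits, np.ones(2) * 0.9)  # label smoothing with smooth = 0.1
print(hard, smoothed)
```

Even a confidently correct logit (2.0) keeps a nonzero smoothed loss, which is the point: the discriminator is discouraged from becoming overconfident on real data.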
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if 'generator' in var.name]
d_vars = [var for var in t_vars if 'discriminator' in var.name]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
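The `var_list` split works because every variable created inside `tf.variable_scope('generator')` gets a name prefixed with `generator/`, so each optimizer only ever updates its own network. The filtering itself is plain Python substring matching (hypothetical variable names below, for illustration):

```python
names = ['generator/dense/kernel:0', 'generator/dense_1/bias:0',
         'discriminator/dense/kernel:0', 'discriminator/dense_1/bias:0']

# Same filter as above, applied to bare name strings
g_names = [n for n in names if 'generator' in n]
d_names = [n for n in names if 'discriminator' in n]
print(g_names, d_names)
```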
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
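The `batch_images*2 - 1` rescale inside the training loop above matters because the generator's `tanh` output lives in (-1, 1), and the discriminator should see real and fake images on the same scale. A quick check of the mapping from the loader's [0, 1] pixel range:

```python
import numpy as np

pixels = np.array([0.0, 0.25, 0.5, 1.0])  # MNIST pixel values from the loader
scaled = pixels * 2 - 1                   # linear map [0, 1] -> [-1, 1]
print(scaled)
```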
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-h', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining the physics
Step2: Defining the time evolution parameters
Step3: Set the conditions which will cause the pulse optimisation to terminate
Step4: Set the initial pulse type
Step5: Give an extension for output files
Step6: Run the optimisation
Step7: Report the results
Step8: Plot the initial and final amplitudes
Step9: Versions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import datetime
from qutip import Qobj, identity, sigmax, sigmay, sigmaz, tensor
from qutip.qip import hadamard_transform
import qutip.logging_utils as logging
logger = logging.get_logger()
#Set this to None or logging.WARN for 'quiet' execution
log_level = logging.INFO
#QuTiP control modules
import qutip.control.pulseoptim as cpo
import qutip.control.symplectic as sympl
example_name = 'Symplectic'
#Drift
w1 = 1
w2 = 1
g1 = 0.5
A0 = Qobj(np.array([[w1, 0, g1, 0],
[0, w1, 0, g1],
[g1, 0, w2, 0],
[0, g1, 0, w2]]))
#Control
Ac = Qobj(np.array([[1, 0, 0, 0,], \
[0, 1, 0, 0], \
[0, 0, 0, 0], \
[0, 0, 0, 0]]))
ctrls = [Ac]
n_ctrls = len(ctrls)
initial = identity(4)
# Target
a = 1
Ag = np.array([[0, 0, a, 0],
[0, 0, 0, a],
[a, 0, 0, 0],
[0, a, 0, 0]])
Sg = Qobj(sympl.calc_omega(2).dot(Ag)).expm()
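The target above is built as the exponential of Omega.A_g. As an independent sanity check -- a sketch assuming the block convention Omega = [[0, I], [-I, 0]] for two modes (QuTiP's `sympl.calc_omega` may order the quadratures differently) -- the exponential of Omega.A for a symmetric A is a symplectic matrix, i.e. it preserves the form Omega:

```python
import numpy as np
from scipy.linalg import expm

# Assumed block convention for the symplectic form of 2 modes;
# QuTiP's sympl.calc_omega may use a different quadrature ordering.
I2 = np.eye(2)
Omega = np.block([[np.zeros((2, 2)), I2],
                  [-I2, np.zeros((2, 2))]])

a = 1.0
Ag = np.array([[0, 0, a, 0],
               [0, 0, 0, a],
               [a, 0, 0, 0],
               [0, a, 0, 0]], dtype=float)

# For symmetric A, Omega @ A is a Hamiltonian matrix, so its exponential
# preserves the symplectic form: S.T @ Omega @ S == Omega
S = expm(Omega @ Ag)
print(np.allclose(S.T @ Omega @ S, Omega))
```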
# Number of time slots
n_ts = 1000
# Time allowed for the evolution
evo_time = 10
# Fidelity error target
fid_err_targ = 1e-10
# Maximum iterations for the optimisation algorithm
max_iter = 500
# Maximum (elapsed) time allowed in seconds
max_wall_time = 30
# Minimum gradient (sum of gradients squared)
# as this tends to 0 -> local minima has been found
min_grad = 1e-20
# pulse type alternatives: RND|ZERO|LIN|SINE|SQUARE|SAW|TRIANGLE|
p_type = 'ZERO'
#Set to None to suppress output files
f_ext = "{}_n_ts{}_ptype{}.txt".format(example_name, n_ts, p_type)
# Note that this call uses
# dyn_type='SYMPL'
# This means that matrices that describe the dynamics are assumed to be
# Symplectic, i.e. the propagator can be calculated using
# expm(combined_dynamics.omega*dt)
# This has defaults for:
# prop_type='FRECHET'
# therefore the propagators and their gradients will be calculated using the
# Frechet method, i.e. an exact gradient
# fid_type='TRACEDIFF'
# so that the fidelity error, i.e. distance from the target, is given
# by the trace of the difference between the target and evolved operators
result = cpo.optimize_pulse(A0, ctrls, initial, Sg, n_ts, evo_time,
fid_err_targ=fid_err_targ, min_grad=min_grad,
max_iter=max_iter, max_wall_time=max_wall_time,
dyn_type='SYMPL',
out_file_ext=f_ext, init_pulse_type=p_type,
log_level=log_level, gen_stats=True)
result.stats.report()
print("Final evolution\n{}\n".format(result.evo_full_final))
print("********* Summary *****************")
print("Final fidelity error {}".format(result.fid_err))
print("Final gradient normal {}".format(result.grad_norm_final))
print("Terminated due to {}".format(result.termination_reason))
print("Number of iterations {}".format(result.num_iter))
print("Completed in {} HH:MM:SS.US".format(datetime.timedelta(seconds=result.wall_time)))
fig1 = plt.figure()
ax1 = fig1.add_subplot(2, 1, 1)
ax1.set_title("Initial Control amps")
ax1.set_xlabel("Time")
ax1.set_ylabel("Control amplitude")
for j in range(n_ctrls):
ax1.step(result.time,
np.hstack((result.initial_amps[:, j], result.initial_amps[-1, j])),
where='post')
ax2 = fig1.add_subplot(2, 1, 2)
ax2.set_title("Optimised Control Amplitudes")
ax2.set_xlabel("Time")
ax2.set_ylabel("Control amplitude")
for j in range(n_ctrls):
ax2.step(result.time,
np.hstack((result.final_amps[:, j], result.final_amps[-1, j])),
where='post')
plt.tight_layout()
plt.show()
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Pearson's test
Step2: Pearson's comparison of microscopy derived images
Step3: Maybe remove so not to clash with Mark's.
|
<ASSISTANT_TASK:>
Python Code:
#This line is very important: (It turns on the inline visuals!)
%pylab inline
a = [2,9,32,12,14,6,9,23,4,5,13,6,7,92,21,45];
b = [7,21,4,2,92,9,9,6,13,12,45,5,6,23,14,32];
#Please calculate the dot product of the vectors 'a' and 'b'.
#You may use any method you like. If you get stuck, check:
#http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html
#If you rearrange the numbers in 'b', what sequence will give
#the highest dot-product magnitude?
#The normalised cross-correlation (at zero lag) is another name for Pearson's test.
#Here it is written in code form and utilising the builtin functions:
c = [0,1,2]
d = [3,4,5]
rho = np.average((c-np.average(c))*(d-np.average(d)))/(np.std(c)*np.std(d))
print('rho',np.round(rho,3))
#equally you can write
rho = np.dot(c-np.average(c),d-np.average(d))/sqrt(((np.dot(c-np.average(c),c-np.average(c)))*np.dot(d-np.average(d),d-np.average(d))))
print('rho',round(rho,3))
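Both formulations above compute the same quantity as NumPy's built-in correlation coefficient, which makes a convenient cross-check (a self-contained snippet with its own toy vectors):

```python
import numpy as np

c = np.array([2.0, 9.0, 32.0, 12.0, 14.0])
d = np.array([7.0, 21.0, 4.0, 2.0, 92.0])

# Manual Pearson's coefficient, as written above
rho_manual = np.average((c - np.average(c)) * (d - np.average(d))) / (np.std(c) * np.std(d))
# NumPy's built-in version
rho_builtin = np.corrcoef(c, d)[0, 1]
print(np.isclose(rho_manual, rho_builtin))
```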
#Why is the rho for c and d, 1.0?
#Edit the variables c and d and find the Pearson's value for 'a' and 'b'.
#What happens when you correlate 'a' with 'a'?
#Here is an image from the Fiji practical
from tifffile import imread as imreadtiff
im = imreadtiff('neuron.tif')
print('image dimensions',im.shape, ' im dtype:',im.dtype)
subplot(2,2,1)
imshow(im[0,:,:],cmap='Blues_r')
subplot(2,2,2)
imshow(im[1,:,:],cmap='Greens_r')
subplot(2,2,3)
imshow(im[2,:,:],cmap='Greys_r')
subplot(2,2,4)
imshow(im[3,:,:],cmap='Reds_r')
a = im[0,:,:].reshape(-1)
b = im[3,:,:].reshape(-1)
#Calculate the Pearson's coefficient (rho) for image channels 0 and 3.
#You should hopefully obtain a value 0.829
#from tifffile import imread as imreadtiff
im = imreadtiff('composite.tif')
#The organisation of this file is not simple. It is also a 16-bit image.
print("shape of im: ",im.shape,"bit-depth: ",im.dtype)
#We can assess the image data like so.
CH0 = im[0,0,:,:]
CH1 = im[1,0,:,:]
#Single channels visualisation can handle 16-bit
subplot(2,2,1)
imshow(CH0,cmap='Reds_r')
subplot(2,2,2)
imshow(CH1,cmap='Greens_r')
subplot(2,2,3)
#RGB data have to range between 0 and 255 in each channel and be int (8-bit).
imRGB = np.zeros((CH0.shape[0],CH0.shape[1],3))
imRGB[:,:,0] = CH0/255.0
imRGB[:,:,1] = CH1/255.0
imshow((imRGB.astype(np.uint8)))
#What is the current Pearson's value for this image?
np.max(imRGB/256.0)
rho_max = 0
#This moves one of your images with respect to the other.
for c in range(1,40):
for r in range(1,40):
#We need to dynamically sample our image.
temp = CH0[c:-40+c,r:-40+r].reshape(-1);
#The -40 makes sure they are the same size.
ref = CH1[:-40,:-40].reshape(-1);
rho = np.dot(temp-np.average(temp),ref-np.average(ref))/sqrt(((np.dot(temp-np.average(temp),temp-np.average(temp)))*np.dot(ref-np.average(ref),ref-np.average(ref))))
    #You will need to work out where the highest rho value is recorded.
#You will then need to find the coordinates of this high rho.
#You will then need to provide a visualisation with the image translated.
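The same sliding-window idea can be illustrated in one dimension: shift a signal against a reference and pick the offset with the highest Pearson coefficient. This toy sketch is independent of the image data above:

```python
import numpy as np

ref = np.sin(np.linspace(0, 4 * np.pi, 200))
sig = np.roll(ref, 17)  # reference shifted by 17 samples

# Scan candidate shifts and keep the one with the highest correlation
best_shift, best_rho = 0, -np.inf
for s in range(40):
    rho = np.corrcoef(np.roll(sig, -s), ref)[0, 1]
    if rho > best_rho:
        best_shift, best_rho = s, rho
print(best_shift)  # 17
```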
np.max(imRGB)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One nice feature of ipython notebooks is it's easy to make small changes to code and
Step2: Transforming string labels into integers
Step3: Transforming string features into integers
Step4: Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
Step5: Predicting well failures with logistic regression
Step6: Comparing logistic regression to tree-based methods
Step7: Congratulations! You have a working data science setup, in which you have
Step8: Now we'll take the to_transform list that you populated above with categorical variables, and use that to loop through columns that will be one-hot encoded.
Step9: Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
Step10: End-to-end workflows using Pipeline and GridSearchCV
Step11: Pipeline
Step12: Reading the classification report
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
features_df = pd.DataFrame.from_csv("well_data.csv")
labels_df = pd.DataFrame.from_csv("well_labels.csv")
print( labels_df.head(20) )
print( features_df.head() )
def label_map(y):
if y=="functional":
return 2
elif y=="functional needs repair":
return 1
else:
return 0
labels_df = labels_df.applymap(label_map)
print( labels_df.head() )
def transform_feature( df, column_name ):
unique_values = set( df[column_name].tolist() )
transformer_dict = {}
for ii, value in enumerate(unique_values):
transformer_dict[value] = ii
def label_map(y):
return transformer_dict[y]
df[column_name] = df[column_name].apply( label_map )
return df
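A quick self-contained check of the encoder above on a toy frame (the function is restated here so the snippet runs on its own; note that because it iterates over a `set`, the integer assigned to each category is not deterministic across runs):

```python
import pandas as pd

def transform_feature(df, column_name):
    # Map each distinct category to an arbitrary but consistent integer code
    unique_values = set(df[column_name].tolist())
    transformer_dict = {value: ii for ii, value in enumerate(unique_values)}
    df[column_name] = df[column_name].apply(lambda y: transformer_dict[y])
    return df

toy = pd.DataFrame({"basin": ["lake", "river", "lake", "spring"]})
toy = transform_feature(toy, "basin")
print(sorted(toy["basin"].unique()))  # three distinct integer codes: [0, 1, 2]
```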
### list of column names indicating which columns to transform;
### this is just a start! Use some of the print( labels_df.head() )
### output upstream to help you decide which columns get the
### transformation
names_of_columns_to_transform = ["funder", "installer", "wpt_name", "basin", "subvillage",
"region", "lga", "ward", "public_meeting", "recorded_by",
"scheme_management", "scheme_name", "permit",
"extraction_type", "extraction_type_group",
"extraction_type_class",
"management", "management_group",
"payment", "payment_type",
"water_quality", "quality_group", "quantity", "quantity_group",
"source", "source_type", "source_class",
"waterpoint_type", "waterpoint_type_group"]
for column in names_of_columns_to_transform:
features_df = transform_feature( features_df, column )
print( features_df.head() )
### remove the "date_recorded" column--we're not going to make use
### of time-series data today
features_df.drop("date_recorded", axis=1, inplace=True)
print(features_df.columns.values)
X = features_df.as_matrix()
y = labels_df["status_group"].tolist()
import sklearn.linear_model
import sklearn.cross_validation
clf = sklearn.linear_model.LogisticRegression()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
import sklearn.tree
import sklearn.ensemble
clf = sklearn.tree.DecisionTreeClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
import sklearn.preprocessing
def hot_encoder(df, column_name):
column = df[column_name].tolist()
column = np.reshape( column, (len(column), 1) ) ### needs to be an N x 1 numpy array
enc = sklearn.preprocessing.OneHotEncoder()
enc.fit( column )
new_column = enc.transform( column ).toarray()
column_titles = []
### making titles for the new columns, and appending them to dataframe
for ii in range( len(new_column[0]) ):
this_column_name = column_name+"_"+str(ii)
df[this_column_name] = new_column[:,ii]
return df
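For comparison, pandas offers the same one-hot expansion in a single call; a minimal sketch using `pd.get_dummies` instead of the scikit-learn encoder above:

```python
import pandas as pd

toy = pd.DataFrame({"quantity": [0, 2, 1, 0]})  # already integer-coded categories
dummies = pd.get_dummies(toy["quantity"], prefix="quantity")
print(list(dummies.columns))  # ['quantity_0', 'quantity_1', 'quantity_2']
```

Each row of the result has exactly one "hot" column, mirroring what `hot_encoder` appends to the dataframe.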
print(features_df.columns.values)
features_df.drop( "funder", axis=1, inplace=True )
features_df.drop( "installer", axis=1, inplace=True )
features_df.drop( "wpt_name", axis=1, inplace=True )
features_df.drop( "subvillage", axis=1, inplace=True )
features_df.drop( "ward", axis=1, inplace=True )
names_of_columns_to_transform.remove("funder")
names_of_columns_to_transform.remove("installer")
names_of_columns_to_transform.remove("wpt_name")
names_of_columns_to_transform.remove("subvillage")
names_of_columns_to_transform.remove("ward")
for feature in names_of_columns_to_transform:
features_df = hot_encoder( features_df, feature )
print( features_df.head() )
X = features_df.as_matrix()
y = labels_df["status_group"].tolist()
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print(score)
import sklearn.feature_selection
select = sklearn.feature_selection.SelectKBest(k=100)
selected_X = select.fit_transform(X, y)
print( selected_X.shape )
import sklearn.pipeline
import sklearn.metrics
select = sklearn.feature_selection.SelectKBest(k=100)
clf = sklearn.ensemble.RandomForestClassifier()
steps = [('feature_selection', select),
('random_forest', clf)]
pipeline = sklearn.pipeline.Pipeline(steps)
X_train, X_test, y_train, y_test = sklearn.cross_validation.train_test_split(X, y, test_size=0.33, random_state=42)
### fit your pipeline on X_train and y_train
pipeline.fit( X_train, y_train )
### call pipeline.predict() on your X_test data to make a set of test predictions
y_prediction = pipeline.predict( X_test )
### test your predictions using sklearn.classification_report()
report = sklearn.metrics.classification_report( y_test, y_prediction )
### and print the report
print(report)
import sklearn.grid_search
#import warnings
#warnings.filterwarnings("ignore")
parameters = dict(feature_selection__k=[100, 200],
random_forest__n_estimators=[50],
random_forest__min_samples_split=[4])
cv = sklearn.grid_search.GridSearchCV(pipeline, param_grid=parameters)
print(pipeline.named_steps)
cv.fit(X_train, y_train)
y_predictions = cv.predict(X_test)
report = sklearn.metrics.classification_report( y_test, y_predictions )
### and print the report
print(report)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For each exercise, fill in the function according to its docstring.
Step2: Algebraic Equations
Step3: Write a function that computes the general solution to the cubic $x^3 + ax^2 + bx + c$.
Step4: Differential Equations
Step5: If the population growth is bounded, it is modeled by
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
from sympy.solvers.solveset import solveset
init_printing()
a, b, c, d, x, y, z, t = symbols('a b c d x y z t')
f, g, h = symbols('f g h', cls=Function)
def quadratic():
return solveset(a*x**2 + b*x + c, x)
quadratic()
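A concrete instance makes a quick cross-check of the symbolic result: the roots of x**2 - 5x + 6 should be 2 and 3.

```python
from sympy import symbols, solveset

x = symbols('x')
roots = solveset(x**2 - 5*x + 6, x)
print(sorted(roots))  # [2, 3]
```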
def cubic():
return solveset(x**3 + a*x**2 + b*x + c, x)
cubic()
dsolve(f(t).diff(t) - a*f(t), f(t))
dsolve(f(t).diff(t) - f(t)*(1 - f(t)), f(t))
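A solution returned by dsolve can be verified by substituting it back into the ODE; for the exponential-growth model, the residual f'(t) - a*f(t) should simplify to zero:

```python
from sympy import symbols, Function, dsolve, simplify

a, t = symbols('a t')
f = Function('f')

sol = dsolve(f(t).diff(t) - a*f(t), f(t))  # Eq(f(t), C1*exp(a*t))
residual = simplify(sol.rhs.diff(t) - a*sol.rhs)
print(residual)  # 0
```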
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Estimation
Step2: When we receive data, we can extract the following information
Step3: To localize the transmitter, we simply take the measurements in $D$ and fit them to the model in receivePowerModel
Step4: As shown above, the estimated result is nearly identical to the simulated parameters, which should be the case, as the only source of noise here is from quantization error.
Step5: Precision Visualization
Step6: For multiple pings, we simply sum the log probability together. The below code simulates three pings, one at $(-50, 0)$, one at $(0, -50)$, and one at $(35, 35)$. Each has a calculated distance of 50 meters, with the distance again represented by the red circles, and ping locations represented by the red triangles.
Step7: In order to generate a heatmap for a dataset, we need to first determine the appropriate heatmap area. As a prerequisite, we must know the estimated transmitter location and ping locations. Because the transmitter location might not be within the polygon that encloses all the ping locations, it is essential to take the transmitter location into account when generating the heatmap area. A simple way to do this would be to take the maximum extents of both estimated transmitter and ping locations. We can then use the same technique as above to generate the heatmap.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt; plt.ion()
from scipy.optimize import least_squares
from scipy import stats as st
def receivePowerModel(d, k, n):
return k - 10 * n * np.log10(d)
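One way to sanity-check this model: each doubling of distance should drop the received power by 10*n*log10(2) ~= 3.01*n dB, independent of k. A quick numeric check (the model is restated with a snake_case name so the snippet is self-contained):

```python
import numpy as np

def receive_power(d, k, n):
    return k - 10 * n * np.log10(d)

k, n = 99.0, 4.0
drop = receive_power(100.0, k, n) - receive_power(200.0, k, n)
print(round(drop, 2))  # 10*n*log10(2) ~= 12.04 dB for n = 4
```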
vx = np.array([[5, 1, 0],
[-1, 5, 0],
[-5, -1, 0],
[1, -5, 0]])
tx = (477954, 3638577, 11, 'S')
txp = 100
n = 4
C = 1
d = 1000
tx_loc = np.array([tx[0], tx[1], 0])
dx_loc = np.array([tx[0] - 100, tx[1], 30])
s_per_leg = 30
c_loc = dx_loc.copy()  # copy so the in-place += below does not mutate dx_loc
D_p = []
for leg in range(4):
for s in range(s_per_leg):
c_loc += vx[leg,:]
dist = np.linalg.norm(tx_loc - c_loc)
R = receivePowerModel(dist, txp - C, n)
D_p.append([c_loc[0], c_loc[1], c_loc[2], R])
simulated_D = np.array(D_p)
plt.scatter(simulated_D[:,0], simulated_D[:,1], c = simulated_D[:,3])
plt.colorbar(label="Ping Power")
plt.xlabel("Easting (m)")
plt.ylabel("Northing (m)")
plt.title("Simulated Data with No Noise")
plt.show()
def residuals(params, data):
tx = params[0]
ty = params[1]
t_loc = np.array([tx, ty, 0])
k = params[2]
n = params[3]
R = data[:,3]
d_loc = data[:,0:3]
residuals = np.zeros(len(R))
for i in range(len(R)):
residuals[i] = R[i] - receivePowerModel(np.linalg.norm(t_loc - d_loc[i,:]), k, n)
return residuals
initialGuess = np.array([np.mean(simulated_D[:,0]), np.mean(simulated_D[:,1]),
np.max(simulated_D[:,3]), 4])
results = least_squares(residuals, initialGuess, kwargs={'data':simulated_D})
print(results.x)
print("Lateral error: %f" % np.linalg.norm(tx_loc - np.array([results.x[0], results.x[1], 0])))
print("k error: %f" % (txp - C - results.x[2]))
print("n error: %f" % (n - results.x[3]))
c_loc = dx_loc.copy()  # copy so the in-place += below does not mutate dx_loc
D_p = []
s_xy = np.sqrt(3)
s_z = np.sqrt(0.5)
s_R = np.sqrt(5)
for leg in range(4):
for s in range(s_per_leg):
c_loc += vx[leg,:]
dist = np.linalg.norm(tx_loc - c_loc)
R = receivePowerModel(dist, txp - C, n)
D_p.append([c_loc[0] + np.random.normal(0,s_xy),
c_loc[1] + np.random.normal(0,s_xy),
c_loc[2] + np.random.normal(0,s_z),
R + np.random.normal(0,s_R)])
simulated_D_error = np.array(D_p)
plt.scatter(simulated_D_error[:,0], simulated_D_error[:,1], c = simulated_D_error[:,3])
plt.colorbar(label="Ping Power")
plt.xlabel("Easting (m)")
plt.ylabel("Northing (m)")
plt.title("Simulated Data with Noise")
plt.show()
initialGuess = np.array([np.mean(simulated_D_error[:,0]), np.mean(simulated_D_error[:,1]),
np.max(simulated_D_error[:,3]), 4])
results = least_squares(residuals, initialGuess, kwargs={'data':simulated_D_error})
print(results.x)
print("Lateral error: %f" % np.linalg.norm(tx_loc - np.array([results.x[0], results.x[1], 0])))
print("k error: %f" % (txp - C - results.x[2]))
print("n error: %f" % (n - results.x[3]))
width = 100
dist = 50
field = np.zeros((2 * width, 2 * width))
origin = np.array([width, width])
stddev = 0.4 * dist
mean = dist
rv = st.norm(loc = mean, scale = stddev)
for x in range(field.shape[0]):
for y in range(field.shape[1]):
r = np.linalg.norm(np.array([x, y]) - origin)
field[x, y] = rv.pdf(r)
fig = plt.figure()
plt.imshow(field, extent=(-width, width, -width, width), origin='lower')
plt.colorbar()
plt.scatter(0, 0, marker='^', color='red')
cir = plt.Circle((0, 0), radius=dist, fill=False, edgecolor='red')
fig.gca().add_artist(cir)
plt.xlabel('Easting (m)')
plt.ylabel('Northing (m)')
plt.title('Certainty Distribution for a Single Ping')
plt.show()
width = 120
dist = 50
pings = [np.array([-dist, 0]), np.array([0, -dist]), np.array([.7*dist, .7*dist])]
field = np.zeros((2 * width, 2 * width))
origin = np.array([width, width])
stddev = 0.4 * dist
mean = dist
rv = st.norm(loc = mean, scale = stddev)
for x in range(field.shape[0]):
for y in range(field.shape[1]):
for ping in pings:
r = np.linalg.norm(np.array([x, y]) - origin - ping)
field[x, y] += np.log10(rv.pdf(r))
fig1 = plt.figure()
plt.imshow(np.power(10, field), extent=(-width, width, -width, width), origin='lower')
plt.colorbar()
for ping in pings:
plt.scatter(ping[0], ping[1], marker='^', color='red')
cir = plt.Circle(ping, radius=dist, fill=False, edgecolor='red')
fig1.gca().add_artist(cir)
plt.xlabel('Easting (m)')
plt.ylabel('Northing (m)')
plt.title('Certainty Distribution for Three Pings')
plt.show()
def distancePowerModel(R, k, n):
return np.power(10, (k - R) / (10 * n))
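A quick round-trip check of this inversion, assuming the `receivePowerModel` used earlier has the standard log-distance form R = k − 10·n·log10(d), which this inverse implies; the constants below are made up purely for illustration.

```python
import math

# Hypothetical inverse of the log-distance model, mirroring distancePowerModel.
def distance_from_power(R, k, n):
    return 10 ** ((k - R) / (10 * n))

k, n, d = 60.0, 2.0, 50.0           # illustrative model constants and true range
R = k - 10 * n * math.log10(d)      # forward model: received power at distance d
d_back = distance_from_power(R, k, n)  # should recover d
```

If the forward and inverse models agree, `d_back` equals the original distance to within floating-point error.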
simulated_D_error
new_D = simulated_D_error[simulated_D_error[:,3] > 20,:]
tx_loc = results.x[0:2]
n = results.x[3]
k = results.x[2]
resolution = 0.25 #px per meter
border = 20 # meter
max_x = max(np.max(simulated_D_error[:,0]), tx_loc[0]) + border
min_x = min(np.min(simulated_D_error[:,0]), tx_loc[0]) - border
max_y = max(np.max(simulated_D_error[:,1]), tx_loc[1]) + border
min_y = min(np.min(simulated_D_error[:,1]), tx_loc[1]) - border
# origin is top left, 0.25 px / m resolution
field = np.zeros((int((max_x - min_x) * resolution), int((max_y - min_y) * resolution)))
origin = np.array([max_x, min_y])
means = [distancePowerModel(R, k, n) for R in new_D[:,3]]
rv = [st.norm(loc = d, scale = 0.4 * d) for d in means]
for x in range(field.shape[0]):
for y in range(field.shape[1]):
px_loc = origin + np.array([-x, y]) / resolution
for ping_idx in range(len(rv)):
distance = np.linalg.norm(px_loc - new_D[ping_idx,0:2])
if distance < 2 * means[ping_idx]:
field[x, y] += np.log10(rv[ping_idx].pdf(distance))
plt.imshow(np.power(10, field), extent=(min_x, max_x, min_y, max_y), origin="lower")
tx_loc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us take a look at the data in 2D (we ignore the intercept which is constantly equal to 1).
Step2: The data was generated so that we have two quite distinct classes. This is usually not the case in reality, and for this reason we will see what happens when outliers are introduced (see homework below).
Step3: Defining the cost function
Step4: Now that we are able to compute the loss for a single training sample, we can get the total cost.
Step5: Recall that the prediction on a feature vector $x$ is given by the value of $j$ that maximizes the score $\theta_j^T x$.
Step6: Gradient
Step7: The last quantity needed in order to apply gradient descent is the total gradient (required for batch gradient descent; for stochastic gradient descent, the gradient for a single sample is enough).
Step8: Now that we have the gradient, we can apply the gradient descent algorithm.
|
<ASSISTANT_TASK:>
Python Code:
k_classes = 2
X = [[1., 1.5, 0.2], [1., 0.3, 1.2], [1, 1.6, 0.4], [1., 1.3, 0.25], [1., 0.5, 1.12]]
Y = [1, 2, 1, 1, 2]
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
X1 = [x[1] for x in X]
X2 = [x[2] for x in X]
plt.scatter(X1, X2, c=Y) # plot x1, x2, color is defined by the label y
plt.show()
def score(x, theta):
d = len(x)
thetaTx = 0
for idx in range(d):
thetaTx += x[idx] * theta[idx]
return thetaTx
def cost_function(x, y, thetas, delta):
    thetayTx = score(x, thetas[y - 1])  # score of the true class (labels are 1-indexed)
    loss = 0
    k_classes = len(thetas)
    for j in range(k_classes):
        if j + 1 != y:
            thetajTx = score(x, thetas[j])
            loss += max(0, thetajTx - thetayTx + delta)
    return loss
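The per-sample cost above is the multiclass hinge loss. A standalone sketch (with a hypothetical `hinge_loss` helper that takes the class scores directly instead of computing them via `score`) makes the margin behaviour easy to verify:

```python
# Hypothetical helper: multiclass hinge loss given precomputed class scores.
def hinge_loss(scores, y_idx, delta):
    return sum(max(0, s - scores[y_idx] + delta)
               for j, s in enumerate(scores) if j != y_idx)

loss_ok = hinge_loss([2.0, 1.0], 0, 1.0)   # true class wins by exactly delta -> zero loss
loss_bad = hinge_loss([2.0, 2.5], 0, 1.0)  # wrong class wins -> positive loss
```

Only classes that come within `delta` of (or beat) the true class's score contribute to the loss.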
def cost_function_total(X, Y, thetas, delta):
cost = 0 # initialize the cost with 0
n = len(Y)
for i in range(n): # iterate over the training set
x = X[i] # get the ith feature vector
y = Y[i] # get the ith label
cost += cost_function(x, y, thetas, delta) # add the cost of the current sample to the total cost
return cost
def initialize_thetas(X, k_classes):
    d = len(X[1])
    # a list comprehension gives each class its own, independent theta list
    # ([[0] * d] * k_classes would make every class share the same list object)
    return [[0] * d for _ in range(k_classes)]
thetas_0 = initialize_thetas(X, 2)
def predict(x, thetas):
    k_classes = len(thetas)
    prediction = 0
    highest_score = score(x, thetas[prediction]) # initialize with the first class
    for idx_class in range(k_classes):
        class_score = score(x, thetas[idx_class])
        if class_score > highest_score:
            highest_score = class_score # remember the best score seen so far
            prediction = idx_class
    return prediction + 1
predict(X[0], thetas_0)
def gradients(x, y, thetas, delta):
d = len(x)
k_classes = len(thetas)
predicted_class = predict(x, thetas)
    grads = [[0] * d for _ in range(k_classes)] # independent zero gradients, one per class
for idx_class in range(k_classes): # iterate over all the classes to compute the gradient for each class
# there are 2 formulas: one for the true class (given by 'y') and another one for the other classes
if idx_class + 1 == y: # if idx_class is equal to the actual class
p = 0
for j in range(k_classes):
if j + 1 != y: # are counting over the classes different than the actual class
if score(x, thetas[j]) - score(x, thetas[y - 1]) + delta > 0:
p += 1
for idx in range(d):
grads[idx_class][idx] = - p * x[idx]
else: # if idx_class is not the actual class
if score(x, thetas[idx_class]) - score(x, thetas[y - 1]) + delta > 0:
for idx in range(d):
grads[idx_class][idx] = x[idx]
# we do not need an else statement here because the gradient would be equal to 0 in this case,
# and the gradient has been initialized with zeros
return grads
# quick sanity check: gradients for the first training sample
print(gradients(X[0], Y[0], thetas_0, 4.0))
# For the sake of clarity, we first define a function that sums vectors elementwise
def sum_vectors(x1, x2):
d = len(x1)
    sum_vector = list(x1)  # copy so the caller's list is not mutated
for idx in range(d):
sum_vector[idx] += x2[idx]
return sum_vector
def gradient_total(X, Y, thetas, delta):
n = len(Y) # number of training samples
d = len(X[1])
k_classes = len(thetas)
    grads_sum = [[0] * d for _ in range(k_classes)] # independent zero lists, one per class
for i in range(n):
x = X[i]
y = Y[i]
grads = gradients(x, y, thetas, delta) # get the gradient for the current sample
for j in range(k_classes):
grads_sum[j] = sum_vectors(grads[j], grads_sum[j]) # add it to the total gradients
return grads_sum
# quick sanity check: total gradient at the initial thetas
gradient_total(X, Y, thetas_0, 4.0)
# For the sake of readability, we define a function axpb (that stands for a x plus b) that outputs a * x + b
# where a is a scalar and x and b are vectors
def axpb(a, x, b):
# x and b should have the same size, a is a scalar
d = len(x)
    sum_vector = list(b)  # copy so b is not mutated in place
for idx in range(d):
sum_vector[idx] += a * x[idx]
return sum_vector
def gradient_descent(X, Y, delta, learning_rate):
k_classes = len(set(Y))
thetas = initialize_thetas(X, k_classes)
for i_iter in range(5):
grads = gradient_total(X, Y, thetas, delta)
for j in range(k_classes):
thetas[j] = axpb(-learning_rate, grads[j], thetas[j])
cost = cost_function_total(X, Y, thetas, delta)
print("iteration " + str(i_iter) + ", cost = " + str(cost))
return thetas
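The update `thetas[j] <- thetas[j] - learning_rate * grad` used above is plain gradient descent. A self-contained toy sketch on a 1-D quadratic (unrelated to the SVM objective) shows the iteration converging:

```python
# Minimize f(t) = (t - 3)**2 by repeated steps t <- t - lr * f'(t).
t, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (t - 3)   # derivative of (t - 3)**2
    t -= lr * grad       # each step shrinks |t - 3| by a constant factor
```

After 100 steps `t` sits within about 1e-9 of the minimizer t = 3.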
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: We now must create a geometry that is assigned a root universe and export it to XML.
Step10: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step11: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
Step12: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our fuel assembly geometry.
Step14: Now, we can run OpenMC to generate the cross sections.
Step15: Tally Data Processing
Step16: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step18: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step19: Cross Section Visualizations
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
# Create a materials collection and export to XML
materials = openmc.Materials((fuel, water, zircaloy))
materials.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(R=0.39218)
clad_outer_radius = openmc.ZCylinder(R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Create universes array with the fuel pin and guide tube universes
universes = np.tile(fuel_pin_universe, (17,17))
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and export to XML
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
# Plot our geometry
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
openmc.plot_inline(plot)
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
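For reference, `np.logspace(-3, 7.3, 21)` produces 21 group edges evenly spaced in log10 between 1e-3 eV and 10**7.3 eV. A dependency-free sketch of the same construction (illustration only, not part of the OpenMC workflow):

```python
# Equivalent of np.logspace(lo, hi, npts): npts points uniform in log10 space.
lo, hi, npts = -3.0, 7.3, 21
edges = [10 ** (lo + i * (hi - lo) / (npts - 1)) for i in range(npts)]
```

21 edges bound exactly 20 energy groups, matching the 20-group structure used here.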
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.num_delayed_groups = 6
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
# Set the half-lives (in seconds) of the delayed precursors; the decay constants (s^-1) follow
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = math.log(2.0) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
current_tally.get_pandas_dataframe().head(10)
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The variable rjecnik is an empty dictionary, i.e. it does not contain a single ordered pair (key
Step2: The variable rjecnik contains two ordered pairs. The first ordered pair is 'a'
Step3: Dictionaries are mutable data types, so their contents can also be changed later. If we want to change the value of an existing key, we can do it in the following way
Step4: The syntax for working with dictionaries is similar to the one for lists, except that instead of a list index, a key is specified inside the square brackets. The example also shows that a dictionary cannot contain two identical keys. By assigning another value under an existing key we delete the previous value under that key.
Step5: If we fetch the value of a key that does not exist in the dictionary this way, a KeyError is raised, which points out that the error is in the key name
Step6: Another way of fetching the value for a key is with the get() method
Step7: If, however, we fetch the value of a key that does not exist in the dictionary this way, we get the value None
Step8: Since we will not always know whether a key exists in the dictionary, the safest way of fetching a value is with the get() method.
Step9: In statistical analysis and modeling of text, we will use dictionaries for counting various events in the text. A very important operation for us is therefore increasing the value for some key by 1. For that we will use the following statement
Step10: Under a given key in the dictionary we store the value so far under that key, or 0, increased by 1. In our case, after executing this statement the value 1 will be stored under the key. If we ran the statement again, the value 2 would be stored under the key. What we will do regularly is pass through a sequence of events in a text (characters, character sequences – so-called n-graphs, words, word sequences – so-called n-grams) and record in the dictionary how many times we have encountered a given event.
Step11: Let us see how we can iterate over a dictionary. So far we have iterated over a string (we know that this visits every character of the string), a list (we visit every element of the list) and a file (we visit every line of the file). When we iterate over a dictionary, we visit all the keys of that dictionary. To repeat
Step12: This example shows that the order of the ordered pairs is not defined, since we are dealing with a set.
Step13: Let us get to know three more dictionary methods - keys() for fetching all the keys of a dictionary, values() for fetching all the values, and items() for fetching all (key, value) pairs
Step14: The example shows that the keys() and values() methods return a list whose elements are the keys or the values. The items() method returns a list of pairs not in square but in round brackets. That method, namely, returns a list of n-tuples. N-tuples are a data type we encounter here for the first time. N-tuples are the immutable counterpart of mutable lists, and we will use them somewhat later, when we will want to record sequences of words under dictionary keys. Recall that dictionaries accept only immutable data types as keys.
|
<ASSISTANT_TASK:>
Python Code:
rjecnik={}
rjecnik={'a':5, 'b':3.8}
rjecnik={'a':5, 'b':3.8}
len(rjecnik)
rjecnik={'a':5, 'b':3.8}
rjecnik['b']=9
print rjecnik
print rjecnik['a']
rjecnik={'a':5, 'b':3.8}
print rjecnik['c']
rjecnik={'a':5, 'b':3.8}
print rjecnik.get('a')
rjecnik={'a':5, 'b':3.8}
print rjecnik.get('c')
rjecnik={'a':5, 'b':3.8}
print rjecnik.get('c',0)
rjecnik={'a':5, 'b':3.8}
rjecnik['c']=rjecnik.get('c',0)+1
print rjecnik.get('c')
rjecnik={}
niz='otorinolaringologija'
for znak in niz:
rjecnik[znak]=rjecnik.get(znak,0)+1
print rjecnik
rjecnik={'a':5, 'b':3.8, 'c':1}
for kljuc in rjecnik:
    print 'key '+kljuc+' has value '+str(rjecnik[kljuc])
rjecnik={'a':5, 'b':3.8, 'c':1}
'a' in rjecnik
rjecnik={'a':5, 'b':3.8, 'c':1}
'd' in rjecnik
rjecnik={'a':5, 'b':3.8, 'c':1}
print rjecnik.keys()
print rjecnik.values()
print rjecnik.items()
rjecnik={'a':5, 'b':3.8, 'c':1, 'd':10, 'e':2}
sorted(rjecnik.items(),key=lambda x:-x[1])
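The pairs returned by items() are tuples, and because tuples are immutable they can themselves serve as dictionary keys — exactly what the n-gram counting mentioned above needs. A minimal sketch (the bigram below is a made-up example):

```python
# Count occurrences of a word bigram using a tuple as the dictionary key.
counts = {}
bigram = ('the', 'cat')                      # a 2-tuple of words (a bigram)
counts[bigram] = counts.get(bigram, 0) + 1   # first occurrence
counts[bigram] = counts.get(bigram, 0) + 1   # seen again
```

A list could not be used as the key here, since lists are mutable and therefore unhashable.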
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Every method implemented on scikit-learn assumes that data comes in a dataset. Scikit-learn includes a few well-known datasets. The Iris flower dataset includes information about 150 instances from three different Iris flower species, including sepal and petal length and width. The natural task to solve using this dataset is to learn to guess the Iris species knowing the sepal and petal measures. Let's import the dataset and show the values for the first instance
Step2: The dataset includes 150 instances, with 4 attributes each. Our first step will be to separate the dataset into to separate sets, using 75% of the instances for training our classifier, and the remaining 25% for evaluating it (and, in this case, taking only two features, sepal width and length). We will also perform feature scaling
Step3: Let's plot the training data using pyplot
Step4: Note that setosa is easily separable from the other two classes, while versicolor and virginica are pretty mixed together... To implement linear classification, we will use the SGDClassifier from scikit-learn. SGD stands for Stochastic Gradient Descent, a very popular numerical procedure to find the local minimum of a function (in this case, the loss function, which measures how far every instance is from our boundary). The algorithm will learn the coefficients of the hyperplane by minimizing the loss function. Let's fit a Linear Classification method to our training data, and show the resulting hyperplane
Step5: Plot the three calculated decision curves. Note that Class 0 is linearly separable, while Class 1 and Class 2 are not
Step6: Let's see how our classifier can predict the class of a certain instance, given its sepal length and width
Step7: Let's see how good our classifier is on our training set, measuring accuracy
Step8: To get a better idea of the expected performance of our classifier on unseen data, que must measure accuracy on the testing set
Step9: Let's try some additional measures
Step10: Now, let's try cross-validation
Step12: Calculate the mean and standard error of cross-validation accuracy
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
from sklearn import datasets
iris = datasets.load_iris()
X_iris, y_iris = iris.data, iris.target
print X_iris.shape, y_iris.shape
print X_iris[0], y_iris[0]
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
# Get dataset with only the first two attributes
X, y = X_iris[:,:2], y_iris
# Split the dataset into a trainig and a testing set
# Test set will be the 25% taken randomly
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
print X_train.shape, y_train.shape
# Standarize the features
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
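`train_test_split` with `test_size=0.25` is essentially a shuffle-then-slice. A stdlib-only sketch of the idea (the `random.seed(33)` here is only analogous to `random_state=33` and will not reproduce scikit-learn's exact split):

```python
import random

# Shuffle the 150 instance indices, then take the first 75% for training.
random.seed(33)
idx = list(range(150))
random.shuffle(idx)
cut = int(150 * 0.75)
train_idx, test_idx = idx[:cut], idx[cut:]
```

Every instance ends up in exactly one of the two sets.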
colors = ['red', 'greenyellow', 'blue']
for i in xrange(len(colors)):
px = X_train[:, 0][y_train == i]
py = X_train[:, 1][y_train == i]
plt.scatter(px, py, c=colors[i])
plt.legend(iris.target_names)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
# create the linear model classifier
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier()
# fit (train) the classifier
clf.fit(X_train, y_train)
# print learned coeficients
print clf.coef_
print clf.intercept_
x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5
y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5
xs = np.arange(x_min,x_max,0.5)
fig, axes = plt.subplots(1,3)
fig.set_size_inches(10,6)
for i in [0,1,2]:
axes[i].set_aspect('equal')
axes[i].set_title('Class ' + str(i) + ' versus the rest')
axes[i].set_xlabel('Sepal length')
axes[i].set_ylabel('Sepal width')
axes[i].set_xlim(x_min, x_max)
axes[i].set_ylim(y_min, y_max)
plt.sca(axes[i])
for j in xrange(len(colors)):
px = X_train[:, 0][y_train == j]
py = X_train[:, 1][y_train == j]
plt.scatter(px, py, c=colors[j])
ys = (-clf.intercept_[i]-xs*clf.coef_[i,0])/clf.coef_[i,1]
plt.plot(xs,ys,hold=True)
print clf.predict(scaler.transform([[4.7, 3.1]]))
print clf.decision_function(scaler.transform([[4.7, 3.1]]))
from sklearn import metrics
y_train_pred = clf.predict(X_train)
print metrics.accuracy_score(y_train, y_train_pred)
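`metrics.accuracy_score` is simply the fraction of predictions that match the true labels; a minimal stdlib sketch with made-up labels:

```python
# Accuracy = (number of correct predictions) / (number of samples).
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]
acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / float(len(y_true))
```

Here 4 of 5 predictions are correct, so the accuracy is 0.8.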
y_pred = clf.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
print metrics.classification_report(y_test, y_pred, target_names=iris.target_names)
print metrics.confusion_matrix(y_test, y_pred)
from sklearn.cross_validation import cross_val_score, KFold
from sklearn.pipeline import Pipeline
# create a composite estimator made by a pipeline of the standarization and the linear model
clf = Pipeline([
('scaler', StandardScaler()),
('linear_model', SGDClassifier())
])
# create a k-fold croos validation iterator of k=5 folds
cv = KFold(X.shape[0], 5, shuffle=True, random_state=33)
# by default the score used is the one returned by score method of the estimator (accuracy)
scores = cross_val_score(clf, X, y, cv=cv)
print scores
from scipy.stats import sem
def mean_score(scores):
    """Print the empirical mean score and standard error of the mean."""
return ("Mean score: {0:.3f} (+/-{1:.3f})").format(
np.mean(scores), sem(scores))
print mean_score(scores)
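`scipy.stats.sem` is the sample standard deviation (with `ddof=1`) divided by the square root of the sample size; the same quantity computed with the stdlib only, on made-up scores:

```python
import math

# Standard error of the mean: sqrt(unbiased variance) / sqrt(n).
vals = [0.9, 0.8, 1.0, 0.7, 0.85]
n = len(vals)
mean = sum(vals) / float(n)
var = sum((v - mean) ** 2 for v in vals) / (n - 1)   # unbiased sample variance
sem_val = math.sqrt(var) / math.sqrt(n)
```

For these five values the mean is 0.85 and the standard error works out to 0.05.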
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'name': ['matt', 'james', 'adam'],
'status': ['active', 'active', 'inactive'],
'number': [12345, 23456, 34567],
'message': ['[job: , money: none, wife: none]',
'[group: band, wife: yes, money: 10000]',
'[job: none, money: none, wife: , kids: one, group: jail]']})
import yaml
import numpy as np  # needed for np.nan below
def g(df):
df.message = df.message.replace(['\[','\]'],['{','}'], regex=True).apply(yaml.safe_load)
df1 = pd.DataFrame(df.pop('message').values.tolist(), index=df.index)
result = pd.concat([df, df1], axis=1)
result = result.replace('', 'none')
result = result.replace(np.nan, 'none')
return result
result = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets
Step3: Running Compute
Step4: Now we'll compute synthetics at the times provided using the default options
Step5: Plotting
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('mesh', compute_times=[0.75], dataset='mesh01')
b['requiv@primary@component'] = 1.8
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
afig, mplfig = b.plot(model='rochemodel',show=True)
afig, mplfig = b.plot(model='rotstarmodel',show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create mosaics
Step2: Parallel production
Step6: xy2latlon
Step7: Execute campt for all obsids
Step8: Combine obsid results and merge with marking related files
Step9: Tile coordinates
Step10: Check for presence of campt files
Step11: Metadata
|
<ASSISTANT_TASK:>
Python Code:
from hirise_tools.downloads import download_RED_product
from hirise_tools.products import RED_PRODUCT_ID
import logging
from nbtools.logging import setup_live_logging
from planet4 import io, region_data
db = io.DBManager()
roi = region_data.Potsdam()
obsids = roi.all_obsids
obsids
obsids = db.obsids
len(obsids)
setup_live_logging('planet4', logging.DEBUG)
from planet4.projection import create_RED45_mosaic
from planet4 import io
from nbtools import execute_in_parallel
len(obsids)
results = execute_in_parallel(create_RED45_mosaic, obsids)
for res in results:
print(res)
from ipyparallel import Client
c = Client()
lbview = c.load_balanced_view()
dview = c.direct_view()
with c[:].sync_imports():
from planet4.projection import create_RED45_mosaic
results = lbview.map_async(create_RED45_mosaic, obsids)
from nbtools import display_multi_progress
display_multi_progress(results, obsids)
for res in results:
print(res)
from pysis.isis import campt
from pysis.exceptions import ProcessError
import pvl
from planet4 import io, catalog_production as cp
from pathlib import Path
import pandas as pd  # needed for pd.read_csv below
# clusterpath = io.analysis_folder() / 'p4_catalog'
rm = cp.ReleaseManager('v1.0')
rm.calc_tile_coordinates()
rm.calc_metadata()
rm.merge_all()
meta = pd.read_csv(rm.metadata_path)
meta.query('map_scale==0.25')
obsids = rm.obsids
with dview.sync_imports():
from pysis.isis import campt
from pysis.exceptions import ProcessError
from pathlib import Path
from ipyparallel import CompositeError
def do_campt(mosaicname, savepath, temppath):
try:
campt(from_=mosaicname, to=savepath, format='flat', append='no',
coordlist=temppath, coordtype='image')
except ProcessError as e:
print(e.stderr)
        return False
def obsid_marking_from_clusterpath(inpath):
    """Find obsid and marking kind from HiRISE cluster path.

    For example, a cluster path like this:
    '/Users/klay6683/Dropbox/data/planet4/p4_analysis/p4_catalog/ESP_011350_0945_blotches.csv'
    will return:
    ('ESP_011350_0945', 'blotches')
    """
s = inpath.stem # stem is 'name' (without folders) minus extension
# s = ESP_xxxxxx_yyyy_blotches (or ..._fans)
# obsid = s[:until last '_']
sep = s.rfind('_')
return s[:sep], s[sep+1:]
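A minimal, self-contained sanity check of the stem-splitting rule above; the path below is hypothetical, only the obsid pattern matches the real data:

```python
from pathlib import Path

def split_stem(inpath):
    # Same rule as obsid_marking_from_clusterpath: split the file stem
    # at its last underscore into (obsid, marking).
    s = Path(inpath).stem
    sep = s.rfind('_')
    return s[:sep], s[sep + 1:]

print(split_stem('/tmp/ESP_011350_0945_blotches.csv'))
# -> ('ESP_011350_0945', 'blotches')
```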
class XY2LATLON:
edrpath = io.get_ground_projection_root()
def __init__(self, df, inpath, overwrite=False, obsid=None):
self.obsid = obsid
self.df = df
self.inpath = inpath
self.overwrite = overwrite
self.edrpath = io.get_ground_projection_root()
self._obsid = obsid
@property
def obsid(self):
return self.df.image_name.iloc[0] if self._obsid is None else self._obsid
@obsid.setter
def obsid(self, value):
self._obsid = value
@property
def mosaicname(self):
return f"{self.obsid}_mosaic_RED45.cub"
@property
def mosaicpath(self):
return self.edrpath / self.obsid / self.mosaicname
@property
def savepath(self):
return self.inpath / f"{self.obsid}_campt_out.csv"
@property
def savepath_blotch(self):
return self.inpath / f"{self.obsid}_blotch_campt_out.csv"
@property
def savepath_fan(self):
return self.inpath / f"{self.obsid}_fan_campt_out.csv"
@property
def temppath(self):
return self.inpath / f"{self.obsid}.tocampt"
def process_inpath(self):
df = self.df
tempcoords = ['image_x', 'image_y']
df[tempcoords].to_csv(str(self.temppath), header=False, index=False)
if self.savepath.exists() and self.overwrite is False:
return
try:
do_campt(self.mosaicpath, self.savepath, self.temppath)
except Exception as e:
print(e)
return False
def combine_marking_files(self):
try:
f = pd.read_csv(self.savepath_fan)
except FileNotFoundError:
f = None
try:
b = pd.read_csv(self.savepath_blotch)
except FileNotFoundError:
b = None
pd.concat([f, b]).to_csv(self.savepath, index=False)
from pysis.isis import campt
from pysis.exceptions import ProcessError
import pvl
from astropy import units as u
from astropy.coordinates import CartesianRepresentation
def vec_unit_to_astro(pvlunit):
rep = CartesianRepresentation(pvlunit.value, unit=pvlunit.units)
return rep
class CAMPTER:
edrpath = io.get_ground_projection_root()
def __init__(self, obsid):
self.obsid = obsid
@property
def mosaicpath(self):
mosaicname = f"{self.obsid}_mosaic_RED45.cub"
return self.edrpath / self.obsid / mosaicname
def execute(self, sample, line):
try:
return campt(from_=self.mosaicpath, SAMPLE=sample, LINE=line, type_='image')
except ProcessError as e:
print(e.stderr)
raise ValueError("CAMPT failed.")
def calc_point(self, sample, line):
obj = pvl.loads(self.execute(sample, line))
self.dic = obj['GroundPoint']
return self.dic
@property
def body_vector(self):
vec = self.dic['BodyFixedCoordinate']
return vec_unit_to_astro(vec)
@property
def lat(self):
return self.dic['PlanetographicLatitude'].value*u.deg
def get_two_results(self, p1, p2):
dic1 = self.calc_point(*p1)
dic2 = self.calc_point(*p2)
return dic1, dic2
def calc_pixel_diff_km(self, p1, p2):
        """Calculate BodyFixed Vector difference for two pixel coordinates.

        Parameters
        ----------
        p1, p2 : tuple
            Tuples of (SAMPLE, LINE) coordinates
        """
dic1, dic2 = self.get_two_results(p1, p2)
key = 'BodyFixedCoordinate'
v1 = vec_unit_to_astro(dic1[key])
v2 = vec_unit_to_astro(dic2[key])
return (v1 - v2)
def calc_pixel_diff_latlon(self, p1, p2, key='lat'):
        """Calculate Lat/Lon difference for two pixel coordinates.

        Parameters
        ----------
        p1, p2 : tuple
            Tuples of (SAMPLE, LINE) coordinates
        """
switch = dict(lat='PlanetographicLatitude',
lon='PositiveEast360Longitude')
dic1, dic2 = self.get_two_results(p1, p2)
key = switch[key]
lat1 = dic1[key].value * u.deg
lat2 = dic2[key].value * u.deg
return (lat1 - lat2)
campter = CAMPTER('ESP_011680_1055')
campter.calc_point(1, 1)  # populates campter.dic
campter.dic['BodyFixedCoordinate']
campter.calc_pixel_diff_km((1,1), (1.5,1))
campter.calc_pixel_diff_latlon((1,1), (2,1), key='lat').value  # ~3.1e-06 deg
campter.dic.keys()
from astropy import units as u
from astropy.coordinates import CartesianRepresentation
rep = CartesianRepresentation(campter.body_vector.value, unit=campter.body_vector.units)
rep
def xy2latlon(df, inpath):
    d = dict(inpath=inpath)
    xy = XY2LATLON(df, inpath)
    ok = xy.process_inpath()
    d['ok'] = ok
    return d
blotches = rm.read_blotch_file().drop_duplicates()
fans = rm.read_fan_file().drop_duplicates()
combined = pd.concat([blotches, fans])
obsids = combined.obsid.unique()
len(obsids)
for obsid in obsids:
xy = XY2LATLON(combined, rm.savefolder, obsid=obsid)
xy.combine_marking_files()
from tqdm import tqdm
for obsid in tqdm(obsids):
data = combined[combined.image_name==obsid]
xy = XY2LATLON(data, rm.savefolder)
xy.process_inpath()
pd.set_option('display.max_columns', 100)
cols_to_merge = ("image_name image_x image_y PlanetocentricLatitude PlanetographicLatitude "
"PositiveEast360Longitude BodyFixedCoordinateX BodyFixedCoordinateY "
"BodyFixedCoordinateZ".split())
index = ['image_name', 'image_x', 'image_y']
def combine_campt_into_catalog():
for marking in ['fans', 'blotches']:
print(marking)
data = eval(marking)
bucket = []
for obsid in obsids:
df = data.query('image_name==@obsid')
if df.shape[0] == 0:
continue
xy = XY2LATLON(df, rm.savefolder, obsid=obsid)
savepath = xy.savepath_blotch if marking=='blotches' else xy.savepath_fan
bucket.append(pd.read_csv(savepath).assign(image_name=obsid))
ground = pd.concat(bucket).drop_duplicates()
ground.rename(dict(Sample='image_x', Line='image_y'), axis=1, inplace=True)
data.sort_values(by=index, inplace=True)
ground.sort_values(by=index, inplace=True)
merged = data.merge(ground[cols_to_merge], on=index)
savepath = rm.blotch_merged if marking == 'blotches' else rm.fan_merged
if marking == 'fans':
merged.version = merged.version.astype('int')
merged.to_csv(savepath, index=False)
cols_to_merge = ("obsid image_x image_y PlanetocentricLatitude PlanetographicLatitude "
"PositiveEast360Longitude BodyFixedCoordinateX BodyFixedCoordinateY "
"BodyFixedCoordinateZ".split())
index = ['obsid', 'image_x', 'image_y']
def get_all_campt_results():
bucket = []
for obsid in obsids:
xy = XY2LATLON(None, rm.savefolder, obsid=obsid)
bucket.append(pd.read_csv(xy.savepath).assign(obsid=obsid))
ground = pd.concat(bucket).drop_duplicates()
ground.rename(dict(Sample='image_x', Line='image_y'), axis=1, inplace=True)
return ground
def combine_campt_into_catalog(fans, blotches):
ground = get_all_campt_results()
fans = fans.merge(ground[cols_to_merge], on=index)
blotches = blotches.merge(ground[cols_to_merge], on=index)
return fans, blotches
ground = get_all_campt_results()
fans.shape
fans.merge(ground[cols_to_merge], on=index).shape
blotches.shape
blotches.merge(ground[cols_to_merge], on=index).shape
fans.columns
from planet4.projection import TileCalculator, xy_to_hirise
from planet4 import projection as proj
edrpath = io.get_ground_projection_root()
edrpath
obsids.shape
cubepaths = [edrpath / obsid / f"{obsid}_mosaic_RED45.cub" for obsid in obsids]
cubepaths[:3]
# testing
tc = TileCalculator(cubepaths[0])
tc.calc_tile_coords()
df = pd.read_csv(tc.campt_results_path)
pd.set_option('max_columns', 60)
%matplotlib inline
todo=[]
for cubepath in cubepaths:
tilecalc = proj.TileCalculator(cubepath, read_data=False)
if not tilecalc.campt_results_path.exists():
todo.append(cubepath)
print(f"{len(todo)} still todo.")
def get_tile_coords(cubepath):
from planet4.projection import TileCalculator
tilecalc = TileCalculator(cubepath)
tilecalc.calc_tile_coords()
results = execute_in_parallel(get_tile_coords, todo)
len(cubepaths)
bucket = []
from tqdm import tqdm
for cubepath in tqdm(cubepaths):
tilecalc = proj.TileCalculator(cubepath, read_data=False)
bucket.append(tilecalc.tile_coords_df)
coords = pd.concat(bucket, ignore_index=True)
coords.shape
coords.head()
catalog = 'catalog_1.0b3'
savefolder = io.analysis_folder() / catalog / f"{roi.name.lower()}"
savefolder.mkdir(exist_ok=True)
savename = savefolder / f"{roi.name.lower()}_tile_coords.csv"
savefolder = io.data_root / catalog
savename = savefolder / "all_images_tile_coords.csv"
coords.to_csv(savename, index=False)
coords = pd.read_csv(savename)
coords.shape
coords.tail()
%matplotlib nbagg
coords.plot(kind='scatter', marker='.', x='BodyFixedCoordinateY', y='BodyFixedCoordinateX')
ax = plt.gca()
ax.invert_xaxis()
ax.invert_yaxis()
ax.set_title(f"{roi} tile center coordinates")
fig = plt.gcf()
fig.savefig(f'/Users/klay6683/Dropbox/src/p4_paper1/figures/{roi.name.lower()}_tile_center_coordinates.png', dpi=200)
from matplotlib.colors import LogNorm
xycoords = coords['BodyFixedCoordinateX BodyFixedCoordinateY'.split()]
xrange = []
yrange = []
for col,bucket in zip(xycoords.columns, [xrange, yrange]):
coord = xycoords[col]
bucket.append(int(coord.min()))
bucket.append(int(coord.max()))
xrange
yrange
gridres = 0.5
xedges = np.arange(xrange[0], xrange[1]+1, gridres)
yedges = np.arange(yrange[0], yrange[1]+1, gridres)
plt.figure()
plt.scatter(xycoords.BodyFixedCoordinateY, xycoords.BodyFixedCoordinateX,
marker='.', alpha=0.1, color='red')
counts, y_ret, x_ret, _ = plt.hist2d(xycoords.BodyFixedCoordinateY, xycoords.BodyFixedCoordinateX,
bins=[yedges, xedges], cmin=10)
plt.colorbar()
plt.hlines(yedges, *xrange, lw=0.5)
plt.vlines(xedges, *yrange, lw=0.5)
plt.gca().axes.set_aspect('equal', 'datalim')
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
# plt.savefig('ithaca_coverage.png', dpi=200)
H, y_ret, x_ret = np.histogram2d(xycoords.BodyFixedCoordinateY, xycoords.BodyFixedCoordinateX,
bins=[yedges, xedges])
yindices, xindices = np.where(H>11)
x1 = -288.402
y1 = -3.17798
x2 = -283.19
y2 = -6.21769
m = (y1-y2)/(x1-x2)
b = y1 - m*x1
b
def f(x):
return m*x + b
f(-288)
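As a quick sanity check of the two-point line fit above, the line must reproduce both anchor points:

```python
# The separating line through (x1, y1) and (x2, y2): slope from the
# two-point formula, intercept from substituting the first point back in.
x1, y1 = -288.402, -3.17798
x2, y2 = -283.19, -6.21769
m = (y1 - y2) / (x1 - x2)
b = y1 - m * x1
f = lambda x: m * x + b
print(abs(f(x1) - y1) < 1e-9, abs(f(x2) - y2) < 1e-9)  # -> True True
```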
box1 = []
box2 = []
for xind, yind in zip(xindices, yindices):
xlow = x_ret[xind]
xhigh = x_ret[xind+1]
xmean = (xlow+xhigh)/2
ylow = y_ret[yind]
yhigh = y_ret[yind+1]
ymean = (ylow+yhigh)/2
x_query = '@xlow < BodyFixedCoordinateX < @xhigh'
y_query = '@ylow < BodyFixedCoordinateY < @yhigh'
boxtmp = coords.query(f"{x_query} and {y_query}")
if f(xmean) > ymean: # if the separating line is above the current y-value
box1.append(boxtmp)
elif f(xmean) < ymean:
if xmean > -281.5:
box1.append(boxtmp)
else:
box2.append(boxtmp)
box1 = pd.concat(box1, ignore_index=True)
box2 = pd.concat(box2, ignore_index=True)
import seaborn as sns
sns.set_context('poster')
ax = box1.plot.scatter(x='BodyFixedCoordinateX', y='BodyFixedCoordinateY', color='green')
box2.plot.scatter(x='BodyFixedCoordinateX', y='BodyFixedCoordinateY', color='blue', ax=ax)
x = np.linspace(-290, -278, 100)
ax.plot(x, m*x+b, c='red')
ax.hlines(yedges, *xrange, lw=0.5)
ax.vlines(xedges, *yrange, lw=0.5)
ax.set_xlim(-293, -276)
ax.set_ylim(-11, 0)
ax.set_title("Ithaca P4 tile coords with > 11 entries in 0.25 sqkm cell")
fig = ax.get_figure()
fig.savefig("/Users/klay6683/Dropbox/p4_4Chase/ithaca/box_selection.png", dpi=200)
box1.image_id.to_csv('box1_image_ids.csv', index=False)
box2.image_id.to_csv('box2_image_ids.csv', index=False)
meta = pd.read_hdf('/Users/klay6683/Dropbox/SternchenAndMe/python_stuff/hirise_rdr_index.hdf')
colorimgs = meta[meta.PRODUCT_ID.str.endswith('_COLOR')]
coords = coords.merge(colorimgs, right_on='OBSERVATION_ID', left_on='image_name')
coords.SOLAR_LONGITUDE.max()
coords.columns
coords.groupby('image_name').IMAGE_LINES.mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below
Step2: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
Step3: However, if flux is forced through v3, then there is no longer a feasible loopless solution.
Step4: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import plot_helper
import cobra.test
from cobra import Reaction, Metabolite, Model
from cobra.flux_analysis.loopless import construct_loopless_model
from cobra.solvers import get_solver_name
plot_helper.plot_loop()
test_model = Model()
test_model.add_metabolites(Metabolite("A"))
test_model.add_metabolites(Metabolite("B"))
test_model.add_metabolites(Metabolite("C"))
EX_A = Reaction("EX_A")
EX_A.add_metabolites({test_model.metabolites.A: 1})
DM_C = Reaction("DM_C")
DM_C.add_metabolites({test_model.metabolites.C: -1})
v1 = Reaction("v1")
v1.add_metabolites({test_model.metabolites.A: -1, test_model.metabolites.B: 1})
v2 = Reaction("v2")
v2.add_metabolites({test_model.metabolites.B: -1, test_model.metabolites.C: 1})
v3 = Reaction("v3")
v3.add_metabolites({test_model.metabolites.C: -1, test_model.metabolites.A: 1})
DM_C.objective_coefficient = 1
test_model.add_reactions([EX_A, DM_C, v1, v2, v3])
construct_loopless_model(test_model).optimize()
v3.lower_bound = 1
construct_loopless_model(test_model).optimize()
salmonella = cobra.test.create_test_model("salmonella")
construct_loopless_model(salmonella).optimize(solver=get_solver_name(mip=True))
ecoli = cobra.test.create_test_model("ecoli")
construct_loopless_model(ecoli).optimize(solver=get_solver_name(mip=True))
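The infeasible cycle can also be seen with plain linear algebra; a sketch with a toy stoichiometric matrix (not part of cobrapy) for the A -> B -> C -> A loop:

```python
import numpy as np

# Columns are the internal reactions v1 (A->B), v2 (B->C), v3 (C->A);
# rows are the metabolites A, B, C.  Equal flux through all three
# reactions lies in the null space of S, i.e. the loop carries flux
# with zero net production, which is what loopless FBA forbids.
S = np.array([[-1, 0, 1],
              [1, -1, 0],
              [0, 1, -1]])
v = np.array([1.0, 1.0, 1.0])
print(S @ v)  # -> [0. 0. 0.]
```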
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (2) Prepare the test datasets
Step2: (3) Run the classifiers and plot the results
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree",
"Random Forest", "AdaBoost", "Naive Bayes", "Linear Discriminant Ana.",
"Quadratic Discriminant Ana."]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
AdaBoostClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis()]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
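The decision surfaces below are sampled on a grid; a minimal sketch of the meshgrid-and-ravel pattern used in the plotting loop (demo names so the notebook's h, xx, yy stay untouched):

```python
import numpy as np

# Build a coarse grid with step h_demo, then flatten it into the
# (n_points, 2) array that predict()/decision_function() expects;
# Z is later reshaped back to xx.shape for contourf.
h_demo = 0.5
xx_demo, yy_demo = np.meshgrid(np.arange(0, 2, h_demo), np.arange(0, 1, h_demo))
grid = np.c_[xx_demo.ravel(), yy_demo.ravel()]
print(xx_demo.shape, grid.shape)  # -> (2, 4) (8, 2)
```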
%matplotlib inline
figure = plt.figure(figsize=(30,20), dpi=300)
i = 1
# iterate over datasets
for ds in datasets:
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), (len(classifiers) + 1)//2, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names[0:4], classifiers[0:4]):
ax = plt.subplot(len(datasets), (len(classifiers) + 1)//2, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name,fontsize=28)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=30, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The two cells below demonstrate how to use urllib to fetch data via a URL
Step2: Lab: change the url to a data source you are interested in, download the data, and save it to a file you name yourself
Step3: hdulist is, as its name suggests, a list of Header/Data Units; for the FITS file format, see http
Step4: Although the listing is quite long, you can spot some interesting attributes/methods, such as data, dump, writeto, and so on
Step5: And there you have it: the data
Step6: OK, that wraps up this demo; remember to close what you opened (hdulist), since good habits now save trouble later
Step7: Example: reading and writing files with Numpy
Step8: Example: reading and writing ASCII files with Astropy
Step9: read() returns a Table object; I'll leave the fun of dir() to you, and below are two common usage examples
|
<ASSISTANT_TASK:>
Python Code:
import urllib.request
fitsurl = "http://dr10.sdss3.org/sas/dr10/sdss/spectro/redux/26/spectra/0651/spec-0651-52141-0569.fits"
urllib.request.urlretrieve(fitsurl, "spec.fits")
texturl = "http://services.swpc.noaa.gov/text/predicted-sunspot-radio-flux.txt"
urllib.request.urlretrieve(texturl, "astro_table.txt")
from astropy.io import fits
hdulist = fits.open('spec.fits')
hdulist.info()
dir(hdulist[4])
hdulist[4].data
hdulist[4].data["flux"]
hdulist.close()
# A more suitable sample file is still needed for this example
import numpy as np
loaded_array = np.loadtxt("filename")
# Do some computation on loaded_array and store the result in new_array,
# then write new_array to a new file (to be filled in)
np.savetxt('new_filename', new_array)
from astropy.io import ascii
#data = ascii.read("astro_table.txt")
data = ascii.read("astro_table.txt",data_start=2)
print(data)
print(data["col5","col1"])
d = np.where((data["col1"] >= 2017) & (data["col1"] <= 2019) & (data["col2"] >=6) & (data["col2"] <=9))
print(d)
print(data[d]["col1","col2","col3"])
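The same boolean-mask pattern works on plain numpy arrays; a sketch with toy data (not the NOAA table above):

```python
import numpy as np

year = np.array([2016, 2017, 2018, 2019, 2020])
month = np.array([5, 7, 8, 6, 9])
# Combine conditions with & (bitwise and), exactly as in the
# astropy-table query above; np.where returns the matching indices.
idx = np.where((year >= 2017) & (year <= 2019) & (month >= 6) & (month <= 9))
print(year[idx])  # -> [2017 2018 2019]
```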
dir(data)
data.write("out", format="ascii")
<END_TASK>
|