<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> <font color="firebrick">In class exercise</font> </h2>
Step2: Transcriptional repression
Step3: <h2> <font color='firebrick'> Interactive exploration of the Hill function </font> </h2>
Step4: Generate functions comparing the logic approximation to the Hill function, for the activating case
Step5: Multi-dimensional Input Functions
Step6: Toggling the activator X
<ASSISTANT_TASK:>
Python Code:
# import statements to make numeric and plotting functions available
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
## define your function in this cell
def hill_activating(X, B, K, n):
    Xn = X**n
    return (B * Xn)/(K**n + Xn)
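As a quick standalone sanity check (restating the function so the snippet runs on its own): at X = K the activating Hill function is exactly half-maximal for any n, and for X ≫ K it saturates toward B.

```python
def hill_activating(X, B, K, n):
    Xn = X**n
    return (B * Xn)/(K**n + Xn)

# at X = K the response is half-maximal (B/2), independent of n
half = hill_activating(10.0, 5.0, 10.0, 4)
# far above K the response saturates toward B
saturated = hill_activating(1e6, 5.0, 10.0, 4)
```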
## generate a plot using your hill_activating function defined above
# set up parameters for our simulation
B = 5
K = 10
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
y = hill_activating(x, B, K, 1) # hill fxn with n = 1
plot(x, y, label='n=1')
xlabel('Concentration of X')
ylabel('Promoter activity')
legend(loc='best')
ylim(0, 6)
pass
## generate curves for different n here
# set up parameters for our simulation
B = 5
K = 10
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
y1 = hill_activating(x, B, K, 1) # hill fxn with n = 1
y2 = hill_activating(x, B, K, 2) # hill fxn with n = 2
y4 = hill_activating(x, B, K, 4) # hill fxn with n = 4
y8 = hill_activating(x, B, K, 8) # hill fxn with n = 8
plot(x, y1, label='n=1')
plot(x, y2, label='n=2')
plot(x, y4, label='n=4')
plot(x, y8, label='n=8')
xlabel('Concentration of X')
ylabel('Rate of production of Y')
legend(loc='best')
ylim(0, 6)
pass
## define your repressive hill function in this cell
def hill_repressing(X, B, K, n):
    return B/(1.0 + (X/K)**n)
## generate a plot using your hill_repressing function defined above
## for X values ranging from 0 to 30
B = 5
K = 10
x = linspace(0,30,200)
plot(x, hill_repressing(x, B, K, 1), label='n=1')
plot(x, hill_repressing(x, B, K, 2), label='n=2')
plot(x, hill_repressing(x, B, K, 4), label='n=4')
plot(x, hill_repressing(x, B, K, 8), label='n=8')
xlabel('Conc. of X')
ylabel('Rate of production of Y')
legend(loc='best')
ylim(0, 6)
pass
## write your logic approximation functions here
def logic_activating(X, B, K):
    if X > K:
        theta = 1
    else:
        theta = 0
    return B*theta

def logic_repressing(X, B, K):
    if X < K:
        theta = 1
    else:
        theta = 0
    return B*theta
## generate plots using your hill_activating and logic_activating functions defined above
## for X values ranging from 0 to 30
B = 5
K = 10
n = 4
x = linspace(0, 30, 200)
plot(x, hill_activating(x, B, K, n), label='n=4')
logicx = [logic_activating(i, B, K) for i in x]
plot(x, logicx, label='logic approximation')
xlabel('Concentration of X')
ylabel('Promoter activity')
ylim(-0.1, 5.5)
legend(loc='best')
pass
## write a function to represent the simple differential equation above
def dYdt(B, a, Y):
    return B - a*Y  # write your code here
## generate a plot using your dY function defined above
## Evaluated over 200 time units
Y = [0] # initial value of Y
B = 0.2
a = 0.05
nsteps = 200
for i in range(nsteps):
    deltay = dYdt(B, a, Y[-1])
    ynew = Y[-1] + deltay
    Y.append(ynew)
plot(Y)
ylim(0, 4.5)
xlabel('Time units')
ylabel('Concentration of Y')
pass
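The curve should level off at the steady state Y_st = B/a = 0.2/0.05 = 4; a minimal standalone version of the same forward-Euler loop confirms this.

```python
# forward-Euler integration of dY/dt = B - a*Y with unit time step
B, a = 0.2, 0.05
Y = 0.0
for _ in range(2000):
    Y += B - a*Y
# after many steps Y has converged to the fixed point B/a = 4
```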
# setup pulse of X
# off (0) for first 50 steps, on for next 100 steps, off again for last 100 steps
X = [0]*50 + [1]*100 + [0]*100
Y = [0]
B = 0.2
K = 0.5
a = 0.05
nsteps = 250
for i in range(1, nsteps):
    xnow = X[i]
    growth = logic_activating(xnow, B, K)
    decay = a*Y[-1]
    deltay = growth - decay
    ynew = Y[-1] + deltay
    Y.append(ynew)
plot(X, color='red', linestyle='dashed', label="X")
plot(Y, color='blue', label="Y")
ylim(0, 4.5)
xlabel('Time units')
ylabel('Concentration')
legend(loc="best")
pass
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparations exercise 2
Step2: Preparation exercise 3
Step3: Find the control law $L$ for the discrete time case using Ackermann's formula
Step4: Computer experiments
Step5: Computer exercise 2
Step6: Computer exercise 3 Simulation
Step7: Discrete system with same feedback gain as continuous time system, and with correct feedback gain
Step8: Simulate the three systems
Step9: Effect of sampling interval
<ASSISTANT_TASK:>
Python Code:
# imports assumed throughout this notebook
import numpy as np
import sympy as sy
import matplotlib.pyplot as plt
import control.matlab as cm

A = np.array([[2, -2], [1, 0]])
B = np.array([[1],[0]])
C = np.array([[0,1]])
D = np.array([[0]])
As = sy.Matrix(A)
Bs = sy.Matrix(B)
Cs = sy.Matrix(C)
l1,l2 = sy.symbols("l1 l2")
L = sy.Matrix([[l1, l2]])
Ac = As + Bs*L  # use the sympy matrices so eigenvals() works below
pc = Ac.eigenvals()
pc
l1l2 = sy.solve([l1/2+1+2, l1**2+4*l1+4*l2-4 + 4], (l1,l2))
L = np.array([float(l) for l in l1l2[0]])
L
Ac2 = Ac.subs(l1,-6).subs(l2,-3)
Ac2.eigenvals()
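A purely numeric cross-check of the symbolic result: with the gains l1 = -6, l2 = -3 found above, the closed-loop matrix A + B·L should have the desired eigenvalues -2 ± 1j.

```python
import numpy as np

A = np.array([[2.0, -2.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
L = np.array([[-6.0, -3.0]])
# closed-loop eigenvalues should be -2 - 1j and -2 + 1j
evals = np.linalg.eigvals(A + B @ L)
```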
tol = 1e-12
h = 0.1
hp = h
term = np.identity(2)*h
Psi = term.copy()  # copy so that updating Psi in place does not alias term
k = 1.0
while np.linalg.norm(term) > tol:
    k += 1.0
    term = np.dot(A, term)*h/k
    Psi += term
    hp *= h
print(k)
Phi = np.identity(2) + np.dot(A, Psi)
Gamma = np.dot(Psi, B)
print(Phi)
print(Gamma)
z1 = np.exp((-2-1j)*h)
z2 = np.exp((-2+1j)*h)
print(z1)
print(z2)
p1 = -z1-z2
p2 = z1*z2
Wc = np.hstack((Gamma, np.dot(Phi,Gamma)))
P = np.dot(Phi, Phi) + p1*Phi + p2*np.identity(2)
WinvP = np.dot(np.linalg.inv(Wc), P)
Ld = -np.real(WinvP[-1,:])
Ld.shape = (1,2)
print(Ld)
Phic = Phi + np.dot(Gamma, Ld)
np.linalg.eig(Phic)
sys_c = cm.ss(A,B,C,D)
sys_d = cm.ss(Phi, Gamma, C, D, h)
Lc = -cm.place(A, B, [-2+1j, -2-1j])
print(L - Lc)
print(L)
print(Lc)
print(np.linalg.eig(A + np.dot(B, Lc)))
sys_d2 = cm.c2d(sys_c, h)
print(sys_d)
print(sys_d2)
Ld2 = -cm.place(Phi,Gamma,[z1,z2])
print(Ld - Ld2)
print(Ld)
print(Ld2)
print(np.linalg.eig(Phi + np.dot(Gamma, Ld2)))
print(z1)
Ac = A + np.dot(B,Lc)
sys_cl = cm.ss(Ac, B, C, D)
dcg = cm.dcgain(sys_cl)
m = 1.0/dcg[0,0]
sys_cl = sys_cl*m
print(cm.dcgain(sys_cl))
sys_cl_d1 = cm.ss(Phi+np.dot(Gamma,Lc), Gamma*m, C, D, h )
sys_cl_d2 = cm.ss(Phi+np.dot(Gamma,Ld), Gamma, C, D, h )
dcg_d = np.dot(np.dot(C, np.linalg.inv(np.identity(2)-sys_cl_d2.A)), sys_cl_d2.B)
md = 1.0/dcg_d[0,0]
sys_cl_d2 = md * sys_cl_d2
Tc = np.linspace(0,10,100)
Td = h*np.arange(100)
(yc, tc) = cm.step(sys_cl, Tc)
(yd1, td1) = cm.step(sys_cl_d1, Td)
(yd2, td2) = cm.step(sys_cl_d2, Td)
yy1 = np.ravel(yd1)
yy2 = np.ravel(yd2)
plt.plot(tc,yc)
plt.plot(td1,np.ravel(yd1)/1)
plt.plot(td2,np.ravel(yd2)/yy2[-1])
plt.xlim([0, 4])
plt.legend(["cont", "discrete w cont param", "discrete w discr param"], loc="lower right")
Tc = np.linspace(0,10,100)
Td = h*np.arange(100)
(yc, tc) = cm.impulse(sys_cl, Tc)
(yd1, td1) = cm.impulse(sys_cl_d1, Td)
(yd2, td2) = cm.impulse(sys_cl_d2, Td)
plt.plot(tc,yc)
plt.plot(td1,np.ravel(yd1))
plt.plot(td2,np.ravel(yd2))
plt.legend(["cont", "discrete w cont param", "discrete w discr param"], loc="upper right")
(yc, tc) = cm.step(sys_cl, Tc)
plt.plot(tc,yc)
for hh in [0.01, 0.1, 0.2, 0.4, 0.8, 1.6]:
    sys_d2 = cm.c2d(sys_c, hh)
    z1 = np.exp((-2-1j)*hh)
    z2 = np.exp((-2+1j)*hh)
    F = sys_d2.A
    G = sys_d2.B
    Ld2 = -cm.place(F, G, [z1, z2])
    print(Ld2)
    sys_cl_d = cm.ss(F + np.dot(G, Ld2), G, C, D, hh)
    # Find dc gain of the discrete time system (see below)
    dcg_d = np.dot(np.dot(C, np.linalg.inv(np.identity(2) - sys_cl_d.A)), sys_cl_d.B)
    md = 1.0/dcg_d[0, 0]
    sys_cl_d = md * sys_cl_d
    print(md)
    print(cm.dcgain(sys_cl_d))
    Td = np.arange(1000)*hh
    (yd1, td1) = cm.step(sys_cl_d, Td)
    yy = np.ravel(yd1)
    #plt.plot(td1, yy/yy[-1])
    plt.plot(td1, yy)
plt.xlim([0, 10])
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MMW production API endpoint base url.
Step2: The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary.
Step3: 2. Construct AOI GeoJSON for job request
Step4: 3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type
Step5: Issue job request
Step6: Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries)
Step7: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
Step8: Issue job request
Step9: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
Step10: Issue job request
Step11: result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items
<ASSISTANT_TASK:>
Python Code:
import json
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
def requests_retry_session(
    retries=3,
    backoff_factor=0.3,
    status_forcelist=(500, 502, 504),
    session=None,
):
    session = session or requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session
api_url = "https://app.wikiwatershed.org/api/"
def get_job_result(api_url, s, jobrequest):
    url_tmplt = api_url + "jobs/{job}/"
    get_url = url_tmplt.format
    result = ''
    while not result:
        get_req = requests_retry_session(session=s).get(get_url(job=jobrequest['job']))
        result = json.loads(get_req.content)['result']
    return result
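The poll-until-done pattern above can be exercised offline with a hypothetical stub session (both `StubSession` and `StubResponse` are invented here, not part of the MMW API): the first reply is still pending (empty result), the second carries the finished payload.

```python
import json

class StubResponse:
    # mimics the .content attribute of a requests response
    def __init__(self, payload):
        self.content = json.dumps(payload)

class StubSession:
    # first GET returns an empty result (job pending), second returns the payload
    def __init__(self):
        self._replies = [{'result': ''}, {'result': {'survey': 'done'}}]
    def get(self, url):
        return StubResponse(self._replies.pop(0))

def poll_job(session, job):
    result = ''
    while not result:
        result = json.loads(session.get("jobs/{}/".format(job)).content)['result']
    return result

out = poll_job(StubSession(), 'abc123')
```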
s = requests.Session()
APIToken = 'Token <your-api-token-here>'  # replace with your own MMW API token; do not commit real tokens
s.headers.update({
    'Authorization': APIToken,
    'Content-Type': 'application/json'
})
from shapely.geometry import box, MultiPolygon
width = 0.0004 # Looks like using a width smaller than 0.0002 causes a problem with the API?
# GOOS: (-88.5552, 40.4374) elev 240.93. Agriculture Site—Goose Creek (Corn field) Site (GOOS) at IML CZO
# SJER: (-119.7314, 37.1088) elev 403.86. San Joaquin Experimental Reserve Site (SJER) at South Sierra CZO
lon, lat = -119.7314, 37.1088
bbox = box(lon-0.5*width, lat-0.5*width, lon+0.5*width, lat+0.5*width)
payload = MultiPolygon([bbox]).__geo_interface__
json_payload = json.dumps(payload)
payload
# convenience function, to simplify the request calls, below
def analyze_api_request(api_name, s, api_url, json_payload):
post_url = "{}analyze/{}/".format(api_url, api_name)
post_req = requests_retry_session(session=s).post(post_url, data=json_payload)
jobrequest_json = json.loads(post_req.content)
# Fetch and examine job result
result = get_job_result(api_url, s, jobrequest_json)
return result
result = analyze_api_request('land', s, api_url, json_payload)
type(result), result.keys()
result['survey'].keys()
categories = result['survey']['categories']
len(categories), categories[1]
land_categories_nonzero = [d for d in categories if d['coverage'] > 0]
land_categories_nonzero
result = analyze_api_request('terrain', s, api_url, json_payload)
categories = result['survey']['categories']
len(categories), categories
[d for d in categories if d['type'] == 'average']
result = analyze_api_request('climate', s, api_url, json_payload)
categories = result['survey']['categories']
len(categories), categories[:2]
ppt = [d['ppt'] for d in categories]
tmean = [d['tmean'] for d in categories]
# ppt is in cm, right?
sum(ppt)
import calendar
import numpy as np
calendar.mdays
# Annual tmean needs to be weighted by the number of days per month
sum(np.asarray(tmean) * np.asarray(calendar.mdays[1:]))/365
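The day-weighting can be sanity-checked on a constant series: if every month had the same mean temperature, the weighted annual mean must equal that constant (`calendar.mdays` uses 28 for February, so the weights sum to 365 for a non-leap year).

```python
import calendar
import numpy as np

tmean_const = [5.0] * 12                  # hypothetical constant monthly means
weights = np.asarray(calendar.mdays[1:])  # days per month, non-leap year
annual = np.sum(np.asarray(tmean_const) * weights) / 365
```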
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We now define a function that will create a histogram, fill it and write it to a file. Later, we will read back the histogram from disk.
Step2: All objects the class of which has a dictionary can be written on disk. By default, the most widely used ROOT classes are shipped with a dictionary
Step3: Before reading the object, we can check from the commandline the content of the file with the rootls utility
Step4: We see that the file contains one object of type TH1F, the name of which is theHisto and the title of which is My Test Histogram.
|
<ASSISTANT_TASK:>
Python Code:
import ROOT
%jsroot on
def writeHisto(outputFileName):
    outputFile = ROOT.TFile(outputFileName, "RECREATE")
    h = ROOT.TH1F("theHisto", "My Test Histogram;X Title;Y Title", 64, -4, 4)
    h.FillRandom("gaus")
    # now we write the histogram to the file
    h.Write()
writeHisto("output.root")
%%bash
rootls -l output.root
inputFile = ROOT.TFile("output.root")
h = inputFile.theHisto
c = ROOT.TCanvas()
h.Draw()
c.Draw()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1
Step2: The input parameter
Step3: We can see that this takes considerably more time compared to the 3-beads choice.
Step4: We are usually not interested in the whole spectrum. In particular we can focus on the first (smallest) eigenvalues.
Step5: After these, we have the eigenvalues representing the non-null normal modes of the system.
Step6: 3
Step7: The MSF usually represent a decent approximation of the B-factors obtained from crystallography
Step8: 5
Step9: This is now much faster (x10) than the full-matrix diagonalization.
Step10: 5b
Step11: As we can see, the final result is extremely accurate while the computational time is much reduced, even for a relatively small molecule like the one used here (71 nucleotides).
Step12: We can take a look at this trajectory using nglview
Step13: We can also save it as a pdb (or any other format) to visualize it later on with a different visualization software
Step14: References
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# import barnaba
import barnaba.enm as enm
# define the input file
fname = "../test/data/sample1.pdb"
%time enm_obj=enm.Enm(fname,sparse=False)
%time enm_AA=enm.Enm(fname,sele_atoms="AA",cutoff=0.7)
e_val=enm_obj.get_eval()
plt.plot(e_val)
plt.plot(e_val[:10],marker='s')
plt.ylabel(r'$\lambda_i$')
plt.xlabel('$i$')
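The first six eigenvalues are (numerically) zero because they correspond to rigid-body translations and rotations of the whole molecule. A one-dimensional two-bead toy spring network shows the same structure on the smallest possible scale: one zero mode (joint translation) plus one internal mode.

```python
import numpy as np

# connectivity matrix of two beads joined by one unit spring in 1D
K = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
# eigenvalues in ascending order: the zero (rigid-body) mode comes first
evals = np.linalg.eigvalsh(K)
```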
plt.plot(1/e_val[6:26],marker='o')
plt.ylabel(r'$1/\lambda_i$')
plt.xlabel('$i$')
msf=enm_obj.get_MSF()
plt.figure(figsize=(12,3))
plt.plot(msf)
plt.ylabel('MSF$_i$')
plt.xlabel('$i$')
plt.xlim(-1,enm_obj.n_beads)
plt.grid()
%time fluc_C2,reslist=enm_AA.c2_fluctuations()
plt.plot(fluc_C2,c='r')
plt.ylabel('C2-C2 fluctuations')
plt.xlabel('Res index')
plt.xlim(-0.5,69.5)
%time enm_sparse=enm.Enm(fname,sparse=True,sele_atoms="AA",cutoff=0.7)
plt.plot(1/enm_AA.get_eval()[6:26],label='Full',marker='x')
plt.plot(1/enm_sparse.get_eval()[6:],marker='o',label='Sparse',ls='')
plt.legend()
plt.ylabel(r'$1/\lambda_i$')
plt.xlabel('$i$')
%time fluc_C2_sparse,reslist=enm_sparse.c2_fluctuations()
plt.plot(fluc_C2,c='r',label='Full')
plt.plot(fluc_C2_sparse,c='b',label='Sparse',ls='',marker='s')
plt.ylabel('C2-C2 fluctuations')
plt.xlabel('Res index')
plt.legend()
plt.xlim(-0.5,69.5)
traj_mode=enm_AA.get_mode_traj(6,amp=5.0,nframes=50)
import nglview
view = nglview.show_mdtraj(traj_mode)
view
traj_mode.save_pdb('./enm_traj_mode_6.pdb')
# default is beads_name="C2"
%time fluc_mat,res_list=enm_sparse.get_dist_fluc_mat()
plt.figure(figsize=(10,10))
plt.imshow(fluc_mat,)
tt=plt.xticks(np.arange(len(res_list)),res_list,rotation=90,fontsize=7)
tt=plt.yticks(np.arange(len(res_list)),res_list,fontsize=7)
# one can calculate fluctuations of different atoms
%time fluc_mat,res_list=enm_sparse.get_dist_fluc_mat(beads_name="C1\'")
plt.figure(figsize=(10,10))
plt.imshow(fluc_mat,)
tt=plt.xticks(np.arange(len(res_list)),res_list,rotation=90,fontsize=7)
tt=plt.yticks(np.arange(len(res_list)),res_list,fontsize=7)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
softmax_output = load_data()
y = torch.argmin(softmax_output, dim=1).view(-1, 1)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The kernel
Step3: Incorporating it in PyTorch
Step4: We use this update step in a building block for the Sinkhorn iteration
Step5: We also define a function to get the coupling itself
Step6: We define some test distributions. These are similar to examples from Python Optimal Transport.
Step7: We run a sanity check for the distance
Step8: We might also check that sinkstep is the same on GPU and CPU (Kai Zhao pointed out that this was not the case for an earlier versions of this notebook, thank you, and indeed, there was a bug in the CPU implementation.)
Step9: We can visualize the coupling along with the marginals
Step10: This looks a lot like the coupling from Python Optimal Transport and in fact all three match results computed with POT
Step12: Performance comparison to existing implementations
Step13: With this problem size and forward + backward, we achieve a speedup factor of approximately 6.5 when doing about 3 times as many iterations.
Step14: It's fast enough to just use baricenters for interpolation
|
<ASSISTANT_TASK:>
Python Code:
import math
import time
import torch
import torch.utils.cpp_extension
%matplotlib inline
from matplotlib import pyplot
import matplotlib.transforms
import ot # for comparison
cuda_source = """
#include <torch/extension.h>
#include <ATen/core/TensorAccessor.h>
#include <ATen/cuda/CUDAContext.h>
using at::RestrictPtrTraits;
using at::PackedTensorAccessor;
#if defined(__HIP_PLATFORM_HCC__)
constexpr int WARP_SIZE = 64;
#else
constexpr int WARP_SIZE = 32;
#endif
// The maximum number of threads in a block
#if defined(__HIP_PLATFORM_HCC__)
constexpr int MAX_BLOCK_SIZE = 256;
#else
constexpr int MAX_BLOCK_SIZE = 512;
#endif
// Returns the index of the most significant 1 bit in `val`.
__device__ __forceinline__ int getMSB(int val) {
return 31 - __clz(val);
}
// Number of threads in a block given an input size up to MAX_BLOCK_SIZE
static int getNumThreads(int nElem) {
#if defined(__HIP_PLATFORM_HCC__)
int threadSizes[5] = { 16, 32, 64, 128, MAX_BLOCK_SIZE };
#else
int threadSizes[5] = { 32, 64, 128, 256, MAX_BLOCK_SIZE };
#endif
for (int i = 0; i != 5; ++i) {
if (nElem <= threadSizes[i]) {
return threadSizes[i];
}
}
return MAX_BLOCK_SIZE;
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_XOR(T value, int laneMask, int width = warpSize, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_xor_sync(mask, value, laneMask, width);
#else
return __shfl_xor(value, laneMask, width);
#endif
}
// While this might be the most efficient sinkhorn step / logsumexp-matmul implementation I have seen,
// this is awfully inefficient compared to matrix multiplication and e.g. NVidia cutlass may provide
// many great ideas for improvement
template <typename scalar_t, typename index_t>
__global__ void sinkstep_kernel(
// compute log v_bj = log nu_bj - logsumexp_i 1/lambda dist_ij - log u_bi
// for this compute maxdiff_bj = max_i(1/lambda dist_ij - log u_bi)
// i = reduction dim, using threadIdx.x
PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_v,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> dist,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_nu,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_u,
const scalar_t lambda) {
using accscalar_t = scalar_t;
__shared__ accscalar_t shared_mem[2 * WARP_SIZE];
index_t b = blockIdx.y;
index_t j = blockIdx.x;
int tid = threadIdx.x;
if (b >= log_u.size(0) || j >= log_v.size(1)) {
return;
}
// reduce within thread
accscalar_t max = -std::numeric_limits<accscalar_t>::infinity();
accscalar_t sumexp = 0;
if (log_nu[b][j] == -std::numeric_limits<accscalar_t>::infinity()) {
if (tid == 0) {
log_v[b][j] = -std::numeric_limits<accscalar_t>::infinity();
}
return;
}
for (index_t i = threadIdx.x; i < log_u.size(1); i += blockDim.x) {
accscalar_t oldmax = max;
accscalar_t value = -dist[i][j]/lambda + log_u[b][i];
max = max > value ? max : value;
if (oldmax == -std::numeric_limits<accscalar_t>::infinity()) {
// sumexp used to be 0, so the new max is value and we can set 1 here,
// because we will come back here again
sumexp = 1;
} else {
sumexp *= exp(oldmax - max);
sumexp += exp(value - max); // if oldmax was not -infinity, max is not either...
}
}
// now we have one value per thread. we'll make it into one value per warp
// first warpSum to get one value per thread to
// one value per warp
for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
accscalar_t o_max = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
if (o_max > max) { // we're less concerned about divergence here
sumexp *= exp(max - o_max);
sumexp += o_sumexp;
max = o_max;
} else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
sumexp += o_sumexp * exp(o_max - max);
}
}
__syncthreads();
// this writes each warps accumulation into shared memory
// there are at most WARP_SIZE items left because
// there are at most WARP_SIZE**2 threads at the beginning
if (tid % WARP_SIZE == 0) {
shared_mem[tid / WARP_SIZE * 2] = max;
shared_mem[tid / WARP_SIZE * 2 + 1] = sumexp;
}
__syncthreads();
if (tid < WARP_SIZE) {
max = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid] : -std::numeric_limits<accscalar_t>::infinity());
sumexp = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid + 1] : 0);
}
for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
accscalar_t o_max = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
if (o_max > max) { // we're less concerned about divergence here
sumexp *= exp(max - o_max);
sumexp += o_sumexp;
max = o_max;
} else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
sumexp += o_sumexp * exp(o_max - max);
}
}
if (tid == 0) {
log_v[b][j] = (max > -std::numeric_limits<accscalar_t>::infinity() ?
log_nu[b][j] - log(sumexp) - max :
-std::numeric_limits<accscalar_t>::infinity());
}
}
template <typename scalar_t>
torch::Tensor sinkstep_cuda_template(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
const double lambda) {
TORCH_CHECK(dist.is_cuda(), "need cuda tensors");
TORCH_CHECK(dist.device() == log_nu.device() && dist.device() == log_u.device(), "need tensors on same GPU");
TORCH_CHECK(dist.dim()==2 && log_nu.dim()==2 && log_u.dim()==2, "invalid sizes");
TORCH_CHECK(dist.size(0) == log_u.size(1) &&
dist.size(1) == log_nu.size(1) &&
log_u.size(0) == log_nu.size(0), "invalid sizes");
auto log_v = torch::empty_like(log_nu);
using index_t = int32_t;
auto log_v_a = log_v.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto dist_a = dist.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto log_nu_a = log_nu.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto log_u_a = log_u.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto stream = at::cuda::getCurrentCUDAStream();
int tf = getNumThreads(log_u.size(1));
dim3 blocks(log_v.size(1), log_u.size(0));
dim3 threads(tf);
sinkstep_kernel<<<blocks, threads, 2*WARP_SIZE*sizeof(scalar_t), stream>>>(
log_v_a, dist_a, log_nu_a, log_u_a, static_cast<scalar_t>(lambda)
);
return log_v;
}
torch::Tensor sinkstep_cuda(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
const double lambda) {
return AT_DISPATCH_FLOATING_TYPES(log_u.scalar_type(), "sinkstep", [&] {
return sinkstep_cuda_template<scalar_t>(dist, log_nu, log_u, lambda);
});
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("sinkstep", &sinkstep_cuda, "sinkhorn step");
}
"""
wasserstein_ext = torch.utils.cpp_extension.load_inline("wasserstein", cpp_sources="", cuda_sources=cuda_source,
extra_cuda_cflags=["--expt-relaxed-constexpr"] )
def sinkstep(dist, log_nu, log_u, lam: float):
    # dispatch to optimized GPU implementation for GPU tensors, slow fallback for CPU
    if dist.is_cuda:
        return wasserstein_ext.sinkstep(dist, log_nu, log_u, lam)
    assert dist.dim() == 2 and log_nu.dim() == 2 and log_u.dim() == 2
    assert dist.size(0) == log_u.size(1) and dist.size(1) == log_nu.size(1) and log_u.size(0) == log_nu.size(0)
    log_v = log_nu.clone()
    for b in range(log_u.size(0)):
        log_v[b] -= torch.logsumexp(-dist/lam + log_u[b, :, None], 0)
    return log_v
class SinkhornOT(torch.autograd.Function):
    @staticmethod
    def forward(ctx, mu, nu, dist, lam=1e-3, N=100):
        assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
        bs = mu.size(0)
        d1, d2 = dist.size()
        assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
        log_mu = mu.log()
        log_nu = nu.log()
        log_u = torch.full_like(mu, -math.log(d1))
        log_v = torch.full_like(nu, -math.log(d2))
        for i in range(N):
            log_v = sinkstep(dist, log_nu, log_u, lam)
            log_u = sinkstep(dist.t(), log_mu, log_v, lam)
        # this is slight abuse of the function: it computes
        # (diag(exp(log_u))*Mt*exp(-Mt/lam)*diag(exp(log_v))).sum()
        # in an efficient (i.e. no bxnxm tensors) way in log space
        distances = (-sinkstep(-dist.log()+dist/lam, -log_v, log_u, 1.0)).logsumexp(1).exp()
        ctx.log_v = log_v
        ctx.log_u = log_u
        ctx.dist = dist
        ctx.lam = lam
        return distances

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out[:, None] * ctx.log_u * ctx.lam, grad_out[:, None] * ctx.log_v * ctx.lam, None, None, None
def get_coupling(mu, nu, dist, lam=1e-3, N=1000):
    assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
    bs = mu.size(0)
    d1, d2 = dist.size()
    assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
    log_mu = mu.log()
    log_nu = nu.log()
    log_u = torch.full_like(mu, -math.log(d1))
    log_v = torch.full_like(nu, -math.log(d2))
    for i in range(N):
        log_v = sinkstep(dist, log_nu, log_u, lam)
        log_u = sinkstep(dist.t(), log_mu, log_v, lam)
    return (log_v[:, None, :] - dist/lam + log_u[:, :, None]).exp()
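The same log-domain Sinkhorn iteration can be sketched self-contained in NumPy on a tiny 2-point problem (a sanity sketch, not the CUDA extension itself): with uniform marginals and a cost that is cheapest on the diagonal, the coupling concentrates on the diagonal and its row marginals match mu exactly after the final u-update.

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log-sum-exp along one axis
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis=axis)

mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
lam = 0.1
log_u = np.full(2, -np.log(2.0))
for _ in range(200):
    log_v = np.log(nu) - logsumexp(-dist/lam + log_u[:, None], axis=0)
    log_u = np.log(mu) - logsumexp(-dist/lam + log_v[None, :], axis=1)
# coupling pi = diag(exp(log_u)) * exp(-dist/lam) * diag(exp(log_v))
coupling = np.exp(log_u[:, None] - dist/lam + log_v[None, :])
```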
# some test distribution densities
n = 100
lam = 1e-3
x = torch.linspace(0, 100, n)
mu1 = torch.distributions.Normal(20., 10.).log_prob(x).exp()
mu2 = torch.distributions.Normal(60., 30.).log_prob(x).exp()
mu3 = torch.distributions.Normal(40., 20.).log_prob(x).exp()
mu1 /= mu1.sum()
mu2 /= mu2.sum()
mu3 /= mu3.sum()
mu123 = torch.stack([mu1, mu2, mu3], dim=0)
mu231 = torch.stack([mu2, mu3, mu1], dim=0)
cost = (x[None, :]-x[:, None])**2
cost /= cost.max()
pyplot.plot(mu1, label="$\mu_1$")
pyplot.plot(mu2, label="$\mu_2$")
pyplot.plot(mu3, label="$\mu_3$")
pyplot.legend();
t = time.time()
device = "cuda"
res = torch.autograd.gradcheck(lambda x: SinkhornOT.apply(x.softmax(1),
mu231.to(device=device, dtype=torch.double),
cost.to(device=device, dtype=torch.double),
lam, 500),
(mu123.log().to(device=device, dtype=torch.double).requires_grad_(),))
print("OK? {} took {:.0f} sec".format(res, time.time()-t))
res_cpu = sinkstep(cost.cpu(), mu123.log().cpu(), mu231.log().cpu(), lam)
res_gpu = sinkstep(cost.to(device), mu123.log().to(device), mu231.log().to(device), lam).cpu()
assert (res_cpu - res_gpu).abs().max() < 1e-5
coupling = get_coupling(mu123.cuda(), mu231.cuda(), cost.cuda())
pyplot.figure(figsize=(10,10))
pyplot.subplot(2, 2, 1)
pyplot.plot(mu2.cpu())
pyplot.subplot(2, 2, 4)
pyplot.plot(mu1.cpu(), transform=matplotlib.transforms.Affine2D().rotate_deg(270) + pyplot.gca().transData)
pyplot.subplot(2, 2, 3)
pyplot.imshow(coupling[0].cpu());
o_coupling12 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu1.cpu(), mu2.cpu(), cost.cpu(), reg=1e-3))
o_coupling23 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu2.cpu(), mu3.cpu(), cost.cpu(), reg=1e-3))
o_coupling31 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu3.cpu(), mu1.cpu(), cost.cpu(), reg=1e-3))
pyplot.imshow(o_coupling12)
o_coupling = torch.stack([o_coupling12, o_coupling23, o_coupling31], dim=0)
(o_coupling.float() - coupling.cpu()).abs().max().item()
# Copyright 2018 Daniel Dazac
# MIT Licensed
# License and source: https://github.com/dfdazac/wassdistance/
class SinkhornDistance(torch.nn.Module):
    r"""
    Given two empirical measures each with :math:`P_1` locations
    :math:`x\in\mathbb{R}^{D_1}` and :math:`P_2` locations :math:`y\in\mathbb{R}^{D_2}`,
    outputs an approximation of the regularized OT cost for point clouds.

    Args:
        eps (float): regularization coefficient
        max_iter (int): maximum number of Sinkhorn iterations
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'mean' | 'sum'. 'none': no reduction will be applied,
            'mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Default: 'none'

    Shape:
        - Input: :math:`(N, P_1, D_1)`, :math:`(N, P_2, D_2)`
        - Output: :math:`(N)` or :math:`()`, depending on `reduction`
    """
    def __init__(self, eps, max_iter, reduction='none'):
        super(SinkhornDistance, self).__init__()
        self.eps = eps
        self.max_iter = max_iter
        self.reduction = reduction

    def forward(self, mu, nu, C):
        u = torch.zeros_like(mu)
        v = torch.zeros_like(nu)
        # To check if algorithm terminates because of threshold
        # or max iterations reached
        actual_nits = 0
        # Stopping criterion
        thresh = 1e-1

        # Sinkhorn iterations
        for i in range(self.max_iter):
            u1 = u  # useful to check the update
            u = self.eps * (torch.log(mu + 1e-8) - torch.logsumexp(self.M(C, u, v), dim=-1)) + u
            v = self.eps * (torch.log(nu + 1e-8) - torch.logsumexp(self.M(C, u, v).transpose(-2, -1), dim=-1)) + v
            err = (u - u1).abs().sum(-1).mean()
            actual_nits += 1
            if err.item() < thresh:
                break

        U, V = u, v
        # Transport plan pi = diag(a)*K*diag(b)
        pi = torch.exp(self.M(C, U, V))
        # Sinkhorn distance
        cost = torch.sum(pi * C, dim=(-2, -1))
        self.actual_nits = actual_nits
        if self.reduction == 'mean':
            cost = cost.mean()
        elif self.reduction == 'sum':
            cost = cost.sum()
        return cost, pi, C

    def M(self, C, u, v):
        "Modified cost for logarithmic updates: $M_{ij} = (-c_{ij} + u_i + v_j) / \epsilon$"
        return (-C + u.unsqueeze(-1) + v.unsqueeze(-2)) / self.eps

    @staticmethod
    def ave(u, u1, tau):
        "Barycenter subroutine, used by kinetic acceleration through extrapolation."
        return tau * u + (1 - tau) * u1
n = 100
x = torch.linspace(0, 100, n)
mu1 = torch.distributions.Normal(20., 10.).log_prob(x).exp()
mu2 = torch.distributions.Normal(60., 30.).log_prob(x).exp()
mu1 /= mu1.sum()
mu2 /= mu2.sum()
mu1, mu2, cost = mu1.cuda(), mu2.cuda(), cost.cuda()
sinkhorn = SinkhornDistance(eps=1e-3, max_iter=200)
def x():
    mu1_ = mu1.detach().requires_grad_()
    dist, P, C = sinkhorn(mu1_, mu2, cost)
    gr, = torch.autograd.grad(dist, mu1_)
    torch.cuda.synchronize()
dist, P, C = sinkhorn(mu1.cuda(), mu2.cuda(), cost.cuda())
torch.cuda.synchronize()
x()
%timeit x()
pyplot.imshow(P.cpu())
sinkhorn.actual_nits
def y():
    mu1_ = mu1.detach().requires_grad_()
    l = SinkhornOT.apply(mu1_.unsqueeze(0), mu2.unsqueeze(0), cost, 1e-3, 200)
    gr, = torch.autograd.grad(l.sum(), mu1_)
    torch.cuda.synchronize()
y()
%timeit y()
N = 50
a, b, c = torch.zeros(3, N, N, device="cuda")
x = torch.linspace(-5, 5, N, device="cuda")
a[N//5:-N//5, N//5:-N//5] = 1
b[(x[None]**2+x[:,None]**2 > 4) & (x[None]**2+x[:,None]**2 < 9)] = 1
c[((x[None]-2)**2+(x[:,None]-2)**2 < 4) | ((x[None]+2)**2+(x[:,None]+2)**2 < 4)] = 1
pyplot.imshow(c.cpu(), cmap=pyplot.cm.gray_r)
coords = torch.stack([x[None, :].expand(N, N), x[:, None].expand(N, N)], 2).view(-1, 2)
dist = ((coords[None]-coords[:, None])**2).sum(-1)
dist /= dist.max()
a = (a / a.sum()).view(1, -1)
b = (b / b.sum()).view(1, -1)
c = (c / c.sum()).view(1, -1)
SinkhornOT.apply(a, b, dist, 1e-3, 200)
def get_barycenter(mu, dist, weights, lam=1e-3, N=1000):
    assert mu.dim() == 2 and dist.dim() == 2 and weights.dim() == 1
    bs = mu.size(0)
    d1, d2 = dist.size()
    assert mu.size(1) == d1 and d1 == d2 and weights.size(0) == bs
    log_mu = mu.log()
    log_u = torch.full_like(mu, -math.log(d1))
    zeros = torch.zeros_like(log_u)
    for i in range(N):
        log_v = sinkstep(dist.t(), log_mu, log_u, lam)
        log_u = sinkstep(dist, zeros, log_v, lam)
        a = torch.sum(-weights[:, None] * log_u, dim=0, keepdim=True)
        log_u += a
    return (log_v[:, None, :] - dist/lam + log_u[:, :, None]).exp()
res = []
for i in torch.linspace(0, 1, 10):
res.append(get_barycenter(torch.cat([a, b, c], 0), dist, torch.tensor([i*0.9, (1-i)*0.9, 0], device="cuda"), N=100))
pyplot.figure(figsize=(15,5))
pyplot.imshow(torch.cat([r[0].sum(1).view(N, N).cpu() for r in res], 1), cmap=pyplot.cm.gray_r)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE on notation
Step2: Q23. Given X below, reverse the last dimension.
Step3: Q24. Given X below, permute its dimensions such that the new tensor has shape (3, 1, 2).
Step4: Q25. Given X, below, get the first, and third rows.
Step5: Q26. Given X below, get the elements 5 and 7.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
X = tf.constant(
[[[0, 0, 1],
[0, 1, 0],
[0, 0, 0]],
[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]]])
_X = np.arange(1, 1*2*3*4 + 1).reshape((1, 2, 3, 4))
X = tf.convert_to_tensor(_X)
_X = np.ones((1, 2, 3))
X = tf.convert_to_tensor(_X)
_X = np.arange(1, 10).reshape((3, 3))
X = tf.convert_to_tensor(_X)
_X = np.arange(1, 10).reshape((3, 3))
X = tf.convert_to_tensor(_X)
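Possible answers to Q23–Q26, written against TensorFlow's eager API. The exact `X` each question refers to is reproduced inside the block so the snippet is self-contained; these are illustrative one-liners, not the notebook's official solutions:

```python
import numpy as np
import tensorflow as tf

# Q23: reverse the last dimension
X23 = tf.constant(np.arange(1, 1*2*3*4 + 1).reshape((1, 2, 3, 4)))
out23 = tf.reverse(X23, axis=[-1])

# Q24: permute a (1, 2, 3) tensor so the result has shape (3, 1, 2)
X24 = tf.constant(np.ones((1, 2, 3)))
out24 = tf.transpose(X24, perm=[2, 0, 1])

# Q25/Q26: X is [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
X = tf.constant(np.arange(1, 10).reshape((3, 3)))
out25 = tf.gather(X, [0, 2])               # first and third rows
out26 = tf.gather_nd(X, [[1, 1], [2, 0]])  # elements 5 and 7
```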
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define formulae
Step2: Apply formulae to a range of x-values
Step3: Figure 1 from paper
Step4: Apply the distribution to simulated data
|
<ASSISTANT_TASK:>
Python Code:
% matplotlib inline
import numpy as np
import math
import nibabel as nib
import scipy.stats as stats
import matplotlib.pyplot as plt
from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
import palettable.colorbrewer as cb
from nipype.interfaces import fsl
import os
import pandas as pd
def peakdens3D(x,k):
fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36)
fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.)
fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.))
fd213 = 3./2.
fd21 = (fd211 + fd212 + fd213)
fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5)
fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5))
fd2 = fd21*fd22*fd23
fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1.
fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5)
fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5))
fd3 = fd31 * fd32 * fd33
fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.))
fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5)
fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.)))
fd4 = fd41*fd42 * fd43
fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.)
f521low = np.array([-10.,-10.])
f521up = np.array([0.,k*x/2.**(0.5)])
f521mu = np.array([0.,0.])
f521sigma = np.array([[3./2., -1.],[-1.,(3.-k**2.)/2.]])
fd521,i = stats.mvn.mvnun(f521low,f521up,f521mu,f521sigma)
f522low = np.array([-10.,-10.])
f522up = np.array([0.,k*x/2.**(0.5)])
f522mu = np.array([0.,0.])
f522sigma = np.array([[3./2., -1./2.],[-1./2.,(2.-k**2.)/2.]])
fd522,i = stats.mvn.mvnun(f522low,f522up,f522mu,f522sigma)
fd5 = fd51*(fd521+fd522)
out = fd1*(fd2+fd3+fd4+fd5)
return out
xs = np.arange(-4,4,0.01).tolist()
ys_3d_k01 = []
ys_3d_k05 = []
ys_3d_k1 = []
ys_2d_k01 = []
ys_2d_k05 = []
ys_2d_k1 = []
ys_1d_k01 = []
ys_1d_k05 = []
ys_1d_k1 = []
for x in xs:
ys_1d_k01.append(peakdens1D(x,0.1))
ys_1d_k05.append(peakdens1D(x,0.5))
ys_1d_k1.append(peakdens1D(x,1))
ys_2d_k01.append(peakdens2D(x,0.1))
ys_2d_k05.append(peakdens2D(x,0.5))
ys_2d_k1.append(peakdens2D(x,1))
ys_3d_k01.append(peakdens3D(x,0.1))
ys_3d_k05.append(peakdens3D(x,0.5))
ys_3d_k1.append(peakdens3D(x,1))
plt.figure(figsize=(7,5))
plt.plot(xs,ys_1d_k01,color="black",ls=":",lw=2)
plt.plot(xs,ys_1d_k05,color="black",ls="--",lw=2)
plt.plot(xs,ys_1d_k1,color="black",ls="-",lw=2)
plt.plot(xs,ys_2d_k01,color="blue",ls=":",lw=2)
plt.plot(xs,ys_2d_k05,color="blue",ls="--",lw=2)
plt.plot(xs,ys_2d_k1,color="blue",ls="-",lw=2)
plt.plot(xs,ys_3d_k01,color="red",ls=":",lw=2)
plt.plot(xs,ys_3d_k05,color="red",ls="--",lw=2)
plt.plot(xs,ys_3d_k1,color="red",ls="-",lw=2)
plt.ylim([-0.1,0.55])
plt.show()
os.chdir("/Users/Joke/Documents/Onderzoek/Studie_7_newpower/WORKDIR/")
sm=1
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
minimum = data.min()
newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename(os.path.join("RF_"+str(sm)+".nii.gz"))
cl=fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file=os.path.join("RF_"+str(sm)+".nii.gz")
cl.inputs.out_localmax_txt_file=os.path.join("locmax_"+str(sm)+".txt")
cl.inputs.num_maxima=10000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
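The smoothing kernel above is specified as a FWHM and converted to a Gaussian standard deviation using FWHM = 2·√(2·ln 2)·σ ≈ 2.355·σ; a self-contained check of that conversion:

```python
import math

def fwhm_to_sd(fwhm):
    # For a Gaussian kernel, FWHM = 2 * sqrt(2 * ln 2) * sd
    return fwhm / (2 * math.sqrt(2 * math.log(2)))

print(fwhm_to_sd(3))  # a FWHM of 3 voxels corresponds to sd ≈ 1.274
```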
peaks = pd.read_csv("locmax_"+str(sm)+".txt",sep="\t").drop('Unnamed: 5',1)
peaks.Value = peaks.Value + minimum
xn = np.arange(-10,10,0.01)
yn = []
for x in xn:
yn.append(peakdens3D(x,1))
twocol = cb.qualitative.Paired_3.mpl_colors  # 'twocol' was undefined in this excerpt; an assumed palette
plt.figure(figsize=(7,5))
plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-5,5,0.1),label="observed distribution")
plt.xlim([-2,5])
plt.ylim([0,0.6])
plt.plot(xn,yn,color=twocol[1],lw=3,label="theoretical distribution")
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper left",frameon=False)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variable
Step2: Placeholder
Step3: Besides variables and placeholders, we also need various other kinds of nodes (constants, operations, etc.)
Step4: Adder node
Step5: Once the whole graph is built, a session is needed to execute the graph and run the actual computation
Step6: Variables that have been defined must be initialized before use
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
node3 = tf.add(node1, node2)
node4 = a + b
sess = tf.Session()
print(sess.run([node1, node2]))
print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
# Inspect the structure of the loaded training set
type(trainset)
trainset.keys()
trainset["X"].shape
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
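A quick self-contained sanity check of the rescaling (duplicating `scale` so the snippet runs on its own; it assumes raw pixel values in [0, 255]):

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    # same logic as the function above
    x = (x - x.min()) / (255 - x.min())
    lo, hi = feature_range
    return x * (hi - lo) + lo

pixels = np.array([0.0, 127.5, 255.0])
print(scale(pixels))  # endpoints map to -1 and 1, the midpoint to 0
```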
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
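The shape comments inside `generator` follow from the fact that a stride-`s` transposed convolution with `'same'` padding multiplies each spatial dimension by `s`; a tiny check of the 4 → 8 → 16 → 32 progression (pure arithmetic, no TensorFlow required):

```python
def conv_transpose_out(size, stride):
    # 'same'-padded transposed conv: output spatial size = input size * stride
    return size * stride

size = 4
for stride in (2, 2, 2):  # the three conv2d_transpose layers above
    size = conv_transpose_out(size, stride)
print(size)  # -> 32
```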
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
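The losses above are built from `tf.nn.sigmoid_cross_entropy_with_logits`, which computes −(t·log σ(z) + (1−t)·log(1−σ(z))) in a numerically stable form. A NumPy sketch of that stable formula, for reference:

```python
import numpy as np

def sigmoid_xent_with_logits(logits, labels):
    # stable form: max(z, 0) - z*t + log(1 + exp(-|z|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

z = np.array([-3.0, 0.0, 2.5])
t = np.array([0.0, 1.0, 1.0])
print(sigmoid_xent_with_logits(z, t))
```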
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
# DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None, use_bias=False)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name='is_training')
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs,
labels: batch_ys,
is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
prev_shape = prev_layer.get_shape().as_list()
weights = tf.Variable(tf.random_normal([int(prev_shape[-1]), num_units], stddev=0.05))
layer = tf.matmul(prev_layer, weights)
    beta = tf.Variable(tf.zeros([num_units]))
    gamma = tf.Variable(tf.ones([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(layer, [0])
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
def batch_norm_inference():
batch_norm_layer = (layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
return batch_norm_layer
layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
layer = gamma * layer + beta
layer = tf.nn.relu(layer)
return layer
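Stripping away the TensorFlow graph machinery, the training-time transformation above is just "standardize each feature over the batch, then scale and shift". A NumPy version makes it easy to verify that the normalized activations have (approximately) zero mean and unit variance per unit:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-3):
    # x: (batch, features); normalize each feature over the batch dimension
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(256, 10))
out = batch_norm_train(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(6))  # ≈ 0 for every unit
```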
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
beta = tf.Variable(tf.zeros([out_channels]))
gamma = tf.Variable(tf.ones([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
    pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
#bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (conv_layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
    def batch_norm_inference():
        batch_norm_layer = (conv_layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
        return batch_norm_layer
    conv_layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
conv_layer = gamma * conv_layer + beta
#conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
# Setting initial conditions
layer = tf.placeholder(tf.float32, [None, 28, 28, 1])
print('Simulating parameter propagation over 5 steps:')
for layer_i in range(1,5):
strides = 2 if layer_i % 3 == 0 else 1
input_shape = layer.get_shape().as_list()
in_channels = layer.get_shape().as_list()[3]
out_channels = layer_i*4
weights = tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)
layer = tf.nn.conv2d(layer, weights, strides=[1,strides, strides, 1], padding='SAME')
print('-----------------------------------------------')
print('strides:{0}'.format(strides))
print('Input layer shape:{0}'.format(input_shape))
print('in_channels:{0}'.format(in_channels))
print('out_channels:{0}'.format(out_channels))
print('Truncated normal output:{0}'.format(weights))
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each of the 10 classes
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SVM calibration
Step2: Now we check the parameters of the best estimator
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import grid_search
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from time import time
import numpy as np
import pandas as pd
import sys
#Test each array of features in the training dataset
if __name__ == '__main__':
csv_file_features = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13]]
estimators_array = []
print('Order of results')
print('Training score, Test score, Time, Features, SVM parameters\n')
C_range = [0.01, 0.1, 1, 10, 100]
gamma_range = [0.01, 0.1, 1, 10, 100]
#Test for all the features sets
for i in csv_file_features:
stdsc = StandardScaler()
df = pd.read_csv("../../Dataset/Train/EEG_Train_Sorted.csv")
#Separate class labels and data
X = df.ix[:, i]
y = df['Class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
#Standardize data
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
#Create SVM and define parameters
svm = SVC()
param_grid = [{'C': C_range,
'gamma': gamma_range,
'kernel': ['rbf'],
'decision_function_shape': ['ovr'],
'random_state': [0]}]
#Set Grid Params
init = time()
gscv = grid_search.GridSearchCV(svm,
param_grid,
n_jobs=6,
cv=StratifiedShuffleSplit(y=y_train,
n_iter=5,
test_size=0.4,
random_state=0))
#Test classifier
gscv.fit(X_train_std, y_train)
#Save our estimator
estimators_array.append(gscv)
#Report testing results
print('{:.5f}, {:.5f}, {:.5f}s, {}, {}'.format(gscv.best_score_, gscv.score(X_test_std, y_test), time() - init, i, gscv.best_params_))
#Get best estimator parameters
print (estimators_array[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1D Optimal Classifier
Step2: Let's generate some data. Below we have provided you with a function that gives you two cases
Step3: Now, let's actually run the function and generate some data.
Step4: Often when dealing with data you may want to either save something you generated or load data you got from somebody else. One of the most common formats is HDF5, a "data model, library, and file format for storing and managing data." It is also the most common storage format in data science. h5py provides a python API for HDF5. In most cases, you do not need to know very much about HDF5 or h5py, just how to read/write tensors into/from files, which you can easily pick up from the h5py [Quick Start](http
Step5: Here's how we can retreive the data back.
Step6: Create histograms and plot the data.
Step7: Next, let's compute the log-likelihood ratio (LLR).
Step8: We can also plot the mapping between the input feature vector and the LLR.
Step9: Don't worry about the 'bumps' towards the edges of the plot, that's just due to low statistics.
Step10: Finally, we can scan a threshold cut on the LLR and make the ROC curve.
Step11: That's it for the first part of this tutorial. Now let's do some Machine Learning!
Step12: Another things is that Keras require the 'X' inputs to be formatted in a certain way. Here's a simple example of that.
Step13: Now when we hopefully understand a little more about how to manipulate numpy arrays, let's prepare our actual data for training a Neural Network in Keras.
Step14: Finally we get to the fun stuff, let's implement our first Neural Network in Keras. To learn more what all this does, you are strongly encouraged to go and read the Keras documentation.
Step15: Now let's add some layers to our model with "Dense". Dense implements the operation
Step16: Next, we compile our model and choose 'binary_crossentropy' as our loss function and 'ADAM' (adaptive moment estimation) as the optimizer (an extension of stochastic gradient descent).
Step17: We then train our model in the training data. Keras automatically runs validation on the test data.
Step18: This very simple model should converge very quickly (even after 1-3 epochs). Training more complicated networks can take a very long time (days, weeks, or even months).
Step19: You can plot the loss versus epoch.
Step20: Finally, let's plot the ROC curve and compare it to the result we got form the 1D optimal classifier above.
|
<ASSISTANT_TASK:>
Python Code:
# Import the print function that is compatible with Python 3
from __future__ import print_function
# Import numpy - the fundamental package for scientific computing with Python
import numpy as np
# Import plotting Python plotting from matplotlib
import matplotlib.pyplot as plt
# Generate 1000 samples from a Gaussian pdf with mu=5 and sigma=2
mu = 5
sigma = 2
number_of_samples = 1000
test_samples = np.random.normal(mu, sigma, number_of_samples)
# Plotting with matplotlib
# First clear the figures
plt.clf()
# Segment the canvas into upper and lower subplots
plt.subplot(211)
# Plot the random numbers
plt.plot(test_samples)
plt.subplot(212)
# Histogram the numbers
plt.hist(test_samples, bins=100)
# Display the canvas
plt.show()
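As a quick sanity check on the sampling, you can verify that the sample mean and standard deviation land close to mu=5 and sigma=2. A sketch using only the standard library (so it works even without numpy; the seed and the 0.3 tolerance are arbitrary choices):

```python
import random
import statistics

# Draw Gaussian samples with the stdlib generator
random.seed(0)
stdlib_samples = [random.gauss(5, 2) for _ in range(1000)]

# With N=1000 the standard error of the mean is sigma/sqrt(N) ~ 0.06,
# so both estimates should sit well within 0.3 of the true values
sample_mean = statistics.fmean(stdlib_samples)
sample_std = statistics.stdev(stdlib_samples)
print(sample_mean, sample_std)
```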
# Function that generates N signal and N background samples (note that 'do_simple' is true by default)
def generate_samples(N, do_superposition=False):
# Case 1: Signal and background are each a single Gaussian
if not do_superposition:
# Signal Gaussian has mean 1 and standard deviation 1
mu1, sigma1 = 1, 1
signal = np.random.normal(mu1, sigma1, N)
# Background Gaussian has mean -1 and standard deviation 1
mu1, sigma1 = -1, 1
background = np.random.normal(mu1, sigma1, N)
# Case 2: Signal and background are superpositions of Gaussians
else:
mu1a, sigma1a = -1.1, 0.5
x1a = np.random.normal(mu1a, sigma1a, int(0.6*N))
mu1b, sigma1b = 1, 1
x1b = np.random.normal(mu1b, sigma1b, int(0.4*N))
mu2a, sigma2a = 2, 0.5
x2a = np.random.normal(mu2a, sigma2a, int(0.7*N))
mu2b, sigma2b = -1, 1
x2b = np.random.normal(mu2b, sigma2b, int(0.3*N))
signal = np.append(x1a,x1b)
background = np.append(x2a,x2b)
return signal, background
# If 'do_superposition = True' we get multiple Gaussians
do_superposition = True
# Number of samples
N = 10000000
# Number of bins in the histograms
nbins = 500
# Generate signal and background
signal, background = generate_samples(N, do_superposition)
import h5py
# create a new file
h5_file = h5py.File("data1.h5", "w")
h5_file.create_dataset('signal', data=signal)
h5_file.create_dataset('background', data=background)
h5_file.close()
h5_file_readonly = h5py.File('data1.h5','r')
signal = h5_file_readonly['signal'][:]
background = h5_file_readonly['background'][:]
h5_file_readonly.close()
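If h5py happens not to be installed, the standard library's pickle module follows the same save/load pattern (a sketch; note pickle is Python-specific, so unlike HDF5 it is not suitable for sharing data with other languages):

```python
import pickle

# Stand-in arrays (in the notebook you would pass the `signal` and
# `background` arrays generated above)
signal_data = [1.0, 2.0, 3.0]
background_data = [-1.0, -2.0]

# Write both arrays to disk in one file...
with open('data1.pkl', 'wb') as f:
    pickle.dump({'signal': signal_data, 'background': background_data}, f)

# ...and read them back
with open('data1.pkl', 'rb') as f:
    data = pickle.load(f)

assert data['signal'] == signal_data and data['background'] == background_data
```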
# Plot the histograms
plt.clf()
plt.hist(signal, 50, density=True, facecolor='blue', alpha=0.75, label='S')
plt.hist(background, 50, density=True, facecolor='red', alpha=0.75, label='B')
plt.xlabel('Input feature x')
plt.ylabel('Probability density (arbitrary units)')
plt.title(r'Signal and Background')
plt.legend(loc='upper right')
plt.axis([-5, 5, 0, 0.7])
plt.grid(True)
plt.show()
# Create the histograms, which we will use for calculating the log-likelihood ratio (LLR) below
h_signal = np.histogram(signal, bins=500, range=(-5,5))
h_background = np.histogram(background, bins=500, range=(-5,5))
LL_dict = {} # used only for plotting
LL_dict_bybin = {} # used for computing
for i in range(len(h_signal[0])):
# the if statements are there to account for "binning effects"
if (h_background[0][i] > 0 and h_signal[0][i] > 0):
LL_dict[h_background[1][i]] = np.log(1.*h_signal[0][i]/h_background[0][i])
elif (h_signal[0][i] > 0): # in case background bin = 0
LL_dict[h_background[1][i]] = np.log(100000.) #huge number
elif (h_background[0][i] > 0): # in case signal bin = 0
LL_dict[h_background[1][i]] = np.log(1./100000.) #very small number
else:
LL_dict[h_background[1][i]] = np.log(1.)
LL_dict_bybin[i] = LL_dict[h_background[1][i]]
# array of 'x' values
xvals = [d for d in LL_dict]
# array of 'y' values
yvals = [LL_dict[d] for d in LL_dict]
xvals = np.array(xvals)
yvals = np.array(yvals)
# Return the indices that result from sorting the array (but do not modify the array itself)
index_sorted = xvals.argsort()
# Sort the arrays
xvals = xvals[index_sorted[::-1]]
yvals = yvals[index_sorted[::-1]]
# Plot the LLR as a function of input feature x
plt.clf()
plt.plot(xvals,yvals)
plt.xlabel('Input feature x')
plt.ylabel('Log Likelihood Ratio')
plt.title(r'LLR as a function of x')
plt.axis([-6,6,-6,6])
plt.show()
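A connection worth noting here: assuming equal prior probabilities for signal and background, the posterior probability that a sample is signal is the logistic (sigmoid) function of the LLR — exactly the kind of output a sigmoid-activated network will try to learn later in this tutorial. A minimal sketch:

```python
import math

def posterior_signal(llr):
    # P(S|x) = p(x|S) / (p(x|S) + p(x|B)) = 1 / (1 + exp(-LLR)),
    # assuming equal priors P(S) = P(B)
    return 1.0 / (1.0 + math.exp(-llr))

print(posterior_signal(0.0))   # ambiguous point: exactly 0.5
print(posterior_signal(4.0))   # strongly signal-like
print(posterior_signal(-4.0))  # strongly background-like
```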
# Number of bins in the histograms
nbins = 50
# Create histograms
h_signal_yvals = np.histogram([], bins=nbins, range=(-10,10))
h_background_yvals = np.histogram([], bins=nbins, range=(-10,10))
# Fill histograms
for i in range(len(h_signal[0])):
whichbin = np.digitize(LL_dict[h_signal[1][i]], h_signal_yvals[1])
if (whichbin > 49):
whichbin = 49
h_signal_yvals[0][whichbin]+=h_signal[0][i]
h_background_yvals[0][whichbin]+=h_background[0][i]
# Plot the PDF of the LLR
plt.clf()
plt.xlabel('Log Likelihood Ratio')
plt.ylabel('Probability Density (arbitrary units)')
plt.title(r'Signal and Background')
plt.bar(h_signal_yvals[1][:-1],h_signal_yvals[0], width=h_signal_yvals[1][1]-h_signal_yvals[1][0],facecolor='blue', alpha=0.75, label='S')
plt.bar(h_background_yvals[1][:-1],h_background_yvals[0], width=h_background_yvals[1][1]-h_background_yvals[1][0],facecolor='red', alpha=0.75, label='B')
plt.show()
# Make the ROC curve
ROCx = np.zeros(nbins)
ROCy = np.zeros(nbins)
intx = 0.
inty = 0.
for i in range(nbins):
intx+=h_signal_yvals[0][i]
inty+=h_background_yvals[0][i]
for i in range(nbins):
sum_signal = 0.
sum_background = 0.
for j in range(i,len(h_signal_yvals[1])-1):
sum_signal+=h_signal_yvals[0][j]
sum_background+=h_background_yvals[0][j]
ROCx[i] = sum_signal/intx
ROCy[i] = sum_background/inty
# Plot the ROC curve
plt.clf()
plt.axes().set_aspect('equal')
plt.plot(ROCx,ROCy,label="LLR")
plt.plot([0,1],[0,1],linestyle='--',color="#C0C0C0",label="Random")
plt.xlabel('Pr(label signal | signal)')
plt.ylabel('Pr(label signal | background)')
plt.title(r'ROC Curve')
plt.axis([0, 1, 0, 1])
plt.legend(loc='upper left')
plt.show()
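A common single-number summary of a ROC curve is the area under it (AUC): 1.0 is a perfect classifier, 0.5 is random guessing. A hedged sketch using the trapezoidal rule — shown on a small made-up curve here, since the loop above produces the (ROCx, ROCy) points in decreasing order and you would want to sort them before integrating:

```python
def trapezoid_auc(xs, ys):
    # Trapezoidal-rule integral of the curve; xs must be sorted ascending
    area = 0.0
    for i in range(1, len(xs)):
        area += (xs[i] - xs[i - 1]) * (ys[i] + ys[i - 1]) / 2.0
    return area

# Made-up curve: a classifier reaching 80% TPR at 50% FPR
fpr = [0.0, 0.5, 1.0]
tpr = [0.0, 0.8, 1.0]
print(trapezoid_auc(fpr, tpr))  # 0.65 -- better than random (0.5)
```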
# Say we have two length-10 arrays (e.g. A=signal, B=background)
A = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=np.float32)
B = np.array([11, 12, 13, 14, 15, 16, 17, 18, 19, 20], dtype=np.float32)
# We want to have labels '1' associated with the signal (A) and labels '0' associated with the background (B)
A_labels = np.ones(10)
B_labels = np.zeros(10)
print('\nA: {}'.format(A))
print('B: {}'.format(B))
print('\nA_labels: {}'.format(A_labels))
print('B_labels: {}\n'.format(B_labels))
# We can concatenate the A and B arrays, and the A_labels and B_labels array like this
C = np.concatenate((A,B))
C_labels = np.concatenate((A_labels,B_labels))
print('\nC: {}'.format(C))
print('C_labels: {}'.format(C_labels))
# Before training on the a dataset one often want to split it up into a 'training set' and a 'test set'
# There is a useful function in scikit-learn that does this for you
# This function also scrambles the examples
from sklearn.model_selection import train_test_split
C_train, C_test, C_labels_train, C_labels_test, = train_test_split(C, C_labels, test_size=3, random_state=1)
# If this seems confusing, taking a look at the print output below should hopefully make things clear
print('\nC_train: {}'.format(C_train))
print('C_labels_train: {}'.format(C_labels_train))
print('\nC_test: {}'.format(C_test))
print('\nC_labels_test: {}'.format(C_labels_test))
A = np.array([1, 2, 3, 4], dtype=np.float32)
print(A)
AT = np.array(A)[np.newaxis].T
print(AT)
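The [np.newaxis].T idiom above is equivalent to reshape(-1, 1), which some find more readable — both turn a flat length-N array into an N x 1 column vector. A quick sketch:

```python
import numpy as np

A = np.array([1, 2, 3, 4], dtype=np.float32)

col1 = A[np.newaxis].T    # the idiom used above: (1, 4) -> transpose -> (4, 1)
col2 = A.reshape(-1, 1)   # same result; -1 means "infer this dimension"

print(col1.shape, col2.shape)      # both (4, 1)
print(np.array_equal(col1, col2))  # True
```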
# total number of signal + background events
n_signal = len(signal)
n_background = len(background)
n_total = len(signal) + len(background)
# use 90% of the total number of events for training the network
n_train = int(0.9*n_total)
# use the remaning 10% for testing
n_test = n_total-n_train
# generate an array of ones as signal labels
sig_labels = np.ones(n_signal)
# generate an array of zeros as background labels
bkg_labels = np.zeros(n_background)
# concatenate the signal and background samples
X = np.concatenate((signal,background))
y = np.concatenate((sig_labels,bkg_labels))
# Format the inputs for Keras
X = np.array(X)[np.newaxis].T
# split the dataset into a training and a validation set and scamble the inputs (as illustrated above)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, = train_test_split(X, y, test_size=n_test, random_state=1)
from keras.models import Sequential
model = Sequential()
from keras.layers import Dense
# Since our samples are only the X values (of either signal or background), the first layer just has one input dimension
model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='relu'))
# We then implement only one hidden layer with 8 neurons (you can experiment with changing this number)
n_neurons_hidden = 8
model.add(Dense(n_neurons_hidden, kernel_initializer='normal', activation='relu'))
# Finally we add one output layer
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Print a summary of the model structure
model.summary()
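You can cross-check the parameter counts reported by model.summary() by hand: a Dense layer with n_in inputs and n_out units has n_in*n_out weights plus n_out biases. For the architecture above (input_dim=1 -> Dense(1) -> Dense(8) -> Dense(1)) this gives a quick sanity check — a sketch in plain Python, no Keras required:

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus one bias per unit
    return n_in * n_out + n_out

# Layer widths for the model above
layer_sizes = [1, 1, 8, 1]
counts = [dense_params(n_in, n_out)
          for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
print(counts)       # [2, 16, 9]
print(sum(counts))  # 27 -- should match the summary's "Total params"
```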
history=model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=3, batch_size=2048)
print(history.history)
loss_history=history.history["loss"]
plt.plot(range(len(loss_history)),loss_history)
plt.show()
# Here we make use of a function in scikit-learn to calculate the ROC curve
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_test, model.predict(X_test))
roc_auc = auc(fpr, tpr)
# Plot the ROC curve from the NN and overlay the ROC curve from the 1D classifier
plt.clf()
plt.axes().set_aspect('equal')
plt.plot(ROCx,ROCy,label="LLR")
plt.plot(tpr,fpr,color='darkorange',label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0,1],[0,1],linestyle='--',color="#C0C0C0",label="Random")
plt.xlabel('Pr(label signal | signal)')
plt.ylabel('Pr(label signal | background)')
plt.title(r'ROC Curve')
plt.axis([0, 1, 0, 1])
plt.legend(loc='upper left')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now a 4-sided die, and pull out hierarchical info
Step2: Build the whole thing out of the Bernoulli dists
|
<ASSISTANT_TASK:>
Python Code:
observations = np.array([20, 6, 6, 6, 6, 6])
with pm.Model():
probs = pm.Dirichlet('probs', a=np.ones(6)) # flat prior
rolls = pm.Multinomial('rolls', n=50, p=probs, observed=observations)
trace = pm.sample(5000)
pm.plot_posterior(trace);
pm.traceplot(trace)
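Because the Dirichlet is the conjugate prior of the multinomial, this particular posterior also has a closed form you can compare the MCMC samples against: with prior alpha and observed counts c, the posterior is Dirichlet(alpha + c), whose mean for face i is (alpha_i + c_i) / sum(alpha + c). A sketch in plain Python:

```python
alpha = [1, 1, 1, 1, 1, 1]      # the flat prior used above
counts = [20, 6, 6, 6, 6, 6]    # the observed rolls

total = sum(a + c for a, c in zip(alpha, counts))
posterior_mean = [(a + c) / total for a, c in zip(alpha, counts)]
print(posterior_mean)
# face 1: 21/56 = 0.375, faces 2-6: 7/56 = 0.125 -- the posterior means
# reported by pm.plot_posterior should land close to these values
```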
# fair would be all the same, we want different
# 1/4 = 0.25, so we want [10%, 20%, 30%, 40%]
observations = np.array([50*.1, 50*.2, 50*.3 ,50*.4])
with pm.Model():
probs = pm.Dirichlet('probs', a=np.ones(4)) # flat prior
rolls = pm.Multinomial('rolls', n=observations.sum(), p=probs, observed=observations)
trace = pm.sample(5000)
# pm.traceplot(trace)
pm.plot_posterior(trace);
# this is the number of 1,2,3,4
N_rolls = 50
observations = np.array([N_rolls*.1, N_rolls*.2, N_rolls*.3 ,N_rolls*.4])
# so the data for even and odd is, even = 1
obs_evenodd = [1]*observations[np.asarray([1, 3])].sum().astype(int) + [0]*observations[np.asarray([0, 2])].sum().astype(int)
# make then obs_2_4 from even, 2=True
obs_2_4 = [1]*observations[1].astype(int) + [0]*observations[3].astype(int)
# make then obs_1_3 from even, 1=True
obs_1_3 = [1]*observations[0].astype(int) + [0]*observations[2].astype(int)
with pm.Model() as our_first_model:
p_evenodd = pm.Beta('p_evenodd', alpha=1, beta=1)
evenodd = pm.Bernoulli('evenodd', p=p_evenodd, observed=obs_evenodd)
p_2_4 = pm.Beta('p_2_4', alpha=1, beta=1)
b_2_4 = pm.Bernoulli('b_2_4', p=p_2_4, observed=obs_2_4)
p_1_3 = pm.Beta('p_1_3', alpha=1, beta=1)
b_1_3 = pm.Bernoulli('b_1_3', p=p_1_3, observed=obs_1_3)
p1 = pm.Deterministic('p1', (1-p_evenodd)*p_1_3)
p2 = pm.Deterministic('p2', (p_evenodd)*p_2_4)
p3 = pm.Deterministic('p3', (1-p_evenodd)*(1-p_1_3))
p4 = pm.Deterministic('p4', (p_evenodd)*(1-p_1_3))
trace = pm.sample(5000,)
pm.traceplot(trace, var_names=['p1', 'p2', 'p3', 'p4'])
pm.plot_posterior(trace, var_names=['p1', 'p2', 'p3', 'p4']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Procedural code version
Step2: Named tuples: tuples, but BETTER!
Step3: Set up arrays
Step4: Use the itertools package to get 15 combinations of 2 rings
Step5: The [Cost, Damage, Armor] of the 22 ring combinations
Step6: Fill 3x 660-cell 3D arrays with the cost, damage and armor for each kit-combination
Step7: E.g. [0,0,0] = rings
Step8: 660 boss vs player battles
Step9: Test a specific combination of weapons, armor and rings
Step10: OOP code version
Step11: ...Use inheritance
Step12: another 660 fights, OOP style
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import itertools
import warnings
warnings.simplefilter(action='ignore')
startbosshp = 104
bossdamage = 8
bossarmor = 1
startplayerhp = 100
playerdamage = 0
playerarmor = 0
from collections import namedtuple
Item = namedtuple('item', ['name', 'cost', 'damage', 'armor'])
weaponsnt = [
Item('Dagger', 8, 4, 0),
Item('Shortsword', 10, 5, 0),
Item('Warhammer', 25, 6, 0),
Item('Longsword', 40, 7, 0),
Item('Greataxe', 74, 8, 0),
]
armornt = [
Item('Leather', 13, 0, 1),
Item('Chainmail', 31, 0, 2),
Item('Splintmail', 53, 0, 3),
Item('Bandedmail', 75, 0, 4),
Item('Platemail', 102, 0, 5),
Item('Naked', 0, 0, 0),
]
ringsnt = [
Item('Damage +1', 25, 1, 0),
Item('Damage +2', 50, 2, 0),
Item('Damage +3', 100, 3, 0),
Item('Defense +1', 20, 0, 1),
Item('Defense +2', 40, 0, 2),
Item('Defense +3', 80, 0, 3),
]
wn = 5 #weapons
an = 6 #armor
rn = 22 #rings
comb = wn * an * rn #total number of possibilities
#setup arrays
player_spent = np.full((wn, an, rn), np.nan)
player_damage = np.full((wn, an, rn), np.nan) #damage
player_armor = np.full((wn, an, rn), np.nan) #armor
#setup arrays
#weapons = np.full((wn, 3), 0)
armor = np.full((an, 3), 0)
rings0 = np.full((6, 3), 0)
rings = np.full((rn, 3), 0)
weapons = np.array([[8, 4, 0], [10, 5, 0], [25, 6, 0], [40, 7, 0], [74, 8, 0]])
armor[:,:] = [[13, 0, 1], [31, 0, 2], [53, 0, 3], [75, 0, 4], [102, 0, 5], [0, 0, 0]]
rings0[:,:] = [[25, 1, 0], [50, 2, 0], [100, 3, 0], [20, 0, 1], [40, 0, 2], [80, 0, 3]]
ring_combs = (list(itertools.combinations(range(6), 2)))
print(ring_combs)
for i in range(0, len(ring_combs)):
rings[i, 0] = int(rings0[ring_combs[i][0]][0] + rings0[ring_combs[i][1]][0]) #spent
rings[i, 1] = int(rings0[ring_combs[i][0]][1] + rings0[ring_combs[i][1]][1]) #damage
rings[i, 2] = int(rings0[ring_combs[i][0]][2] + rings0[ring_combs[i][1]][2]) #armor
rings[15:-1, :] = rings0[:,:]
print(rings)
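A quick consistency check on the kit counts: 6 choose 2 = 15 two-ring pairs, plus 6 single rings, plus 1 "no rings" option (the all-zero row 21 above) gives the 22 ring choices, and 5 weapons times 6 armor options times 22 ring options yields the 660 kits. A sketch:

```python
import itertools
import math

two_ring_combos = len(list(itertools.combinations(range(6), 2)))
print(two_ring_combos)                  # 15, i.e. math.comb(6, 2)
assert two_ring_combos == math.comb(6, 2)

ring_options = two_ring_combos + 6 + 1  # pairs + singles + "no rings"
print(ring_options)                     # 22
print(5 * 6 * ring_options)             # 660 kit combinations
```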
for w in range(0, wn):
for a in range(0, an):
for r in range(0, rn):
player_spent[w, a, r] = weapons[w, 0] + armor[a, 0] + rings[r, 0]
player_damage[w, a, r] = weapons[w, 1] + armor[a, 1] + rings[r, 1]
player_armor[w, a, r] = weapons[w, 2] + armor[a, 2] + rings[r, 2]
playerspent = player_spent[0,0,0]
print('playerspent=',playerspent)
playerdamage = player_damage[0,0,0]
print('playerdamage=',playerdamage)
playerarmor = player_armor[0,0,0]
print('playerarmor=',playerarmor)
bestspend = 999
wi = 0
ai = 0
ri = 0
worstspend = 0
wi2 = 0
ai2 = 0
ri2 = 0
win_no = 0
lose_no = 0
for w in range(0, wn): #length=5
for a in range(0, an): #length=6
for r in range(0, rn): #length=22
#get 1 of 660
playerspent = player_spent[w, a, r]
playerdamage = player_damage[w, a, r]
playerarmor = player_armor[w, a, r]
bosshp = startbosshp
playerhp = startplayerhp
playeractdam = playerdamage - bossarmor
if (playeractdam < 1):
playeractdam = 1
# playactdam = max(playeractdam, 1)
bossactdam = bossdamage - playerarmor
if (bossactdam < 1):
bossactdam = 1
while (bosshp > 0) and (playerhp > 0):
#bosshp = bosshp - playeractdam
bosshp -= playeractdam
#playerhp = playerhp - bossactdam
playerhp -= bossactdam
if playerhp > bosshp: #if I win
#win_no = win_no += 1
win_no += 1
if playerspent < bestspend:
bestspend = playerspent
wi = w
ai = a
ri = r
if playerhp < bosshp: #if I lose
#lose_no = lose_no + 1
lose_no += 1
if playerspent > worstspend:
worstspend = playerspent
wi2 = w
ai2 = a
ri2 = r
print('lowest cost while still winning =',bestspend)
print(weaponsnt[wi])
print(armornt[ai])
print('ringscombi',rings[ri])
print(wi)
print(ai)
print(ri)
print('-')
print('highest cost while still losing =',worstspend)
print(weaponsnt[wi2])
print(armornt[ai2])
print('ringscombi',rings[ri2])
print(wi2)
print(ai2)
print(ri2)
print('-')
print('win_no=',win_no)
print('lose_no=',lose_no)
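Since damage per round is fixed, the turn-by-turn while loop above has a closed form: each side falls after ceil(hp / incoming_damage) simultaneous exchanges, so the player wins outright when the boss falls in strictly fewer rounds, loses when it is strictly more, and the equal-round case depends on leftover hp (matching the playerhp > bosshp comparison above). A hedged sanity-check sketch, not a drop-in replacement:

```python
import math

def rounds_to_fall(hp, incoming_damage):
    # number of simultaneous exchanges before this combatant drops to 0 hp
    return math.ceil(hp / incoming_damage)

# Example: cheapest kit (Dagger only) versus the boss above
boss_hp, boss_damage, boss_armor = 104, 8, 1
player_hp, player_damage, player_armor = 100, 4, 0

player_act = max(player_damage - boss_armor, 1)  # 3
boss_act = max(boss_damage - player_armor, 1)    # 8

boss_falls = rounds_to_fall(boss_hp, player_act)    # 35 rounds
player_falls = rounds_to_fall(player_hp, boss_act)  # 13 rounds
print(boss_falls, player_falls)
print('player wins' if boss_falls < player_falls else 'player loses or ties')
```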
ww = 0
aa = 5
rr = 5
playerspend = player_spent[ww, aa, rr]
playerdamage = player_damage[ww, aa, rr]
playerarmor = player_armor[ww, aa, rr]
bosshp = startbosshp
playerhp = startplayerhp
playeractdam = playerdamage - bossarmor
if (playeractdam < 1):
playeractdam = 1
bossactdam = bossdamage - playerarmor
if (bossactdam < 1):
bossactdam = 1
while (bosshp > 0) & (playerhp > 0):
bosshp = bosshp - playeractdam
playerhp = playerhp - bossactdam
print('bosshp=',bosshp)
print('playerhp=',playerhp)
print('-')
# class Boss:
# def __init__(self, hp, damage, armor):
# self.hp = hp
# self.damage = damage
# self.armor = armor
# def calc_actdamage(self, playerarmor):
# self.actdamage = self.damage - playerarmor
# if (self.actdamage < 1):
# self.actdamage = 1
# class Player:
# def __init__(self, spent, hp, damage, armor):
# self.spent = spent
# self.hp = hp
# self.damage = damage
# self.armor = armor
# def calc_actdamage(self, bossarmor):
# self.actdamage = self.damage - bossarmor
# if (self.actdamage < 1):
# self.actdamage = 1
class RpgChara:
def __init__(self, hp, damage, armor):
self.hp = hp
self.damage = damage
self.armor = armor
def calc_actdamage(self, enemyarmor):
self.actdamage = self.damage - enemyarmor
if (self.actdamage < 1):
self.actdamage = 1
class Boss(RpgChara):
pass
class Player(RpgChara):
def __init__(self, spent, hp, damage, armor):
super().__init__(hp, damage, armor)
self.spent = spent
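The calc_actdamage method implements the game's damage floor — armor subtracts from damage, but every hit deals at least 1. An equivalent standalone one-liner, for illustration:

```python
def actual_damage(damage, enemy_armor):
    # armor reduces damage, but a hit always lands for at least 1
    return max(damage - enemy_armor, 1)

print(actual_damage(4, 1))   # dagger vs boss armor 1 -> 3
print(actual_damage(8, 0))   # boss vs an unarmored player -> 8
print(actual_damage(4, 10))  # heavily armored target still takes 1
```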
bestspend = 999 #too high in order to come down
wi = 0
ai = 0
ri = 0
worstspend = 0 #too low in order to go up
wi2 = 0
ai2 = 0
ri2 = 0
win_no = 0
lose_no = 0
for w in range(0, wn): #length5
for a in range(0, an): #length6
for r in range(0, rn): #length22
#get 1 of 660 instances per loop
boss_i = Boss(startbosshp, bossdamage, bossarmor)
player_i = Player(player_spent[w, a, r], startplayerhp, player_damage[w, a, r], \
player_armor[w, a, r])
#but what is their actual damage
boss_i.calc_actdamage(player_i.armor)
player_i.calc_actdamage(boss_i.armor)
while (boss_i.hp > 0) & (player_i.hp > 0):
boss_i.hp -= player_i.actdamage
player_i.hp -= boss_i.actdamage
if player_i.hp > boss_i.hp: #if I win
win_no += 1
if player_i.spent < bestspend:
bestspend = player_i.spent
wi = w
ai = a
ri = r
if player_i.hp < boss_i.hp: #if I lose
lose_no += 1
if player_i.spent > worstspend:
worstspend = player_i.spent
wi2 = w
ai2 = a
ri2 = r
print('lowest spend while still winning =',bestspend)
print(weaponsnt[wi])
print(armornt[ai])
print('ringscombi',rings[ri])
print(ri)
print('-')
print('highest spend while still losing =',worstspend)
print(weaponsnt[wi2])
print(armornt[ai2])
print('ringscombi',rings[ri2])
print(ri2)
print('-')
print('win_no=',win_no)
print('lose_no=',lose_no)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def is_possible(x, y):
if (x < 2 and y != 0):
return False
y = y - x + 1
if (y % 2 == 0 and y >= 0):
return True
else:
return False
if __name__ == '__main__':
x = 5
y = 2
if (is_possible(x, y)):
print("Yes")
else:
print("No")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For example if we have a series of dates we can then generate features such as Year, Month, Day, Dayofweek, Is_month_start, etc as shown below
Step2: This function works by determining if a column is continuous or categorical based on the cardinality of its values. If the cardinality is above the max_card parameter (or the column is a float datatype), the column is added to cont_names; otherwise it goes in cat_names. An example is below
Step3: For example we will make a sample DataFrame with int, float, bool, and object datatypes
Step4: We can then call df_shrink_dtypes to find the smallest possible datatype that can support the data
Step5: df_shrink(df) attempts to make a DataFrame use less memory, by fitting numeric columns into the smallest datatypes. In addition
Step6: Let's compare the two
Step7: We can see that the datatypes changed, and even further we can look at their relative memory usages
Step8: Here's another example using the ADULT_SAMPLE dataset
Step9: We reduced the overall memory used by 79%!
Step10: df
Step11: These transforms are applied as soon as the data is available rather than as data is called from the DataLoader
Step12: While visually in the DataFrame you will not see a change, the classes are stored in to.procs.categorify as we can see below on a dummy DataFrame
Step13: Each column's unique values are stored in a dictionary of column
Step14: Currently, filling with the median, a constant, and the mode are supported.
Step15: TabularPandas Pipelines -
Step16: Integration example
Step17: We can decode any set of transformed data by calling to.decode_row with our raw data
Step18: We can make new test datasets based on the training data with the to.new()
Step19: We can then convert it to a DataLoader
Step20: Other target types
Step21: Not one-hot encoded
Step22: Regression
Step24: Not being used now - for multi-modal
Step25: Export -
|
<ASSISTANT_TASK:>
Python Code:
#|export
def make_date(df, date_field):
"Make sure `df[date_field]` is of the right date type."
field_dtype = df[date_field].dtype
if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
field_dtype = np.datetime64
if not np.issubdtype(field_dtype, np.datetime64):
df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True)
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})
make_date(df, 'date')
test_eq(df['date'].dtype, np.dtype('datetime64[ns]'))
#|export
def add_datepart(df, field_name, prefix=None, drop=True, time=False):
"Helper function that adds columns relevant to a date in the column `field_name` of `df`."
make_date(df, field_name)
field = df[field_name]
prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in attr: df[prefix + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
mask = ~field.isna()
df[prefix + 'Elapsed'] = np.where(mask,field.values.astype(np.int64) // 10 ** 9,np.nan)
if drop: df.drop(field_name, axis=1, inplace=True)
return df
df = pd.DataFrame({'date': ['2019-12-04', None, '2019-11-15', '2019-10-24']})
df = add_datepart(df, 'date')
df.head()
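The Elapsed column is the timestamp expressed as whole seconds since the Unix epoch (the int64 nanosecond representation floor-divided by 10**9). A standard-library sketch of the same quantity for one of the dates above:

```python
import calendar
from datetime import datetime

# Seconds since the Unix epoch for 2019-12-04 (midnight, UTC)
elapsed = calendar.timegm(datetime(2019, 12, 4).timetuple())
print(elapsed)

# Midnight timestamps are exact multiples of 86400 seconds...
assert elapsed % 86400 == 0
# ...and converting back recovers the original date
assert datetime.utcfromtimestamp(elapsed) == datetime(2019, 12, 4)
```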
#|hide
test_eq(df.columns, ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])
test_eq(df[df.Elapsed.isna()].shape,(1, 13))
# Test that week dtype is consistent with other datepart fields
test_eq(df['Year'].dtype, df['Week'].dtype)
test_eq(pd.api.types.is_numeric_dtype(df['Elapsed']), True)
#|hide
df = pd.DataFrame({'f1': [1.],'f2': [2.],'f3': [3.],'f4': [4.],'date':['2019-12-04']})
df = add_datepart(df, 'date')
df.head()
#|hide
# Test Order of columns when date isn't in first position
test_eq(df.columns, ['f1', 'f2', 'f3', 'f4', 'Year', 'Month', 'Week', 'Day',
'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])
# Test that week dtype is consistent with other datepart fields
test_eq(df['Year'].dtype, df['Week'].dtype)
#|export
def _get_elapsed(df,field_names, date_field, base_field, prefix):
for f in field_names:
day1 = np.timedelta64(1, 'D')
last_date,last_base,res = np.datetime64(),None,[]
for b,v,d in zip(df[base_field].values, df[f].values, df[date_field].values):
if last_base is None or b != last_base:
last_date,last_base = np.datetime64(),b
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[prefix + f] = res
return df
#|export
def add_elapsed_times(df, field_names, date_field, base_field):
"Add in `df` for each event in `field_names` the elapsed time according to `date_field` grouped by `base_field`"
field_names = list(L(field_names))
#Make sure date_field is a date and base_field a bool
df[field_names] = df[field_names].astype('bool')
make_date(df, date_field)
work_df = df[field_names + [date_field, base_field]]
work_df = work_df.sort_values([base_field, date_field])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After')
work_df = work_df.sort_values([base_field, date_field], ascending=[True, False])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before')
for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]:
work_df[a] = work_df[a].fillna(0).astype(int)
for a,s in zip([True, False], ['_bw', '_fw']):
work_df = work_df.set_index(date_field)
tmp = (work_df[[base_field] + field_names].sort_index(ascending=a)
.groupby(base_field).rolling(7, min_periods=1).sum())
if base_field in tmp: tmp.drop(base_field, axis=1,inplace=True)
tmp.reset_index(inplace=True)
work_df.reset_index(inplace=True)
work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s])
work_df.drop(field_names, axis=1, inplace=True)
return df.merge(work_df, 'left', [date_field, base_field])
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24'],
'event': [False, True, False, True], 'base': [1,1,2,2]})
df = add_elapsed_times(df, ['event'], 'date', 'base')
df.head()
#|export
def cont_cat_split(df, max_card=20, dep_var=None):
"Helper function that returns column names of cont and cat variables from given `df`."
cont_names, cat_names = [], []
for label in df:
if label in L(dep_var): continue
if ((pd.api.types.is_integer_dtype(df[label].dtype) and
df[label].unique().shape[0] > max_card) or
pd.api.types.is_float_dtype(df[label].dtype)):
cont_names.append(label)
else: cat_names.append(label)
return cont_names, cat_names
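Stripped of pandas dtypes, the decision rule is just "float columns, or int columns with more than `max_card` distinct values, are continuous; everything else is categorical". A plain-Python sketch of that rule (the helper name is ours):

```python
def split_cols(columns, max_card=20):
    # columns: {name: list of values}; mirrors cont_cat_split's rule
    cont, cat = [], []
    for name, vals in columns.items():
        is_float = any(isinstance(v, float) for v in vals)
        high_card_int = all(isinstance(v, int) for v in vals) and len(set(vals)) > max_card
        (cont if is_float or high_card_int else cat).append(name)
    return cont, cat
```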
# Example with simple numpy types
df = pd.DataFrame({'cat1': [1, 2, 3, 4], 'cont1': [1., 2., 3., 2.], 'cat2': ['a', 'b', 'b', 'a'],
'i8': pd.Series([1, 2, 3, 4], dtype='int8'),
'u8': pd.Series([1, 2, 3, 4], dtype='uint8'),
'f16': pd.Series([1, 2, 3, 4], dtype='float16'),
'y1': [1, 0, 1, 0], 'y2': [2, 1, 1, 0]})
cont_names, cat_names = cont_cat_split(df)
#|hide_input
print(f'cont_names: {cont_names}\ncat_names: {cat_names}')
#|hide
# Test all columns
cont, cat = cont_cat_split(df)
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8', 'y1', 'y2']))
# Test exclusion of dependent variable
cont, cat = cont_cat_split(df, dep_var='y1')
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8', 'y2']))
# Test exclusion of multi-label dependent variables
cont, cat = cont_cat_split(df, dep_var=['y1', 'y2'])
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8']))
# Test maximal cardinality bound for int variable
cont, cat = cont_cat_split(df, max_card=3)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16'], ['cat2', 'y1', 'y2']))
cont, cat = cont_cat_split(df, max_card=2)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16', 'y2'], ['cat2', 'y1']))
cont, cat = cont_cat_split(df, max_card=1)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16', 'y1', 'y2'], ['cat2']))
# Example with pandas types and generated columns
df = pd.DataFrame({'cat1': pd.Series(['l','xs','xl','s'], dtype='category'),
'ui32': pd.Series([1, 2, 3, 4], dtype='UInt32'),
'i64': pd.Series([1, 2, 3, 4], dtype='Int64'),
'f16': pd.Series([1, 2, 3, 4], dtype='Float64'),
'd1_date': ['2021-02-09', None, '2020-05-12', '2020-08-14'],
})
df = add_datepart(df, 'd1_date', drop=False)
df['cat1'].cat.set_categories(['xl','l','m','s','xs'], ordered=True, inplace=True)
cont_names, cat_names = cont_cat_split(df, max_card=0)
#|hide_input
print(f'cont_names: {cont_names}\ncat_names: {cat_names}')
#|hide
cont, cat = cont_cat_split(df, max_card=0)
test_eq((cont, cat), (
['ui32', 'i64', 'f16', 'd1_Year', 'd1_Month', 'd1_Week', 'd1_Day', 'd1_Dayofweek', 'd1_Dayofyear', 'd1_Elapsed'],
['cat1', 'd1_date', 'd1_Is_month_end', 'd1_Is_month_start', 'd1_Is_quarter_end', 'd1_Is_quarter_start', 'd1_Is_year_end', 'd1_Is_year_start']
))
#|export
def df_shrink_dtypes(df, skip=[], obj2cat=True, int2uint=False):
"Return any possible smaller data types for DataFrame columns. Allows `object`->`category`, `int`->`uint`, and exclusion."
# 1: Build column filter and typemap
excl_types, skip = {'category','datetime64[ns]','bool'}, set(skip)
typemap = {'int' : [(np.dtype(x), np.iinfo(x).min, np.iinfo(x).max) for x in (np.int8, np.int16, np.int32, np.int64)],
'uint' : [(np.dtype(x), np.iinfo(x).min, np.iinfo(x).max) for x in (np.uint8, np.uint16, np.uint32, np.uint64)],
'float' : [(np.dtype(x), np.finfo(x).min, np.finfo(x).max) for x in (np.float32, np.float64, np.longdouble)]
}
if obj2cat: typemap['object'] = 'category' # User wants to categorify dtype('Object'), which may not always save space
else: excl_types.add('object')
new_dtypes = {}
exclude = lambda dt: dt[1].name not in excl_types and dt[0] not in skip
for c, old_t in filter(exclude, df.dtypes.items()):
t = next((v for k,v in typemap.items() if old_t.name.startswith(k)), None)
if isinstance(t, list): # Find the smallest type that fits
if int2uint and t==typemap['int'] and df[c].min() >= 0: t=typemap['uint']
new_t = next((r[0] for r in t if r[1]<=df[c].min() and r[2]>=df[c].max()), None)
if new_t and new_t == old_t: new_t = None
else: new_t = t if isinstance(t, str) else None
if new_t: new_dtypes[c] = new_t
return new_dtypes
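The dtype search above walks an ordered list of candidates and keeps the first whose range covers the column's min/max. The same idea with plain int bounds instead of `np.iinfo` (names are ours):

```python
INT_RANGES = [
    ('int8',  -2**7,  2**7  - 1),
    ('int16', -2**15, 2**15 - 1),
    ('int32', -2**31, 2**31 - 1),
    ('int64', -2**63, 2**63 - 1),
]

def smallest_int_type(values):
    # First candidate whose [tmin, tmax] range contains all the values
    lo, hi = min(values), max(values)
    return next(name for name, tmin, tmax in INT_RANGES if tmin <= lo and hi <= tmax)
```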
show_doc(df_shrink_dtypes, title_level=3)
df = pd.DataFrame({'i': [-100, 0, 100], 'f': [-100.0, 0.0, 100.0], 'e': [True, False, True],
'date':['2019-12-04','2019-11-29','2019-11-15',]})
df.dtypes
dt = df_shrink_dtypes(df)
dt
#|hide
test_eq(df['i'].dtype, 'int64')
test_eq(dt['i'], 'int8')
test_eq(df['f'].dtype, 'float64')
test_eq(dt['f'], 'float32')
# Default ignore 'object' and 'boolean' columns
test_eq(df['date'].dtype, 'object')
test_eq(dt['date'], 'category')
# Test categorifying 'object' type
dt2 = df_shrink_dtypes(df, obj2cat=False)
test_eq('date' not in dt2, True)
#|export
def df_shrink(df, skip=[], obj2cat=True, int2uint=False):
"Reduce DataFrame memory usage, by casting to smaller types returned by `df_shrink_dtypes()`."
dt = df_shrink_dtypes(df, skip, obj2cat=obj2cat, int2uint=int2uint)
return df.astype(dt)
show_doc(df_shrink, title_level=3)
df = pd.DataFrame({'i': [-100, 0, 100], 'f': [-100.0, 0.0, 100.0], 'u':[0, 10,254],
'date':['2019-12-04','2019-11-29','2019-11-15']})
df2 = df_shrink(df, skip=['date'])
df.dtypes
df2.dtypes
#|hide_input
print(f'Initial Dataframe: {df.memory_usage().sum()} bytes')
print(f'Reduced Dataframe: {df2.memory_usage().sum()} bytes')
#|hide
test_eq(df['i'].dtype=='int64' and df2['i'].dtype=='int8', True)
test_eq(df['f'].dtype=='float64' and df2['f'].dtype=='float32', True)
test_eq(df['u'].dtype=='int64' and df2['u'].dtype=='int16', True)
test_eq(df2['date'].dtype, 'object')
test_eq(df2.memory_usage().sum() < df.memory_usage().sum(), True)
# Test int => uint (when col.min() >= 0)
df3 = df_shrink(df, int2uint=True)
test_eq(df3['u'].dtype, 'uint8') # int64 -> uint8 instead of int16
# Test excluding columns
df4 = df_shrink(df, skip=['i','u'])
test_eq(df['i'].dtype, df4['i'].dtype)
test_eq(df4['u'].dtype, 'int64')
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
new_df = df_shrink(df, int2uint=True)
#|hide_input
print(f'Initial Dataframe: {df.memory_usage().sum() / 1000000} megabytes')
print(f'Reduced Dataframe: {new_df.memory_usage().sum() / 1000000} megabytes')
#|export
class _TabIloc:
"Get/set rows by iloc and cols by name"
def __init__(self,to): self.to = to
def __getitem__(self, idxs):
df = self.to.items
if isinstance(idxs,tuple):
rows,cols = idxs
cols = df.columns.isin(cols) if is_listy(cols) else df.columns.get_loc(cols)
else: rows,cols = idxs,slice(None)
return self.to.new(df.iloc[rows, cols])
#|export
class Tabular(CollBase, GetAttr, FilteredBase):
"A `DataFrame` wrapper that knows which cols are cont/cat/y, and returns rows in `__getitem__`"
_default,with_cont='procs',True
def __init__(self, df, procs=None, cat_names=None, cont_names=None, y_names=None, y_block=None, splits=None,
do_setup=True, device=None, inplace=False, reduce_memory=True):
if inplace and splits is not None and pd.options.mode.chained_assignment is not None:
warn("Using inplace with splits will trigger a pandas error. Set `pd.options.mode.chained_assignment=None` to avoid it.")
if not inplace: df = df.copy()
if reduce_memory: df = df_shrink(df)
if splits is not None: df = df.iloc[sum(splits, [])]
self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders)
super().__init__(df)
self.y_names,self.device = L(y_names),device
if y_block is None and self.y_names:
# Make ys categorical if they're not numeric
ys = df[self.y_names]
if len(ys.select_dtypes(include='number').columns)!=len(ys.columns): y_block = CategoryBlock()
else: y_block = RegressionBlock()
if y_block is not None and do_setup:
if callable(y_block): y_block = y_block()
procs = L(procs) + y_block.type_tfms
self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs)
self.split = len(df) if splits is None else len(splits[0])
if do_setup: self.setup()
def new(self, df, inplace=False):
return type(self)(df, do_setup=False, reduce_memory=False, y_block=TransformBlock(), inplace=inplace,
**attrdict(self, 'procs','cat_names','cont_names','y_names', 'device'))
def subset(self, i): return self.new(self.items[slice(0,self.split) if i==0 else slice(self.split,len(self))])
def copy(self): self.items = self.items.copy(); return self
def decode(self): return self.procs.decode(self)
def decode_row(self, row): return self.new(pd.DataFrame(row).T).decode().items.iloc[0]
def show(self, max_n=10, **kwargs): display_df(self.new(self.all_cols[:max_n]).decode().items)
def setup(self): self.procs.setup(self)
def process(self): self.procs(self)
def loc(self): return self.items.loc
def iloc(self): return _TabIloc(self)
def targ(self): return self.items[self.y_names]
def x_names (self): return self.cat_names + self.cont_names
def n_subsets(self): return 2
def y(self): return self[self.y_names[0]]
def new_empty(self): return self.new(pd.DataFrame({}, columns=self.items.columns))
def to_device(self, d=None):
self.device = d
return self
def all_col_names (self):
ys = [n for n in self.y_names if n in self.items.columns]
return self.x_names + self.y_names if len(ys) == len(self.y_names) else self.x_names
properties(Tabular,'loc','iloc','targ','all_col_names','n_subsets','x_names','y')
#|export
class TabularPandas(Tabular):
"A `Tabular` object with transforms"
def transform(self, cols, f, all_col=True):
if not all_col: cols = [c for c in cols if c in self.items.columns]
if len(cols) > 0: self[cols] = self[cols].transform(f)
#|export
def _add_prop(cls, nm):
@property
def f(o): return o[list(getattr(o,nm+'_names'))]
@f.setter
def fset(o, v): o[getattr(o,nm+'_names')] = v
setattr(cls, nm+'s', f)
setattr(cls, nm+'s', fset)
_add_prop(Tabular, 'cat')
_add_prop(Tabular, 'cont')
_add_prop(Tabular, 'y')
_add_prop(Tabular, 'x')
_add_prop(Tabular, 'all_col')
#|hide
df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})
to = TabularPandas(df, cat_names='a')
t = pickle.loads(pickle.dumps(to))
test_eq(t.items,to.items)
test_eq(to.all_cols,to[['a']])
#|hide
import gc
def _count_objs(o):
    "Counts number of instances of class `o`"
    objs = gc.get_objects()
    return len([x for x in objs if isinstance(x, o)])
df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})
df_b = pd.DataFrame({'a':[1,2,0,0,2], 'b':[1,0,3,0,1]})
to = TabularPandas(df, cat_names='a', inplace=True)
_init_count = _count_objs(pd.DataFrame)
to_new = to.new(df_b, inplace=True)
test_eq(_init_count, _count_objs(pd.DataFrame))
#|export
class TabularProc(InplaceTransform):
"Base class to write a non-lazy tabular processor for dataframes"
def setup(self, items=None, train_setup=False): #TODO: properly deal with train_setup
super().setup(getattr(items,'train',items), train_setup=False)
# Procs are called as soon as data is available
return self(items.items if isinstance(items,Datasets) else items)
@property
def name(self): return f"{super().name} -- {getattr(self,'__stored_args__',{})}"
#|export
def _apply_cats (voc, add, c):
if not is_categorical_dtype(c):
return pd.Categorical(c, categories=voc[c.name][add:]).codes+add
return c.cat.codes+add #if is_categorical_dtype(c) else c.map(voc[c.name].o2i)
def _decode_cats(voc, c): return c.map(dict(enumerate(voc[c.name].items)))
#|export
class Categorify(TabularProc):
"Transform the categorical variables to something similar to `pd.Categorical`"
order = 1
def setups(self, to):
store_attr(classes={n:CategoryMap(to.iloc[:,n].items, add_na=(n in to.cat_names)) for n in to.cat_names}, but='to')
def encodes(self, to): to.transform(to.cat_names, partial(_apply_cats, self.classes, 1))
def decodes(self, to): to.transform(to.cat_names, partial(_decode_cats, self.classes))
def __getitem__(self,k): return self.classes[k]
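Per column, what `Categorify` does reduces to: build a vocab with `'#na#'` at index 0, map each value to its index (unseen values fall back to code 0), and decode by indexing back into the vocab. A stdlib sketch (function names are ours):

```python
def encode_cats(vals, vocab):
    # vocab[0] is '#na#'; values not in the vocab map to code 0
    idx = {v: i for i, v in enumerate(vocab)}
    return [idx.get(v, 0) for v in vals]

def decode_cats(codes, vocab):
    return [vocab[c] for c in codes]
```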
#|exporti
@Categorize
def setups(self, to:Tabular):
if len(to.y_names) > 0:
if self.vocab is None:
self.vocab = CategoryMap(getattr(to, 'train', to).iloc[:,to.y_names[0]].items, strict=True)
else:
self.vocab = CategoryMap(self.vocab, sort=False, add_na=self.add_na)
self.c = len(self.vocab)
return self(to)
@Categorize
def encodes(self, to:Tabular):
to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0), all_col=False)
return to
@Categorize
def decodes(self, to:Tabular):
to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}), all_col=False)
return to
show_doc(Categorify, title_level=3)
df = pd.DataFrame({'a':[0,1,2,0,2]})
to = TabularPandas(df, Categorify, 'a')
to.show()
cat = to.procs.categorify
cat.classes
#|hide
def test_series(a,b): return test_eq(list(a), b)
test_series(cat['a'], ['#na#',0,1,2])
test_series(to['a'], [1,2,3,1,3])
#|hide
df1 = pd.DataFrame({'a':[1,0,3,-1,2]})
to1 = to.new(df1)
to1.process()
#Values that weren't in the training df are sent to 0 (na)
test_series(to1['a'], [2,1,0,0,3])
to2 = cat.decode(to1)
test_series(to2['a'], [1,0,'#na#','#na#',2])
#|hide
#test with splits
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2]})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]])
test_series(cat['a'], ['#na#',0,1,2])
test_series(to['a'], [1,2,3,0,3])
#|hide
df = pd.DataFrame({'a':pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)})
to = TabularPandas(df, Categorify, 'a')
cat = to.procs.categorify
test_series(cat['a'], ['#na#','H','M','L'])
test_series(to.items.a, [2,1,3,2])
to2 = cat.decode(to)
test_series(to2['a'], ['M','H','L','M'])
#|hide
#test with targets
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'b', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_series(to.vocab, ['a', 'b'])
test_series(to['b'], [0,1,0,1,1])
to2 = to.procs.decode(to)
test_series(to2['b'], ['a', 'b', 'a', 'b', 'b'])
#|hide
#test with targets and train
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'c', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_series(to.vocab, ['a', 'b'])
#|hide
#test to ensure no copies of the dataframe are stored
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, cat, cont_names='a', splits=[[0,1,2],[3,4]])
test_eq(hasattr(to.categorify, 'to'), False)
#|exporti
@Normalize
def setups(self, to:Tabular):
store_attr(but='to', means=dict(getattr(to, 'train', to).conts.mean()),
stds=dict(getattr(to, 'train', to).conts.std(ddof=0)+1e-7))
return self(to)
@Normalize
def encodes(self, to:Tabular):
to.conts = (to.conts-self.means) / self.stds
return to
@Normalize
def decodes(self, to:Tabular):
to.conts = (to.conts*self.stds ) + self.means
return to
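`Normalize` stores the per-column train-set mean and population std (`ddof=0`) at setup, encodes as `(x - mean) / std`, and decodes by inverting. The same arithmetic in plain Python:

```python
def fit_normalize(xs):
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5   # population std, ddof=0
    return m, s

def encode(xs, m, s): return [(x - m) / s for x in xs]
def decode(zs, m, s): return [z * s + m for z in zs]
```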
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a')
x = np.array([0,1,2,3,4])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (x-m)/s)
#|hide
df1 = pd.DataFrame({'a':[5,6,7]})
to1 = to.new(df1)
to1.process()
test_close(to1['a'].values, (np.array([5,6,7])-m)/s)
to2 = norm.decode(to1)
test_close(to2['a'].values, [5,6,7])
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])
x = np.array([0,1,2])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (np.array([0,1,2,3,4])-m)/s)
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])
test_eq(hasattr(to.procs.normalize, 'to'), False)
#|export
class FillStrategy:
"Namespace containing the various filling strategies."
def median (c,fill): return c.median()
def constant(c,fill): return fill
def mode (c,fill): return c.dropna().value_counts().idxmax()
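Each strategy is just a callable `(column, fill_value) -> value`, and `FillMissing` stores its result per column at setup. For intuition, plain-list versions of the median and mode strategies (names are ours; the median uses the same midpoint interpolation as pandas for even-length data):

```python
from collections import Counter

def fill_median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def fill_mode(xs):
    return Counter(xs).most_common(1)[0][0]
```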
#|export
class FillMissing(TabularProc):
"Fill the missing values in continuous columns."
def __init__(self, fill_strategy=FillStrategy.median, add_col=True, fill_vals=None):
if fill_vals is None: fill_vals = defaultdict(int)
store_attr()
def setups(self, to):
missing = pd.isnull(to.conts).any()
store_attr(but='to', na_dict={n:self.fill_strategy(to[n], self.fill_vals[n])
for n in missing[missing].keys()})
self.fill_strategy = self.fill_strategy.__name__
def encodes(self, to):
missing = pd.isnull(to.conts)
for n in missing.any()[missing.any()].keys():
assert n in self.na_dict, f"nan values in `{n}` but not in setup training set"
for n in self.na_dict.keys():
to[n].fillna(self.na_dict[n], inplace=True)
if self.add_col:
to.loc[:,n+'_na'] = missing[n]
if n+'_na' not in to.cat_names: to.cat_names.append(n+'_na')
show_doc(FillMissing, title_level=3)
#|hide
fill1,fill2,fill3 = (FillMissing(fill_strategy=s)
for s in [FillStrategy.median, FillStrategy.constant, FillStrategy.mode])
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4]})
df1 = df.copy(); df2 = df.copy()
tos = (TabularPandas(df, fill1, cont_names='a'),
TabularPandas(df1, fill2, cont_names='a'),
TabularPandas(df2, fill3, cont_names='a'))
test_eq(fill1.na_dict, {'a': 1.5})
test_eq(fill2.na_dict, {'a': 0})
test_eq(fill3.na_dict, {'a': 1.0})
for t in tos: test_eq(t.cat_names, ['a_na'])
for to_,v in zip(tos, [1.5, 0., 1.]):
test_eq(to_['a'].values, np.array([0, 1, v, 1, 2, 3, 4]))
test_eq(to_['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))
#|hide
fill = FillMissing()
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4], 'b': [0,1,2,3,4,5,6]})
to = TabularPandas(df, fill, cont_names=['a', 'b'])
test_eq(fill.na_dict, {'a': 1.5})
test_eq(to.cat_names, ['a_na'])
test_eq(to['a'].values, np.array([0, 1, 1.5, 1, 2, 3, 4]))
test_eq(to['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))
test_eq(to['b'].values, np.array([0,1,2,3,4,5,6]))
#|hide
fill = FillMissing()
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4], 'b': [0,1,2,3,4,5,6]})
to = TabularPandas(df, fill, cont_names=['a', 'b'])
test_eq(hasattr(to.procs.fill_missing, 'to'), False)
#|hide
procs = [Normalize, Categorify, FillMissing, noop]
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4]})
to = TabularPandas(df, procs, cat_names='a', cont_names='b')
#Test setup and apply on df_main
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_series(to['b_na'], [1,1,2,1,1,1,1])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
#|hide
#Test apply on y_names
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_series(to['b_na'], [1,1,2,1,1,1,1])
test_series(to['c'], [1,0,1,0,0,1,0])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
test_eq(to.vocab, ['a','b'])
#|hide
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_eq(df.a.dtype, np.int64 if sys.platform == "win32" else int)
test_series(to['b_na'], [1,1,2,1,1,1,1])
test_series(to['c'], [1,0,1,0,0,1,0])
#|hide
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,np.nan,1,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, cat_names='a', cont_names='b', y_names='c', splits=[[0,1,4,6], [2,3,5]])
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,2,1,0,2,0])
test_eq(df.a.dtype, np.int64 if sys.platform == "win32" else int)
test_series(to['b_na'], [1,2,1,1,1,1,1])
test_series(to['c'], [1,0,0,0,1,0,1])
#|export
def _maybe_expand(o): return o[:,None] if o.ndim==1 else o
#|export
class ReadTabBatch(ItemTransform):
"Transform `TabularPandas` values into a `Tensor` with the ability to decode"
def __init__(self, to): self.to = to.new_empty()
def encodes(self, to):
if not to.with_cont: res = (tensor(to.cats).long(),)
else: res = (tensor(to.cats).long(),tensor(to.conts).float())
ys = [n for n in to.y_names if n in to.items.columns]
if len(ys) == len(to.y_names): res = res + (tensor(to.targ),)
if to.device is not None: res = to_device(res, to.device)
return res
def decodes(self, o):
o = [_maybe_expand(o_) for o_ in to_np(o) if o_.size != 0]
vals = np.concatenate(o, axis=1)
try: df = pd.DataFrame(vals, columns=self.to.all_col_names)
except: df = pd.DataFrame(vals, columns=self.to.x_names)
to = self.to.new(df)
return to
#|export
@typedispatch
def show_batch(x: Tabular, y, its, max_n=10, ctxs=None):
x.show()
#|export
@delegates()
class TabDataLoader(TfmdDL):
"A transformed `DataLoader` for Tabular data"
def __init__(self, dataset, bs=16, shuffle=False, after_batch=None, num_workers=0, **kwargs):
if after_batch is None: after_batch = L(TransformBlock().batch_tfms)+ReadTabBatch(dataset)
super().__init__(dataset, bs=bs, shuffle=shuffle, after_batch=after_batch, num_workers=num_workers, **kwargs)
def create_batch(self, b): return self.dataset.iloc[b]
def do_item(self, s): return 0 if s is None else s
TabularPandas._dl_type = TabDataLoader
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_test.drop('salary', axis=1, inplace=True)
df_main.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="salary", splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
to.show()
row = to.items.iloc[0]
to.decode_row(row)
to_tst = to.new(df_test)
to_tst.process()
to_tst.items.head()
tst_dl = dls.valid.new(to_tst)
tst_dl.show_batch()
def _mock_multi_label(df):
sal,sex,white = [],[],[]
for row in df.itertuples():
sal.append(row.salary == '>=50k')
sex.append(row.sex == ' Male')
white.append(row.race == ' White')
df['salary'] = np.array(sal)
df['male'] = np.array(sex)
df['white'] = np.array(white)
return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
#|exporti
@EncodedMultiCategorize
def setups(self, to:Tabular):
self.c = len(self.vocab)
return self(to)
@EncodedMultiCategorize
def encodes(self, to:Tabular): return to
@EncodedMultiCategorize
def decodes(self, to:Tabular):
to.transform(to.y_names, lambda c: c==1)
return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
y_names=["salary", "male", "white"]
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=y_names, y_block=MultiCategoryBlock(encoded=True, vocab=y_names), splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
def _mock_multi_label(df):
targ = []
for row in df.itertuples():
labels = []
if row.salary == '>=50k': labels.append('>50k')
if row.sex == ' Male': labels.append('male')
if row.race == ' White': labels.append('white')
targ.append(' '.join(labels))
df['target'] = np.array(targ)
return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
@MultiCategorize
def encodes(self, to:Tabular):
#to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))
return to
@MultiCategorize
def decodes(self, to:Tabular):
#to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))
return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="target", y_block=MultiCategoryBlock(), splits=splits)
to.procs[2].vocab
#|exporti
@RegressionSetup
def setups(self, to:Tabular):
if self.c is not None: return
self.c = len(to.y_names)
return to
@RegressionSetup
def encodes(self, to:Tabular): return to
@RegressionSetup
def decodes(self, to:Tabular): return to
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names='age', splits=splits)
to.procs[-1].means
dls = to.dataloaders()
dls.valid.show_batch()
class TensorTabular(fastuple):
def get_ctxs(self, max_n=10, **kwargs):
n_samples = min(self[0].shape[0], max_n)
df = pd.DataFrame(index = range(n_samples))
return [df.iloc[i] for i in range(n_samples)]
def display(self, ctxs): display_df(pd.DataFrame(ctxs))
class TabularLine(pd.Series):
"A line of a dataframe that knows how to show itself"
def show(self, ctx=None, **kwargs): return self if ctx is None else ctx.append(self)
class ReadTabLine(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row):
cats,conts = (o.map(row.__getitem__) for o in (self.proc.cat_names,self.proc.cont_names))
return TensorTabular(tensor(cats).long(),tensor(conts).float())
def decodes(self, o):
to = TabularPandas(o, self.proc.cat_names, self.proc.cont_names, self.proc.y_names)
to = self.proc.decode(to)
return TabularLine(pd.Series({c: v for v,c in zip(to.items[0]+to.items[1], self.proc.cat_names+self.proc.cont_names)}))
class ReadTabTarget(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row): return row[self.proc.y_names].astype(np.int64)
def decodes(self, o): return Category(self.proc.classes[self.proc.y_names][o])
# tds = TfmdDS(to.items, tfms=[[ReadTabLine(proc)], ReadTabTarget(proc)])
# enc = tds[1]
# test_eq(enc[0][0], tensor([2,1]))
# test_close(enc[0][1], tensor([-0.628828]))
# test_eq(enc[1], 1)
# dec = tds.decode(enc)
# assert isinstance(dec[0], TabularLine)
# test_close(dec[0], pd.Series({'a': 1, 'b_na': False, 'b': 1}))
# test_eq(dec[1], 'a')
# test_stdout(lambda: print(show_at(tds, 1)), a 1
# b_na False
# b 1
# category a
# dtype: object)
#|hide
from nbdev.export import notebook2script
notebook2script()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Top-level distl distribution creation functions
Step2: Note that, when passing a distribution object to b.add_distribution, the values of unit, label, label_latex, and wrap_at will be handled by PHOEBE itself - based on the parameter to which the distribution is attached.
Step3: Note that since we didn't provide label or unit, those will be missing when plotting the distribution object.
Step4: However, b.add_distribution will populate unit and wrap_at (when applicable)
Step5: And whenever plotting through b.plot_distribution_collection or b.plot, the latex representation of the respective parameters will be included.
Step6: "Around" distributions
Step7: When attached in PHOEBE, any "around" distribution will have its central value updated to always be that of the current parameter value to which its attached. This can be very convenient in some cases (for example, setting init_from to always be an N-dimensional Gaussian "ball" around the current values), but in other cases is not appropriate (for priors, where any attached distribution should be based on external knowledge or assumptions, and therefore should be fixed and not update with changing parameter values).
Step8: Since the central value is currently 6000, these two distributions are currently identical (the value in the gaussian_around is the current value that will be adopted as loc whenever the distribution is actually used).
Step9: However, if we change the parameter value, we can see that the "around" distribution updates its central value accordingly.
Step10: Note that if we try to use any distribution-set with an "around" distribution as priors in a solver, a warning will be raised (as of PHOEBE 2.3.23).
Step11: Multivariate distributions
Step12: Once we create our single distl multivariate object, we then need to tell PHOEBE which "slices" of that multivariate distribution to attach to which parameters.
Step13: By calling dist.slice, we get a distl object which retains the covariance matrix, but represents a single dimension, which can then be passed to PHOEBE.
Step14: And we can see that the covariances (although stored three separate times) are maintained whenever sampling from the distributions within PHOEBE.
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
b = phoebe.default_binary()
help(phoebe.gaussian)
dist = phoebe.gaussian(6000, 100)
print(dist)
_ = dist.plot(show=True)
b.add_distribution('teff@primary', dist, distribution='mydist')
b.get_value(qualifier='teff', component='primary', distribution='mydist')
_ = b.get_value(qualifier='teff', component='primary', distribution='mydist').plot(show=True)
_ = b.plot_distribution_collection(distribution='mydist')
help(phoebe.gaussian)
help(phoebe.gaussian_around)
b = phoebe.default_binary()
print(b.get_value(qualifier='teff', component='primary'))
b.add_distribution('teff@primary', phoebe.gaussian(6000, 100), distribution='gaussian_dist')
b.add_distribution('teff@primary', phoebe.gaussian_around(100), distribution='gaussian_around_dist')
print(b.filter(context='distribution'))
b.set_value('teff', component='primary', context='component', value=6123)
print(b.filter(context='distribution'))
help(phoebe.uniform)
help(phoebe.uniform_around)
help(phoebe.mvhistogram_from_data)
help(phoebe.mvgaussian)
import numpy as np
b = phoebe.default_binary()
dist = phoebe.mvgaussian([6500, 6200, 0.9],
np.array([[ 2, 1, -1],
[ 1, 2, 1],
[-1, 1, 2]]),
allow_singular=True,
labels=['T1', 'T2', 'q'])
_ = dist.plot(show=True)
_ = dist.slice('T1').plot()
dist.slice('T1')
b.add_distribution({'teff@primary': dist.slice('T1'),
'teff@secondary': dist.slice('T2'),
'q': dist.slice('q')},
distribution='mydist')
_ = b.plot_distribution_collection('mydist', show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IEEE floating-point representation
Step2: In other words, 0.1 is actually approximated by the fraction 3602879701896397/36028797018963968. Clearly, approximating like this means that many numbers that differ only slightly end up sharing the same approximate representation, for example:
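The exact binary fraction Python stores for a float can also be inspected directly with float.hex, a quick illustration:

```python
# .hex() shows the exact binary significand and exponent stored for 0.1
h = (0.1).hex()
print(h)
# unlike the decimal display, the hex form round-trips exactly
print(float.fromhex(h) == 0.1)
```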
Step3: In Python, all of the numbers that share a given approximate representation are displayed using the shortest string of significant digits:
Step4: Floating-point arithmetic
Step5: To cope with errors in arithmetic, the IEEE standard also specifies a rounding rule, available as Python's built-in round function; we can round two numbers and compare the rounded values to test whether they are approximately equal:
Step6: Rounding like this is not necessarily reliable, though: it depends on how many digits you round to. Too many, and round has no effect; too few, and it introduces errors of its own:
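Rather than picking a rounding precision by hand, the standard library's math.isclose compares values with a relative tolerance; this is the usual robust alternative (a short sketch):

```python
import math

a = .1 + .1 + .1
b = .3
print(a == b)              # exact comparison fails
print(math.isclose(a, b))  # relative-tolerance comparison succeeds
# genuinely different values still compare unequal
print(math.isclose(0.1, 0.111))
```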
Step7: For more precise decimal arithmetic, Python provides the decimal and fractions modules. As the names suggest, decimal represents exact decimal numbers, while fractions represents numbers as exact fractions:
|
<ASSISTANT_TASK:>
Python Code:
0.1 == 0.10000000000000000000001
0.1+0.1+0.1 == 0.3
(0.1).as_integer_ratio()
(0.10000000000000001).as_integer_ratio()
print(0.10000000000000001)
a = .1 + .1 + .1
b = .3
print(a.as_integer_ratio())
print(b.as_integer_ratio())
print(a == b)
round(a, 10) == round(b, 10)
print(round(a, 17) == round(b, 17))
print(round(0.1, 1) == round(0.111, 1))
from decimal import Decimal
a = Decimal(0.1)
b = Decimal(0.1000000000000001)
c = Decimal(0.10000000000000001)
print(a)
print(b)
print(c)
a == b == c
from fractions import Fraction
f1 = Fraction(1, 10) # 0.1
print(float(f1))
f3 = Fraction(3, 10) # 0.3
print(float(f3))
print(f1 + f1 + f1 == f3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generation/rendering timing
Step2: Feature extraction timing
Step3: Thought
Step4: Preprocessing
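Standardisation, which the scaling step below performs with StandardScaler, just subtracts each column's mean and divides by its standard deviation; a pure-numpy sketch on toy data (values are illustrative):

```python
import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# column-wise standardisation: subtract the mean, divide by the std
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ~0 per column
print(X_scaled.std(axis=0))   # ~1 per column
```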
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/Users/spacecoffin/Development')
import GravelKicker as gk
import librosa
import numpy as np
import os
import pandas as pd
from datetime import datetime
from supriya.tools import nonrealtimetools
this_dir = '/Users/spacecoffin/Development/GravelKicker/__gen_files'
pmtx = gk.generator.gendy1.gen_params(rows=100)
df = gk.generator.gendy1.format_params(pmtx)
df.to_pickle("gendy1_params.pkl")  # to_pickle requires a target path; this filename is illustrative
pmtx
%%time
for i, row in df.iterrows():
session = nonrealtimetools.Session()
builder = gk.generator.gendy1.make_builder(row)
out = gk.generator.gendy1.build_out(builder)
synthdef = builder.build()
with session.at(0):
synth_a = session.add_synth(duration=10, synthdef=synthdef)
gk.util.render_session(session, this_dir, row["hash"])
%%timeit
for i, row in df.iterrows():
y, sr = librosa.load(os.path.join(this_dir, "aif_files", row["hash"] + ".aiff"))
_y_normed = librosa.util.normalize(y)
_mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)
_cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))
_mfcc_mean = gk.feature_extraction.get_stats(_mfcc)["mean"]
X_row = np.append(_mfcc_mean, _cent)
if i==0:
X_mtx = X_row
else:
X_mtx = np.vstack((X_mtx, X_row))
for i, row in df.iterrows():
session = nonrealtimetools.Session()
builder = gk.generator.gendy1.make_builder(row)
out = gk.generator.gendy1.build_out(builder)
synthdef = builder.build()
with session.at(0):
synth_a = session.add_synth(duration=10, synthdef=synthdef)
gk.util.render_session(session, this_dir, row["hash"])
y, sr = librosa.load(os.path.join(this_dir, "aif_files", row["hash"] + ".aiff"))
_y_normed = librosa.util.normalize(y)
_mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)
_cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))
_mfcc_mean = gk.feature_extraction.get_stats(_mfcc)["mean"]
X_row = np.append(_mfcc_mean, _cent)
if i==0:
X_mtx = X_row
else:
X_mtx = np.vstack((X_mtx, X_row))
X_mtx.shape
def col_rename_4_mfcc(c):
if (c < 13):
return "mfcc_mean_{}".format(c)
else:
return "spectral_centroid"
pd.DataFrame(X_mtx).rename(columns=col_rename_4_mfcc)  # rename(columns=...) relabels the columns; rename_axis only sets the axis name
from sklearn import linear_model
from sklearn import model_selection
from sklearn import preprocessing
import sklearn as sk
import matplotlib.pyplot as plt
%matplotlib inline
pmtx.shape
X_mtx.shape
X_mtx[0]
X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(
X_mtx, pmtx, test_size=0.4, random_state=1)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
# Scale data
standard_scaler = sk.preprocessing.StandardScaler()
X_scaled = standard_scaler.fit_transform(X_mtx)
#Xte_s = standard_scaler.transform(X_test)
robust_scaler = sk.preprocessing.RobustScaler()
X_rscaled = robust_scaler.fit_transform(X_mtx)
#Xte_r = robust_scaler.transform(X_test)
X_scaled.mean(axis=0)
X_scaled.mean(axis=0).mean()
X_scaled.std(axis=0)
X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(
X_scaled, pmtx, test_size=0.4, random_state=1)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
y_test[0]
X_test[0]
regr.predict(X_test[0].reshape(1, -1))  # predict expects a 2-D array of samples
y_test[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an independence bivariate copula
Step2: Create a Gaussian copula
Step3: Create a 90 degrees rotated Clayon copula with parameter = 3
Step4: Create a t copula with correlation of 0.5 and 4 degrees of freedom
Step5: Different ways to fit a copula...
Step6: Similarly, when the family is unknown,
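Copula fitting works on pseudo-observations, the data mapped to the unit square by ranks; a minimal numpy sketch of that transform (illustrative only, not pyvinecopulib's internal code):

```python
import numpy as np

def pseudo_obs(x):
    # rank-transform each column to (0, 1): rank / (n + 1)
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1
    return ranks / (n + 1)

rng = np.random.default_rng(0)
data = rng.normal(size=(5, 2))
u = pseudo_obs(data)
print(u)  # every entry lies strictly inside (0, 1)
```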
|
<ASSISTANT_TASK:>
Python Code:
import pyvinecopulib as pv
pv.Bicop()
pv.Bicop(family=pv.BicopFamily.gaussian)
pv.Bicop(family=pv.BicopFamily.clayton, rotation=90, parameters=[3])
cop = pv.Bicop(family=pv.BicopFamily.student, rotation=0, parameters=[0.5, 4])
u = cop.simulate(n=10, seeds=[1, 2, 3])
fcts = [cop.pdf, cop.cdf,
cop.hfunc1, cop.hfunc2,
cop.hinv1, cop.hinv2,
cop.loglik, cop.aic, cop.bic]
[f(u) for f in fcts]
u = cop.simulate(n=1000, seeds=[1, 2, 3])
# Create a new object an sets its parameters by fitting afterwards
cop2 = pv.Bicop(pv.BicopFamily.student)
cop2.fit(data=u)
print(cop2)
# Otherwise, define first an object to control the fits:
# - pv.FitControlsBicop objects store the controls
# - here, we only restrict the parametric family
# - see help(pv.FitControlsBicop) for more details
# Then, create a copula from the data
controls = pv.FitControlsBicop(family_set=[pv.BicopFamily.student])
print(controls)
cop2 = pv.Bicop(data=u, controls=controls)
print(cop2)
# Create a new object an selects both its family and parameters afterwards
cop3 = pv.Bicop()
cop3.select(data=u)
print(cop3)
# Or create directly from data
cop3 = pv.Bicop(data=u)
print(cop3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-sr5', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
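Every cell above follows the same two-call pattern: `DOC.set_id(...)` selects a property, then one or more `DOC.set_value(...)` calls fill it in. The sketch below illustrates that pattern with a minimal mock of the `DOC` object — the mock class itself is hypothetical; in the real ES-DOC notebooks `DOC` is supplied by the pyesdoc environment.

```python
# Minimal mock of the notebook's DOC object, illustrating the
# set_id / set_value pattern used in the cells above.
class MockDoc:
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select the property that subsequent set_value calls apply to.
        self._current = prop_id
        self.values.setdefault(prop_id, [])

    def set_value(self, value):
        # Properties marked "PROPERTY VALUE(S)" may be set repeatedly.
        self.values[self._current].append(value)

DOC = MockDoc()
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
DOC.set_value("heat")
DOC.set_value("water")
```

Multi-valued properties simply accumulate one entry per `set_value` call under the currently selected id.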
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact with SVG display
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploing the draw_circle function
Step7: Use the display function to show the widgets created by interactive
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot as plt
import numpy as np
from IPython.display import SVG
from IPython.display import display
from ipywidgets import interactive, fixed  # formerly IPython.html.widgets
s = """
<svg width="100" height="100">
  <circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.

    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of the circle in px.
    cy : int
        The y position of the center of the circle in px.
    r : int
        The radius of the circle in px.
    fill : str
        The fill color of the circle.
    """
    circ_image = """
    <svg width="%d" height="%d">
      <circle cx="%d" cy="%d" r="%d" fill="%s" />
    </svg>
    """ % (width, height, cx, cy, r, fill)
    display(SVG(circ_image))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300), cy=(0,300,10), r=(0,50), fill="red")
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
display(w)
assert True # leave this to grade the display of the widget
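The core of `draw_circle` is plain string templating: parameters are substituted into an SVG fragment with `%`-formatting. The standalone sketch below isolates that step without any IPython display machinery (the helper name `circle_svg` is illustrative, not part of the exercise).

```python
# Hypothetical helper showing the SVG string templating that
# draw_circle relies on, minus the IPython display machinery.
def circle_svg(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    return ('<svg width="%d" height="%d">'
            '<circle cx="%d" cy="%d" r="%d" fill="%s" />'
            '</svg>') % (width, height, cx, cy, r, fill)

svg = circle_svg(cx=10, cy=10, r=10, fill='blue')
print(svg)
```

The resulting string can be handed directly to `IPython.display.SVG` for rendering.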
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: The _init _ method takes in the following parameters
Step3: Finally we instantize the class with the parameters for our MDP in the picture.
Step4: With this we have successfully represented our MDP. Later we will look at ways to solve this MDP.
Step5: The _init _ method takes grid as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states.
Step6: VALUE ITERATION
Step7: It takes as inputs two parameters, an MDP to solve and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities. <br> Value Iteration starts with arbitrary initial values for the utilities, calculates the right side of the Bellman equation and plugs it into the left hand side, thereby updating the utility of each state from the utilities of its neighbors.
Step8: The pseudocode for the algorithm
Step9: AIMA3e
Step10: Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io
Step11: Move the slider above to observe how the utility changes across iterations. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds upto one second for each time step. There is also an interactive editor for grid-world problems grid_mdp.py in the gui folder for you to play around with.
Step12: <br>Fortunately, it is not necessary to do exact policy evaluation.
Step13: Let us now solve sequential_decision_environment using policy_iteration.
Step14: AIMA3e
Step15: To completely define our task environment, we need to specify the utility function for the agent.
Step16: This method directly encodes the actions that the agent can take (described above) to characters representing arrows and shows it in a grid format for human visalization purposes.
Step17: Now that we have all the tools required and a good understanding of the agent and the environment, we consider some cases and see how the agent should behave for each case.
Step18: We will use the best_policy function to find the best policy for this environment.
Step19: We can now use the to_arrows method to see how our agent should pick its actions in the environment.
Step20: This is exactly the output we expected
Step21: This is exactly the output we expected
Step22: This is exactly the output we expected
Step23: In this case, the output we expect is
Step24: The POMDP class includes all variables of the MDP class and additionally also stores the sensor model in e_prob.
Step25: We have defined our POMDP object.
Step26: Let's have a look at the pomdp_value_iteration function.
Step27: This function uses two aptly named helper methods from the POMDP class, remove_dominated_plans and max_difference.
Step28: We have defined the POMDP object.
|
<ASSISTANT_TASK:>
Python Code:
from mdp import *
from notebook import psource, pseudocode, plot_pomdp_utility
psource(MDP)
# Transition Matrix as nested dict. State -> Actions in state -> List of (Probability, State) tuples
t = {
"A": {
"X": [(0.3, "A"), (0.7, "B")],
"Y": [(1.0, "A")]
},
"B": {
"X": [(0.8, "End"), (0.2, "B")],
"Y": [(1.0, "A")]
},
"End": {}
}
init = "A"
terminals = ["End"]
rewards = {
"A": 5,
"B": -10,
"End": 100
}
class CustomMDP(MDP):
def __init__(self, init, terminals, transition_matrix, reward = None, gamma=.9):
# All possible actions.
actlist = []
for state in transition_matrix.keys():
actlist.extend(transition_matrix[state])
actlist = list(set(actlist))
MDP.__init__(self, init, actlist, terminals, transition_matrix, reward, gamma=gamma)
def T(self, state, action):
if action is None:
return [(0.0, state)]
else:
return self.t[state][action]
our_mdp = CustomMDP(init, terminals, t, rewards, gamma=.9)
psource(GridMDP)
sequential_decision_environment
psource(value_iteration)
value_iteration(sequential_decision_environment)
pseudocode("Value-Iteration")
def value_iteration_instru(mdp, iterations=20):
    U_over_time = []
    U1 = {s: 0 for s in mdp.states}
    R, T, gamma = mdp.R, mdp.T, mdp.gamma
    for _ in range(iterations):
        U = U1.copy()
        for s in mdp.states:
            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
                                        for a in mdp.actions(s)])
        U_over_time.append(U)
    return U_over_time
columns = 4
rows = 3
U_over_time = value_iteration_instru(sequential_decision_environment)
%matplotlib inline
from notebook import make_plot_grid_step_function
plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)
import ipywidgets as widgets
from IPython.display import display
from notebook import make_visualize
iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0)
w=widgets.interactive(plot_grid_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
psource(expected_utility)
psource(policy_iteration)
psource(policy_evaluation)
policy_iteration(sequential_decision_environment)
pseudocode('Policy-Iteration')
psource(GridMDP.T)
psource(GridMDP.to_arrows)
psource(GridMDP.to_grid)
# Note that this environment is also initialized in mdp.py by default
sequential_decision_environment = GridMDP([[-0.04, -0.04, -0.04, +1],
[-0.04, None, -0.04, -1],
[-0.04, -0.04, -0.04, -0.04]],
terminals=[(3, 2), (3, 1)])
pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))
from utils import print_table
print_table(sequential_decision_environment.to_arrows(pi))
sequential_decision_environment = GridMDP([[-0.4, -0.4, -0.4, +1],
[-0.4, None, -0.4, -1],
[-0.4, -0.4, -0.4, -0.4]],
terminals=[(3, 2), (3, 1)])
pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))
from utils import print_table
print_table(sequential_decision_environment.to_arrows(pi))
sequential_decision_environment = GridMDP([[-4, -4, -4, +1],
[-4, None, -4, -1],
[-4, -4, -4, -4]],
terminals=[(3, 2), (3, 1)])
pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))
from utils import print_table
print_table(sequential_decision_environment.to_arrows(pi))
sequential_decision_environment = GridMDP([[4, 4, 4, +1],
[4, None, 4, -1],
[4, 4, 4, 4]],
terminals=[(3, 2), (3, 1)])
pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))
from utils import print_table
print_table(sequential_decision_environment.to_arrows(pi))
psource(POMDP)
# transition probability P(s'|s,a)
t_prob = [[[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.9, 0.1]]]
# evidence function P(e|s)
e_prob = [[[0.6, 0.4], [0.4, 0.6]], [[0.6, 0.4], [0.4, 0.6]]]
# reward function
rewards = [[0.0, 0.0], [1.0, 1.0]]
# discount factor
gamma = 0.95
# actions
actions = ('0', '1')
# states
states = ('0', '1')
pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)
pseudocode('POMDP-Value-Iteration')
psource(pomdp_value_iteration)
# transition function P(s'|s,a)
t_prob = [[[0.65, 0.35], [0.65, 0.35]], [[0.65, 0.35], [0.65, 0.35]], [[1.0, 0.0], [0.0, 1.0]]]
# evidence function P(e|s)
e_prob = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]], [[0.8, 0.2], [0.3, 0.7]]]
# reward function
rewards = [[5, -10], [-20, 5], [-1, -1]]
gamma = 0.95
actions = ('0', '1', '2')
states = ('0', '1')
pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)
utility = pomdp_value_iteration(pomdp, epsilon=0.1)
%matplotlib inline
plot_pomdp_utility(utility)
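The Bellman update that drives `value_iteration` can be demonstrated without any of the aima-python machinery. The sketch below is a dependency-free restatement on a tiny hand-rolled two-state MDP; all names (`bellman_sweeps`, the `stay`/`go` actions) are illustrative, not part of the library API.

```python
# Dependency-free sketch of the Bellman update behind value_iteration.
def bellman_sweeps(states, actions, T, R, gamma=0.9, sweeps=200):
    U = {s: 0.0 for s in states}
    for _ in range(sweeps):
        # U1[s] = R(s) + gamma * max_a sum_{s'} P(s'|s,a) * U[s']
        U = {s: R(s) + gamma * max(
                 sum(p * U[s1] for p, s1 in T(s, a)) for a in actions(s))
             for s in states}
    return U

# Two-state example: 'go' moves 0 -> 1, everything else stays put;
# only state 1 pays reward, so U[1] -> 1/(1-gamma) = 10, U[0] -> 9.
states = [0, 1]
T = lambda s, a: [(1.0, 1)] if (s == 0 and a == 'go') else [(1.0, s)]
R = lambda s: 1.0 if s == 1 else 0.0
U = bellman_sweeps(states, lambda s: ['stay', 'go'], T, R)
print(U)
```

With a discount of 0.9, 200 sweeps are far more than enough for the utilities to converge to their fixed point.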
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contribute data on Refractive Index
Step2: Explore and extract refractive index data
Step3: Prepare a single contribution for testing
Step4: Retrieve and update project info
Step5: Try searching for refractive in MPContribs browse page now
Step6: Submit test contribution
Step7: Your first contribution should now show up in your project on https
Step8: Publish contributions
Step9: Retrieve and query contributions
Step10: Grab the table ID and retrieve it as Pandas DataFrame. Show a graph.
Step11: Finally, let's build up a more complicated query to reduce the list of contributions to the ones we might be interested in.
|
<ASSISTANT_TASK:>
Python Code:
name = "your-project-name"
apikey = "your-api-key" # profile.materialsproject.org
from mpcontribs.client import Client
from mp_api.matproj import MPRester
from refractivesqlite import dboperations as DB
from pandas import DataFrame
db = DB.Database("refractive.db")
#db.create_database_from_url()
db.search_pages("Au", exact=True)
materials = db.search_custom(
'select * from pages where book="Au" and hasrefractive=1 and hasextinction=1'
)
page_id = materials[0][0]
mat = db.get_material(page_id)
mpr = MPRester(apikey)
def get_contrib(mat):
    info = mat.get_page_info()
    formula = info["book"]
    mpid = mpr.get_materials_ids(formula)[0]
    rmin, rmax = info['rangeMin']*1000, info['rangeMax']*1000
    mid = (rmin + rmax) / 2
    n = mat.get_refractiveindex(mid)
    k = mat.get_extinctioncoefficient(mid)
    x = "wavelength λ [μm]"
    refrac = DataFrame(mat.get_complete_refractive(), columns=[x, "n"])
    refrac.set_index(x, inplace=True)
    extinct = DataFrame(mat.get_complete_extinction(), columns=[x, "k"])
    extinct.set_index(x, inplace=True)
    df = refrac.join(extinct["k"])
    df.attrs["title"] = f"Complex refractive index (n+ik) for {formula}"
    df.attrs["labels"] = {
        "value": "n, k",     # y-axis label
        "variable": "Re/Im"  # legend name (= df.columns.name)
    }
    df.plot(**df.attrs)  # .show()
    df.attrs["name"] = "n,k(λ)"
    return {
        "project": name,
        "identifier": str(mpid),
        "data": {
            "n": float(n),
            "k": float(k),
            "range.min": f"{rmin} nm",
            "range.mid": f"{mid} nm",
            "range.max": f"{rmax} nm",
            "points": info["points"],
            "page": info["page"]
        },
        "tables": [df]
    }
contrib = get_contrib(mat)
client = Client(
host="workshop-contribs-api.materialsproject.org",
apikey=apikey
)
db.check_url_version()
update = {
'unique_identifiers': False,
'references': [
{'label': 'website', 'url': 'https://refractiveindex.info'},
{'label': 'source', 'url': "https://refractiveindex.info/download/database/rii-database-2019-02-11.zip"}
],
"other": { # describe the root fields here to automatically include tooltips on MP
"n": "real part of complex refractive index",
"k": "imaginary part of complex refractive index",
"range": "wavelength range for n,k in nm",
"points": "number of λ points in range",
"page": "reference to data source/publication"
}
}
# could also update authors, title, long_title, description
client.projects.update_entry(pk=name, project=update).result()
client.init_columns(name, {
"n": "", # dimensionless
"k": "",
"range.min": "nm",
"range.mid": "nm",
"range.max": "nm",
"points": "",
"page": None # text
})
client.submit_contributions([contrib])
contributions = []
for material in materials:
    page_id = material[0]
    mat = db.get_material(page_id)
    contrib = get_contrib(mat)
    contributions.append(contrib)
contributions[10]
client.delete_contributions(name)
client.submit_contributions(contributions, ignore_dupes=True)
client.make_public(name)
all_ids = client.get_all_ids(
{"project": name},
include=["tables"],
data_id_fields={name: "page"},
fmt="map"
)
tid = all_ids[name]["mp-81"]["Johnson"]["tables"]["n,k(λ)"]["id"]
client.get_table(tid).display()
query = {
"project": name,
"formula__contains": "Au",
#"identifier__in": []
"data__n__value__lt": 200,
"data__k__value__gte": 7,
"_sort": "-data__range__mid__value",
"_fields": [
"id", "identifier", "formula",
"data.range.mid.value",
"data.n.value",
"data.k.value"
]
}
print(client.get_totals(query))
client.contributions.get_entries(**query).result()
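The table-building step inside `get_contrib` — two per-wavelength series joined on their index, with plot metadata stashed in `DataFrame.attrs` — works the same way on made-up data. The sketch below uses fabricated illustrative values (not real refractive-index data) so it runs without the database or the MPContribs client.

```python
# Standalone sketch of the table-building step in get_contrib,
# with fabricated example values.
import pandas as pd

x = "wavelength λ [μm]"
refrac = pd.DataFrame([(0.5, 1.0), (0.6, 0.9)], columns=[x, "n"]).set_index(x)
extinct = pd.DataFrame([(0.5, 2.0), (0.6, 3.0)], columns=[x, "k"]).set_index(x)
df = refrac.join(extinct["k"])  # align n and k on the wavelength index

df.attrs["title"] = "Complex refractive index (n+ik)"
df.attrs["labels"] = {"value": "n, k", "variable": "Re/Im"}
print(df)
```

`DataFrame.attrs` is just a scratch dict carried with the frame, which is why `get_contrib` can use it both as plot kwargs and as MPContribs table metadata.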
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Example 1
Step3: Example 2
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.misc
def mosaic(f, N, s=1.0):
    f = np.asarray(f)
    d, h, w = f.shape
    N = int(N)
    nLines = int(np.ceil(float(d)/N))
    nCells = int(nLines*N)
    # Add black slices to match the exact number of mosaic cells
    fullf = np.resize(f, (nCells, h, w))
    fullf[d:nCells, :, :] = 0
    Y, X = np.indices((nLines*h, N*w))
    Pts = np.array([
        (np.floor(Y/h)*N + np.floor(X/w)).ravel(),
        np.mod(Y, h).ravel(),
        np.mod(X, w).ravel()]).astype(int).reshape((3, int(nLines*h), int(N*w)))
    g = fullf[Pts[0], Pts[1], Pts[2]]
    if (s != 1.0):
        # scipy.misc.imresize was removed in SciPy 1.3; use ndimage.zoom
        # instead (order=1 ~ bilinear; note zoom keeps the input dtype,
        # whereas imresize returned uint8).
        from scipy import ndimage
        g = ndimage.zoom(g, s, order=1)
    return g
testing = (__name__ == "__main__")
if testing:
    ! jupyter nbconvert --to python mosaic.ipynb
import sys
import os
import numpy as np
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
    sys.path.append(ia898path)
import ia898.src as ia
if testing:
    t = np.arange(60)
    t.shape = (5,3,4)
    print('Original 3D image:\n', t, '\n\n')
    for i in range(1,7):
        g = ia.mosaic(t,i)
        print('Mosaic with %d image column(s):\n' % i)
        print(g)
        print()
    for i in range(1,7):
        g = ia.mosaic(t,i,0.8)
        print('Mosaic with %d image column(s):\n' % i)
        print(g)
        print()
if testing:
    d,r,c = np.indices((5,3,4))
    t = ((r + c)%2) > 0
    for i in range(1,7):
        g = ia.mosaic(t,i,0.8)
        print('Mosaic with %d image column(s):\n' % i)
        print(g)
        print()
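The index arithmetic in `mosaic` — cell number `floor(Y/h)*N + floor(X/w)`, within-cell coordinates `Y mod h` and `X mod w` — is easy to verify on a small array. The sketch below is a scale-free restatement of that indexing (the helper name `mosaic_noscale` is illustrative): 5 slices of 3×4 laid out in N=3 columns give ceil(5/3)=2 rows of cells, i.e. a 6×12 mosaic with the unused sixth cell left black.

```python
# Scale-free restatement of mosaic's index arithmetic, for checking
# the cell layout on a tiny example.
import numpy as np

def mosaic_noscale(f, N):
    d, h, w = f.shape
    nLines = -(-d // N)                      # ceil(d / N)
    fullf = np.resize(f, (nLines * N, h, w))
    fullf[d:] = 0                            # blank the padding cells
    Y, X = np.indices((nLines * h, N * w))
    cell = Y // h * N + X // w               # which slice each pixel shows
    return fullf[cell, Y % h, X % w]

f = np.arange(60).reshape(5, 3, 4)
g = mosaic_noscale(f, 3)
print(g.shape)  # (6, 12)
```

The top-left 3×4 block of the output reproduces slice 0, and the bottom-right cell (the padding slice) is all zeros.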
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choosing the sampling ratio $h$
Step2: Obervability
Step3: Observer design
Step4: The characteristic polynomial of the observer is given by
Step5: Desired observer poles
Step6: Reachability
Step7: State feedback
Step8: The characteristic polynomial of the closed-loop system is given by
Step9: Desired closed-loop characteristic polynomial
Step10: Determining $l_0$
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sympy as sy
sy.init_printing(use_latex='mathjax', order='lex')
h,omega0 = sy.symbols('h,omega0', real=True, positive=True)
z = sy.symbols('z')
beta = sy.cos(omega0*h)
Phi = sy.Matrix([[2*beta, 1],[-1, 0]])
Gamma = sy.Matrix([[1-beta],[1-beta]])
C = sy.Matrix([[1, 0]])
H = sy.simplify(C*(z*sy.eye(2)-Phi).inv()*Gamma)
H
omegaval = 0.5
hval = np.pi/6/omegaval
Phi_np = np.array(Phi.subs({h:hval, omega0:omegaval})).astype(np.float64)
Phi_np
Gamma_np = np.array(Gamma.subs({h:hval, omega0:omegaval})).astype(np.float64)
Gamma_np
C_n = np.array(C).astype(np.float64)
Wo_n = np.vstack((C_n, np.dot(C_n,Phi_np)))
Wo_n
np.linalg.det(Wo_n)
k1,k2 = sy.symbols('k1,k2')
K = sy.Matrix([[k1], [k2]])
Phi_o=Phi.subs({h:hval, omega0:omegaval}) - K*C
Phi_o
z = sy.symbols('z')
zIminPhio = z*sy.eye(2) - Phi_o
ch_poly = sy.Poly(zIminPhio.det(), z)
ch_poly.as_expr()
po = sy.symbols('p_o')
K = sy.Matrix([[Phi[0,0]-2*po],[Phi[1,0]+po**2]])
K
Phi_o = Phi - K*C
Phi_o
Phi_o.eigenvals()
Wc = sy.BlockMatrix([[Gamma, Phi*Gamma]]).as_explicit()
Wc
Wc.det()
Wc.det().subs({h:hval, omega0:omegaval})
l1,l2 = sy.symbols('l1,l2')
L = sy.Matrix([[l1, l2]])
Phi_c = Phi.subs({h:hval, omega0:omegaval}) - Gamma.subs({h:hval, omega0:omegaval})*L
Phi_c
zIminPhic = z*sy.eye(2) - Phi_c
zIminPhic
ch_poly = sy.Poly(zIminPhic.det(), z)
ch_poly.as_expr()
p = sy.symbols('p')
des_ch_poly = sy.Poly((z-p)**2, z)
dioph_eqn = ch_poly - des_ch_poly
sol = sy.solve(dioph_eqn.coeffs(), (l1,l2))
sol
l0 = (1-p)**2/(2*(1-beta)) # We have assumed b=1 all along
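The symbolic observer gain above can be sanity-checked numerically: with `K = [[Phi[0,0] - 2*p_o], [Phi[1,0] + p_o**2]]`, the error dynamics `Phi - K*C` reduce to `[[2*p_o, 1], [-p_o**2, 0]]`, whose characteristic polynomial is `(z - p_o)**2` regardless of `beta`. The check below is an illustrative sketch, not part of the derivation.

```python
# Numeric check: the observer error dynamics should have a double
# eigenvalue at the chosen pole p_o, independent of beta.
import numpy as np

beta_n = np.cos(np.pi / 6)   # omega0 * h = pi/6, as in the example
po_n = 0.5                   # desired double observer pole
Phi_n = np.array([[2 * beta_n, 1.0], [-1.0, 0.0]])
C_num = np.array([[1.0, 0.0]])
K_num = np.array([[Phi_n[0, 0] - 2 * po_n], [Phi_n[1, 0] + po_n ** 2]])
eigs = np.linalg.eigvals(Phi_n - K_num @ C_num)
print(eigs)  # ≈ [0.5, 0.5]
```

This mirrors the symbolic `Phi_o.eigenvals()` result above: both observer poles sit at `p_o`.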
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So, let's start!
Step2: Experiment parameters
Step3: Specify Nodes
Step4: Specify input & output stream
Step5: Specify Workflow
Step6: Visualize the workflow
Step7: Run the Workflow
Step8: Inspect output
Step9: Visualize results
Step10: How do the motion parameters look like?
|
<ASSISTANT_TASK:>
Python Code:
!nib-ls /data/ds102/sub-01/*/*.nii.gz
%pylab inline
from os.path import join as opj
from nipype.interfaces.fsl import MCFLIRT, FLIRT
from nipype.interfaces.afni import Resample
from nipype.interfaces.spm import Smooth
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.pipeline.engine import Workflow, Node
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
# list of session identifiers
session_list = ['run-1', 'run-2']
# Smoothing widths to apply
fwhm = [4, 8]
# MCFLIRT - motion correction
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True,
output_type='NIFTI'),
name="mcflirt")
# Resample - resample anatomy to 3x3x3 voxel resolution
resample = Node(Resample(voxel_size=(3, 3, 3),
outputtype='NIFTI'),
name="resample")
# FLIRT - coregister functional images to anatomical images
coreg_step1 = Node(FLIRT(output_type='NIFTI'), name="coreg_step1")
coreg_step2 = Node(FLIRT(output_type='NIFTI',
apply_xfm=True), name="coreg_step2")
# Smooth - image smoothing
smooth = Node(Smooth(), name="smooth")
smooth.iterables = ("fwhm", fwhm)
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id', 'session_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('session_id', session_list)]
# SelectFiles - to grab the data (alternativ to DataGrabber)
anat_file = opj('{subject_id}', 'anat', '{subject_id}_T1w.nii.gz')
func_file = opj('{subject_id}', 'func',
'{subject_id}_task-flanker_{session_id}_bold.nii.gz')
templates = {'anat': anat_file,
'func': func_file}
selectfiles = Node(SelectFiles(templates,
base_directory='/data/ds102'),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id', ''),
('_session_id_', ''),
('_task-flanker', ''),
('_mcf.nii_mean_reg', '_mean'),
('.nii.par', '.par'),
]
subjFolders = [('%s_%s/' % (sess, sub), '%s/%s' % (sub, sess))
for sess in session_list
for sub in subject_list]
subjFolders += [('%s_%s' % (sub, sess), '')
for sess in session_list
for sub in subject_list]
subjFolders += [('%s%s_' % (sess, sub), '')
for sess in session_list
for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
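DataSink applies the substitutions list as ordered, plain string replacements on each output path, which is why the order of the pairs above matters. A toy sketch of that behavior (the example path below is hypothetical, not a real output path):

```python
def apply_substitutions(path, substitutions):
    # mimic DataSink: each (old, new) pair is a str.replace, applied in order
    for old, new in substitutions:
        path = path.replace(old, new)
    return path

demo = apply_substitutions(
    '_session_id_run-1_subject_id_sub-01/bold_mcf.nii.par',
    [('_subject_id', ''), ('_session_id_', ''), ('.nii.par', '.par')])
print(demo)   # run-1_sub-01/bold_mcf.par
```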
# Create a preprocessing workflow
preproc = Workflow(name='preproc')
preproc.base_dir = opj(experiment_dir, working_dir)
# Connect all components of the preprocessing workflow
preproc.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('session_id', 'session_id')]),
(selectfiles, mcflirt, [('func', 'in_file')]),
(selectfiles, resample, [('anat', 'in_file')]),
(mcflirt, coreg_step1, [('mean_img', 'in_file')]),
(resample, coreg_step1, [('out_file', 'reference')]),
(mcflirt, coreg_step2, [('out_file', 'in_file')]),
(resample, coreg_step2, [('out_file', 'reference')]),
(coreg_step1, coreg_step2, [('out_matrix_file',
'in_matrix_file')]),
(coreg_step2, smooth, [('out_file', 'in_files')]),
(mcflirt, datasink, [('par_file', 'preproc.@par')]),
(resample, datasink, [('out_file', 'preproc.@resample')]),
(coreg_step1, datasink, [('out_file', 'preproc.@coregmean')]),
(smooth, datasink, [('smoothed_files', 'preproc.@smooth')]),
])
# Create preproc output graph
preproc.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(preproc.base_dir, 'preproc', 'graph.dot.png'))
# Visualize the detailed graph
preproc.write_graph(graph2use='flat', format='png', simple_form=True)
Image(filename=opj(preproc.base_dir, 'preproc', 'graph_detailed.dot.png'))
preproc.run('MultiProc', plugin_args={'n_procs': 4})
!tree /output/datasink/preproc/sub-01/
preproc.run('MultiProc', plugin_args={'n_procs': 4})
from nilearn import image, plotting
plotting.plot_epi(
'/output/datasink/preproc/sub-01/run-1_bold_mean_flirt.nii', title="fwhm = 0mm",
display_mode='ortho', annotate=False, draw_cross=False, cmap='gray')
mean_img = image.mean_img('/output/datasink/preproc/sub-01/run-1_fwhm_4/s_bold_mcf_flirt.nii')
plotting.plot_epi(mean_img, title="fwhm = 4mm", display_mode='ortho',
annotate=False, draw_cross=False, cmap='gray')
mean_img = image.mean_img('/output/datasink/preproc/sub-01/run-1_fwhm_8/s_bold_mcf_flirt.nii')
plotting.plot_epi(mean_img, title="fwhm = 8mm", display_mode='ortho',
annotate=False, draw_cross=False, cmap='gray')
par = np.loadtxt('/output/datasink/preproc/sub-01/run-1_bold_mcf.par')
fig, axes = plt.subplots(2, 1, figsize=(15, 5))
axes[0].set_ylabel('rotation (radians)')
axes[0].plot(par[0:, :3])
axes[1].plot(par[0:, 3:])
axes[1].set_xlabel('time (TR)')
axes[1].set_ylabel('translation (mm)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First of all, we should ask ourselves
Step2: As we can see, there may be more than one page of issues, so we will have to read the response header and get the URL of the next page, repeating the operation until we reach the last one. To make this easier, we can add the page parameter to the URL, indicating the number of the page we want. How will we know when to stop? When the header's Link entry no longer contains a URL to the next page ("next")
Step3: Let's store the information we care about in objects of an Issue class, extract the label names with the labels function, and keep these objects in a list called ISSUES_LIST
Step4: We will request all the issues and store the corresponding objects as we go
Step5: We now have our list of issues. Now, let's see which labels have been used
Step6: Let's start with the plots. For this we will use matplotlib
Step7: We will display a graph where each node is a label and each edge represents a label co-occurrence (when an issue has more than one label). The size of each node will depend on the number of issues tagged with that label, and the width of each edge on the number of issues in which both labels appeared. If an issue has more than two labels, we consider the pairs of labels rather than the full set.
Step8: For the next section we will use networkx to plot the node network
|
<ASSISTANT_TASK:>
Python Code:
# First we need to import the libraries we will use to collect data
import base64
import json
import requests
# Si queremos imprimir los json de respuesta
# de una forma mas agradable a la vista podemos usar
def print_pretty(jsonstring, indent=4, sort_keys=False):
print(json.dumps(jsonstring, indent=indent, sort_keys=sort_keys))
# Import our credentials
with open("credentials") as f:
credentials = tuple(f.read().splitlines())
# Constants we will use
OWNER = "IIC2233-2016-1"
REPO = "syllabus"
url = "https://api.github.com/repos/{}/{}/issues".format(OWNER, REPO)
params = {
"state": "closed",
"sort": "created",
"direction": "asc",
"page": 1
}
req = requests.get(url, params=params, auth=credentials)
print(req.status_code)
# To get the associated JSON: req.json()
# But we want to print it in a more legible way
for issue in req.json():
print(issue["number"], issue["title"])
# If this were the last page, `rel="next"` would not exist
print(dict(req.headers)["Link"])
print("Is this the last page? {}".format("No" if 'rel="next"' in dict(req.headers)["Link"] else "Yes"))
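Instead of substring-matching the raw header, the Link header can also be parsed for its rel="next" URL. A small sketch (the sample header string below is made up for illustration, not a live response):

```python
import re

def next_page_url(link_header):
    """Return the rel="next" URL from a GitHub-style Link header, or None."""
    match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
    return match.group(1) if match else None

sample = ('<https://api.github.com/repos/o/r/issues?page=2>; rel="next", '
          '<https://api.github.com/repos/o/r/issues?page=9>; rel="last"')
print(next_page_url(sample))   # https://api.github.com/repos/o/r/issues?page=2
```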
class Issue:
def __init__(self, number, title, labels):
self.number = number
self.title = title
self.labels = labels
def labels(labels_json):
return [label_item["name"] for label_item in labels_json]
ISSUES_LIST = list()
url = "https://api.github.com/repos/{}/{}/issues".format(OWNER, REPO)
params = {
"state": "closed",
"sort": "created",
"direction": "asc",
"page": 1
}
# First page
req = requests.get(url, params=params, auth=credentials)
for issue in req.json():
ISSUES_LIST.append(Issue(issue["number"], issue["title"], labels(issue["labels"])))
# Following pages
while 'rel="next"' in dict(req.headers)["Link"]:
print("Page: {} ready".format(params["page"]))
params["page"] += 1
req = requests.get(url, params=params, auth=credentials)
for issue in req.json():
ISSUES_LIST.append(Issue(issue["number"], issue["title"], labels(issue["labels"])))
print("We have {} issues in the list".format(len(ISSUES_LIST)))
LABELS = list(label for issue in ISSUES_LIST for label in issue.labels)
print(set(LABELS))
%matplotlib inline
# Number of issues per label
LABEL_ISSUES = {label: LABELS.count(label) for label in LABELS}
# Printing from highest to lowest number of issues...
print("Top {}".format(min(5, len(LABEL_ISSUES))))
for v, k in sorted(((v,k) for k,v in LABEL_ISSUES.items()), reverse=True)[:min(5, len(LABEL_ISSUES))]:
print("{}: {}".format(k, v))
# All label pairs
from itertools import combinations
LABEL_PAIRS = dict()
PAIRS = list()
for issue in ISSUES_LIST:
combs = combinations(issue.labels, 2)
for pair in combs:
k = "{} + {}".format(*pair)
if k not in LABEL_PAIRS:
LABEL_PAIRS[k] = 0
PAIRS.append(pair)
LABEL_PAIRS[k] += 1
# Printing from highest to lowest number of issues
print("Top {}".format(min(5, len(LABEL_PAIRS))))
for v, k in sorted(((v,k) for k,v in LABEL_PAIRS.items()), reverse=True)[:min(5, len(LABEL_PAIRS))]:
print("{}: {}".format(k, v))
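The pair-tallying loop above can be condensed with `collections.Counter` over `itertools.combinations`. A sketch with toy label lists (not the real issue data):

```python
from collections import Counter
from itertools import combinations

toy_issues = [['bug', 'docs'], ['bug', 'docs', 'ci']]
pair_counts = Counter(pair for labels in toy_issues
                      for pair in combinations(labels, 2))
print(pair_counts[('bug', 'docs')])   # 2
```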
from pylab import rcParams
rcParams['figure.figsize'] = 8, 8
import matplotlib.pylab as plt
import networkx as nx
# Create the graph
G = nx.Graph()
# Max weight
mp = LABEL_PAIRS[max(LABEL_PAIRS, key=LABEL_PAIRS.get)]
# Create nodes
G.add_nodes_from(LABEL_ISSUES.keys())
# Create edges
for o_pair in LABEL_PAIRS.keys():
pair = o_pair.split(' + ')
G.add_edge(*pair,
weight=mp-LABEL_PAIRS[o_pair],
width=mp-LABEL_PAIRS[o_pair])
# Assign node colors
n_colors = list()
for node in G.nodes():
color = 'white'
if 'Tarea' in node:
color = 'blue'
elif node in ['Tengo un error', 'Setup']:
color = 'purple'
elif node in ['Actividades', 'Ayudantía', 'Ayudantia', 'Interrogación', 'Interrogacion', 'Materia', 'Material']:
color = 'green'
elif node in ['Código', 'Codigo', 'Git']:
color = 'yellow'
elif node in ['Duplicada', 'Invalida']:
color = 'grey'
elif node in ['IMPORTANTE']:
color = 'red'
n_colors.append(color)
# Assign node sizes
sizes = list()
for node in G.nodes():
sizes.append(20 * LABEL_ISSUES[node])
# Assign edge widths
average = round(sum(LABEL_PAIRS.values()) / (0.75 * len(LABEL_PAIRS)))
styles = list()
widths = list()
for edge in G.edges():
k = "{} + {}".format(edge[0], edge[1])
if k not in LABEL_PAIRS:
k = "{} + {}".format(edge[1], edge[0])
if LABEL_PAIRS[k] < average:
styles.append('dashed')
widths.append(LABEL_PAIRS[k] + average)
else:
styles.append('solid')
widths.append(LABEL_PAIRS[k] + 1)
# Edge colors
e_colors = list(i + 10 for i in range(len(G.edges())))
# Display the graphs
nx.draw(G,
edge_color=e_colors,
edge_cmap=plt.cm.Blues,
node_color=n_colors,
node_size=sizes,
style=styles,
width=widths,
with_labels=True)
nx.draw_circular(G,
edge_color=e_colors,
edge_cmap=plt.cm.Blues,
node_color=n_colors,
node_size=sizes,
style=styles,
width=widths,
with_labels=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading in and Cleaning up the Data
Step2: GDP Growth and GDP Growth Rate in Brazil
Step3: GDP Growth vs. GDP Growth Rate
Step4: Actual
Step5: Actual
Step6: Actual
Step7: Actual
Step8: Actual
Step9: Population Growth
Step10: Actual
|
<ASSISTANT_TASK:>
Python Code:
# Important packages
import pandas as pd
import matplotlib.pyplot as plt
import sys
import datetime as dt
print('Python version is:', sys.version)
print('Pandas version:', pd.__version__)
print('Date:', dt.date.today())
path = 'C:\\Users\\emeka_000\\Desktop\\Bootcamp_Emeka.xlsx'
odata = pd.read_excel(path,
usecols = ['Series Name','2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #retained only the necessary columns
odata.columns = ['Metric', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #easier column names
odata = odata.drop([20, 21, 22, 23, 24]) ##delete NaN values
odata = odata.transpose() #transpose to make diagram easier
odata #data with metrics description for the chart below
data = pd.read_excel(path,
usecols = ['2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]',
'2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]',
'2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]',
'2011 [YR2011]', '2012 [YR2012]']
) #same data but modified for pandas edits
data.columns = ['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
'2009', '2010', '2011', '2012'] #all columns are now string
data = data.transpose() #data used for the rest of the project
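For reference, the "GDP growth (annual %)" series plotted below is just the year-over-year percentage change of GDP. A tiny sketch with made-up numbers:

```python
def yoy_growth(values):
    # year-over-year growth rate in percent
    return [round(100.0 * (b - a) / a, 2) for a, b in zip(values, values[1:])]

print(yoy_growth([100.0, 104.0, 106.08]))   # [4.0, 2.0]
```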
data[4].plot(kind = 'line', #line plot
title = 'Brazil Yearly GDP (2000-2012) (current US$)', #title
fontsize=15,
color='Green',
linewidth=4, #width of plot line
figsize=(20,5),).title.set_size(20) #set figure size and title size
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP (current US$) * 1e12").set_size(15) #set x and y axis, with their sizes
data[6].plot(kind = 'line',
title = 'Brazil Yearly GDP Per Capita (2000-2012) (current US$)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP per capita (current US$)").set_size(15)
data[5].plot(kind = 'line',
title = 'Brazil Yearly GDP Growth (2000-2012) (%)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
plt.xlabel("Year").set_size(15)
plt.ylabel("GDP Growth (%)").set_size(15)
fig, ax1 = plt.subplots(figsize = (20,5))
y1 = data[8]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'green') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Household Consumption (% of GDP) vs. GDP").set_size(20)
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[11]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'red') #Net official development assistance
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Aid vs. GDP").set_size(20)
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[2]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'yellow') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Foreign Direct Investment (Inflows) (% of GDP) vs. GDP").set_size(20)
fig, ax1 = plt.subplots(figsize = (20, 5))
y1 = data[14]
y2 = data[4]
ax2 = ax1.twinx()
ax1.plot(y1, 'purple') #household consumption
ax2.plot(y2, 'blue') #GDP growth
plt.title("Government Spending vs. GDP").set_size(20)
data.plot.scatter(x = 5, y = 0,
title = 'Population Growth vs. GDP Growth',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP Growth Rate").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
data.plot.scatter(x = 6, y = 0,
title = 'Population Growth vs. GDP per Capita',
figsize=(20,5)).title.set_size(20)
plt.xlabel("GDP per Capita").set_size(15)
plt.ylabel("Population Growth Rate").set_size(15)
data[15].plot(kind = 'bar',
title = 'Renewable energy consumption (% of total) (2000-2012)',
fontsize=15,
color='green',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[12].plot(kind = 'bar',
title = 'CO2 emissions from liquid fuel consumption (2000-2012)',
fontsize=15,
color='red',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data[13].plot(kind = 'bar',
title = 'CO2 emissions from gaseous fuel consumption (2000-2012)',
fontsize=15,
color='blue',
linewidth=4,
figsize=(20,5)).title.set_size(20)
data.plot.scatter(x = 7, y = 19, #scatter plot
title = 'Health Expenditures vs. Life Expectancy',
figsize=(20,5)).title.set_size(20)
plt.xlabel("Health Expenditures").set_size(15)
plt.ylabel("Life Expectancy").set_size(15)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run for 90-bp sequences (top 500 by prevalence in 90-bp biom table)
Step2: Run for 100-bp sequences (top 500 by prevalence in 100-bp biom table)
Step3: Run for 150-bp sequences (top 500 by prevalence in 150-bp biom table)
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import Bio.Blast.NCBIXML
from cStringIO import StringIO
from __future__ import print_function
# convert RDP-style lineage to Greengenes-style lineage
def rdp_lineage_to_gg(lineage):
d = {}
linlist = lineage.split(';')
for i in np.arange(0, len(linlist), 2):
d[linlist[i+1]] = linlist[i]
linstr = ''
for level in ['domain', 'kingdom', 'phylum', 'class', 'order', 'family', 'genus']:
try:
linstr += level[0] + '__' + d[level].replace('"', '') + '; '
except:
linstr += level[0] + '__' + '; '
linstr = linstr[:-2]
return(linstr)
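As a quick standalone check of the mapping above, here is a simplified restatement (using plain `range` instead of `np.arange` so the snippet runs on its own; the lineage string is made up):

```python
def rdp_to_gg(lineage):
    # simplified restatement of rdp_lineage_to_gg above
    parts = lineage.split(';')
    ranks = {parts[i + 1]: parts[i] for i in range(0, len(parts), 2)}
    levels = ['domain', 'kingdom', 'phylum', 'class', 'order', 'family', 'genus']
    return '; '.join(level[0] + '__' + ranks.get(level, '').replace('"', '')
                     for level in levels)

print(rdp_to_gg('Bacteria;domain;"Firmicutes";phylum;Clostridia;class'))
# d__Bacteria; k__; p__Firmicutes; c__Clostridia; o__; f__; g__
```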
# parse blast xml record
def parse_record_alignments_taxonomy(record):
df = pd.DataFrame(columns=('strain', 'lineage'))
for alignment in record.alignments:
strain, lineage = alignment.hit_def.split(' ')
linstr = rdp_lineage_to_gg(lineage)
df = df.append({'strain': strain, 'lineage': linstr}, ignore_index=True)
df['species'] = [(x.split(' ')[0] + ' ' + x.split(' ')[1]).replace(';', '') for x in df.strain]
num_hits = df.shape[0]
vc_species = df.species.value_counts()
vc_lineage = df.lineage.value_counts()
return(num_hits, vc_species, vc_lineage)
# main function
def xml_to_taxonomy(path_xml, path_output):
# read file as single string, generate handle, and parse xml handle to records generator
with open(path_xml) as file:
str_xml = file.read()
handle_xml = StringIO(str_xml)
records = Bio.Blast.NCBIXML.parse(handle_xml)
# write top lineage and top 3 strains for each query
with open(path_output, 'w') as target:
# write header
target.write('query\tlineage_count\tspecies_1st_count\tspecies_2nd_count\tspecies_3rd_count\n')
# iterate over records generator
for record in records:
target.write('%s' % record.query)
try:
num_hits, vc_species, vc_lineage = parse_record_alignments_taxonomy(record)
except:
pass
try:
target.write('\t%s (%s/%s)' % (vc_lineage.index[0], vc_lineage[0], num_hits))
except:
pass
try:
target.write('\t%s (%s/%s)' % (vc_species.index[0], vc_species[0], num_hits))
except:
pass
try:
target.write('\t%s (%s/%s)' % (vc_species.index[1], vc_species[1], num_hits))
except:
pass
try:
target.write('\t%s (%s/%s)' % (vc_species.index[2], vc_species[2], num_hits))
except:
pass
target.write('\n')
path_xml = '../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_90bp.subset_2k.rare_5000.xml'
path_output = 'otu_seqs_top_500_prev.emp_deblur_90bp.subset_2k.rare_5000.tsv'
xml_to_taxonomy(path_xml, path_output)
path_xml = '../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_100bp.subset_2k.rare_5000.xml'
path_output = 'otu_seqs_top_500_prev.emp_deblur_100bp.subset_2k.rare_5000.tsv'
xml_to_taxonomy(path_xml, path_output)
path_xml = '../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_150bp.subset_2k.rare_5000.xml'
path_output = 'otu_seqs_top_500_prev.emp_deblur_150bp.subset_2k.rare_5000.tsv'
xml_to_taxonomy(path_xml, path_output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Grid-Search and Cross-Validation
Step2: Processing Pipelines
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LogisticRegression().fit(X_train, y_train)
print("predictions: %s" % lr.predict(X_test))
print("accuracy: %.2f" % lr.score(X_test, y_test))
%matplotlib inline
from plot_forest import plot_forest_interactive
plot_forest_interactive()
X = [{'age': 15.9, 'likes puppies': 'yes', 'location': 'Tokyo'},
{'age': 21.5, 'likes puppies': 'no', 'location': 'New York'},
{'age': 31.3, 'likes puppies': 'no', 'location': 'Paris'},
{'age': 25.1, 'likes puppies': 'yes', 'location': 'New York'},
{'age': 63.6, 'likes puppies': 'no', 'location': 'Tokyo'},
{'age': 14.4, 'likes puppies': 'yes', 'location': 'Tokyo'}]
from sklearn.feature_extraction import DictVectorizer
vect = DictVectorizer(sparse=False).fit(X)
print(vect.transform(X))
print("feature names: %s" % vect.get_feature_names())
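DictVectorizer keeps numeric features as single columns and expands string-valued features into 'key=value' indicator columns. A rough, hand-rolled sketch of that naming scheme (not the sklearn implementation):

```python
def feature_names(rows):
    # numeric values -> one column per key; strings -> 'key=value' indicators
    names = set()
    for key_value in rows:
        for key, value in key_value.items():
            names.add(key if isinstance(value, (int, float))
                      else '%s=%s' % (key, value))
    return sorted(names)

rows = [{'age': 15.9, 'location': 'Tokyo'},
        {'age': 21.5, 'location': 'New York'}]
print(feature_names(rows))   # ['age', 'location=New York', 'location=Tokyo']
```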
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Any object expecting to be the target of a for loop, if not already an iterator, needs to either
Step3: The generator function below shows what most Pythonistas consider the base case for the keyword yield
Step4: Now lets take a look at pretty much the same iterator written a different way
Step5: Feeding an open-ended iterator to a for loop runs the risk of a "forever loop". The itertools module is filled with tools designed to rein in the infinite loopers. Just use islice(obj, start, stop) to keep the for loop finite
Step8: In the code below, we see (yield) turned around and used not to return objects, but to receive them. The parentheses are by convention and suggest a "mouth" bringing something in from outside.
|
<ASSISTANT_TASK:>
Python Code:
powers = (lambda x: pow(x, n) for n in range(-4,5))
phi = (1 + pow(5,0.5)) * 0.5 # golden proportion
for n, f in enumerate(powers, start=-4): # iterates through lambda expressions
print("phi ** {:2} == {:10.8f}".format(n, f(phi)))
class Any:
def __init__(self):
self.__dict__ = {0:'scissors', 1:'paper', 2:'rock'}
def __getitem__(self, n): # enough for iter() to go on
if n == len(self.__dict__):
raise StopIteration # tells for loop when to stop
return self.__dict__[n]
for thing in Any():
print(thing)
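For contrast, the classic sequence protocol usually signals exhaustion with IndexError rather than an explicit StopIteration. A minimal sketch:

```python
class Letters:
    # old-style iteration: iter() wraps any object with __getitem__,
    # and stops when indexing raises IndexError
    def __getitem__(self, n):
        return 'abc'[n]   # IndexError past the end ends the loop

print(list(Letters()))   # ['a', 'b', 'c']
```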
import pprint
def primes():
    """generate successive prime numbers (trial by division)"""
candidate = 1
_primes_so_far = [2] # first prime, only even prime
yield _primes_so_far[0] # share it!
while True:
candidate += 2 # check odds only from now on
for prev in _primes_so_far:
if prev**2 > candidate:
yield candidate # new prime!
_primes_so_far.append(candidate)
break
if not divmod(candidate, prev)[1]: # no remainder!
break # done looping
p = primes() # generator function based iterator
pp = pprint.PrettyPrinter(width=40, compact=True)
pp.pprint([next(p) for _ in range(50)]) # next 30 primes please!
class Primes:
def __init__(self):
self.candidate = 1
self._primes_so_far = [2] # first prime, only even prime
def __iter__(self):
return self
def __next__(self):
while True:
self.candidate += 2 # check odds only from now on
for prev in self._primes_so_far:
if prev**2 > self.candidate:
self._primes_so_far.append(self.candidate)
return self._primes_so_far[-2]
if not divmod(self.candidate, prev)[1]: # no remainder!
break
pp = pprint.PrettyPrinter(width=40, compact=True)
p = Primes() # class based iterator
pp.pprint([next(p) for _ in range(30)]) # n
from itertools import islice
p = Primes()
for n in islice(p, 0, 20):
print(n, end=", ")
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 13 13:48:52 2016

@author: Kirby Urner

David Beazley:
https://youtu.be/Z_OAlIhXziw?t=23m42s

Trial by division, but this time the primes coroutine acts
more as a filter, passing qualified candidates through to
print_me, which writes to a file.
"""
import pprint
def coroutine(func):
    """Advances decorated generator function to the first yield"""
def start(*args, **kwargs):
cr = func(*args, **kwargs)
cr.send(None) # or next(cr) or cr.__next__()
return cr
return start
@coroutine
def print_me(file_name):
with open(file_name, 'w') as file_obj:
while True:
to_print = (yield)
file_obj.write(str(to_print)+"\n")
@coroutine
def primes(target):
_primes_so_far = [2]
target.send(2)
while True:
candidate = (yield)
for prev in _primes_so_far:
if not divmod(candidate, prev)[1]:
break
if prev**2 > candidate:
_primes_so_far.append(candidate)
target.send(candidate)
break
output = print_me("primes.txt")
p = primes(output)
for x in range(3, 200, 2): # test odds 3-199
p.send(x)
with open("primes.txt", 'r') as file_obj:
print(", ".join(file_obj.read().split("\n"))[:-2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('./reviews.txt', 'r') as f:
reviews = f.read()
with open('./labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
# Create your dictionary that maps vocab words to integers here
vocab = set(words)
vocab_to_int = {word: i for i, word in enumerate(vocab)}
def text_to_ints(text):
text_words = text.split()
return [vocab_to_int[word] for word in text_words]
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [text_to_ints(review) for review in reviews]
print(reviews_ints[0:3])
print(labels[0:20])
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split("\n")
labels = [1 if label == 'positive' else 0 for label in labels]
print(labels[0:20])
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
empty_indices = set(i for i in range(len(reviews_ints)) if len(reviews_ints[i]) == 0)
print(empty_indices)
reviews_ints = [review for index, review in enumerate(reviews_ints) if index not in empty_indices]
labels = [label for index, label in enumerate(labels) if index not in empty_indices]
print(len(reviews_ints))
print(len(labels))
seq_len = 200
def process_review(review):
truncated = review[0:seq_len]
length = len(truncated)
padding = [0] * (seq_len - length)
return padding + truncated
features = np.array([process_review(review) for review in reviews_ints])
#print(features[0:3])
features[:10,:100]
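The pad/truncate scheme in `process_review` (truncate to seq_len, then left-pad with zeros) can be checked in isolation. A tiny standalone sketch:

```python
def left_pad(seq, width, pad=0):
    # truncate to width, then left-pad with `pad` to exactly width items
    seq = seq[:width]
    return [pad] * (width - len(seq)) + seq

print(left_pad([5, 6, 7], 5))         # [0, 0, 5, 6, 7]
print(left_pad(list(range(9)), 5))    # [0, 1, 2, 3, 4]
```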
shuffled_indices = np.random.permutation(range(len(reviews_ints)))
print(shuffled_indices[0:20])
split_frac = 0.8
labels = np.array(labels)
max_training_index = int(split_frac * len(features))
shuffled_indices = np.random.permutation(range(len(features)))
train_x, val_x = features[shuffled_indices[:max_training_index], :], features[shuffled_indices[max_training_index:], :]
train_y, val_y = labels[shuffled_indices[:max_training_index]], labels[shuffled_indices[max_training_index:]]
half_index = len(val_x) // 2
val_x, test_x = val_x[:half_index, :], val_x[half_index:, :]
val_y, test_y = val_y[:half_index], val_y[half_index:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 512
lstm_layers = 3
batch_size = 512
learning_rate = 0.001
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, shape=(None, seq_len))
labels_ = tf.placeholder(tf.int32, shape=(None, None))
keep_prob = tf.placeholder(tf.float32)
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    # initialize embeddings uniformly in [-1, 1); truncated_normal with a
    # mean of -1 (as originally written) would center all embeddings at -1
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
    # Build a fresh LSTM cell (with dropout) for each layer; reusing a single
    # cell object across layers ties their weights together and fails in newer TF 1.x
    def build_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
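Note that `get_batches` silently drops the trailing examples that don't fill a whole batch. The same trimming, as a standalone sketch:

```python
def batches(xs, size):
    # keep only full batches, as get_batches does
    n_full = len(xs) // size
    xs = xs[:n_full * size]
    for i in range(0, len(xs), size):
        yield xs[i:i + size]

print(list(batches(list(range(7)), 3)))   # [[0, 1, 2], [3, 4, 5]]
```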
epochs = 20
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up evaluation parameters and paths within the mounted Google Drive. Clone repository and download checkpoint.
Step2: Do other necessary imports.
Step3: Create random 2-second samples from source audio file.
Step4: Create evaluator object and load reference samples.
Step5: Carry out distance estimation.
Step6: Plot results.
|
<ASSISTANT_TASK:>
Python Code:
!pip install python_speech_features
!pip install resampy
!pip install scipy
!pip install gdown
!pip install tqdm -U
PATH = '/content/drive/My Drive/DeepSpeechDistances'
SAMPLE_PATH = '/content/drive/My Drive/DeepSpeechDistances/abstract_samples'
NUM_SPLITS = 3 # number of data splits to compute std of DSD
SAMPLES_PER_SPLIT = 500 # number of samples in a single DSD run.
# We recommend at least 10k samples for evaluation to get reasonable estimates.
AUDIO_LENGTH = 2 # length of individual sample, in seconds
NUM_NOISE_LEVELS = 3 # number of different noise levels for samples to evaluate
from google.colab import drive
drive.mount('/content/drive/', force_remount=True)
import sys, os
sh = lambda path: "'" + path + "'"
if not os.path.exists(PATH):
!git clone https://github.com/mbinkowski/DeepSpeechDistances.git {sh(PATH)}
else:
print('Found DeepSpeechDistances directory, skipping git clone.')
sys.path.append(PATH)
if not os.path.exists(os.path.join(PATH, 'checkpoint', 'ds2_large')):
CKPT = sh(PATH + '/ds2.tar.gz')
!gdown https://drive.google.com/uc?id=1EDvL9wMCO2vVE-ynBvpwkFTultbzLNQX -O {CKPT}
!tar -C {sh(PATH +'/checkpoint/')} -xvf {CKPT}
!rm {CKPT}
else:
print('Found checkpoint directory, skipping download.')
%tensorflow_version 1.x
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
import audio_distance
from sample_utils import subsample_audio
subsample_audio(os.path.join(PATH, 'abstract.wav'),
SAMPLE_PATH,
num_samples=NUM_SPLITS * SAMPLES_PER_SPLIT,
length=AUDIO_LENGTH,
num_noise_levels=NUM_NOISE_LEVELS)
reference_path = os.path.join(SAMPLE_PATH, 'ref', '*.wav')
eval_paths = [os.path.join(SAMPLE_PATH, f'noisy_{i+1}', '*.wav') for i
in range(NUM_NOISE_LEVELS)]
evaluator = audio_distance.AudioDistance(
load_path=os.path.join(PATH, 'checkpoint', 'ds2_large', 'model.ckpt-54800'),
meta_path=os.path.join(PATH, 'checkpoint', 'collection-stripped-meta.meta'),
required_sample_size=NUM_SPLITS * SAMPLES_PER_SPLIT,
num_splits=NUM_SPLITS)
evaluator.load_real_data(reference_path)
dist_names = ['FDSD', 'KDSD', 'cFDSD', 'cKDSD']
def print_results(values):
print('\n' + ', '.join(['%s = %.5f (%.5f)' % (n, v[0], v[1]) for n, v
in zip(dist_names, values)]))
with tf.Session(config=evaluator.sess_config) as sess:
print('Computing reference DeepSpeech distances.')
values = evaluator.get_distance(sess=sess)
print_results(values)
distances = [values]
for eval_path in eval_paths:
print('\nComputing DeepSpeech distances for files in the directory:\n'
+ os.path.dirname(eval_path))
values = evaluator.get_distance(sess=sess, files=eval_path)
print_results(values)
distances.append(values)
all_paths = [reference_path] + eval_paths
prefix_len = len(os.path.commonpath(all_paths))
sample_names = [path[prefix_len + 1:] for path in all_paths]
if all([os.path.basename(p) == '*.wav' for p in all_paths]):
sample_names = [name[:-6] for name in sample_names]
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
x = range(NUM_NOISE_LEVELS + 1)
for i, kind in enumerate(['Frechet', 'kernel']):
ax[i].set_title(kind + ' distances')
ax[i].set_xticks(x)
ax[i].set_xticklabels(sample_names)
for j in [0, 2]:
k = i + j
ax[i].plot(x, [d[k][0] for d in distances], color='cmyk'[k],
label=dist_names[k])
ax[i].fill_between(x, [d[k][0] - d[k][1] for d in distances],
[d[k][0] + d[k][1] for d in distances], color='cmyk'[k],
alpha=.3)
ax[i].legend()
ax[i].grid()
drive.flush_and_unmount()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Ingestion
Step2: Data Preprocessing
Step4: LSTM
Step5: Model Evaluation on Test set
Step6: Model Evaluation on Validation set
Step7: Regression
Step8: Data Ingestion
Step11: Data Preprocessing
Step13: LSTM
Step14: Model Evaluation on Test set
Step15: Evaluate on Validation set
|
<ASSISTANT_TASK:>
Python Code:
import keras
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
# Setting seed for reproducibility
np.random.seed(1234)
PYTHONHASHSEED = 0 # note: to actually affect hashing, PYTHONHASHSEED must be set in the environment before Python starts
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix, recall_score, precision_score
from keras.models import Sequential,load_model
from keras.layers import Dense, Dropout, LSTM
# define path to save model
model_path = 'binary_model.h5'
# read training data - It is the aircraft engine run-to-failure data.
train_df = pd.read_csv('PM_train.txt', sep=" ", header=None)
train_df.drop(train_df.columns[[26, 27]], axis=1, inplace=True)
train_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
train_df = train_df.sort_values(['id','cycle'])
# read test data - It is the aircraft engine operating data without failure events recorded.
test_df = pd.read_csv('PM_test.txt', sep=" ", header=None)
test_df.drop(test_df.columns[[26, 27]], axis=1, inplace=True)
test_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
# read ground truth data - It contains the information of true remaining cycles for each engine in the testing data.
truth_df = pd.read_csv('PM_truth.txt', sep=" ", header=None)
truth_df.drop(truth_df.columns[[1]], axis=1, inplace=True)
#######
# TRAIN
#######
# Data Labeling - generate column RUL (Remaining Useful Life or Time to Failure)
rul = pd.DataFrame(train_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
train_df = train_df.merge(rul, on=['id'], how='left')
train_df['RUL'] = train_df['max'] - train_df['cycle']
train_df.drop('max', axis=1, inplace=True)
# generate label columns for training data
# we will only make use of "label1" for binary classification,
# while trying to answer the question: is a specific engine going to fail within w1 cycles?
w1 = 30
w0 = 15
train_df['label1'] = np.where(train_df['RUL'] <= w1, 1, 0 )
train_df['label2'] = train_df['label1']
train_df.loc[train_df['RUL'] <= w0, 'label2'] = 2
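The RUL and two-window labeling logic above can be sketched in plain Python (toy cycle numbers and thresholds chosen for illustration, not the real dataset values):

```python
def rul_labels(cycles, w1, w0):
    # Remaining Useful Life: distance from each cycle to the engine's last observed cycle
    max_cycle = max(cycles)
    rul = [max_cycle - c for c in cycles]
    label1 = [1 if r <= w1 else 0 for r in rul]                  # fails within w1 cycles?
    label2 = [2 if r <= w0 else l for r, l in zip(rul, label1)]  # urgent window w0
    return rul, label1, label2

rul, l1, l2 = rul_labels([1, 2, 3, 4, 5], w1=2, w0=1)
# rul == [4, 3, 2, 1, 0]; l1 == [0, 0, 1, 1, 1]; l2 == [0, 0, 1, 2, 2]
```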
# MinMax normalization (from 0 to 1)
train_df['cycle_norm'] = train_df['cycle']
cols_normalize = train_df.columns.difference(['id','cycle','RUL','label1','label2'])
min_max_scaler = preprocessing.MinMaxScaler()
norm_train_df = pd.DataFrame(min_max_scaler.fit_transform(train_df[cols_normalize]),
columns=cols_normalize,
index=train_df.index)
join_df = train_df[train_df.columns.difference(cols_normalize)].join(norm_train_df)
train_df = join_df.reindex(columns = train_df.columns)
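The MinMax step maps each feature column independently onto [0, 1]; a one-column sketch of the same transform (hypothetical values):

```python
def minmax_scale(values):
    # map values linearly so min -> 0.0 and max -> 1.0, as MinMaxScaler does per column
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

scaled = minmax_scale([10.0, 15.0, 20.0])  # → [0.0, 0.5, 1.0]
```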
######
# TEST
######
# MinMax normalization (from 0 to 1)
test_df['cycle_norm'] = test_df['cycle']
norm_test_df = pd.DataFrame(min_max_scaler.transform(test_df[cols_normalize]),
columns=cols_normalize,
index=test_df.index)
test_join_df = test_df[test_df.columns.difference(cols_normalize)].join(norm_test_df)
test_df = test_join_df.reindex(columns = test_df.columns)
test_df = test_df.reset_index(drop=True)
# We use the ground truth dataset to generate labels for the test data.
# generate column max for test data
rul = pd.DataFrame(test_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
truth_df.columns = ['more']
truth_df['id'] = truth_df.index + 1
truth_df['max'] = rul['max'] + truth_df['more']
truth_df.drop('more', axis=1, inplace=True)
# generate RUL for test data
test_df = test_df.merge(truth_df, on=['id'], how='left')
test_df['RUL'] = test_df['max'] - test_df['cycle']
test_df.drop('max', axis=1, inplace=True)
# generate label columns w0 and w1 for test data
test_df['label1'] = np.where(test_df['RUL'] <= w1, 1, 0 )
test_df['label2'] = test_df['label1']
test_df.loc[test_df['RUL'] <= w0, 'label2'] = 2
# pick a large window size of 50 cycles
sequence_length = 50
# function to reshape features into (samples, time steps, features)
def gen_sequence(id_df, seq_length, seq_cols):
"""Only sequences that meet the window length are considered; no padding is used.
This means that for testing we need to drop sequences shorter than the window length.
An alternative would be to pad sequences so that shorter ones can be used."""
# for one id I put all the rows in a single matrix
data_matrix = id_df[seq_cols].values
num_elements = data_matrix.shape[0]
# Iterate over two ranges in parallel.
# For example, id1 has 192 rows and sequence_length is equal to 50,
# so zip iterates over the two ranges (0,142) and (50,192):
# 0 50 -> from row 0 to row 50
# 1 51 -> from row 1 to row 51
# 2 52 -> from row 2 to row 52
# ...
# 111 191 -> from row 111 to 191
for start, stop in zip(range(0, num_elements-seq_length), range(seq_length, num_elements)):
yield data_matrix[start:stop, :]
# pick the feature columns
sensor_cols = ['s' + str(i) for i in range(1,22)]
sequence_cols = ['setting1', 'setting2', 'setting3', 'cycle_norm']
sequence_cols.extend(sensor_cols)
# generator for the sequences
seq_gen = (list(gen_sequence(train_df[train_df['id']==id], sequence_length, sequence_cols))
for id in train_df['id'].unique())
# generate sequences and convert to numpy array
seq_array = np.concatenate(list(seq_gen)).astype(np.float32)
seq_array.shape
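The sliding-window behaviour described in the comments above (192 rows with a 50-cycle window give 192 - 50 = 142 sequences) can be checked with a tiny stand-in for gen_sequence:

```python
def sliding_windows(rows, seq_length):
    # yield consecutive windows of seq_length rows, mirroring gen_sequence
    for start, stop in zip(range(0, len(rows) - seq_length),
                           range(seq_length, len(rows))):
        yield rows[start:stop]

windows = list(sliding_windows(list(range(192)), 50))
# len(windows) == 142; windows[0] covers rows 0..49, windows[-1] rows 141..190
```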
# function to generate labels
def gen_labels(id_df, seq_length, label):
# For one id I put all the labels in a single matrix.
# For example:
# [[1]
# [4]
# [1]
# [5]
# [9]
# ...
# [200]]
data_matrix = id_df[label].values
num_elements = data_matrix.shape[0]
# I have to remove the first seq_length labels
# because for one id the first sequence of seq_length size have as target
# the last label (the previus ones are discarded).
# All the next id's sequences will have associated step by step one label as target.
return data_matrix[seq_length:num_elements, :]
# generate labels
label_gen = [gen_labels(train_df[train_df['id']==id], sequence_length, ['label1'])
for id in train_df['id'].unique()]
label_array = np.concatenate(label_gen).astype(np.float32)
label_array.shape
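The label alignment is the mirror image of the windowing: dropping the first seq_length labels pairs each window rows[i:i+seq_length] with labels[i + seq_length]. A toy check:

```python
labels = list(range(10))
seq_length = 4
targets = labels[seq_length:]  # as in gen_labels: drop the first seq_length labels
# window i (rows[i:i+seq_length]) is paired with targets[i] == labels[i + seq_length]
```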
# Next, we build a deep network.
# The first layer is an LSTM layer with 100 units followed by another LSTM layer with 50 units.
# Dropout is also applied after each LSTM layer to control overfitting.
# Final layer is a Dense output layer with single unit and sigmoid activation since this is a binary classification problem.
# build the network
nb_features = seq_array.shape[2]
nb_out = label_array.shape[1]
model = Sequential()
model.add(LSTM(
input_shape=(sequence_length, nb_features),
units=100,
return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(
units=50,
return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=nb_out, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# fit the network
history = model.fit(seq_array, label_array, epochs=100, batch_size=200, validation_split=0.05, verbose=2,
callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=0, mode='min'),
keras.callbacks.ModelCheckpoint(model_path,monitor='val_loss', save_best_only=True, mode='min', verbose=0)]
)
# list all data in history
print(history.history.keys())
# summarize history for Accuracy
fig_acc = plt.figure(figsize=(10, 10))
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
fig_acc.savefig("model_accuracy.png")
# summarize history for Loss
fig_acc = plt.figure(figsize=(10, 10))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
fig_acc.savefig("model_loss.png")
# training metrics
scores = model.evaluate(seq_array, label_array, verbose=1, batch_size=200)
print('Accuracy: {}'.format(scores[1]))
# make predictions and compute confusion matrix
y_pred = model.predict_classes(seq_array,verbose=1, batch_size=200)
y_true = label_array
test_set = pd.DataFrame(y_pred)
test_set.to_csv('binary_submit_train.csv', index = None)
print('Confusion matrix\n- x-axis is true labels.\n- y-axis is predicted labels')
cm = confusion_matrix(y_true, y_pred)
print(cm)
# compute precision and recall
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print( 'precision = ', precision, '\n', 'recall = ', recall)
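Precision and recall here come from scikit-learn, but reduce to simple counts over the confusion matrix; a pure-Python sketch (hypothetical labels):

```python
def precision_recall(y_true, y_pred):
    # count true positives, false positives, and false negatives for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# tp=2, fp=1, fn=1 → precision == recall == 2/3
```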
# We pick the last sequence for each id in the test data
seq_array_test_last = [test_df[test_df['id']==id][sequence_cols].values[-sequence_length:]
for id in test_df['id'].unique() if len(test_df[test_df['id']==id]) >= sequence_length]
seq_array_test_last = np.asarray(seq_array_test_last).astype(np.float32)
#print("seq_array_test_last")
#print(seq_array_test_last)
#print(seq_array_test_last.shape)
# Similarly, we pick the labels
#print("y_mask")
# keep only the labels of sequences that are at least 50 cycles long
y_mask = [len(test_df[test_df['id']==id]) >= sequence_length for id in test_df['id'].unique()]
#print("y_mask")
#print(y_mask)
label_array_test_last = test_df.groupby('id')['label1'].nth(-1)[y_mask].values
label_array_test_last = label_array_test_last.reshape(label_array_test_last.shape[0],1).astype(np.float32)
#print(label_array_test_last.shape)
#print("label_array_test_last")
#print(label_array_test_last)
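The y_mask logic above keeps only engines with at least sequence_length cycles and takes their final window; the same idea on toy groups:

```python
def last_windows(groups, seq_length):
    # keep only ids with at least seq_length rows; take each one's final window
    return {gid: rows[-seq_length:] for gid, rows in groups.items()
            if len(rows) >= seq_length}

kept = last_windows({"a": [1, 2, 3, 4], "b": [5, 6]}, seq_length=3)
# "b" has only 2 rows and is dropped; "a" keeps its last 3 rows
```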
# if best iteration's model was saved then load and use it
if os.path.isfile(model_path):
estimator = load_model(model_path)
# test metrics
scores_test = estimator.evaluate(seq_array_test_last, label_array_test_last, verbose=2)
print('Accuracy: {}'.format(scores_test[1]))
# make predictions and compute confusion matrix
y_pred_test = estimator.predict_classes(seq_array_test_last)
y_true_test = label_array_test_last
test_set = pd.DataFrame(y_pred_test)
test_set.to_csv('binary_submit_test.csv', index = None)
print('Confusion matrix\n- x-axis is true labels.\n- y-axis is predicted labels')
cm = confusion_matrix(y_true_test, y_pred_test)
print(cm)
# compute precision and recall
precision_test = precision_score(y_true_test, y_pred_test)
recall_test = recall_score(y_true_test, y_pred_test)
f1_test = 2 * (precision_test * recall_test) / (precision_test + recall_test)
print( 'Precision: ', precision_test, '\n', 'Recall: ', recall_test,'\n', 'F1-score:', f1_test )
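The F1-score computed above is the harmonic mean of precision and recall; as a standalone helper:

```python
def f1(precision, recall):
    # harmonic mean: punishes an imbalance between precision and recall
    return 2 * precision * recall / (precision + recall)

score = f1(0.5, 1.0)  # → 2/3, pulled toward the weaker of the two
```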
# Plot in blue color the predicted data and in green color the
# actual data to verify visually the accuracy of the model.
fig_verify = plt.figure(figsize=(10, 5))
plt.plot(y_pred_test, color="blue")
plt.plot(y_true_test, color="green")
plt.title('prediction')
plt.ylabel('value')
plt.xlabel('row')
plt.legend(['predicted', 'actual data'], loc='upper left')
plt.show()
fig_verify.savefig("model_verify.png")
import keras
import keras.backend as K
from keras.layers.core import Activation
from keras.models import Sequential,load_model
from keras.layers import Dense, Dropout, LSTM
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
from sklearn import preprocessing
# Setting seed for reproducibility
np.random.seed(1234)
PYTHONHASHSEED = 0 # note: to actually affect hashing, PYTHONHASHSEED must be set in the environment before Python starts
# define path to save model
model_path = 'regression_model.h5'
# read training data - It is the aircraft engine run-to-failure data.
train_df = pd.read_csv('PM_train.txt', sep=" ", header=None)
train_df.drop(train_df.columns[[26, 27]], axis=1, inplace=True)
train_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
train_df = train_df.sort_values(['id','cycle'])
# read test data - It is the aircraft engine operating data without failure events recorded.
test_df = pd.read_csv('PM_test.txt', sep=" ", header=None)
test_df.drop(test_df.columns[[26, 27]], axis=1, inplace=True)
test_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
# read ground truth data - It contains the information of true remaining cycles for each engine in the testing data.
truth_df = pd.read_csv('PM_truth.txt', sep=" ", header=None)
truth_df.drop(truth_df.columns[[1]], axis=1, inplace=True)
##################################
# Data Preprocessing
##################################
#######
# TRAIN
#######
# Data Labeling - generate column RUL (Remaining Useful Life or Time to Failure)
rul = pd.DataFrame(train_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
train_df = train_df.merge(rul, on=['id'], how='left')
train_df['RUL'] = train_df['max'] - train_df['cycle']
train_df.drop('max', axis=1, inplace=True)
# generate label columns for training data
# we will only make use of "label1" for binary classification,
# while trying to answer the question: is a specific engine going to fail within w1 cycles?
w1 = 30
w0 = 15
train_df['label1'] = np.where(train_df['RUL'] <= w1, 1, 0 )
train_df['label2'] = train_df['label1']
train_df.loc[train_df['RUL'] <= w0, 'label2'] = 2
# MinMax normalization (from 0 to 1)
train_df['cycle_norm'] = train_df['cycle']
cols_normalize = train_df.columns.difference(['id','cycle','RUL','label1','label2'])
min_max_scaler = preprocessing.MinMaxScaler()
norm_train_df = pd.DataFrame(min_max_scaler.fit_transform(train_df[cols_normalize]),
columns=cols_normalize,
index=train_df.index)
join_df = train_df[train_df.columns.difference(cols_normalize)].join(norm_train_df)
train_df = join_df.reindex(columns = train_df.columns)
#train_df.to_csv('PredictiveManteinanceEngineTraining.csv', encoding='utf-8',index = None)
######
# TEST
######
# MinMax normalization (from 0 to 1)
test_df['cycle_norm'] = test_df['cycle']
norm_test_df = pd.DataFrame(min_max_scaler.transform(test_df[cols_normalize]),
columns=cols_normalize,
index=test_df.index)
test_join_df = test_df[test_df.columns.difference(cols_normalize)].join(norm_test_df)
test_df = test_join_df.reindex(columns = test_df.columns)
test_df = test_df.reset_index(drop=True)
print(test_df.head())
# We use the ground truth dataset to generate labels for the test data.
# generate column max for test data
rul = pd.DataFrame(test_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
truth_df.columns = ['more']
truth_df['id'] = truth_df.index + 1
truth_df['max'] = rul['max'] + truth_df['more']
truth_df.drop('more', axis=1, inplace=True)
# generate RUL for test data
test_df = test_df.merge(truth_df, on=['id'], how='left')
test_df['RUL'] = test_df['max'] - test_df['cycle']
test_df.drop('max', axis=1, inplace=True)
# generate label columns w0 and w1 for test data
test_df['label1'] = np.where(test_df['RUL'] <= w1, 1, 0 )
test_df['label2'] = test_df['label1']
test_df.loc[test_df['RUL'] <= w0, 'label2'] = 2
#test_df.to_csv('PredictiveManteinanceEngineValidation.csv', encoding='utf-8',index = None)
# pick a large window size of 50 cycles
sequence_length = 50
# function to reshape features into (samples, time steps, features)
def gen_sequence(id_df, seq_length, seq_cols):
"""Only sequences that meet the window length are considered; no padding is used.
This means that for testing we need to drop sequences shorter than the window length.
An alternative would be to pad sequences so that shorter ones can be used."""
# for one id I put all the rows in a single matrix
data_matrix = id_df[seq_cols].values
num_elements = data_matrix.shape[0]
# Iterate over two ranges in parallel.
# For example, id1 has 192 rows and sequence_length is equal to 50,
# so zip iterates over the two ranges (0,142) and (50,192):
# 0 50 -> from row 0 to row 50
# 1 51 -> from row 1 to row 51
# 2 52 -> from row 2 to row 52
# ...
# 111 191 -> from row 111 to 191
for start, stop in zip(range(0, num_elements-seq_length), range(seq_length, num_elements)):
yield data_matrix[start:stop, :]
# pick the feature columns
sensor_cols = ['s' + str(i) for i in range(1,22)]
sequence_cols = ['setting1', 'setting2', 'setting3', 'cycle_norm']
sequence_cols.extend(sensor_cols)
# Debug check:
# val is a list of 192 - 50 = 142 two-dimensional arrays (50 rows x 25 columns each)
val=list(gen_sequence(train_df[train_df['id']==1], sequence_length, sequence_cols))
print(len(val))
# generator for the sequences
# transform each id of the train dataset in a sequence
seq_gen = (list(gen_sequence(train_df[train_df['id']==id], sequence_length, sequence_cols))
for id in train_df['id'].unique())
# generate sequences and convert to numpy array
seq_array = np.concatenate(list(seq_gen)).astype(np.float32)
print(seq_array.shape)
# function to generate labels
def gen_labels(id_df, seq_length, label):
"""Only sequences that meet the window length are considered; no padding is used.
This means that for testing we need to drop sequences shorter than the window length.
An alternative would be to pad sequences so that shorter ones can be used."""
# For one id I put all the labels in a single matrix.
# For example:
# [[1]
# [4]
# [1]
# [5]
# [9]
# ...
# [200]]
data_matrix = id_df[label].values
num_elements = data_matrix.shape[0]
# I have to remove the first seq_length labels
# because for one id the first sequence of seq_length size has as its target
# the last label (the previous ones are discarded).
# All the next id's sequences will have associated step by step one label as target.
return data_matrix[seq_length:num_elements, :]
# generate labels
label_gen = [gen_labels(train_df[train_df['id']==id], sequence_length, ['RUL'])
for id in train_df['id'].unique()]
label_array = np.concatenate(label_gen).astype(np.float32)
label_array.shape
def r2_keras(y_true, y_pred):
"""Coefficient of determination (R^2)."""
SS_res = K.sum(K.square( y_true - y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
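r2_keras mirrors the usual coefficient of determination; the same formula in plain Python (without the K.epsilon() guard):

```python
def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

perfect = r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])    # → 1.0
mean_only = r2([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # predicting the mean → 0.0
```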
# Next, we build a deep network.
# The first layer is an LSTM layer with 100 units followed by another LSTM layer with 50 units.
# Dropout is also applied after each LSTM layer to control overfitting.
# Final layer is a Dense output layer with single unit and linear activation since this is a regression problem.
nb_features = seq_array.shape[2]
nb_out = label_array.shape[1]
model = Sequential()
model.add(LSTM(
input_shape=(sequence_length, nb_features),
units=100,
return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(
units=50,
return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=nb_out))
model.add(Activation("linear"))
model.compile(loss='mean_squared_error', optimizer='rmsprop',metrics=['mae',r2_keras])
print(model.summary())
# fit the network
history = model.fit(seq_array, label_array, epochs=100, batch_size=200, validation_split=0.05, verbose=2,
callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=0, mode='min'),
keras.callbacks.ModelCheckpoint(model_path,monitor='val_loss', save_best_only=True, mode='min', verbose=0)]
)
# list all data in history
print(history.history.keys())
# summarize history for R^2
fig_acc = plt.figure(figsize=(10, 10))
plt.plot(history.history['r2_keras'])
plt.plot(history.history['val_r2_keras'])
plt.title('model r^2')
plt.ylabel('R^2')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
fig_acc.savefig("model_r2.png")
# summarize history for MAE
fig_acc = plt.figure(figsize=(10, 10))
plt.plot(history.history['mean_absolute_error'])
plt.plot(history.history['val_mean_absolute_error'])
plt.title('model MAE')
plt.ylabel('MAE')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
fig_acc.savefig("model_mae.png")
# summarize history for Loss
fig_acc = plt.figure(figsize=(10, 10))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
fig_acc.savefig("model_regression_loss.png")
# training metrics
scores = model.evaluate(seq_array, label_array, verbose=1, batch_size=200)
print('\nMAE: {}'.format(scores[1]))
print('\nR^2: {}'.format(scores[2]))
y_pred = model.predict(seq_array,verbose=1, batch_size=200)
y_true = label_array
test_set = pd.DataFrame(y_pred)
test_set.to_csv('submit_train.csv', index = None)
# We pick the last sequence for each id in the test data
seq_array_test_last = [test_df[test_df['id']==id][sequence_cols].values[-sequence_length:]
for id in test_df['id'].unique() if len(test_df[test_df['id']==id]) >= sequence_length]
seq_array_test_last = np.asarray(seq_array_test_last).astype(np.float32)
#print("seq_array_test_last")
#print(seq_array_test_last)
#print(seq_array_test_last.shape)
# Similarly, we pick the labels
#print("y_mask")
y_mask = [len(test_df[test_df['id']==id]) >= sequence_length for id in test_df['id'].unique()]
label_array_test_last = test_df.groupby('id')['RUL'].nth(-1)[y_mask].values
label_array_test_last = label_array_test_last.reshape(label_array_test_last.shape[0],1).astype(np.float32)
#print(label_array_test_last.shape)
#print("label_array_test_last")
#print(label_array_test_last)
# if best iteration's model was saved then load and use it
if os.path.isfile(model_path):
estimator = load_model(model_path,custom_objects={'r2_keras': r2_keras})
# test metrics
scores_test = estimator.evaluate(seq_array_test_last, label_array_test_last, verbose=2)
print('\nMAE: {}'.format(scores_test[1]))
print('\nR^2: {}'.format(scores_test[2]))
y_pred_test = estimator.predict(seq_array_test_last)
y_true_test = label_array_test_last
test_set = pd.DataFrame(y_pred_test)
test_set.to_csv('submit_test.csv', index = None)
# Plot in blue color the predicted data and in green color the
# actual data to verify visually the accuracy of the model.
fig_verify = plt.figure(figsize=(10, 5))
plt.plot(y_pred_test, color="blue")
plt.plot(y_true_test, color="green")
plt.title('prediction')
plt.ylabel('value')
plt.xlabel('row')
plt.legend(['predicted', 'actual data'], loc='upper left')
plt.show()
fig_verify.savefig("model_regression_verify.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This will load bifacial_radiance and other libraries from python that will be useful for this Jupyter Journal
Step2: <a id='step2'></a>
Step3: This will create all the folder structure of the bifacial_radiance Scene in the designated testfolder in your computer, and it should look like this
Step4: If a number between 0 and 1 is passed, it assumes it's an albedo value. For this example, we want a high-reflectivity rooftop albedo surface, so we will set the albedo to 0.62
Step5: <a id='step4'></a>
Step6: The downloaded EPW will be in the EPWs folder.
Step7: <a id='step5'></a>
Step8: The method genCumSky calculates the hourly radiance of the sky hemisphere by dividing it into 145 patches. Then it adds those hourly values to generate one single <b> cumulative sky</b>. Here is a visualization of this patched hemisphere for Richmond, VA, US. Can you deduce from the radiance values of each patch which way is North?
Step9: In case you want to use a pre-defined module or a module you've created previously, they are stored in a JSON format in data/module.json, and the options available can be called with printModules
Step10: <a id='step7'></a>
Step11: To make the scene we have to create a Scene Object through the method makeScene. This method will create a .rad file in the objects folder, with the parameters specified in sceneDict and the module created above. You can alternatively pass a string with the name of the moduletype.
Step12: <a id='step8'></a>
Step13: To see what files got merged into the octfile, you can use the helper method getfilelist. This is useful for advanced simulations too, specially when you want to have different Scene objects in the same simulation, or if you want to add other custom elements to your scene (like a building, for example)
Step14: <a id='step9'></a>
Step15: Then let's specify the sensor location. If no parameters are passed to moduleAnalysis, it will scan the center module of the center row
Step16: The frontscan and backscan include a linescan along a chord of the module, both on the front and back.
Step17: The results are also automatically saved in the results folder. Some of our input/output functions can be used to read the results and work with them, for example
Step18: As can be seen in the results for the Wm2Front and Wm2Back, the irradiance values are quite high. This is because a cumulative sky simulation was performed in <b> step 5 </b>, so this is the total irradiance over all the hours of the year that the module at each sampling point will receive. Dividing the back irradiance average by the front irradiance average will give us the bifacial gain for the year
Step19: <a id='step10'></a>
Step20: This <b> objview </b> has 3 different light sources of its own, so the shading is not representative.
Step21: The <b> rvu </b> manual can be found here
|
<ASSISTANT_TASK:>
Python Code:
import os
from pathlib import Path
testfolder = Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_01'
# Another option using relative address; for some operative systems you might need '/' instead of '\'
# testfolder = os.path.abspath(r'..\..\bifacial_radiance\TEMP')
print ("Your simulation will be stored in %s" % testfolder)
if not os.path.exists(testfolder):
os.makedirs(testfolder)
try:
from bifacial_radiance import *
except ImportError:
raise RuntimeError('bifacial_radiance is required. download distribution')
import numpy as np
# Create a RadianceObj 'object' named bifacial_example. no whitespace allowed
demo = RadianceObj('tutorial_1',str(testfolder))
# Input albedo number or material name like 'concrete'.
demo.setGround() # This prints available materials.
albedo = 0.62
demo.setGround(albedo)
# Pull in meteorological data using pyEPW for any global lat/lon
epwfile = demo.getEPW(lat = 37.5, lon = -77.6) # This location corresponds to Richmond, VA.
# Read in the weather data pulled in above.
metdata = demo.readWeatherFile(epwfile, coerce_year=2001)
fullYear = True
if fullYear:
demo.genCumSky() # entire year.
else:
timeindex = metdata.datetime.index(pd.to_datetime('2001-06-17 12:0:0 -7'))
demo.gendaylit(timeindex) # Noon, June 17th (timepoint # 4020)
module_type = 'test-module'
module = demo.makeModule(name=module_type,x=1.695, y=0.984)
print(module)
availableModules = demo.printModules()
sceneDict = {'tilt':10,'pitch':3,'clearance_height':0.2,'azimuth':180, 'nMods': 20, 'nRows': 7}
scene = demo.makeScene(module,sceneDict)
octfile = demo.makeOct(demo.getfilelist())
demo.getfilelist()
analysis = AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(scene)
results = analysis.analysis(octfile, demo.basename, frontscan, backscan)
load.read1Result('results\irr_tutorial_1.csv')
bifacialityfactor = 0.9
print('Annual bifacial ratio: %0.2f ' %( np.mean(analysis.Wm2Back) * bifacialityfactor / np.mean(analysis.Wm2Front)) )
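The annual bifacial ratio above is just the mean rear irradiance, scaled by the module's bifaciality factor, over the mean front irradiance; a sketch with made-up irradiance values:

```python
def bifacial_gain(front, back, bifaciality=0.9):
    # ratio of (bifaciality-weighted) mean rear irradiance to mean front irradiance
    return (sum(back) / len(back)) * bifaciality / (sum(front) / len(front))

gain = bifacial_gain([1000.0, 1000.0], [100.0, 120.0])  # → 0.099
```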
## Comment the ! line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window
# !objview materials\ground.rad objects\test-module_C_0.20000_rtr_3.00000_tilt_10.00000_20modsx7rows_origin0,0.rad
## Comment the line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window
#!rvu -vf views\front.vp -e .01 tutorial_1.oct
# Make a color render and falsecolor image of the scene.
analysis.makeImage('side.vp')
analysis.makeFalseColor('side.vp')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Create a light curve
Step2: Now let's turn noisy into a Lightcurve object.
Step3: Here we plot it to see what it looks like.
Step4: 2. Pass the light curve to the Powerspectrum class to create a Powerspectrum object.
Step5: Note that, in principle, the Powerspectrum object could have been initialized directly as
Step6: Properties
Step7: We can plot the power as a function of Fourier frequency. Notice how there's a spike at our signal frequency of 2 Hz!
Step8: You'll notice that the power spectrum is a bit noisy. This is because we're only using one segment of data. Let's try averaging together power spectra from multiple segments of data.
Step9: 2. Pass the light curve to the AveragedPowerspectrum class with a specified segment_size.
Step10: We can check how many segments were averaged together by printing the m attribute.
Step11: AveragedPowerspectrum has the same properties as Powerspectrum, but with m $>$1.
Step12: Now we'll show examples of all the things you can do with a Powerspectrum or AveragedPowerspectrum object using built-in stingray methods.
Step13: Re-binning a power spectrum in frequency
Step14: 2. And we can logarithmically/geometrically re-bin a power spectrum
Step15: Like rebin, rebin_log returns a Powerspectrum or AveragedPowerspectrum object (depending on the input object)
Step16: Power spectra of normal-distributed light curves
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
from stingray import Lightcurve, Powerspectrum, AveragedPowerspectrum
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
%matplotlib inline
font_prop = font_manager.FontProperties(size=16)
dt = 0.03125 # seconds
exposure = 8. # seconds
times = np.arange(0, exposure, dt) # seconds
signal = 300 * np.sin(2.*np.pi*times/0.5) + 1000 # counts/s
noisy = np.random.poisson(signal*dt) # counts
lc = Lightcurve(times, noisy, dt=dt, skip_checks=True)
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(lc.time, lc.counts, lw=2, color='blue')
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
plt.show()
ps = Powerspectrum.from_lightcurve(lc, norm="leahy")
print(ps)
print("\nSize of positive Fourier frequencies:", len(ps.freq))
print("Number of data points per segment:", ps.n)
print(ps.freq)
print(ps.power)
print(ps.df)
print(ps.m)
print(ps.n)
print(ps.nphots1)
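Under the Leahy normalization used above, pure Poisson noise averages to a power of 2, which makes detector noise easy to recognize against real signal. A minimal numpy sketch of the normalization (assuming the standard convention P = 2|FFT|^2 / N_photons; check Stingray's documentation for the exact constants it applies):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(100, size=4096)          # pure Poisson-noise light curve (counts per bin)
nphot = counts.sum()

ft = np.fft.rfft(counts)                      # positive-frequency Fourier transform
power = 2.0 * np.abs(ft) ** 2 / nphot         # Leahy normalization
power = power[1:]                             # drop the zero-frequency (DC) term

print(power.mean())                           # close to 2 for pure Poisson noise
```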
fig, ax1 = plt.subplots(1,1,figsize=(9,6), sharex=True)
ax1.plot(ps.freq, ps.power, lw=2, color='blue')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Power (raw)", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=16)
ax1.tick_params(axis='y', labelsize=16)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
plt.show()
long_dt = 0.03125 # seconds
long_exposure = 1600. # seconds
long_times = np.arange(0, long_exposure, long_dt) # seconds
# In count rate units here
long_signal = 300 * np.sin(2.*np.pi*long_times/0.5) + 1000
# Multiply by dt to get count units, then add Poisson noise
long_noisy = np.random.poisson(long_signal*long_dt)
long_lc = Lightcurve(long_times, long_noisy, dt=long_dt, skip_checks=True)
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(long_lc.time, long_lc.counts, lw=2, color='blue')
ax.set_xlim(0,20)
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
plt.show()
avg_ps = AveragedPowerspectrum.from_lightcurve(long_lc, 8., norm="leahy")
print("Number of segments: %d" % avg_ps.m)
fig, ax1 = plt.subplots(1,1,figsize=(9,6))
ax1.plot(avg_ps.freq, avg_ps.power, lw=2, color='blue')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Power (raw)", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=16)
ax1.tick_params(axis='y', labelsize=16)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
plt.show()
avg_ps_leahy = AveragedPowerspectrum.from_lightcurve(long_lc, 8, norm='leahy')
avg_ps_frac = AveragedPowerspectrum.from_lightcurve(long_lc, 8., norm='frac')
avg_ps_abs = AveragedPowerspectrum.from_lightcurve(long_lc, 8., norm='abs')
fig, [ax1, ax2, ax3] = plt.subplots(3,1,figsize=(6,12))
ax1.plot(avg_ps_leahy.freq, avg_ps_leahy.power, lw=2, color='black')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Power (Leahy)", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=14)
ax1.tick_params(axis='y', labelsize=14)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
ax1.set_title("Leahy norm.", fontproperties=font_prop)
ax2.plot(avg_ps_frac.freq, avg_ps_frac.power, lw=2, color='black')
ax2.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax2.set_ylabel("Power (rms)", fontproperties=font_prop)
ax2.tick_params(axis='x', labelsize=14)
ax2.tick_params(axis='y', labelsize=14)
ax2.set_yscale('log')
ax2.tick_params(which='major', width=1.5, length=7)
ax2.tick_params(which='minor', width=1.5, length=4)
ax2.set_title("Fractional rms-squared norm.", fontproperties=font_prop)
ax3.plot(avg_ps_abs.freq, avg_ps_abs.power, lw=2, color='black')
ax3.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax3.set_ylabel("Power (abs)", fontproperties=font_prop)
ax3.tick_params(axis='x', labelsize=14)
ax3.tick_params(axis='y', labelsize=14)
ax3.set_yscale('log')
ax3.tick_params(which='major', width=1.5, length=7)
ax3.tick_params(which='minor', width=1.5, length=4)
ax3.set_title("Absolute rms-squared norm.", fontproperties=font_prop)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
ax2.spines[axis].set_linewidth(1.5)
ax3.spines[axis].set_linewidth(1.5)
plt.tight_layout()
plt.show()
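The three normalizations plotted above differ only by powers of the mean count rate. A numpy sketch of the usual conversions (the constants here follow the common convention and are an assumption, not read from Stingray's source):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.03125
counts = rng.poisson(1000 * dt, size=4096).astype(float)
nphot = counts.sum()
tseg = len(counts) * dt
mean_rate = nphot / tseg

ft = np.fft.rfft(counts)[1:]                  # drop the DC term
leahy = 2.0 * np.abs(ft) ** 2 / nphot         # Leahy normalization
frac = leahy / mean_rate                      # fractional rms-squared normalization
abs_ = frac * mean_rate ** 2                  # absolute rms-squared normalization

print(np.allclose(abs_, leahy * mean_rate))   # the norms differ only by factors of mean_rate
```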
print("DF before:", avg_ps.df)
# Both of the following ways are allowed syntax:
# lin_rb_ps = Powerspectrum.rebin(avg_ps, 0.25, method='mean')
lin_rb_ps = avg_ps.rebin(0.25, method='mean')
print("DF after:", lin_rb_ps.df)
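Linear rebinning simply averages groups of adjacent frequency bins, trading frequency resolution for lower variance. A minimal sketch of the idea (not Stingray's actual `rebin`, which also propagates errors and handles leftover bins):

```python
import numpy as np

def rebin_linear(freq, power, factor):
    # Average `factor` adjacent bins; trims any leftover bins at the end.
    n = (len(power) // factor) * factor
    new_freq = freq[:n].reshape(-1, factor).mean(axis=1)
    new_power = power[:n].reshape(-1, factor).mean(axis=1)
    return new_freq, new_power

freq = np.arange(1, 9, dtype=float)
power = np.array([2., 4., 1., 3., 5., 7., 0., 2.])
f2, p2 = rebin_linear(freq, power, 2)
print(p2)  # [3. 2. 6. 1.]
```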
# Both of the following ways are allowed syntax:
# log_rb_ps, log_rb_freq, binning = Powerspectrum.rebin_log(avg_ps, f=0.02)
log_rb_ps = avg_ps.rebin_log(f=0.02)
print(type(log_rb_ps))
long_norm = (long_noisy - long_noisy.mean()) / long_noisy.max()
err = np.sqrt(long_noisy.mean()) / long_noisy.max()
long_lc_gauss = Lightcurve(long_times, long_norm, err=np.zeros_like(long_norm) + err, dt=long_dt, skip_checks=True, err_dist='gauss')
fig, ax = plt.subplots(1,1,figsize=(10, 6))
ax.plot(long_lc.time, long_lc.counts, lw=2, color='blue', label='Original light curve')
ax.plot(long_lc_gauss.time, long_lc_gauss.counts, lw=2, color='red', label='Normalized light curve')
ax.set_xlim(0,20)
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
plt.legend()
plt.show()
avg_ps_gauss_leahy = AveragedPowerspectrum.from_lightcurve(long_lc_gauss, 8, norm='leahy')
avg_ps_gauss_frac = AveragedPowerspectrum.from_lightcurve(long_lc_gauss, 8., norm='frac')
avg_ps_gauss_abs = AveragedPowerspectrum.from_lightcurve(long_lc_gauss, 8., norm='abs')
fig, [ax1, ax2, ax3] = plt.subplots(3,1,figsize=(6,12))
ax1.plot(avg_ps_leahy.freq, avg_ps_leahy.power, lw=2, color='black')
ax1.plot(avg_ps_gauss_leahy.freq, avg_ps_gauss_leahy.power, lw=2, color='red', zorder=10)
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Power (Leahy)", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=14)
ax1.tick_params(axis='y', labelsize=14)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
ax1.set_title("Leahy norm.", fontproperties=font_prop)
ax2.plot(avg_ps_frac.freq, avg_ps_frac.power, lw=2, color='black')
ax2.plot(avg_ps_gauss_frac.freq, avg_ps_gauss_frac.power, lw=2, color='red')
ax2.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax2.set_ylabel("Power (rms)", fontproperties=font_prop)
ax2.tick_params(axis='x', labelsize=14)
ax2.tick_params(axis='y', labelsize=14)
ax2.set_yscale('log')
ax2.tick_params(which='major', width=1.5, length=7)
ax2.tick_params(which='minor', width=1.5, length=4)
ax2.set_title("Fractional rms-squared norm.", fontproperties=font_prop)
ax3.plot(avg_ps_abs.freq, avg_ps_abs.power, lw=2, color='black')
ax3.plot(avg_ps_gauss_abs.freq, avg_ps_gauss_abs.power, lw=2, color='red')
ax3.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax3.set_ylabel("Power (abs)", fontproperties=font_prop)
ax3.tick_params(axis='x', labelsize=14)
ax3.tick_params(axis='y', labelsize=14)
ax3.set_yscale('log')
ax3.tick_params(which='major', width=1.5, length=7)
ax3.tick_params(which='minor', width=1.5, length=4)
ax3.set_title("Absolute rms-squared norm.", fontproperties=font_prop)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
ax2.spines[axis].set_linewidth(1.5)
ax3.spines[axis].set_linewidth(1.5)
plt.tight_layout()
plt.show()
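Rescaling a light curve linearly (as done above when normalizing the counts) changes the power spectrum only by an overall constant, so the spectral shape is preserved — which is why the red and black curves agree up to normalization. A pure-numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(1000, size=2048).astype(float)
scaled = (counts - counts.mean()) / counts.max()     # same kind of normalization as above

def shape_power(x):
    # Power spectrum up to an overall constant: |FFT|^2 of the mean-subtracted series.
    ft = np.fft.rfft(x - x.mean())[1:]
    return np.abs(ft) ** 2

p_raw = shape_power(counts)
p_norm = shape_power(scaled)
print(np.allclose(p_raw, p_norm * counts.max() ** 2))  # identical shape, constant rescaling
```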
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Saving figures
Step4: Generating the QA overlay figures
Step5: Input and Graphs
Step6: This is a good example of well-registered images, where pink and green represent the registered output and the reference image. The image was registered in FSL using flirt first and then fnirt.
Step7: This is the final image of the registration step, where pink and green represent the registered output and the reference image.
|
<ASSISTANT_TASK:>
Python Code:
import os
import nibabel as nb
import matplotlib.image as mpimg
from m2g.utils.gen_utils import get_braindata, get_filename
from m2g.utils.qa_utils import get_min_max, opaque_colorscale, pad_im
from argparse import ArgumentParser
from scipy import ndimage
from matplotlib.colors import LinearSegmentedColormap
from nilearn.plotting.edge_detect import _edge_map as edge_map
import matplotlib.pyplot as plt
%matplotlib inline
def reg_mri_pngs(
mri, atlas, outdir, loc=0, mean=False, minthr=2, maxthr=95, edge=False
):
"""A function to create and save registered brain slice figures.
Parameters
----------
mri: nifti file
the registered brain file generated in each registration step.
atlas: nifti file
the reference brain file used in each registration step.
outdir: str
directory where output png file is saved.
loc: int
which dimension of the 4d brain data to use
mean: bool
whether to calculate the mean of the 4d brain data
If False, the loc=0 dimension of the data (mri_data[:, :, :, loc]) is used
minthr: int
lower percentile threshold
maxthr: int
upper percentile threshold
"""
atlas_data = nb.load(atlas).get_data()
mri_data = nb.load(mri).get_data()
if mri_data.ndim == 4: # 4d data, so we need to reduce a dimension
if mean:
mr_data = mri_data.mean(axis=3)
else:
mr_data = mri_data[:, :, :, loc]
else: # dim=3
mr_data = mri_data
cmap1 = LinearSegmentedColormap.from_list("mycmap1", ["white", "magenta"])
cmap2 = LinearSegmentedColormap.from_list("mycmap2", ["white", "green"])
fig = plot_overlays(atlas_data, mr_data, [cmap1, cmap2], minthr, maxthr, edge)
# name and save the file
fig.savefig(outdir + "/" + "{"+get_filename(mri) +"}"+ "2" +"{"+get_filename(atlas)+"}"+ ".png", format="png")
#plt.close()
def plot_overlays(atlas, b0, cmaps=None, minthr=2, maxthr=95, edge=False):
"""A function to plot the overlay figures of registered and reference brain slices.
Parameters
----------
atlas: str, nifti image, numpy.ndarray
an object to open the data for a registered brain. Can be a string (path to a brain file),
nibabel.nifti1.Nifti1Image, or a numpy.ndarray.
b0: str, nifti image, numpy.ndarray
an object to open the data for a reference brain. Can be a string (path to a brain file),
nibabel.nifti1.Nifti1Image, or a numpy.ndarray.
cmaps: Colormap objects based on lookup tables using linear segments.
minthr: int
lower percentile threshold
maxthr: int
upper percentile threshold
edge: bool
whether to use normalized luminance data
If None, the respective min and max of the color array is used.
Returns
-------
foverlay: matplotlib.figure.Figure
"""
plt.rcParams.update({"axes.labelsize": "x-large", "axes.titlesize": "x-large"})
foverlay = plt.figure()
atlas = get_braindata(atlas)
b0 = get_braindata(b0)
if atlas.shape != b0.shape:
raise ValueError("Brains are not the same shape.")
if cmaps is None:
cmap1 = LinearSegmentedColormap.from_list("mycmap1", ["white", "magenta"])
cmap2 = LinearSegmentedColormap.from_list("mycmap2", ["white", "green"])
cmaps = [cmap1, cmap2]
brain_volume = b0.shape  # needed later for the figure title in both branches
if b0.shape == (182, 218, 182):
x = [78, 90, 100]
y = [82, 107, 142]
z = [88, 103, 107]
else:
brain_volume = b0.shape
x = [int(brain_volume[0] * 0.35), int(brain_volume[0] * 0.51), int(brain_volume[0] * 0.65)]
y = [int(brain_volume[1] * 0.35), int(brain_volume[1] * 0.51), int(brain_volume[1] * 0.65)]
z = [int(brain_volume[2] * 0.35), int(brain_volume[2] * 0.51), int(brain_volume[2] * 0.65)]
coords = (x, y, z)
atlas = pad_im(atlas, max(brain_volume[0:3]), 0, False)
b0 = pad_im(b0, max(brain_volume[0:3]), 0, False)
x = [int(max(brain_volume[0:3]) * 0.35), int(max(brain_volume[0:3]) * 0.51), int(max(brain_volume[0:3]) * 0.65)]
y = [int(max(brain_volume[0:3]) * 0.35), int(max(brain_volume[0:3]) * 0.51), int(max(brain_volume[0:3]) * 0.65)]
z = [int(max(brain_volume[0:3]) * 0.35), int(max(brain_volume[0:3]) * 0.51), int(max(brain_volume[0:3]) * 0.65)]
coords = (x, y, z)
labs = [
"Sagittal Slice",
"Coronal Slice",
"Axial Slice",
]
var = ["X", "Y", "Z"]
# create subplot for first slice
# and customize all labels
idx = 0
if edge:
min_val = 0
max_val = 1
else:
min_val, max_val = get_min_max(b0, minthr, maxthr)
for i, coord in enumerate(coords):
for pos in coord:
idx += 1
ax = foverlay.add_subplot(3, 3, idx)
ax.set_title(var[i] + " = " + str(pos))
if i == 0:
image = ndimage.rotate(b0[pos, :, :], 90)
atl = ndimage.rotate(atlas[pos, :, :], 90)
elif i == 1:
image = ndimage.rotate(b0[:, pos, :], 90)
atl = ndimage.rotate(atlas[:, pos, :], 90)
else:
image = b0[:, :, pos]
atl = atlas[:, :, pos]
if idx % 3 == 1:
ax.set_ylabel(labs[i])
ax.yaxis.set_ticks([0, image.shape[0] / 2, image.shape[0] - 1])
ax.xaxis.set_ticks([0, image.shape[1] / 2, image.shape[1] - 1])
if edge:
image = edge_map(image).data
image[image > 0] = max_val
image[image == 0] = min_val
#Set the axis invisible
plt.xticks([])
plt.yticks([])
#Set the frame invisible
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.imshow(atl, interpolation="none", cmap=cmaps[0], alpha=0.9)
ax.imshow(
opaque_colorscale(
cmaps[1], image, alpha=0.9, vmin=min_val, vmax=max_val
)
)
#set the legend
if idx == 3:
plt.plot(0,0,"-",c="pink",label='registered')
plt.plot(0,0,"-",c="green",label='reference')
plt.legend(loc='best',fontsize=12,bbox_to_anchor=(1.5,1.5))
#Set title for the whole picture
a,b,c = brain_volume
title = 'QA For Registration. Brain Volume:'+ str(a) +'*'+ str(b) + '*' + str(c)+'\n'
foverlay.suptitle(title,fontsize=24)
foverlay.set_size_inches(12.5, 10.5, forward=True)
foverlay.tight_layout()
return foverlay
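The minthr/maxthr arguments feed get_min_max, which from its name and usage presumably clips the display range to intensity percentiles. A hypothetical stand-in (get_min_max_sketch is not m2g's actual implementation):

```python
import numpy as np

def get_min_max_sketch(data, minthr=2, maxthr=95):
    # Hypothetical stand-in for m2g's get_min_max: clip the display range
    # to the [minthr, maxthr] percentiles of the non-zero voxels.
    nonzero = data[data > 0]
    return np.percentile(nonzero, minthr), np.percentile(nonzero, maxthr)

rng = np.random.default_rng(1)
vol = rng.uniform(0, 100, size=(10, 10, 10))
lo, hi = get_min_max_sketch(vol)
print(lo < hi)  # True: a valid display range
```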
mri = r"/Users/xueminzhu/Desktop/test/t1w_aligned_mni.nii.gz"
atlas = r"/Users/xueminzhu/Desktop/test/MNI152_T1_2mm_brain.nii.gz"
output_dir = r"/Users/xueminzhu/Desktop/test"
reg_mri_pngs(mri, atlas,output_dir)
mri = r"/Users/xueminzhu/Desktop/test/desikan_space-MNI152NLin6_res-2x2x2_reor_RAS_nores_aligned_atlas.nii.gz"
atlas = r"/Users/xueminzhu/Desktop/test/nodif_B0.nii.gz"
output_dir = r"/Users/xueminzhu/Desktop/test"
reg_mri_pngs(mri, atlas,output_dir)
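In plot_overlays, non-MNI volumes are padded so every slice is square before display (the pad_im calls). A minimal sketch of that padding step (pad_to_square is a hypothetical helper, not m2g's actual pad_im):

```python
import numpy as np

def pad_to_square(im, size, value=0):
    # Hypothetical stand-in for m2g's pad_im: pad a 2-D slice out to size x size.
    pad_r = size - im.shape[0]
    pad_c = size - im.shape[1]
    return np.pad(im, ((0, pad_r), (0, pad_c)), constant_values=value)

slice2d = np.ones((182, 218))            # e.g. one slice of an MNI-sized volume
padded = pad_to_square(slice2d, 218)
print(padded.shape)  # (218, 218)
```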
from PIL import Image
from IPython.display import display
bad_example = r"/Users/xueminzhu/Desktop/test/example.png"
display(Image.open(bad_example))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wine data set
Step2: Synthetic data set
Step3: Animate num trees in RF
Step4: Animate decision tree max depth
Step5: Animate decision tree min samples per leaf
|
<ASSISTANT_TASK:>
Python Code:
! pip install --quiet -U pltvid # simple animation support by parrt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
%config InlineBackend.figure_format = 'svg' # Looks MUCH better than retina
# %config InlineBackend.figure_format = 'retina'
from rfpimp import * # pip install rfpimp
from sklearn import tree
import dtreeviz
from dtreeviz import clfviz
wine = load_wine()
X = wine.data
X = X[:,[12,6]]
y = wine.target
rf = RandomForestClassifier(n_estimators=50, min_samples_leaf=20, n_jobs=-1)
rf.fit(X, y)
import pltvid
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 10
for depth in range(1,max+1):
t = DecisionTreeClassifier(max_depth=depth)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(4,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['proline', 'flavanoid'], target_name="wine",
ax=ax)
plt.title(f"Wine tree depth {depth}")
plt.tight_layout()
if depth>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("wine-dtree-maxdepth.png", duration=500) # animated png
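max_depth caps a tree at 2**depth leaves, which is why the decision regions in the animation can at most double in intricacy per frame. A quick check on synthetic stand-in data (not the wine set):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: a tree of depth d has at most 2**d leaves.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

n_leaves = []
for depth in (1, 2, 3, 4):
    t = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    n_leaves.append(t.get_n_leaves())
print(n_leaves)  # each entry bounded by 2**depth
```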
def smiley(n = 1000):
# mouth
x1 = np.random.normal(1.0,.2,n).reshape(-1,1)
x2 = np.random.normal(0.4,.05,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=0, dtype=int)
d = np.hstack([x1,x2,cl])
data = d
# left eye
x1 = np.random.normal(.7,.2,n).reshape(-1,1)
x2 = x1 + .3 + np.random.normal(0,.1,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=1, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
# right eye
x1 = np.random.normal(1.3,.2,n).reshape(-1,1)
x2 = np.random.normal(0.8,.1,n).reshape(-1,1)
x2 = x1 - .5 + .3 + np.random.normal(0,.1,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=2, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
# face outline
noise = np.random.normal(0,.1,n).reshape(-1,1)
x1 = np.linspace(0,2,n).reshape(-1,1)
x2 = (x1-1)**2 + noise
cl = np.full(shape=(n,1), fill_value=3, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
df = pd.DataFrame(data, columns=['x1','x2','class'])
return df
import pltvid
df = smiley(n=100)
X = df[['x1','x2']]
y = df['class']
rf = RandomForestClassifier(n_estimators=10, min_samples_leaf=1, n_jobs=-1)
rf.fit(X, y)
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 100
tree_sizes = [*range(1,10)]+[*range(10,max+1,5)]
for nt in tree_sizes:
np.random.seed(1) # use same bagging sets for animation
rf = RandomForestClassifier(n_estimators=nt, min_samples_leaf=1, n_jobs=-1)
rf.fit(X, y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(rf, X.values, y, feature_names=['x1', 'x2'],
ntiles=70, dot_w=15, boundary_markersize=.4, ax=ax)
plt.title(f"Synthetic dataset, {nt} trees")
plt.tight_layout()
if nt>=tree_sizes[-1]:
camera.snap(5)
else:
camera.snap()
# plt.show()
camera.save("smiley-numtrees.png", duration=500)
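Note the np.random.seed(1) call above, which keeps the bagging sets identical across frames as trees are added. Each tree in the forest is fit on a bootstrap resample, so a single tree sees only about 63% of the distinct rows on average — a quick check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
boot = rng.integers(0, n, size=n)            # one bootstrap resample of row indices
frac_unique = np.unique(boot).size / n
print(round(frac_unique, 2))                 # close to 1 - 1/e ~ 0.63
```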
import pltvid
df = smiley(n=100) # more stark changes with fewer
X = df[['x1','x2']]
y = df['class']
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 10
for depth in range(1,max+1):
t = DecisionTreeClassifier(max_depth=depth)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['x1', 'x2'], target_name="class",
colors={'scatter_edge': 'black',
'tesselation_alpha':.6},
ax=ax)
plt.title(f"Synthetic dataset, tree depth {depth}")
plt.tight_layout()
if depth>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("smiley-dtree-maxdepth.png", duration=500)
import pltvid
df = smiley(n=100)
X = df[['x1','x2']]
y = df['class']
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 20
for leafsz in range(1,max+1):
t = DecisionTreeClassifier(min_samples_leaf=leafsz)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['x1', 'x2'], target_name="class",
colors={'scatter_edge': 'black',
'tesselation_alpha':.4},
ax=ax)
plt.title(f"Synthetic dataset, {leafsz} samples/leaf")
plt.tight_layout()
if leafsz>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("smiley-dtree-minsamplesleaf.png", duration=500)
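min_samples_leaf guarantees that every leaf holds at least that many training samples, which is what smooths the decision regions as the animation progresses. A quick check on synthetic data (not the smiley set):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = (X[:, 0] > X[:, 1]).astype(int)

t = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X, y)
leaf_counts = np.bincount(t.apply(X))        # training samples per leaf node id
print(leaf_counts[leaf_counts > 0].min())    # never below 20, by construction
```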
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-mr', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unzipping files with house sales data
Step2: Polynomial regression, revisited
Step3: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step4: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step5: Plotting the data we are working with
Step6: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_dataframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step7: Note
Step8: Observe overfitting
Step9: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Step10: Plotting the data and the 4 different degree-15 polynomials we learned from the data.
Step11: The four curves should differ from one another a lot, as should the coefficients you learned.
Step12: Ridge regression comes to the rescue
Step13: These curves should vary a lot less, now that you applied a high degree of regularization.
Step14: Selecting an L2 penalty via cross-validation
Step15: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
Step16: Let us familiarize ourselves with array slicing with DataFrames. To extract a continuous slice from a DataFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step17: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Step18: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step19: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step20: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step21: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step22: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
Step23: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step24: QUIZ QUESTIONS
Step25: You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
Step26: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step27: QUIZ QUESTION
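The index arithmetic described in Step 15 can be sanity-checked on its own; `n` and `k` below are hypothetical stand-ins for the shuffled training-set size and the number of folds:

```python
# Segment i of a k-fold split over n rows spans
#   start = (n * i) // k   through   end = (n * (i + 1)) // k - 1   (inclusive).
n, k = 19396, 10  # hypothetical row count and fold count
segments = [((n * i) // k, (n * (i + 1)) // k - 1) for i in range(k)]
print(segments[0], segments[3])  # (0, 1938) (5818, 7757)
```

By construction the segments tile `[0, n-1]` exactly, since each segment's end is one less than the next segment's start.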
|
<ASSISTANT_TASK:>
Python Code:
import os
import zipfile
import numpy as np
import pandas as pd
from sklearn import linear_model
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
# Put files in current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_data.csv','wk3_kc_house_set_1_data.csv', 'wk3_kc_house_set_2_data.csv',
'wk3_kc_house_set_3_data.csv', 'wk3_kc_house_set_4_data.csv', 'wk3_kc_house_test_data.csv',
'wk3_kc_house_train_data.csv', 'wk3_kc_house_valid_data.csv', 'wk3_kc_house_train_valid_shuffled.csv']
# If unzipped file not in files_list, unzip the file
for filename in unzip_files:
    if filename not in files_list:
        zip_file = filename + '.zip'
        unzipping = zipfile.ZipFile(zip_file)
        unzipping.extractall()
        unzipping.close()
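A side note on closing the archive above: `zipfile.ZipFile` also works as a context manager, which closes the archive even if extraction raises. A standalone sketch using a throwaway in-memory archive (not the course files):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as archive:   # build a tiny archive in memory
    archive.writestr('hello.txt', 'hi')
with zipfile.ZipFile(buf) as archive:        # closed automatically on exit
    names = archive.namelist()
print(names)  # ['hello.txt']
```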
def polynomial_dataframe(feature, degree): # feature is pandas.Series type
# assume that degree >= 1
# initialize the dataframe:
poly_dataframe = pd.DataFrame()
# and set poly_dataframe['power_1'] equal to the passed feature
poly_dataframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# assign poly_dataframe[name] to be feature^power; use apply(*)
poly_dataframe[name] = poly_dataframe['power_1'].apply(lambda x: x**power)
return poly_dataframe
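As a quick sanity check (on a tiny made-up feature series, not the assignment data), the same powers-of-a-feature table can be built in one line, which is handy for eyeballing the function's intended output:

```python
import pandas as pd

feature = pd.Series([1.0, 2.0, 3.0])  # made-up toy feature values
degree = 3
# equivalent construction: one column per power of the feature
poly = pd.DataFrame({'power_%d' % p: feature ** p for p in range(1, degree + 1)})
print(poly)
```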
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float,
'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float,
'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int,
'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data.csv', dtype = dtype_dict)
sales = sales.sort_values(['sqft_living', 'price'])
sales[['sqft_living', 'price']].head()
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'],'.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('King County, Seattle House Price Data', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
# Bulding dataframe with 15 polynomial features
poly15_data = polynomial_dataframe(sales['sqft_living'], 15)
l2_small_penalty = 1e-5
model = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model.fit(poly15_data, sales['price'])
print('Weight for power_1 feature is: %.2f' % (model.coef_[0]))
set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)
set_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)
set_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)
set_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)
# Putting data and keys in a list for looping
data_list = [set_1, set_2, set_3, set_4]
key_list = ['set_1', 'set_2', 'set_3', 'set_4']
# model_poly_deg is a dict which holds all the regression models for the ith polynomial fit
poly_15_dframe_dict = {}
models_poly_15_dict = {}
# First, learn models with a really small L2 penalty
l2_small_penalty = 1e-9
# Looping over polynomial features from 1-15
for key, dframe in zip(key_list, data_list):
# Entering each dataframe returned from polynomial_dataframe function into a dict
# Then, saving col_names into a list to do regression w/ these features. Then, adding price column to dataframe
poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
# Adding regression models to dicts
models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
models_poly_15_dict[key].fit( poly_15_dframe_dict[key], dframe['price'] )
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'],'.', label= 'House Price Data')
#
for i, key in enumerate(key_list):
leg_label = 'Model ' + str(i+1)
plt.plot( poly_15_dframe_dict[key]['power_1'], models_poly_15_dict[key].predict(poly_15_dframe_dict[key]), '-', label = leg_label )
#
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Small L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
power_l_coeff_list = []
for key in key_list:
power_l_coeff_list.append( models_poly_15_dict[key].coef_[0] )
print('Smallest power_1 weight with small L2 penalty is: %.2f' % (min(power_l_coeff_list)))
print('Largest power_1 weight with small L2 penalty is: %.2f' % (max(power_l_coeff_list)))
# model_poly_deg is a dict which holds all the regression models for the ith polynomial fit
poly_15_dframe_dict = {}
models_poly_15_dict = {}
# Re-learn models with a large L2 penalty
l2_large_penalty=1.23e2
# Looping over polynomial features from 1-15
for key, dframe in zip(key_list, data_list):
# Entering each dataframe returned from polynomial_dataframe function into a dict
# Then, saving col_names into a list to do regression w/ these features. Then, adding price column to dataframe
poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
# Adding regression models to dicts
models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
models_poly_15_dict[key].fit( poly_15_dframe_dict[key], dframe['price'] )
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'],'.', label= 'House Price Data')
#
for i, key in enumerate(key_list):
leg_label = 'Model ' + str(i+1)
plt.plot( poly_15_dframe_dict[key]['power_1'], models_poly_15_dict[key].predict(poly_15_dframe_dict[key]), '-', label = leg_label )
#
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Large L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
power_l_coeff_list = []
for key in key_list:
power_l_coeff_list.append( models_poly_15_dict[key].coef_[0] )
print('Smallest power_1 weight with large L2 penalty is: %.2f' % (min(power_l_coeff_list)))
print('Largest power_1 weight with large L2 penalty is: %.2f' % (max(power_l_coeff_list)))
train_valid_shuffled = pd.read_csv('wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict)
test = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in range(k):
start = (n*i)//k
end = (n*(i+1))//k - 1
print(i, (start, end))
train_valid_shuffled[0:10] # rows 0 to 9
i = 3
start_ind = (n*i)//k
end_ind = (n*(i+1))//k - 1
validation4 = train_valid_shuffled[start_ind : end_ind + 1]
print(int(round(validation4['price'].mean(), 0)))
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print(first_two.append(last_two))
i = 3
k = 10
n = len(train_valid_shuffled)
start_ind = (n*i)//k
end_ind = (n*(i+1))//k - 1
train4 = train_valid_shuffled[0:start_ind].append(train_valid_shuffled[end_ind+1:n])
print(int(round(train4['price'].mean(), 0)))
def k_fold_cross_validation(k, l2_penalty, data, output_vals):
# Defining n as the number of observations and an empty list to store the k cross_validation errors
n = len(data)
cv_error_list = []
# Looping over the k folds. Computing the array indices of the ith validation slice.
for i in range(k):
# Getting the starting and ending index of the ith slice
start = (n*i)//k
end = (n*(i+1))//k - 1
# Using start and end to split data (and outputs) into cross-validation and training sets
cv_set = data[start : end + 1]
training_set = data[0:start].append(data[end+1:n])
cv_output = output_vals[start : end + 1]
training_output = output_vals[0:start].append(output_vals[end+1:n])
# Fitting a ridge regression model on the training slice only
model_train_data = linear_model.Ridge(alpha=l2_penalty, normalize=True)
model_train_data.fit(training_set, training_output)
# Computing predictions on the held-out cross-validation slice
predictions = model_train_data.predict(cv_set)
# Computing the error on the cross-validation set
RSS_cv_set = sum((predictions - cv_output)**2)
cv_error_list.append(RSS_cv_set)
# Return the average validation error
return sum(cv_error_list)/float(len(cv_error_list))
l2_penalty_list = np.logspace(3, 9, num=26)
poly_15_dframe = polynomial_dataframe(train_valid_shuffled['sqft_living'], 15)
output_values = train_valid_shuffled['price']
l2_RSS_list = []
for l2_pen in l2_penalty_list:
RSS_error = k_fold_cross_validation(10, l2_pen, poly_15_dframe, output_values)
l2_RSS_list.append( (RSS_error, l2_pen) )
print('Minimum value for RSS error is : %.2e' % min(l2_RSS_list)[0])
print('L2 penalty for this RSS error is: %.2e' % min(l2_RSS_list)[1])
# Putting all L2 penalties and RSS errors for plotting
L2_plot_list = []
RSS_plot_list = []
for entry in l2_RSS_list:
L2_plot_list.append(entry[1])
RSS_plot_list.append(entry[0])
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.figure(figsize=(8,6))
plt.plot(L2_plot_list, RSS_plot_list,'-')
plt.xscale('log')
#
plt.xlabel('L2 penalty ' + r'$(\lambda)$', fontsize=16)
plt.ylabel('Average Cross-Validation RSS', fontsize=16)
plt.title('Cross-Validation RSS vs. L2 Penalty', fontsize=16)
#
plt.show()
min_L2_pen = min(l2_RSS_list)[1]
# Loading the training set data and defining the dataframe w/ 15 polynomial features
train_data = pd.read_csv('wk3_kc_house_train_data.csv', dtype = dtype_dict)
train_data = train_data.sort_values(['sqft_living', 'price'])
poly_15_train_data = polynomial_dataframe(train_data['sqft_living'], 15)
# Training a linear regression model with L2 penalty that gave the smallest RSS error
model_train_data = linear_model.Ridge(alpha=min_L2_pen, normalize=True)
model_train_data.fit( poly_15_train_data, train_data['price'] )
# Now, loading the test data and defining the dataframe w/ 15 polynomial features
test_data = pd.read_csv('wk3_kc_house_test_data.csv', dtype = dtype_dict)
poly_15_test_data = polynomial_dataframe(test_data['sqft_living'], 15)
# Using the weights learning from the training data to calculate the predictions on the test data
predictions = model_train_data.predict(poly_15_test_data)
# Computing the RSS on the test data
RSS_test_set = sum( (predictions - test_data['price'])**2 )
print('RSS on test data with min_L2_pen: %.2e' % (RSS_test_set))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the data
Step2: The dataset includes 7613 samples with 5 columns
Step3: Shuffling and dropping unnecessary columns
Step4: Printing information about the shuffled dataframe
Step5: Total number of "disaster" and "non-disaster" tweets
Step6: Let's preview a few samples
Step7: Splitting dataset into training and test sets
Step8: Total number of "disaster" and "non-disaster" tweets in the training data
Step9: Total number of "disaster" and "non-disaster" tweets in the test data
Step10: Convert data to a tf.data.Dataset
Step11: Downloading pretrained embeddings
Step12: Creating our models
Step13: Building model_2
Step14: Train the models
Step15: Prints training logs of model_1
Step16: Prints training logs of model_2
Step17: The model.summary() method prints a variety of information about your decision tree
Step18: Plotting the training logs
Step19: Evaluating on test data
Step20: Predicting on validation data
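The sample/drop split described in Step7 can be sketched on a toy frame (the tweets below are made up): sampling a fraction for test and dropping those indices from the original guarantees the two sets are disjoint.

```python
import pandas as pd

df = pd.DataFrame({
    "text": ["tweet %d" % i for i in range(10)],  # made-up stand-in tweets
    "target": [i % 2 for i in range(10)],
})
test_df = df.sample(frac=0.2, random_state=42)  # 20% held out for testing
train_df = df.drop(test_df.index)               # the remaining 80% for training
print(len(train_df), len(test_df))
```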
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow_hub as hub
from tensorflow.keras import layers
import tensorflow_decision_forests as tfdf
import matplotlib.pyplot as plt
# Turn .csv files into pandas DataFrame's
df = pd.read_csv(
"https://raw.githubusercontent.com/IMvision12/Tweets-Classification-NLP/main/train.csv"
)
print(df.head())
print(f"Training dataset shape: {df.shape}")
df_shuffled = df.sample(frac=1, random_state=42)
# Dropping id, keyword and location columns as these columns consists of mostly nan values
# we will be using only text and target columns
df_shuffled.drop(["id", "keyword", "location"], axis=1, inplace=True)
df_shuffled.reset_index(inplace=True, drop=True)
print(df_shuffled.head())
print(df_shuffled.info())
print(
"Total Number of disaster and non-disaster tweets: "
f"{df_shuffled.target.value_counts()}"
)
for index, example in df_shuffled[:5].iterrows():
print(f"Example #{index}")
print(f"\tTarget : {example['target']}")
print(f"\tText : {example['text']}")
test_df = df_shuffled.sample(frac=0.1, random_state=42)
train_df = df_shuffled.drop(test_df.index)
print(f"Using {len(train_df)} samples for training and {len(test_df)} for validation")
print(train_df["target"].value_counts())
print(test_df["target"].value_counts())
def create_dataset(dataframe):
dataset = tf.data.Dataset.from_tensor_slices(
(dataframe["text"].to_numpy(), dataframe["target"].to_numpy())
)
dataset = dataset.batch(100)
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
train_ds = create_dataset(train_df)
test_ds = create_dataset(test_df)
sentence_encoder_layer = hub.KerasLayer(
"https://tfhub.dev/google/universal-sentence-encoder/4"
)
inputs = layers.Input(shape=(), dtype=tf.string)
outputs = sentence_encoder_layer(inputs)
preprocessor = keras.Model(inputs=inputs, outputs=outputs)
model_1 = tfdf.keras.GradientBoostedTreesModel(preprocessing=preprocessor)
model_2 = tfdf.keras.GradientBoostedTreesModel()
# Compiling model_1
model_1.compile(metrics=["Accuracy", "Recall", "Precision", "AUC"])
# Here we do not specify epochs as, TF-DF trains exactly one epoch of the dataset
model_1.fit(train_ds)
# Compiling model_2
model_2.compile(metrics=["Accuracy", "Recall", "Precision", "AUC"])
# Here we do not specify epochs as, TF-DF trains exactly one epoch of the dataset
model_2.fit(train_ds)
logs_1 = model_1.make_inspector().training_logs()
print(logs_1)
logs_2 = model_2.make_inspector().training_logs()
print(logs_2)
print("model_1 summary: ")
print(model_1.summary())
print()
print("model_2 summary: ")
print(model_2.summary())
def plot_curve(logs):
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Accuracy")
plt.subplot(1, 2, 2)
plt.plot([log.num_trees for log in logs], [log.evaluation.loss for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Loss")
plt.show()
plot_curve(logs_1)
plot_curve(logs_2)
results = model_1.evaluate(test_ds, return_dict=True, verbose=0)
print("model_1 Evaluation: \n")
for name, value in results.items():
print(f"{name}: {value:.4f}")
results = model_2.evaluate(test_ds, return_dict=True, verbose=0)
print("model_2 Evaluation: \n")
for name, value in results.items():
print(f"{name}: {value:.4f}")
test_df.reset_index(inplace=True, drop=True)
for index, row in test_df.iterrows():
text = tf.expand_dims(row["text"], axis=0)
preds = model_1.predict_step(text)
preds = tf.squeeze(tf.round(preds))
print(f"Text: {row['text']}")
print(f"Prediction: {int(preds)}")
print(f"Ground Truth : {row['target']}")
if index == 10:
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Github
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
df.columns
qgrid_widget = qgrid.show_grid(
df[["Year", "Mine_State", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
df2 = df.groupby('Mine_State').sum()
df3 = df.groupby('Mine_State').sum()
df2.loc['Wyoming', 'Production_short_tons'] = 5.181732e+08
# have to run the next line then restart your kernel
# !cd ../insight; python setup.py develop
%aimport insight.plotting
insight.plotting.plot_prod_vs_hours(df2, color_index=1)
insight.plotting.plot_prod_vs_hours(df3, color_index=0)
def plot_prod_vs_hours(
df, color_index=0, output_file="../img/production-vs-hours-worked.png"
):
fig, ax = plt.subplots(figsize=(10, 8))
sns.regplot(
df["Labor_Hours"],
df["Production_short_tons"],
ax=ax,
color=sns.color_palette()[color_index],
)
ax.set_xlabel("Labor Hours Worked")
ax.set_ylabel("Total Amount Produced")
x = ax.set_xlim(-9506023.213266129, 204993853.21326613)
y = ax.set_ylim(-51476801.43653282, 746280580.4034251)
fig.tight_layout()
fig.savefig(output_file)
plot_prod_vs_hours(df2, color_index=0)
plot_prod_vs_hours(df3, color_index=1)
# make a change via qgrid
df3 = qgrid_widget.get_changed_df()
qgrid_widget = qgrid.show_grid(
df2[["Year", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text classification with movie reviews
Step2: Download the IMDB dataset
Step3: Explore the data
Step4: Let's print the first 10 examples.
Step5: Let's also print the first 10 labels.
Step6: Build the model
Step7: Now let's build the full model.
Step8: These layers are stacked sequentially to build the classifier.
Step9: Create a validation set
Step10: Train the model
Step11: Evaluate the model
Step12: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Step13: There are four entries: one for each metric monitored during training and validation. You can use these to plot the training and validation loss, and the training and validation accuracy, for comparison.
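The validation split used in this notebook is plain array slicing; a minimal numpy sketch of the idea (the arrays below are stand-ins, not real reviews):

```python
import numpy as np

examples = np.arange(25000)   # stand-ins for the 25,000 training reviews
labels = examples % 2         # made-up binary labels
# first 10,000 samples become the validation set, the rest stay for training
x_val, partial_x_train = examples[:10000], examples[10000:]
y_val, partial_y_train = labels[:10000], labels[10000:]
print(partial_x_train.shape, x_val.shape)
```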
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples)))
train_examples[:10]
train_labels[:10]
model = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
model.compile(optimizer='adam',
loss=tf.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])
x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
results = model.evaluate(test_examples, test_labels)
print(results)
history_dict = history.history
history_dict.keys()
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
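The lookup tables asked for in the preprocessing step are just two mirrored dicts; here is a standalone sketch on a made-up snippet of script:

```python
text = "moe: hello homer hello moe:".split()
vocab = sorted(set(text))  # unique tokens in a deterministic order
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
encoded = [vocab_to_int[w] for w in text]
decoded = [int_to_vocab[i] for i in encoded]
assert decoded == text  # the round-trip through ids is lossless
```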
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 40)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab = set(text)
vocab_to_int = { word: i for (i, word) in enumerate(vocab) }
int_to_vocab = { i: word for (i, word) in enumerate(vocab) }
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
PUNCT = {
'Period': '.',
'Comma': ',',
'Quotation': '"',
'Semicolon': ';',
'Exclamation': '!',
'Question': '?',
'LeftParen': '(',
'RightParen': ')',
'Dash': '--',
'Return': '\n'
}
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
lookup = {}
for (punc, token) in PUNCT.items():
symbol = ''.join(['||', punc, '||'])
lookup[token] = symbol
return lookup
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell])
zero_state = rnn_cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(zero_state, name='initial_state')
return (rnn_cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
return tf.contrib.layers.embed_sequence(
input_data,
vocab_size=vocab_size,
embed_dim=embed_dim
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embedding = get_embed(input_data, vocab_size, embed_dim)
rnn_output, rnn_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(
rnn_output,
vocab_size,
activation_fn=None
)
return (logits, rnn_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
batches = []
num_ids_per_batch = batch_size * seq_length
num_batches = len(int_text) // num_ids_per_batch
# the offset to use to determine the next sequence start index
seq_offset = num_batches * seq_length
# create the batches
for batch_index in range(num_batches):
batch = [[], []]
inputs = batch[0]
targets = batch[1]
seq_start = batch_index * seq_length
# create the sequences for this batch
for seq_index in range(batch_size):
seq_end = seq_start + seq_length
# create the next input sequence for this batch
input_seq = int_text[seq_start:seq_end]
inputs.append(input_seq)
# create the next target sequence for this batch
target_seq = int_text[seq_start + 1 : seq_end + 1]
# set the last target to be the first input of the first batch
if batch_index == num_batches - 1 and seq_index == batch_size - 1:
target_seq[-1] = batches[0][0][0][0]
targets.append(target_seq)
seq_start += seq_offset
batches.append(batch)
return np.array(batches)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
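The same input/target batching arithmetic can be sketched with numpy reshape and split on toy ids. Note this compact version skips the wrap-around fix-up of the very last target, so it is an illustration of the layout rather than a drop-in replacement:

```python
import numpy as np

int_text = np.arange(13)        # 13 made-up word ids
batch_size, seq_length = 2, 3
n_batches = len(int_text) // (batch_size * seq_length)
keep = n_batches * batch_size * seq_length
xs = int_text[:keep].reshape(batch_size, -1)       # inputs, one row per sequence slot
ys = int_text[1:keep + 1].reshape(batch_size, -1)  # targets = inputs shifted by one
x_batches = np.split(xs, n_batches, axis=1)
y_batches = np.split(ys, n_batches, axis=1)
print(x_batches[0])  # [[0 1 2] [6 7 8]]
print(y_batches[0])  # [[1 2 3] [7 8 9]]
```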
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 128
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return (input_tensor, initial_state_tensor, final_state_tensor, probs_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
max_prob = np.argmax(probabilities)
next_word = int_to_vocab[max_prob]
return next_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function Code for brute force implementation
Step2: Function code for bidimensional matrix implementation
Step3: Examples
Step4: Numerical examples
Step5: Example 1
Step6: Speed performance
Step7: Equation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def histogram(f):
return np.bincount(f.ravel())
def histogram_eq(f):
from numpy import amax, zeros, arange, sum
n = amax(f) + 1
h = zeros((n,),int)
for i in arange(n):
h[i] = sum(i == f)
return h
def histogram_eq1(f):
import numpy as np
n = f.size
m = f.max() + 1
haux = np.zeros((m,n),int)
fi = f.ravel()
i = np.arange(n)
haux[fi,i] = 1
h = np.add.reduce(haux,axis=1)
return h
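A quick standalone sanity check (illustrative, separate from the notebook's own tests) that a bincount-based histogram agrees with an explicit per-value count:

```python
import numpy as np

f = np.array([[0, 1, 2, 3, 4],
              [4, 3, 2, 1, 1]], 'uint8')

# vectorized, as in histogram()
h_fast = np.bincount(f.ravel())
# explicit per-value loop, as in histogram_eq()
h_slow = np.array([np.sum(f == i) for i in range(f.max() + 1)])
```

Both should give the counts [1, 3, 2, 2, 2] for values 0 through 4.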
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python histogram.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
if testing:
f = np.array([[0,1,2,3,4],
[4,3,2,1,1]], 'uint8')
h = ia.histogram(f)
print(h.dtype)
print(h)
if testing:
h1 = ia.histogram_eq(f)
print(h1.dtype)
print(h1)
if testing:
h1 = ia.histogram_eq1(f)
print(h1.dtype)
print(h1)
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
f = mpimg.imread('../data/woodlog.tif')
plt.imshow(f,cmap='gray')
if testing:
h = ia.histogram(f)
plt.plot(h)
if testing:
import numpy as np
from numpy.random import rand, seed
seed([10])
f = (255 * rand(1000,1000)).astype('uint8')
%timeit h = ia.histogram(f)
%timeit h1 = ia.histogram_eq(f)
%timeit h2 = ia.histogram_eq1(f)
if testing:
print(ia.histogram(np.array([3,7,0,0,3,0,10,7,0,7])) == \
np.array([4, 0, 0, 2, 0, 0, 0, 3, 0, 0, 1]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load up the tptY3 buzzard mocks.
Step2: Load up a snapshot at a redshift near the center of this bin.
Step3: This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
Step4: Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculates the redshift distribution prefactor
Step5: If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now.
Step6: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
Step7: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
Step8: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however, I believe this is not the cause of this issue. The overly large correlation function at large scales should, if anything, increase w(theta).
Step9: Perform the below integral in each theta bin
Step10: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
Step11: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.
|
<ASSISTANT_TASK:>
Python Code:
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
from astropy import constants as const, units as unit
import george
from george.kernels import ExpSquaredKernel
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'
hdulist = fits.open(fname)
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0
print z
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a)
#cat.h = 1.0
#halo_masses = cat.halocat.halo_table['halo_mvir']
cat.load_model(a, 'redMagic')
hdulist.info()
nz_zspec = hdulist[8]
#N = 0#np.zeros((5,))
N_total = np.sum([row[2+zbin] for row in nz_zspec.data])
dNdzs = []
zs = []
W = 0
for row in nz_zspec.data:
N = row[2+zbin]
dN = N*1.0/N_total
#volIn, volOut = cat.cosmology.comoving_volume(row[0]), cat.cosmology.comoving_volume(row[2])
#fullsky_volume = volOut-volIn
#survey_volume = fullsky_volume*area/full_sky
#nd = dN/survey_volume
dz = row[2] - row[0]
#print row[2], row[0]
dNdz = dN/dz
H = cat.cosmology.H(row[1])
W+= dz*H*(dNdz)**2
dNdzs.append(dNdz)
zs.append(row[1])
#for idx, n in enumerate(row[3:]):
# N[idx]+=n
W = 2*W/const.c
print W
N_z = [row[2+zbin] for row in nz_zspec.data]
N_total = np.sum(N_z)#*0.01
plt.plot(zs,N_z/N_total)
plt.xlim(0,1.0)
len(dNdzs)
plt.plot(zs, dNdzs)
plt.vlines(z, 0,8)
plt.xlim(0,1.0)
plt.xlabel(r'$z$')
plt.ylabel(r'$dN/dz$')
len(nz_zspec.data)
np.sum(dNdzs)
np.sum(dNdzs)/len(nz_zspec.data)
W.to(1/unit.Mpc)
4.51077317e-03
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
params['logMmin'] = 13.4
params['sigma_logM'] = 0.1
params['f_c'] = 0.19
params['alpha'] = 1.0
params['logM1'] = 14.0
params['logM0'] = 12.0
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
area = 5063 #sq degrees
full_sky = 41253 #sq degrees
volIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])
fullsky_volume = volOut-volIn
survey_volume = fullsky_volume*area/full_sky
nd_mock = N_total/survey_volume
print nd_mock
nd_mock.value/nd_cat
#compute the mean mass
mf = cat.calc_mf()
HOD = cat.calc_hod()
mass_bin_range = (9,16)
mass_bin_size = 0.01
mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
print mean_host_mass
10**0.35
N_total
theta_bins = np.logspace(np.log10(0.004), 0, 24)#/60
tpoints = (theta_bins[1:]+theta_bins[:-1])/2
r_bins = np.logspace(-0.5, 1.7, 16)
rpoints = (r_bins[1:]+r_bins[:-1])/2
xi = cat.calc_xi(r_bins, do_jackknife=False)
kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
from scipy.stats import linregress
m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
plt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))
#plt.plot(rpoints, b2*(rpoints**m2))
plt.scatter(rpoints, xi)
plt.loglog();
plt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
#plt.loglog();
print m,b
rpoints_dense = np.logspace(-0.5, 2, 500)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.loglog();
theta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks
tpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2.0
rpoints_dense = np.logspace(-1.5, 2, 500)
x = cat.cosmology.comoving_distance(z)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.vlines((a*x*np.radians(tpoints_rm)).value, 1e-2, 1e4)
plt.vlines((a*np.sqrt(x**2*np.radians(tpoints_rm)**2+unit.Mpc*unit.Mpc*10**(1.7*2))).value, 1e-2, 1e4, color = 'r')
plt.loglog();
x = cat.cosmology.comoving_distance(z)
print x
np.radians(tpoints_rm)
#a subset of the data from above. I've verified it's correct, but we can look again.
wt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))
tpoints_rm
mathematica_calc = np.array([122.444, 94.8279, 73.4406, 56.8769, 44.049, 34.1143, 26.4202, \
20.4614, 15.8466, 12.2726, 9.50465, 7.36099, 5.70081, 4.41506, \
3.41929, 2.64811, 2.05086, 1.58831, 1.23009, 0.952656])#*W
print W.value
print W.to("1/Mpc").value
print W.value
from scipy.special import gamma
def wt_analytic(m,b,t,x):
return W.to("1/Mpc").value*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )
plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints_rm, wt_analytic(m,10**b, np.radians(tpoints_rm), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h
x = cat.cosmology.comoving_distance(z)*a
#ubins = np.linspace(10**-6, 10**2.0, 1001)
ubins = np.logspace(-6, 2.0, 51)
ubc = (ubins[1:]+ubins[:-1])/2.0
#NLL
def liklihood(params, wt_redmagic,x, tpoints):
#print _params
#prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])
#print param_names
#print prior
#if not np.all(prior):
# return 1e9
#params = {p:v for p,v in zip(param_names, _params)}
#cat.populate(params)
#nd_cat = cat.calc_analytic_nd(parmas)
#wt = np.zeros_like(tpoints_rm[:-5])
#xi = cat.calc_xi(r_bins, do_jackknife=False)
#m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
#if np.any(xi < 0):
# return 1e9
#kernel = ExpSquaredKernel(0.05)
#gp = george.GP(kernel)
#gp.compute(np.log10(rpoints))
#for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):
# int_xi = 0
# for ubin_no, _u in enumerate(ubc):
# _du = ubins[ubin_no+1]-ubins[ubin_no]
# u = _u*unit.Mpc*a
# du = _du*unit.Mpc*a
#print np.sqrt(u**2+(x*t_med)**2)
# r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model.
# int_xi+=du*0
#else:
# the GP predicts in log, so i predict in log and re-exponate
# int_xi+=du*(np.power(10, \
# gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))
# int_xi+=du*(10**b)*(r.to("Mpc").value**m)
#print (((int_xi*W))/wt_redmagic[0]).to("m/m")
#break
# wt[bin_no] = int_xi*W.to("1/Mpc")
wt = wt_analytic(params[0],params[1], tpoints, x.to("Mpc").value)
chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )
#chi2=0
#print nd_cat
#print wt
#chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)
#mf = cat.calc_mf()
#HOD = cat.calc_hod()
#mass_bin_range = (9,16)
#mass_bin_size = 0.01
#mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
#mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
# np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
#chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)
print chi2
return chi2 #nll
print nd_mock
print wt_redmagic[:-5]
import scipy.optimize as op
results = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))
results
#plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
plt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to("Mpc").value), label = 'Mathematica Calc')
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
plt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
np.array([v for v in params.values()])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: import
Step2: Aside
Step3: The rest of this tutorial uses v to refer to current Veneer client
Step4: You can also perform some basic topological queries on the network, such as finding all network outlets, or finding upstream/downstream links for a given node
Step5: If you have GeoPandas installed, you can convert network to a GeoDataFrame, which is useful for visualisation, reporting and advanced filtering.
Step6: With a GeoDataFrame, you can filter against any column
Step7: Querying Functions and Variables
Step8: Both v.functions() and v.variables() return a searchable list
Step9: Note
Step10: v.variables() only returns summary information about the variable. If you want to get at the actual time series (or the piecewise linear function or similar), use v.variable_time_series(name) (or v.variable_piecewise(name))
Step11: Note
Step12: Using .as_dataframe() can help to interpret the information
Step13: This highlights that the Items column has nested information
Step14: So, v.data_sources() returns information about all data sources. Specifically, it returns the list of data sources, and, for each one, information about the different time series. However, v.data_sources does not return the entire time series. (This is mainly for speed - on big models, this can be HUGE)
Step15: You can see that the time series are included, but they are still nested within the returned data. Essentially, Veneer has returned the data for each input set, with each being a separate Pandas DataFrame.
Step16: You can jump to a particular data source item (typically a column from an original file) with v.data_source_item(source,name)
Step17: Input Sets
Step18: Within a given input set, the individual commands are available in the Configuration property
Step19: Input sets are a lot more interesting when you are changing them and applying them from Python...
Step20: After a successful run, veneer-py returns the URL of the new run - this can be used to retrieve an index of all the available results and, from there, the actual time series results
Step21: This index lists all of the time series (under ['Results']) and can be used to retrieve individual time series, or collections of time series
Step22: The easiest way to retrieve time series results is with v.retrieve_multiple_time_series, which takes a set of run results, some search criteria by which it identifies wanted time series. There are also options for retrieve aggregated data and the ability to rename time series as they are retrieved.
Step23: The following examples retrieve data from the latest run, using the index index we saved, earlier
Step24: Other run related actions
Step25: Note
Step26: Manipulating model setup and the ‘Allow Scripts’ option
Step27: At this point, you can check in the Source user interface (Edit|Scenario Input Sets) to see that the input set has been updated. Alternatively, you can query the input set again
Step28: The Default Input Set would be applied when we next run, but if we want those parameters set on the model immediately, we can do so, with v.apply_input_set('Default Input Set')
Step29: v.model and the Allow Scripts option
Step30: This shows that, for the example model, there are three Straight Through Routing links and two Storage Routing links.
Step31: PEST and Parallel processing
Step32: Each copy of the Veneer command line is a copy of the Source engine with a Veneer server.
|
<ASSISTANT_TASK:>
Python Code:
import veneer
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib inline
plt.plot(range(10))
%matplotlib notebook
plt.plot(range(10))
%matplotlib inline
v = veneer.Veneer()
# Equiavelent to
# v = veneer.Veneer(host='localhost',port=9876)
the_network = v.network()
all_nodes = the_network['features'].find_by_feature_type('node')
len(all_nodes)
all_nodes._all_values('name')
outlets = the_network.outlet_nodes()
outlets
the_network.upstream_links(outlets[0])
network_df = the_network.as_dataframe()
network_df
network_df[network_df.feature_type=='node']
network_df[network_df.icon=='/resources/StorageNodeModel']
model_functions = v.functions()
model_functions
model_variables = v.variables()
model_variables
model_variables.find_by_Name('$Runoff')
model_variables.as_dataframe()
runoff_ts = v.variable_time_series('$Runoff')
runoff_ts.plot()
mfr_piecewise = v.variable_piecewise('MFR_Piecewise')
mfr_piecewise
data_sources = v.data_sources()
data_sources
data_sources.as_dataframe()
data_sources[0]['Items']
forcing = v.data_source('ForcingData_csv')
forcing
forcing_as_df = forcing['Items'].find_one_by_Name('Default Input Set')['Details']
forcing_as_df[0:30] # Look at the first 30 time steps
rainfall = v.data_source_item('ForcingData_csv','Rainfall')
rainfall[0:30]
input_sets = v.input_sets()
input_sets
input_sets.find_one_by_Name('Default Input Set')['Configuration']
response,run_url = v.run_model()
response,run_url
run_url
index = v.retrieve_run(run_url)
index
index['Results'].as_dataframe()
help(v.retrieve_multiple_time_series)
# Retrieve Downstream Flow Volume, anywhere it is recorded.
# Name the columns for the network location
ds_flow = v.retrieve_multiple_time_series(run_data=index,criteria={'RecordingVariable':'Downstream Flow Volume'},
name_fn=veneer.name_for_location)
ds_flow[0:10]
# Retrieve everything available from the Water User node
# Name the columns for the variable
water_user = v.retrieve_multiple_time_series(run_data=index,criteria={'NetworkElement':'Water User'},
name_fn=veneer.name_for_variable)
water_user[0:10]
v.drop_all_runs()
v.configure_recording(enable=[{'RecordingVariable':'Storage Surface Area'}])
v.run_model()
surface_area = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Storage Surface Area'})
surface_area.plot()
v.drop_all_runs()
default_input_set = v.input_sets()[0]
default_input_set
command_of_interest = default_input_set['Configuration'][0]
command_of_interest
default_input_set['Configuration'][0] = command_of_interest.replace('25 m','50 m')
default_input_set
v.update_input_set('Default Input Set',default_input_set)
v.input_sets()
v.apply_input_set('Default Input Set')
v.model.link.routing.get_models()
v.model.link.routing.get_models(by_name=True)
from veneer.manage import start
help(start)
help(v.network)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hematopoiesis
Step2: Defining properties of stem cells
Step3: Efficiency of the CFUs
Step4: Converting to actual CFU numbers
Step5: What's the catch here?
Step6: Instead
Step7: Conclusion
Step9: Cumulative distribution function
Step10: Figure 4 from the paper
Step11: Comparison to a Possion distribution
Step12: Comparison to a $\Gamma$-distribution
Step15: Uncertainty revisited
Step16: Create some trees/colonies, calculate number of CFUs within
Step17: Distribution of #CFUs over time
Step18: Last generation
Step19: <img src="figures/ExperimentData.png" alt="Drawing" style="width
Step20: Trying to reproduce Figure 8
Step21: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import scipy.stats as stats
from scipy.stats import binom
from __future__ import division
%pylab
%matplotlib inline
import seaborn as sns
plt.plot([1,2,3], [2,3,5])
pylab.rcParams['figure.figsize'] = 12, 6
nCells = 1e6
nDays = 10
nDivisions = np.log2(nCells)
time_per_division = nDays * 24 / nDivisions
print "#Generations/Divisions: %d\n\nAverage generation time %.2f (hours)" %(round(nDivisions), time_per_division)
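As a standalone recomputation of the numbers printed above: reaching 10^6 cells from one founder takes log2(10^6) ≈ 19.93 doublings, so ten days of growth implies roughly 12-hour generations:

```python
import math

n_cells = 1e6
n_divisions = math.log(n_cells, 2)      # ~19.93 doublings from a single cell
gen_time_hours = 10 * 24 / n_divisions  # average generation time over 10 days
```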
colonies_per_spleen = dict()
colonies_per_spleen["exp1"] = np.concatenate([np.repeat(0,24),
np.repeat(1,6),
np.array([2,6,19,21,23,36])
])
colonies_per_spleen["exp2"] = np.concatenate([np.repeat(0,3),
np.repeat(1,3),
np.repeat(2,2),
np.array([3,3, 20,20, 32])
])
colonies_per_spleen["exp3"]= np.concatenate([np.repeat(0,12),
np.repeat(1,8),
np.repeat(2,5),
np.repeat(3,2),
np.array([4,5,5,7,8,8,11,13,20,23,29,46])
])
fraction = 0.17 # the fraction of CFUs that actually go to the spleen
#just divide each dict entry by 'fraction'
tmpFunc = lambda x: np.ceil(x/fraction)
# apply this function to each dictionary entry
trueCFUs = {k: tmpFunc(v) for k,v in colonies_per_spleen.items()}
print "#Colonies per spleen"
print colonies_per_spleen["exp1"]
print "Estimated #CFU per spleen"
print trueCFUs["exp1"]
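As a standalone check of the scaling step above: with f = 0.17, a spleen colony count of k corresponds to an estimated ceil(k / 0.17) injected CFUs:

```python
import numpy as np

fraction = 0.17
colonies = np.array([1, 2, 36])
estimated_cfus = np.ceil(colonies / fraction)  # 1 -> 6, 2 -> 12, 36 -> 212
```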
#observations
N = 100
k = 10
#inference
tmpP = np.linspace(0,1,100)
like_theta = binom.pmf(k,n=N,p=tmpP)
prior_theta = 1 # simply constant
posterior_theta = like_theta * prior_theta/ np.sum(like_theta * prior_theta)
plot(tmpP,posterior_theta)
xlabel('theta'), ylabel('Posterior(theta|k)'), xlim([0,0.4])
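For a binomial likelihood with a flat prior, this grid posterior has a known closed form: Beta(k+1, N-k+1), whose mode is k/N. A minimal numpy-only check (the binomial coefficient cancels on normalization, so it is omitted) that the grid computation recovers the analytic mode:

```python
import numpy as np

N, k = 100, 10
theta = np.linspace(0, 1, 1001)
# binomial likelihood up to a constant factor C(N, k)
like = theta**k * (1 - theta)**(N - k)
post = like / like.sum()          # flat prior, normalized on the grid
grid_mode = theta[np.argmax(post)]  # should be k/N = 0.1
```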
n = np.arange(300)
data = unique(colonies_per_spleen["exp1"]) #just considering the first experiment
posteriorArray = np.zeros((len(data), len(n)))
for i,d in enumerate(data):
likelihood_n = binom.pmf(d, n, fraction)
prior_n = 1 #constant
posterior_n = likelihood_n * prior_n / np.sum(likelihood_n*prior_n)
posteriorArray[i,:] = posterior_n
plot(n,posteriorArray.T)
xlabel('N'), ylabel('Posterior(N|k,f)'), legend(data), ylim([0, 0.1])
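The posterior over N above peaks near the intuitive estimate k/f: for a binomial with known detection probability f, the likelihood in N is maximized at floor(k/f). A standalone check without scipy, using an unnormalized log-likelihood (the k*log(f) and log(k!) terms are constant in n and dropped):

```python
import math
import numpy as np

f, k = 0.17, 36
ns = np.arange(k, 301)
# log C(n, k) + (n - k) * log(1 - f), up to an additive constant
ll = np.array([math.lgamma(n + 1) - math.lgamma(n - k + 1) for n in ns]) \
     + (ns - k) * math.log(1 - f)
n_hat = int(ns[np.argmax(ll)])    # expected: floor(36 / 0.17) = 211
```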
#normal hists
binning = np.arange(0,300,step=5)
plt.figure()
for exp,data in trueCFUs.iteritems():
plt.hist(data, bins=binning, histtype="stepfilled", alpha=.3)
plt.legend(trueCFUs.keys())
xlabel('#CFU'), ylabel('Frequency')
def ecdf(x):
    """calculate the empirical distribution function of data x"""
sortedX=np.sort( x )
yvals=np.arange(len(sortedX))/float(len(sortedX))
return sortedX, yvals
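A standalone check of the `ecdf` helper: it returns the sorted data together with step heights 0, 1/n, ..., (n-1)/n:

```python
import numpy as np

def ecdf(x):
    # same logic as the helper above, restated for a self-contained check
    sorted_x = np.sort(x)
    yvals = np.arange(len(sorted_x)) / float(len(sorted_x))
    return sorted_x, yvals

x, y = ecdf(np.array([3.0, 1.0, 2.0]))
```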
# plot the CDF for each experiment individually
for exp,data in trueCFUs.iteritems():
x,y = ecdf(data)
plt.plot(x,y)
xlabel('#CFUs'), ylabel('cumulative density')
#put all three together
trueCFUs_all = np.concatenate(trueCFUs.values())
x,y = ecdf(trueCFUs_all)
plt.plot(x,y, linewidth=5)
leg= trueCFUs.keys()
leg.append('all')
plt.legend(leg)
plt.figure() # the figure as presented in the paper (scaled back to colonies)
plt.plot(x*0.17,y*len(trueCFUs_all), linewidth=2)
xlim([0,25])
ylim([0,120])
title('as in the paper')
mu = np.mean(trueCFUs_all)
sigma= np.std(trueCFUs_all)
print "Mean: %f Fano factor %f" % (mu,sigma**2/mu)
# put the poisson distribution as a comparision which has the same mean as the data
x_ECDF,y_ECDF = ecdf(trueCFUs_all)
x = np.arange(0,100)
y =stats.poisson.cdf(x,mu)
plt.subplot(1,2,1)
plt.plot(x_ECDF, y_ECDF, linewidth=2)
plt.plot(x,y, linewidth=2)
plt.xlim([0,300])
plt.xlabel('#CFUs'), plt.ylabel('CDF'), plt.legend(['Data','Poisson fit'])
# DO THE PDF as well, much easier to see
#plt.figure()
plt.subplot(1,2,2)
sns.distplot(trueCFUs_all,norm_hist=True, kde=False)
plt.plot(x,stats.poisson.pmf(x,mu), linewidth=2)
plt.xlabel('#CFUs'), plt.ylabel('CDF'), plt.legend(['Poisson fit', 'Data'])
plt.subplot(1,2,1)
sns.distplot(trueCFUs_all, fit =stats.gamma, kde=False)
xlim([0, 300])
plt.xlabel('#CFUs'), plt.ylabel('PDF'), plt.legend(['Gamma fit', 'Data'])
# lets look at the CDF as well
shape,loc,scale = stats.gamma.fit(trueCFUs_all)
plt.title('Fitted Gamma(Shape=%.2f, scale=%.2f)' % (shape,scale))
x_Gamma = np.arange(0,200)
y_Gamma = stats.gamma(shape,scale=scale).cdf(x_Gamma)
plt.subplot(1,2,2)
plt.plot(x_ECDF, y_ECDF, linewidth=2)
plt.plot(x_Gamma, y_Gamma)
plt.xlabel('#CFUs'), plt.ylabel('CDF'), plt.legend(['Data', 'Gamma fit'])
def simulateTree(p0, nGens):
simulates a single tree from the Till/McCulloch model
inputs:
- p0: probability that a single cell undergoes terminal
differentiation (i.e. no more division)
- nGens: number of generations to simulate
returns:
- a list (one element per generation) of single cells
present at that generation.
- a single element is just an array of cells present at that time
(zeros for stem cells, 1s for differentiated cells).
# cell state is either 0 (stem cell) or 1 (differentiated),
# which is the only thing we keep track of here
theGenerations = list()
theGenerations.append(np.array(0))
for g in range(nGens):
lastGen = theGenerations[-1]
# for each of the last generation, roll a dice whether it terminally diffs
newState = roll_the_dice(lastGen, p0)
#all the zeros divide, the 1's just stay
n0 = sum(newState==0)
n1 = sum(newState==1)
nextGen = np.concatenate([np.repeat(0, 2*n0), np.repeat(1,n1)])
theGenerations.append(nextGen)
return theGenerations
def roll_the_dice(cellstate_array, p0):
    """
    decide if a cell goes from 0->1 (with probability p0);
    does that for an entire vector of zeros and ones in parallel
    """
# helper function so that we can index into it via generation
# makes sure that as soon as cell_state==1 it wont change anymore
tmpP = np.array([p0, 1])
p = tmpP[cellstate_array]
r = np.random.rand(cellstate_array.size)
newGeneration = r<p
return newGeneration.astype(int)
def pretty_print_tree(tree):
for gen in tree:
print gen
p0 = 0.4
nGens = 5
aTree = simulateTree(p0,nGens)
bTree = simulateTree(p0,nGens)
print 'First tree'
pretty_print_tree(aTree)
print 'Second tree'
pretty_print_tree(bTree)
#CFUs per generation
print '\n#CFUs, first tree'
print map(lambda x: sum(x==0), aTree)
print '\n#CFUs, second tree'
print map(lambda x: sum(x==0), bTree)
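The model above is a Galton-Watson branching process: each stem cell either divides into two stem cells with probability 1-p0 or differentiates with probability p0, so the expected number of CFUs after g generations is (2*(1-p0))**g. A compact Monte-Carlo check of this expectation, using binomial draws per generation rather than the per-cell loop above:

```python
import numpy as np

np.random.seed(0)
p0, nGens, nTrees = 0.4, 10, 5000
counts = np.ones(nTrees, dtype=int)                # one founding stem cell per tree
for _ in range(nGens):
    dividing = np.random.binomial(counts, 1 - p0)  # stem cells that do NOT differentiate
    counts = 2 * dividing                          # each of those divides into two stem cells
mc_mean = counts.mean()
analytic_mean = (2 * (1 - p0)) ** nGens            # = 1.2**10 ~ 6.19
```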
nTrees = 1000
nGens = 20
p0 = 0.4
# assemble a matrix of tree vs #CFUs(t) (one row, one tree; one col one timepoint)
cfus_over_time = np.zeros((nTrees, nGens+1))
for i in range(nTrees):
tree = simulateTree(p0, nGens)
nCFU = map(lambda x: np.sum(x==0), tree)
cfus_over_time[i,:] = nCFU
print cfus_over_time[0:5,:].astype(int)
import seaborn as sns
plt.plot(log10(cfus_over_time.T))
plt.xlabel('Generation')
plt.ylabel("log10(#CFU)")
# violin plot
plt.figure()
sns.violinplot((log10(cfus_over_time+1)), inner='points')
plt.xlabel('Generation')
plt.ylabel("log10(#CFU+1)")
lastGen_nCFUs = cfus_over_time[:,-1]
plt.subplot(1,2,1)
plt.hist(lastGen_nCFUs, bins=50)
xlabel('#CFUs')
ylabel('Frequency in last generation')
# CDF
plt.subplot(1,2,2)
x_ECDF_lastGen,y_ECDF_lastGen = ecdf(lastGen_nCFUs)
plt.plot(x_ECDF_lastGen, y_ECDF_lastGen, linewidth=2)
xlabel('#CFUs')
ylabel('CDF')
shape, loc, scale = stats.gamma.fit(lastGen_nCFUs)
fittedGamma = stats.gamma(a=shape, loc=loc, scale=scale)
print "Shape: %.3f, Location: %.3f, Scale: %.3f" %(shape, loc, scale)
# histogram of the simulated data
n,bins = np.histogram(lastGen_nCFUs, bins=arange(0, 300, step=5))
# pdf, cdf
fittedPDF, fittedCDF = fittedGamma.pdf(bins[:-1]), fittedGamma.cdf(bins[:-1])
plt.subplot(1,2,1)
plt.plot(x_ECDF, y_ECDF, linewidth=2)
plt.plot(bins[:-1], fittedCDF)
xlabel('#CFU'), ylabel('CDF'), legend(['synthetic Data','Gamma Fit'])
## the PDF derived from the cdf
plt.subplot(1,2,2)
plt.plot(bins[:-1],n/n.sum())
plt.plot(bins[:-1], diff(np.insert(fittedCDF, 0, 0)))
#estimates, based on matching means an vars
gamma_mean, gamma_var = fittedGamma.stats(moments='mv')
sample_mean, sample_var = np.mean(lastGen_nCFUs), np.var(lastGen_nCFUs)
myShape = sample_mean**2/sample_var
myScale = sample_var/sample_mean
print myShape,myScale
#fittedGamma = stats.gamma(a=myShape, loc=0, scale=myScale)
print "Sample Mean: %.2f Gamma Mean: %.2f" % (sample_mean,gamma_mean)
print "Sample Var: %.2f Gamma Var: %.2f" % (sample_var,gamma_var)
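The moment-matching identities used above (shape = mean^2/var, scale = var/mean) follow from the Gamma distribution's mean k*theta and variance k*theta^2. A quick numerical check on synthetic Gamma data with known parameters:

```python
import numpy as np

np.random.seed(1)
true_shape, true_scale = 2.5, 40.0
samples = np.random.gamma(true_shape, true_scale, size=200000)

m, v = samples.mean(), samples.var()
est_shape = m**2 / v   # should recover ~2.5
est_scale = v / m      # should recover ~40.0
```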
nGens = 20
p0 = 0.4
nTrees_fig8 = 71
fig8_cfu_20 = np.zeros((nTrees_fig8))
for i in range(nTrees_fig8):
tree = simulateTree(p0, nGens)
nCFU = map(lambda x: np.sum(x==0), tree)
fig8_cfu_20[i] = nCFU[-1]
xTmp, yTmp = ecdf(fig8_cfu_20)
plot(xTmp, yTmp*nTrees_fig8)
#fit by gamma
shape, loc, scale = stats.gamma.fit(fig8_cfu_20)
fittedGamma_synthData = stats.gamma(a=shape, loc=loc, scale=scale)
fittedCDF = fittedGamma_synthData.cdf(xTmp[:-1])
plot(xTmp[:-1], fittedCDF*nTrees_fig8)
# Simulate a bunch of trees and get the #CFUs over time
def sim_cfu_over_time(p0, nTrees, nGens):
cfus_over_time = np.zeros((nTrees, nGens+1))
for i in range(nTrees):
tree = simulateTree(p0, nGens)
nCFU = map(lambda x: np.sum(x==0), tree)
cfus_over_time[i,:] = nCFU
return cfus_over_time
# calculate a KS test between observed and simulated dists
def distance_function(obs, sim):
K,pval = stats.ks_2samp(obs,sim)
return K, pval
#return np.abs(np.mean(obs)- np.mean(sim))
p0_grid = np.linspace(0, 1, 50)
distances, pvals = np.zeros(p0_grid.shape), np.zeros(p0_grid.shape)
observedData = trueCFUs_all # experimental data after 10 days/20divisions
for i, p0 in enumerate(p0_grid):
simData = sim_cfu_over_time(p0, nTrees = 100, nGens= 20)
simData_lastGen = simData[:,-1]
distances[i], pvals[i] = distance_function(obs= observedData, sim=simData_lastGen)
def plot_fitted_stoch_model(p0_array, K_array, pval_array, observedData):
bestP0 = p0_array[argmin(K_array)]
plt.subplot(2,2,1)
plt.plot(p0_array, K_array)
plt.vlines(bestP0,0,1)
xlabel('p0'), ylabel('KS distance')
plt.subplot(2,2,2)
plt.plot(p0_array, log10(pvals))
xlabel('p0'), ylabel('KS pval')
bestpval = pval_array[argmin(K_array)]
title(bestpval)
plt.subplot(2,2,3)
simData = sim_cfu_over_time(p0=0.4, nTrees = 100, nGens= 20)
x,y =ecdf(observedData)
plt.plot(x, y)
x,y =ecdf(simData[:,-1])
plt.plot(x, y)
xlabel('#CFU'), ylabel('CDF')
legend(['observed','simulated'])
binning = np.arange(0,300,step=5)
plt.subplot(2,2,4)
plt.hist(observedData, bins=binning, histtype="stepfilled", alpha=.5)
plt.hist(simData[:,-1], bins=binning, histtype="stepfilled", alpha=.5)
xlabel('#CFU'), ylabel('PDF')
plot_fitted_stoch_model(p0_grid, distances, pvals, observedData)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From Pyomo, we use the following procedures for writing our LP models
Step3: Creating random 2D discrete measures
Step4: And to create two random measures over a grid of size $32 \times 32$, the first with 300 support points, and the second with 200 support points
Step5: Later on, we need a cost function between a pair of support points defined in $\mathbb{R}^2$.
Step6: Solving the Bipartite model
Step7: In order to compute the distance between the two measures, we only have to call our function with the two discrete measures randomly defined before.
Step8: In the following, we look at a different LP model which approximately solves an equivalent problem.
Step9: For instance, if we want to build a small set of coprimes locations, we could use $L=3$.
Step10: Using the set of coprimes pair, we can build our small flow network as follows.
Step11: We add also a function to plot a network in the plane
Step12: Let's start with a couple of small examples, to get an idea of the networks that are built.
Step13: Note that the degree of every node is limited, and much smaller than in the case where every node (location) is connected with every other possible location, as in the bipartite graphs.
Step14: Solving the Flow Problem
Step15: At this point, we can compute the distance between the same pair of discrete measures $Mu$ and $Nu$ defined before, but using our new LP model.
Step16: Note that in this case, the difference in running time is limited, and it is dominated by the time for building the model (Pyomo is not very fast in this respect, despite its flexibility).
|
<ASSISTANT_TASK:>
Python Code:
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("glpk") or os.path.isfile("glpk")):
if "google.colab" in sys.modules:
!apt-get install -y -qq glpk-utils
else:
try:
!conda install -c conda-forge glpk
except:
pass
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import RangeSet, ConstraintList, NonNegativeReals
import numpy as np
# Support function to normalize a numpy vector
Normalize = lambda x: x/sum(x)
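As a quick sanity check, a normalized weight vector should be non-negative and sum to one. A minimal standalone sketch (re-declaring `Normalize` so it runs on its own):

```python
import numpy as np

# Re-declared here so the snippet is self-contained
Normalize = lambda x: x / sum(x)

w = Normalize(np.random.uniform(0, 1, size=10))
total = w.sum()  # numerically 1.0
```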
# We define a data type for 2D-histograms as defined before
class Measure2D(object):
def __init__(self, N, M=32, seed=13):
'''Default constructor: N random support points.'''
# Fix the seed for debugging
np.random.seed(seed)
# N random weights
self.W = Normalize(np.random.uniform(0, 1, size=N))
# N random support points
x = set()
while len(x) < N:
x.add((np.random.randint(1, M),
np.random.randint(1, M)))
self.X = list(x)
# Map point to weight
self.D = {}
for i in range(N):
self.D[self.X[i]] = self.W[i]
# Create two random measures
GridSize = 32
Mu = Measure2D(300, GridSize, seed=13)
Nu = Measure2D(200, GridSize, seed=14)
from math import sqrt
# Euclidean distance in the plane
Cost = lambda x, y: sqrt((x[0] - y[0])**2 + (x[1] - y[1])**2)
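A quick check of the cost function on a 3-4-5 right triangle (standalone sketch, restating `Cost`):

```python
from math import sqrt

# Same Euclidean cost as above, restated so this runs on its own
Cost = lambda x, y: sqrt((x[0] - y[0])**2 + (x[1] - y[1])**2)

d = Cost((0, 0), (3, 4))  # 5.0
```

Note that the cost is symmetric in its two arguments, as required for a distance.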
# Others useful libraries
from time import time
# Second, we write a function that implements the model, solves the LP,
# and returns the KW distance along with an optimal transport plan.
def BipartiteDistanceW1_L2(Mu, Nu):
t0 = time()
# Main Pyomo model
model = ConcreteModel()
# Parameters
model.I = RangeSet(len(Mu.X))
model.J = RangeSet(len(Nu.X))
# Variables
model.PI = Var(model.I, model.J, within=NonNegativeReals)
# Objective Function
model.obj = Objective(
expr=sum(model.PI[i,j] * Cost(Mu.X[i-1], Nu.X[j-1]) for i,j in model.PI))
# Constraints on the marginals
model.Mu = Constraint(model.I,
rule = lambda m, i: sum(m.PI[i,j] for j in m.J) == Mu.W[i-1])
model.Nu = Constraint(model.J,
rule = lambda m, j: sum(m.PI[i,j] for i in m.I) == Nu.W[j-1])
# Solve the model
sol = SolverFactory('glpk').solve(model)
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
return None
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
return None
return model.obj(), time()-t0
# Compute distance and runtime
distA, runtimeA = BipartiteDistanceW1_L2(Mu, Nu)
print("Optimal distance: {}, runtime: {}".format(distA, runtimeA))
# Given a value of the parameter L, build the corresponding co-primes set
def CoprimesSet(L):
from numpy import gcd
Cs = []
for v in range(-L, L+1):
for w in range(-L, L+1):
if (not (v == 0 and w == 0)) and gcd(v, w) == 1:
Cs.append((v, w))
return Cs
L = 3
Cs = CoprimesSet(L)
print(Cs)
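The number of admissible directions grows quickly with $L$; a standalone sketch (restating the construction with `math.gcd`) that counts them for a few values:

```python
from math import gcd

def coprimes_set(L):
    # All (v, w) with |v|, |w| <= L, (v, w) != (0, 0) and gcd(v, w) == 1
    return [(v, w)
            for v in range(-L, L + 1)
            for w in range(-L, L + 1)
            if (v, w) != (0, 0) and gcd(v, w) == 1]

sizes = {L: len(coprimes_set(L)) for L in (1, 2, 3)}
# {1: 8, 2: 16, 3: 32}
```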
# Import the graph library NetworkX
import networkx as nx
# Build a Network using a precomputed set of pair of coprimes numbers
def BuildGridNetwork(N, Coprimes):
def ID(x,y):
return x*N+y
G = nx.DiGraph()
for i in range(N):
for j in range(N):
G.add_node(ID(i,j), pos=(i,j))
for i in range(N):
for j in range(N):
for (v, w) in Coprimes:
if i + v >= 0 and i + v < N and j + w >= 0 and j + w < N:
G.add_edge(ID(i,j), ID(i+v, j+w),
weight=sqrt(pow(v, 2) + pow(w, 2)))
return G
# Plot a grid network (nodes must have coordinates position labels)
def PlotGridNetwork(G, name=""):
import matplotlib.pyplot as plt
plt.figure(3,figsize=(8, 8))
plt.axis('equal')
pos = nx.get_node_attributes(G, 'pos')
nx.draw(G, pos, font_weight='bold', node_color='blue',
arrows=True, arrowstyle='->', arrowsize=15, width=1, node_size=200)
# If a name is specified, save the plot in a file
if name:
plt.savefig("grid_{}.png".format(name), format="PNG")
L = 2
Cs = CoprimesSet(L)
G1 = BuildGridNetwork(8, Cs)
PlotGridNetwork(G1)
L = 3
Cs = CoprimesSet(L)
G1 = BuildGridNetwork(8, Cs)
PlotGridNetwork(G1)
def ApproximateDistanceW1_L2(Mu, Nu, G):
t0 = time()
# Number of egdes
m = len(G.edges())
# Main Pyomo model
model = ConcreteModel()
# Parameters
model.E = RangeSet(m)
# Variables
model.PI = Var(model.E, within=NonNegativeReals)
# Map edges to cost
C = np.zeros(m)
M = {}
for e, (i, j) in enumerate(G.edges()):
C[e] = G.edges[i,j]['weight']
M[i,j] = e+1
# Objective Function
model.obj = Objective(expr=sum(model.PI[e] * C[e-1] for e in model.PI))
# Flow balance constraints (using marginals balance at each location)
model.Flow = ConstraintList()
for v in G.nodes():
Fs = [M[w] for w in G.out_edges(v)]
Bs = [M[w] for w in G.in_edges(v)]
# Compute flow balance value at given node position
x = G.nodes[v]['pos']
b = Mu.D.get(x, 0.0) - Nu.D.get(x, 0.0)
# Flow balance constraint
model.Flow.add(expr = sum(model.PI[e] for e in Fs) - sum(model.PI[e] for e in Bs) == b)
# Solve the model
sol = SolverFactory('glpk').solve(model)
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
return None
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
return None
return model.obj(), (time()-t0)
# We build a flow network of size 32x32, using L=3
L = 3
Cs = CoprimesSet(L)
G = BuildGridNetwork(GridSize, Cs)
# Compute distance and runtime with the approximate model
distB, runtimeB = ApproximateDistanceW1_L2(Mu, Nu, G)
# ... and to compare with previous solution
print("LB Full = {:.5}, LB Apx = {:.5}".format(distA, distB))
print("Time Full = {:.5}, Time Apx = {:.5}".format(runtimeA, runtimeB))
# Create two random measures
GridSize = 32
N = 900
Mu = Measure2D(N, GridSize, seed=13)
Nu = Measure2D(N, GridSize, seed=14)
# Compute distance and runtime
distA, runtimeA = BipartiteDistanceW1_L2(Mu, Nu)
# We build a flow network of size 32x32, using L=3
L = 3
Cs = CoprimesSet(L)
G = BuildGridNetwork(GridSize, Cs)
# Compute distance and runtime with the approximate model
distB, runtimeB = ApproximateDistanceW1_L2(Mu, Nu, G)
# ... and compare with the previous solution's optimal value and runtime
print("LB Full = {:.5}, LB Apx = {:.5}".format(distA, distB))
print("Time Full = {:.5}, Time Apx = {:.5}".format(runtimeA, runtimeB))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Name generation with LSTM
Step2: Classical neural networks, including convolutional ones, suffer from two severe limitations
Step3: The simplest way to use the Keras LSTM model to make predictions is to start with a seed sequence as input, generate the next character, then update the seed sequence by appending the generated character and trimming off the first character.
|
<ASSISTANT_TASK:>
Python Code:
'''
Trains a simple deep NN on the MNIST dataset.
You can get to 98.40% test accuracy after 20 epochs.
'''
from __future__ import print_function
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.utils import np_utils
batch_size = 128
nb_classes = 10
nb_epoch = 10
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
# print model characteristics
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
history = model.fit(X_train,
Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
verbose=1,
validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('\n')
print('Test score:', score[0])
print('Test accuracy:', score[1])
from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
import numpy as np
import random
import sys
import codecs
f = codecs.open('data/NombresMujerBarcelona.txt', "r", "utf-8")
#f = codecs.open('data/toponims.txt', "r", "utf-8")
string = f.read()
string.encode('utf-8')
text = string.lower()
# text = text.replace("\n", " ")
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 20
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
# build the model
print('Build model...')
model = Sequential()
model.add(LSTM(64,
dropout=0.2,
recurrent_dropout=0.2,
input_shape=(maxlen, len(chars))))
#model.add(LSTM(64,
# dropout_W=0.2,
# dropout_U=0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
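To see what the diversity (temperature) parameter does, here is a standalone sketch: at very low temperature the sampler is nearly greedy, almost always picking the most probable character, while at higher temperature it spreads probability across all characters. The 3-class distribution is made up for illustration.

```python
import numpy as np

def sample(preds, temperature=1.0):
    # Same helper as above: rescale log-probabilities by temperature,
    # renormalize, and draw a single index from the multinomial
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

np.random.seed(0)
preds = [0.1, 0.2, 0.7]                            # hypothetical softmax output
cold = [sample(preds, 0.01) for _ in range(200)]   # near-greedy: always index 2
hot = [sample(preds, 5.0) for _ in range(500)]     # flatter: all indices appear
```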
# train the model, output generated text after each iteration
for iteration in range(1, 60):
print()
print('-' * 50)
print('Iteration', iteration)
model.fit(X, y, batch_size=256, epochs=1)
start_index = random.randint(0, len(text) - maxlen - 1)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence.replace("\n", " ") + '"')
for diversity in [0.5, 1.0]:
print()
print('----- diversity:', diversity)
for i in range(50):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The figure above demonstrates the core hot spot temperature from 0 to 100 degrees C, for five different overload ratios. Notice that the relationship between ambient temperature and core hot spot temperature (for all overload ratios) is linear. Examine Montsinger's equation (as well as the simplifying assumptions mentioned in the code above) to determine why this is the case.
Step2: The figure above plots the ratio of final DP to initial DP (assumed to start at 1000) after 24 hours at a particular core hot spot temperature. Notice that below about 80 degrees C there is very little change, but beyond 110 degrees C the ratio drops off sharply. This behavior is the reason for derating; as the overload ratio of the transformer decreases, its core hot spot temperature also decreases, which reduces the amount of degradation the paper insulation experiences.
Step3: As expected, when the DP of the paper insulation is low (due to damage from heat stresses), the CO accumulation is high (due to byproducts from the degradation process).
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Calculates the core hot spot temperature of the transformer
def core_hot_spot(ambient_temp, overload_ratio, t0=35, tc=30, N=1,
N0=0.5, Nc=0.8, L=1):
# ambient_temp is in Celsius
# overload_ratio is in decimal form (e.g. 0.75 not 75%)
# t0 is the temperature rise of oil over ambient
# tc is the temperature rise of the windings over the oil
# N = 1 (for simplification only)
# N0 = 0.8 for self-cooled transformers, =0.5 for water-cooled
# Nc = 1.0 for vertical windings, =0.8 for horizontal windings
# L = 1 (for simplification only)
T = t0*((N*overload_ratio**2+1)**0.8) + tc*L*overload_ratio**2 + ambient_temp
return T
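The linearity seen in the figure follows directly from the formula: the ambient temperature enters only as an additive term, so at a fixed overload ratio a rise of Δ degrees in ambient raises the hot spot by exactly Δ. A standalone sketch restating the function (the unused N0 and Nc defaults are omitted here):

```python
def core_hot_spot(ambient_temp, overload_ratio, t0=35, tc=30, N=1, L=1):
    # Same simplified Montsinger model as above: oil rise + winding rise + ambient
    return (t0 * ((N * overload_ratio**2 + 1)**0.8)
            + tc * L * overload_ratio**2
            + ambient_temp)

delta = core_hot_spot(30, 1.0) - core_hot_spot(20, 1.0)  # 10 degrees, for any R
```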
# Plot the core hot spot as ambient temperature increases
for R in [0.75, 0.9, 1.0, 1.1, 1.25]:
# Use a listcomp to generate the core hot spots
plt.plot([core_hot_spot(ii, R) for ii in range(100)])
# Plot labels
plt.legend(['R=0.75', 'R=0.9', 'R=1.0', 'R=1.1', 'R=1.25'])
plt.xlabel('Ambient Temperature (C)')
plt.ylabel('Core Hot Spot Temperature (C)')
plt.title('Core Hot Spot at Several Overload Ratios (R)');
def calculate_dp(core_hot_spot, time, dp_initial):
# core_hot_spot is an output from the previous function
# time is measured in hours
# dp_initial is the initial degree of polymerization
# A is a constant based on the type of paper insulation
# k calculation is taken from [Emsley]
A = 3.65*10**7
# See Emsley's paper for more discussion of this equation
k = A*np.exp(-(117000/(8.314*(core_hot_spot+273.15))))
DPf=1/((k*24*7*time) + (1/dp_initial))
return DPf
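Two properties worth checking before running the full simulation: DP only decreases from its initial value, and hotter core temperatures degrade the paper faster. A standalone sketch restating `calculate_dp`:

```python
import numpy as np

def calculate_dp(core_hot_spot, time, dp_initial):
    # Arrhenius-style rate constant (Emsley), then first-order DP decay
    A = 3.65e7
    k = A * np.exp(-(117000 / (8.314 * (core_hot_spot + 273.15))))
    return 1 / ((k * 24 * 7 * time) + (1 / dp_initial))

dp_cool = calculate_dp(70, 24, 1000)   # mild heat: barely below 1000
dp_hot = calculate_dp(130, 24, 1000)   # strong heat: noticeably lower
```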
# Plot the DP ratio as a function of core hot spot for 1 day
dp_ratio = [calculate_dp(chs, 24, 1000)/1000 for chs in range(50, 150)]
plt.plot(range(50, 150), dp_ratio)
plt.xlabel('Core Hot Spot Temperature (C)')
plt.ylabel('DP Ratio')
plt.title('DP Ratio After 24 Hours at Core Hot Spot Temperature');
def oil_contamination(dp):
# dp is the degree of polymerization
# CO and CO2 are the TOTAL accumulation
# of each of those dissolved gases
CO=(-0.0028*dp + 6.28) + 0.13*np.random.randn()
CO2=(-0.0026*dp + 8.08) + 0.66*np.random.randn()
return CO, CO2
# Plot the CO as a function of DP over time
co_accumulation = [oil_contamination(dp)[0] for dp in range(150, 1000)]
plt.plot(range(150, 1000), co_accumulation)
plt.xlabel('Degree of Polymerization (DP)')
plt.ylabel('CO Accumulation (ppm)')
plt.title('CO Accumulation at Paper Insulation DPs');
# Transformer INSULation SIMulator
# temps is an array of size 12x3, containing monthly low, average, and high temps
# start_month is an integer, with January = 1, February = 2, etc.
# num_transformers are the number of transformers to simulate (e.g. 20)
# overload_ratio is R in the equation above (suggest between 0.75 and 1.1)
def tinsul_sim(temps, start_month, num_transformers, overload_ratio = 1):
trans_co = []
trans_co2 = []
# Iterate over the number of transformers:
for ii in range(num_transformers):
month_current = start_month
failed = False
dp_current = 1000
co_accumulation = []
co2_accumulation = []
# dp_failed determines the DP at which the paper insulation will
# fail, drawn from a logistic distribution centered at 200
dp_failed = np.random.logistic(loc=200.0, scale=40.0)
# Run the simulation until the transformer fails
while not failed:
# Remember- 6 hours at low, 12 hours at avg, and 6 hours at high temps
# The first index is temp, the second index is hours at that temp
ambient_low = [temps[int(month_current)-1][0], 6]
ambient_avg = [temps[int(month_current)-1][1], 12]
ambient_high = [temps[int(month_current)-1][2], 6]
# Update DP based on the heat stresses from the core hot spot
for ambient, time in [ambient_low, ambient_avg, ambient_high]:
chs = core_hot_spot(ambient, overload_ratio)
dp_current = calculate_dp(chs, time, dp_current)
# Calculate the CO and CO2 accumulation based on DP
co, co2 = oil_contamination(dp_current)
co_accumulation.append(co)
co2_accumulation.append(co2)
# Check if the transformer has failed, using a "soft" distribution
if dp_current<=dp_failed:
failed = True
# Add the value (Months/Weeks) to get proportion of month per week
month_current += 0.230769
# Rollover to the next year, if necessary
if month_current >= 13.0:
month_current = 1
trans_co.append(co_accumulation)
trans_co2.append(co2_accumulation)
return trans_co, trans_co2
# Average temperature per month (Washington DC)
# The array is ordered: [low, average, high]
dc_temps = [[-2, 1, 6], # January
[-1, 3, 8], # February
[3, 7, 13], # March
[8, 13, 19], # April
[14, 18, 24], # May
[19, 24, 29], # June
[22, 28, 31], # July
[21, 27, 30], # August
[17, 22, 26], # September
[10, 15, 20], # October
[5, 10, 14], # November
[0, 4, 8]] # December
# Use the function above to simulate transformers
transformers = tinsul_sim(dc_temps, 1, 20)
# Plot the results from the first transformer
plt.plot(transformers[0][0])
plt.xlabel('Time (weeks)')
plt.ylabel('CO Accumulation (ppm)')
plt.title('CO Accumulation Per Week, Transformer 1');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading PostgreSQL database from TensorFlow IO
Step2: Install and set up PostgreSQL (optional)
Step3: Set up the necessary environment variables
Step4: Prepare data in PostgreSQL server
Step5: Create Dataset from PostgreSQL server and use it in TensorFlow
Step6: As you can see from the output of dataset.element_spec above, each element of the created Dataset is a Python dict object with the column names of the database table as keys.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# check only the first 20 record
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
print(difference.numpy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have the metrics in place, we can load the data and run them iteratively at each image size.
Step2: Data looks good. Now we can get the contrast and structure for the full-size images.
Step3: Let's test these functions on a couple of images and see how they look.
Step4: This looks pretty good. Now let's do some down sampling and see how those look.
Step5: Now that we have the downsampled images, we can run the structure and contrast functions on them again.
Step6: One more round of down sampling will be done when we compute the MS-SSIM scores, so let's take a look at one more level.
Step7: A lot of the information is lost when the images are so small, but so it goes. The last component of the MS-SSIM calculation is the luminance scale, which is calculated at the lowest resolution used (24x24 in this case).
Step8: Let's check luminance on the larger images to make sure it is sensible.
Step9: Not really what I expected, but it looks right. Now let's do it for the lowest-resolution images.
Step10: paper plot
Step11: Now we have all the components of MS-SSIM, and we can create the data sets we need to train the network. First, let's create a full-size image version of contrast times structure.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
np.set_printoptions(threshold=sys.maxsize)
import tensorflow as tf
import time
import pandas as pd
import matplotlib.pyplot as plt
import progressbar
def calculate_contrast(window_orig, window_recon):
'''
calculates the contrast metric for a given window of two images
'''
k_2, L = 0.03, 255
c_2 = (L*k_2)**2
orig_data, recon_data = window_orig.flatten(), window_recon.flatten()
var_x, var_y = np.var(recon_data), np.var(orig_data)
num = 2*np.sqrt(var_x)*np.sqrt(var_y) + c_2
den = var_x + var_y + c_2
return num/den
def calculate_structure(window_orig, window_recon):
'''
calculates the structure metric for a given window of two images
'''
k_2, L = 0.03, 255
c_2 = (L*k_2)**2
c_3 = c_2 / 2
orig_data, recon_data = window_orig.flatten(), window_recon.flatten()
std_x, std_y = np.std(recon_data), np.std(orig_data)
covar = np.cov(orig_data, recon_data)[0][1]
num = covar + c_3
den = std_x * std_y + c_3
return num/den
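A quick property check on the two metrics: for identical windows the contrast term is exactly 1, while the structure term is only approximately 1, because np.cov defaults to ddof=1 while np.std defaults to ddof=0. A standalone sketch restating both:

```python
import numpy as np

k_2, L = 0.03, 255
c_2 = (L * k_2)**2
c_3 = c_2 / 2

def contrast(a, b):
    # Contrast term of SSIM, as in calculate_contrast above
    va, vb = np.var(a), np.var(b)
    return (2 * np.sqrt(va) * np.sqrt(vb) + c_2) / (va + vb + c_2)

def structure(a, b):
    # Structure term of SSIM, as in calculate_structure above
    num = np.cov(a.flatten(), b.flatten())[0][1] + c_3
    den = np.std(a) * np.std(b) + c_3
    return num / den

np.random.seed(1)
w = np.random.randint(0, 256, size=(11, 11)).astype(float)
c_same = contrast(w, w)    # exactly 1.0
s_same = structure(w, w)   # close to 1.0 (ddof mismatch)
```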
raw_image_dim, filter_dim = 96, 11
train_size, test_size = 500, 140
# data input
data_path = 'https://raw.githubusercontent.com/michaelneuder/image_quality_analysis/master/data/sample_data/'
# train data --- 500 images, 96x96 pixels
orig_500_raw = pd.read_csv('{}orig_500.txt'.format(data_path), header=None, delim_whitespace = True)
recon_500_raw = pd.read_csv('{}recon_500.txt'.format(data_path), header=None, delim_whitespace = True)
# test data --- 140 images, 96x96 pixels
orig_140_raw = pd.read_csv('{}orig_140.txt'.format(data_path), header=None, delim_whitespace = True)
recon_140_raw = pd.read_csv('{}recon_140.txt'.format(data_path), header=None, delim_whitespace = True)
# reshape
orig_500 = np.reshape(orig_500_raw.values, (train_size, raw_image_dim, raw_image_dim))
recon_500 = np.reshape(recon_500_raw.values, (train_size, raw_image_dim, raw_image_dim))
orig_140 = np.reshape(orig_140_raw.values, (test_size, raw_image_dim, raw_image_dim))
recon_140 = np.reshape(recon_140_raw.values, (test_size, raw_image_dim, raw_image_dim))
# make sure pictures look right
f, axarr = plt.subplots(nrows=3,ncols=4, figsize=(12,9))
for i in range(3):
index = np.random.randint(140)
axarr[i,0].imshow(orig_500[index,:,:], cmap='gray')
axarr[i,1].imshow(recon_500[index,:,:], cmap='gray')
axarr[i,2].imshow(orig_140[index,:,:], cmap='gray')
axarr[i,3].imshow(recon_140[index,:,:], cmap='gray')
axarr[0,0].set_title('original')
axarr[0,1].set_title('reconstructed')
axarr[0,2].set_title('original test')
axarr[0,3].set_title('recon test')
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('training & testing data sample', size=20)
# plt.savefig('tt_data_sample.png')
plt.show()
def image_contrast(orig_im, recon_im):
'''
gets the contrast for each patch of a set of images.
'''
contrast_res = []
number_windows = orig_im.shape[0] - filter_dim + 1
for i in range(number_windows):
for j in range(number_windows):
orig_window = orig_im[i:i+filter_dim, j:j+filter_dim]
recon_window = recon_im[i:i+filter_dim, j:j+filter_dim]
temp = calculate_contrast(orig_window, recon_window)
contrast_res.append(temp)
return np.reshape(contrast_res, (number_windows, number_windows))
def image_structure(orig_im, recon_im):
'''
gets the structure for each patch of a set of images.
'''
structure_res = []
number_windows = orig_im.shape[0] - filter_dim + 1
for i in range(number_windows):
for j in range(number_windows):
orig_window = orig_im[i:i+filter_dim, j:j+filter_dim]
recon_window = recon_im[i:i+filter_dim, j:j+filter_dim]
temp = calculate_structure(orig_window, recon_window)
structure_res.append(temp)
return np.reshape(structure_res, (number_windows, number_windows))
contrast_res, structure_res = [], []
indeces = []
for ii in range(3):
index = np.random.randint(140)
indeces.append(index)
temp_im_orig = orig_500[index,:,:]
temp_im_recon = recon_500[index,:,:]
contrast_res.append(image_contrast(temp_im_orig, temp_im_recon))
structure_res.append(image_structure(temp_im_orig, temp_im_recon))
contrast_res = np.asarray(contrast_res)
structure_res = np.asarray(structure_res)
f, axarr = plt.subplots(nrows=3,ncols=4, figsize=(12,9))
for i in range(3):
axarr[i,0].imshow(orig_500[indeces[i],:,:], cmap='gray')
axarr[i,1].imshow(recon_500[indeces[i],:,:], cmap='gray')
axarr[i,2].imshow(contrast_res[i,:,:], cmap='viridis')
axarr[i,3].imshow(structure_res[i,:,:], cmap='BuPu')
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('reconstructed', size=15)
axarr[0,2].set_title('contrast', size=15)
axarr[0,3].set_title('structure', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('contrast & structure data sample', size=17)
# plt.savefig('contrast_structure_data_sample.png')
plt.show()
def average_pool(window):
return np.mean(window)
def down_sample(orig_im, recon_im, pool_size):
'''
down samples image by a factor of pool_size
'''
reduce_im_orig, reduce_im_recon = [], []
number_pools = int(orig_im.shape[0] / pool_size)
for i in range(number_pools):
for j in range(number_pools):
orig_pool = orig_im[i*pool_size:i*pool_size+pool_size, j*pool_size:j*pool_size+pool_size]
recon_pool = recon_im[i*pool_size:i*pool_size+pool_size, j*pool_size:j*pool_size+pool_size]
temp_orig, temp_recon = np.mean(orig_pool), np.mean(recon_pool)
reduce_im_orig.append(temp_orig)
reduce_im_recon.append(temp_recon)
return np.reshape(reduce_im_orig, (number_pools,number_pools)), np.reshape(reduce_im_recon, (number_pools,number_pools))
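Average pooling is easy to verify on a tiny input; a standalone sketch that applies the same pooling logic to a single 4x4 ramp image:

```python
import numpy as np

def down_sample_one(im, pool_size):
    # Mean over each pool_size x pool_size block (same logic as above,
    # for a single image)
    n = im.shape[0] // pool_size
    out = [np.mean(im[i*pool_size:(i+1)*pool_size,
                      j*pool_size:(j+1)*pool_size])
           for i in range(n) for j in range(n)]
    return np.reshape(out, (n, n))

im = np.arange(16, dtype=float).reshape(4, 4)
small = down_sample_one(im, 2)  # [[2.5, 4.5], [10.5, 12.5]]
```

Note that average pooling preserves the overall image mean, which is why repeated down sampling still gives sensible luminance values later on.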
down_sample_res = []
indeces = []
pool_size = 2
for ii in range(3):
index = np.random.randint(140)
indeces.append(index)
temp_im_orig = orig_500[index,:,:]
temp_im_recon = recon_500[index,:,:]
down_sample_res.append(down_sample(temp_im_orig, temp_im_recon, pool_size))
down_sample_res = np.asarray(down_sample_res)
down_sample_res.shape
f, axarr = plt.subplots(nrows=3,ncols=4, figsize=(12,9))
for i in range(3):
axarr[i,0].imshow(down_sample_res[i,0,:,:], cmap='gray')
axarr[i,1].imshow(down_sample_res[i,1,:,:], cmap='gray')
axarr[i,2].imshow(image_contrast(down_sample_res[i,0,:,:], down_sample_res[i,1,:,:]), cmap='viridis')
axarr[i,3].imshow(image_structure(down_sample_res[i,0,:,:], down_sample_res[i,1,:,:]), cmap='BuPu')
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('reconstructed', size=15)
axarr[0,2].set_title('contrast', size=15)
axarr[0,3].set_title('structure', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('contrast & structure data sample reduced image size', size=17)
# plt.savefig('contrast_structure_data_sample_red1.png')
plt.show()
down_sample_res = []
indeces = []
pool_size = 2
for ii in range(3):
index = np.random.randint(140)
indeces.append(index)
temp_im_orig = orig_500[index,:,:]
temp_im_recon = recon_500[index,:,:]
one_ds = down_sample(temp_im_orig, temp_im_recon, pool_size)
two_ds = down_sample(one_ds[0], one_ds[1], pool_size)
down_sample_res.append(two_ds)
down_sample_res = np.asarray(down_sample_res)
down_sample_res.shape
f, axarr = plt.subplots(nrows=3,ncols=4, figsize=(12,9))
for i in range(3):
axarr[i,0].imshow(down_sample_res[i,0,:,:], cmap='gray')
axarr[i,1].imshow(down_sample_res[i,1,:,:], cmap='gray')
axarr[i,2].imshow(image_contrast(down_sample_res[i,0,:,:], down_sample_res[i,1,:,:]), cmap='viridis')
axarr[i,3].imshow(image_structure(down_sample_res[i,0,:,:], down_sample_res[i,1,:,:]), cmap='BuPu')
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('reconstructed', size=15)
axarr[0,2].set_title('contrast', size=15)
axarr[0,3].set_title('structure', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('contrast & structure data sample reduced image size', size=17)
# plt.savefig('contrast_structure_data_sample_red2.png')
plt.show()
def calculate_luminance(window_orig, window_recon):
'''
calculates the contrast metric for a given window of two images
'''
k_1, L = 0.01, 255
c_1 = (L*k_1)**2
orig_data, recon_data = window_orig.flatten(), window_recon.flatten()
mean_x, mean_y = np.mean(recon_data), np.mean(orig_data)
num = 2*mean_x*mean_y + c_1
den = np.square(mean_x)+ np.square(mean_y) + c_1
return num/den
def image_luminance(orig_im, recon_im):
'''
gets the contrast for each patch of a set of images.
'''
luminance_res = []
number_windows = orig_im.shape[0] - filter_dim + 1
for i in range(number_windows):
for j in range(number_windows):
orig_window = orig_im[i:i+filter_dim, j:j+filter_dim]
recon_window = recon_im[i:i+filter_dim, j:j+filter_dim]
temp = calculate_luminance(orig_window, recon_window)
luminance_res.append(temp)
return np.reshape(luminance_res, (number_windows, number_windows))
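Unlike the structure term, the luminance term for identical windows is exactly 1, since 2·μxμy + c1 equals μx² + μy² + c1 whenever μx = μy. A standalone sketch restating the metric:

```python
import numpy as np

k_1, L = 0.01, 255
c_1 = (L * k_1)**2

def luminance(a, b):
    # Luminance term of SSIM, as in calculate_luminance above
    mx, my = np.mean(b), np.mean(a)
    return (2 * mx * my + c_1) / (mx**2 + my**2 + c_1)

np.random.seed(2)
w = np.random.randint(0, 256, size=(11, 11)).astype(float)
l_same = luminance(w, w)  # 1.0
```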
luminance_res = []
indeces = []
for ii in range(3):
index = np.random.randint(140)
indeces.append(index)
temp_im_orig = orig_500[index,:,:]
temp_im_recon = recon_500[index,:,:]
luminance_res.append(image_luminance(temp_im_orig, temp_im_recon))
luminance_res = np.asarray(luminance_res)
f, axarr = plt.subplots(nrows=3,ncols=3, figsize=(9,9))
for i in range(3):
axarr[i,0].imshow(orig_500[indeces[i],:,:], cmap='gray')
axarr[i,1].imshow(recon_500[indeces[i],:,:], cmap='gray')
axarr[i,2].imshow(luminance_res[i,:,:], cmap='copper')
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('reconstructed', size=15)
axarr[0,2].set_title('luminance', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('luminance data sample', size=17)
# plt.savefig('luminance_data_sample.png')
plt.show()
f, axarr = plt.subplots(nrows=3,ncols=3, figsize=(9,9))
for i in range(3):
axarr[i,0].imshow(down_sample_res[i,0,:,:], cmap='gray')
axarr[i,1].imshow(down_sample_res[i,1,:,:], cmap='gray')
axarr[i,2].imshow(image_luminance(down_sample_res[i,0,:,:], down_sample_res[i,1,:,:]), cmap='copper')
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('reconstructed', size=15)
axarr[0,2].set_title('luminance', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
f.suptitle('luminance data sample reduced image size', size=17)
# plt.savefig('luminance_data_sample_red2.png')
plt.show()
index = 269
sample_image_orig = orig_500[index,:,:]; sample_image_recon = recon_500[index,:,:];
sample_image_con = image_contrast(sample_image_orig,sample_image_recon)
sample_image_str = image_structure(sample_image_orig,sample_image_recon)
sample_image_orig_ds1, sample_image_recon_ds1 = down_sample(sample_image_orig, sample_image_recon, 2)
sample_image_con_ds1 = image_contrast(sample_image_orig_ds1,sample_image_recon_ds1)
sample_image_str_ds1 = image_structure(sample_image_orig_ds1,sample_image_recon_ds1)
sample_image_orig_ds2, sample_image_recon_ds2 = down_sample(sample_image_orig_ds1, sample_image_recon_ds1, 2)
sample_image_con_ds2 = image_contrast(sample_image_orig_ds2,sample_image_recon_ds2)
sample_image_str_ds2 = image_structure(sample_image_orig_ds2,sample_image_recon_ds2)
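`down_sample` is defined earlier in the notebook. For readers of this excerpt, a plausible sketch under the assumption that it performs simple non-overlapping average pooling of both images (the original may instead low-pass filter before decimating, as MS-SSIM does; `down_sample_ref` is a hypothetical stand-in):

```python
import numpy as np

def down_sample_ref(orig, recon, pool_size):
    # Non-overlapping average pooling of both images; assumes each image
    # dimension is divisible by pool_size.
    def pool(im):
        h, w = im.shape
        return im.reshape(h // pool_size, pool_size,
                          w // pool_size, pool_size).mean(axis=(1, 3))
    return pool(orig), pool(recon)

a = np.arange(16, dtype=float).reshape(4, 4)
o, r = down_sample_ref(a, a, 2)
print(o)  # 2x2 block means: [[2.5, 4.5], [10.5, 12.5]]
```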
import matplotlib.gridspec as gridspec
plt.figure(figsize = (12,9))
gs1 = gridspec.GridSpec(3, 4)
gs1.update(wspace=0.03, hspace=0.03)
ax_dict = {}
for ii in range(12):
ax_dict[ii] = plt.subplot(gs1[ii])
ax_dict[ii].set_xticklabels([])
ax_dict[ii].set_yticklabels([])
ax_dict[ii].get_xaxis().set_visible(False)
ax_dict[ii].get_yaxis().set_visible(False)
ax_dict[0].set_title('original', size=20)
ax_dict[1].set_title('reconstructed', size=20)
ax_dict[2].set_title('contrast', size=20)
ax_dict[3].set_title('structure', size=20)
ax_dict[0].imshow(sample_image_orig, cmap='gray')
ax_dict[1].imshow(sample_image_recon, cmap='gray')
ax_dict[2].imshow(sample_image_con, cmap='viridis')
ax_dict[3].imshow(sample_image_str, cmap='bone')
ax_dict[4].imshow(sample_image_orig_ds1, cmap='gray')
ax_dict[5].imshow(sample_image_recon_ds1, cmap='gray')
ax_dict[6].imshow(sample_image_con_ds1, cmap='viridis')
ax_dict[7].imshow(sample_image_str_ds1, cmap='bone')
ax_dict[8].imshow(sample_image_orig_ds2, cmap='gray')
ax_dict[9].imshow(sample_image_recon_ds2, cmap='gray')
ax_dict[10].imshow(sample_image_con_ds2, cmap='viridis')
ax_dict[11].imshow(sample_image_str_ds2, cmap='bone')
plt.savefig('data_sample_msssim.png')
plt.show()
import matplotlib.gridspec as gridspec
plt.figure(figsize = (12,12))
gs1 = gridspec.GridSpec(3, 3)
gs1.update(wspace=0.03, hspace=0.03)
ax_dict = {}
for ii in range(9):
ax_dict[ii] = plt.subplot(gs1[ii])
ax_dict[ii].set_xticklabels([])
ax_dict[ii].set_yticklabels([])
ax_dict[ii].get_xaxis().set_visible(False)
ax_dict[ii].get_yaxis().set_visible(False)
for ii in range(3):
ax_dict[3*ii].imshow(down_sample_res[ii,0,:,:], cmap='gray')
ax_dict[3*ii+1].imshow(down_sample_res[ii,1,:,:], cmap='gray')
    ax_dict[3*ii+2].imshow(image_luminance(down_sample_res[ii,0,:,:], down_sample_res[ii,1,:,:]), cmap='copper')
ax_dict[0].set_title('original', size=20)
ax_dict[1].set_title('reconstructed', size=20)
ax_dict[2].set_title('luminance', size=20)
plt.savefig('lum.png')
plt.show()
# contrast_res, structure_res = [], []
# bar = progressbar.ProgressBar()
# for ii in bar(range(140)):
# temp_im_orig = orig_140[ii,:,:]
# temp_im_recon = recon_140[ii,:,:]
# contrast_res.append(image_contrast(temp_im_orig, temp_im_recon))
# structure_res.append(image_structure(temp_im_orig, temp_im_recon))
# contrast_res = np.asarray(contrast_res)
# structure_res = np.asarray(structure_res)
# luminance_res = []
# bar = progressbar.ProgressBar()
# for ii in bar(range(140)):
# temp_im_orig = orig_140[ii,:,:]
# temp_im_recon = recon_140[ii,:,:]
# luminance_res.append(image_luminance(temp_im_orig, temp_im_recon))
# luminance_res = np.asarray(luminance_res)
# np.savetxt('luminance_140.csv', np.reshape(luminance_res, (140, 7396)), delimiter=',')
# np.savetxt('contrast_500.csv', np.reshape(contrast_res, (140, 7396)), delimiter=',')
# np.savetxt('structure_500.csv', np.reshape(structure_res, (140, 7396)), delimiter=',')
# np.savetxt('cxs_500.csv', np.reshape(contrast_res*structure_res, (140, 7396)), delimiter=',')
# down_sample_res = []
# pool_size = 2
# bar = progressbar.ProgressBar()
# for ii in bar(range(140)):
# temp_im_orig = orig_140[ii,:,:]
# temp_im_recon = recon_140[ii,:,:]
# one_ds = down_sample(temp_im_orig, temp_im_recon, pool_size)
# two_ds = down_sample(one_ds[0], one_ds[1], pool_size)
# down_sample_res.append(two_ds)
# down_sample_res = np.asarray(down_sample_res)
# down_sample_res.shape
# luminance_res = []
# bar = progressbar.ProgressBar()
# for ii in bar(range(140)):
# temp_im_orig = down_sample_res[ii,0,:,:]
# temp_im_recon = down_sample_res[ii,1,:,:]
# luminance_res.append(image_luminance(temp_im_orig, temp_im_recon))
# luminance_res = np.asarray(luminance_res)
# np.savetxt('luminance_140.csv', np.reshape(luminance_res, (140, 14*14)), delimiter=',')
# ds2_orig = down_sample_res[:,0,:,:]
# ds2_recon = down_sample_res[:,1,:,:]
# np.savetxt('ds2_orig_140.csv', np.reshape(ds2_orig, (140, 24*24)), delimiter=',')
# np.savetxt('ds2_recon_140.csv', np.reshape(ds2_recon, (140, 24*24)), delimiter=',')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convolutional Networks
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Step9: Convolutional "sandwich" layers
Step10: Three-layer ConvNet
Step11: Gradient check
Step12: Overfit small data
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
Step15: Visualize Filters
Step16: Spatial Batch Normalization
Step17: Spatial batch normalization
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
    """Tiny helper to show images as uint8 and remove axis labels"""
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
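`spatial_batchnorm_forward` is left for the reader to implement in `cs231n/layers.py`. A minimal reference sketch of the training-mode forward pass, assuming the usual reshape-to-2D trick (move channels last, normalize each channel over the N, H, W positions, move channels back); this is an illustrative stand-in, not the assignment's solution:

```python
import numpy as np

def spatial_batchnorm_forward_ref(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W). Reshape to (N*H*W, C) so each channel is one column,
    # normalize per channel, then restore the original layout.
    N, C, H, W = x.shape
    x2 = x.transpose(0, 2, 3, 1).reshape(-1, C)
    mu = x2.mean(axis=0)
    var = x2.var(axis=0)
    xn = (x2 - mu) / np.sqrt(var + eps)
    out = gamma * xn + beta
    return out.reshape(N, H, W, C).transpose(0, 3, 1, 2)

np.random.seed(0)
x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_batchnorm_forward_ref(x, np.ones(3), np.zeros(3))
print(out.mean(axis=(0, 2, 3)))  # per-channel means, all ~0
```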
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read and preprocess the data. Preprocessing consists of
Step2: Define a function that runs ICA on the raw MEG data and plots the components
Step3: FastICA
Step4: Picard
Step5: Infomax
Step6: Extended Infomax
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Pierre Ablin <pierreablin@gmail.com>
#
# License: BSD (3-clause)
from time import time
import mne
from mne.preprocessing import ICA
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True)
reject = dict(mag=5e-12, grad=4000e-13)
raw.filter(1, 30, fir_design='firwin')
def run_ica(method, fit_params=None):
ica = ICA(n_components=20, method=method, fit_params=fit_params,
random_state=0)
t0 = time()
ica.fit(raw, picks=picks, reject=reject)
fit_time = time() - t0
title = ('ICA decomposition using %s (took %.1fs)' % (method, fit_time))
ica.plot_components(title=title)
run_ica('fastica')
run_ica('picard')
run_ica('infomax')
run_ica('infomax', fit_params=dict(extended=True))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examine the following seasonal plot
Step2: And also the periodogram
Step3: 1) Determine seasonality
Step4: 2) Create seasonal features
Step5: Now run this cell to fit the seasonal model.
Step6: Removing from a series its trend or seasons is called detrending or deseasonalizing the series.
Step7: 3) Check for remaining seasonality
Step8: The Store Sales dataset includes a table of Ecuadorian holidays.
Step9: From a plot of the deseasonalized Average Sales, it appears these holidays could have some predictive power.
Step10: 4) Create holiday features
Step11: Use this cell to fit the seasonal model with holiday features added. Do the fitted values seem to have improved?
Step12: (Optional) Submit to Store Sales competition
Step13: You can use this cell to see some of its predictions.
Step14: Finally, this cell loads the test data, creates a feature set for the forecast period, and then creates the submission file submission.csv.
|
<ASSISTANT_TASK:>
Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.time_series.ex3 import *
# Setup notebook
from pathlib import Path
from learntools.time_series.style import * # plot style settings
from learntools.time_series.utils import plot_periodogram, seasonal_plot
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess
comp_dir = Path('../input/store-sales-time-series-forecasting')
holidays_events = pd.read_csv(
comp_dir / "holidays_events.csv",
dtype={
'type': 'category',
'locale': 'category',
'locale_name': 'category',
'description': 'category',
'transferred': 'bool',
},
parse_dates=['date'],
infer_datetime_format=True,
)
holidays_events = holidays_events.set_index('date').to_period('D')
store_sales = pd.read_csv(
comp_dir / 'train.csv',
usecols=['store_nbr', 'family', 'date', 'sales'],
dtype={
'store_nbr': 'category',
'family': 'category',
'sales': 'float32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
store_sales['date'] = store_sales.date.dt.to_period('D')
store_sales = store_sales.set_index(['store_nbr', 'family', 'date']).sort_index()
average_sales = (
store_sales
.groupby('date').mean()
.squeeze()
.loc['2017']
)
X = average_sales.to_frame()
X["week"] = X.index.week
X["day"] = X.index.dayofweek
seasonal_plot(X, y='sales', period='week', freq='day');
plot_periodogram(average_sales);
# View the solution (Run this cell to receive credit!)
q_1.check()
y = average_sales.copy()
# YOUR CODE HERE
fourier = ____
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
# YOUR CODE HERE
# ____
drop=True,
)
X = ____
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='M', order=12)
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
seasonal=True,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='A', order=4)
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
seasonal=True,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='M', order=4)
dp = DeterministicProcess(
index=y.index[1:],
constant=True,
order=1,
seasonal=True,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='M', order=4)
dp = DeterministicProcess(
index=y.index[1:],
constant=True,
order=1,
seasonal=False,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='M', order=4)
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
seasonal=True,
# additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
y = average_sales.copy()
fourier = CalendarFourier(freq='M', order=4)
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
seasonal=True,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
q_2.assert_check_passed()
model = LinearRegression().fit(X, y)
y_pred = pd.Series(
model.predict(X),
index=X.index,
name='Fitted',
)
ax = y.plot(**plot_params, alpha=0.5, title="Average Sales", ylabel="items sold")
ax = y_pred.plot(ax=ax, label="Seasonal")
ax.legend();
y_deseason = y - y_pred
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, sharey=True, figsize=(10, 7))
ax1 = plot_periodogram(y, ax=ax1)
ax1.set_title("Product Sales Frequency Components")
ax2 = plot_periodogram(y_deseason, ax=ax2);
ax2.set_title("Deseasonalized");
# View the solution (Run this cell to receive credit!)
q_3.check()
# National and regional holidays in the training set
holidays = (
holidays_events
.query("locale in ['National', 'Regional']")
.loc['2017':'2017-08-15', ['description']]
.assign(description=lambda x: x.description.cat.remove_unused_categories())
)
display(holidays)
ax = y_deseason.plot(**plot_params)
plt.plot_date(holidays.index, y_deseason[holidays.index], color='C3')
ax.set_title('National and Regional Holidays');
# YOUR CODE HERE
#_UNCOMMENT_IF(PROD)_
#X_holidays = ____
#_UNCOMMENT_IF(PROD)_
#X2 = X.join(X_holidays, on='date').fillna(0.0)
# Check your answer
q_4.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.hint(2)
#_COMMENT_IF(PROD)_
q_4.solution()
#%%RM_IF(PROD)%%
# Scikit-learn solution
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False)
X_holidays = pd.DataFrame(
ohe.fit_transform(holidays),
index=holidays.index,
columns=holidays.description.unique(),
)
X2 = X.join(X_holidays, on='date').fillna(0.0)
q_4.assert_check_passed()
#%%RM_IF(PROD)%%
# Pandas solution
X_holidays = pd.get_dummies(holidays)
X2 = X.join(X_holidays, on='date').fillna(0.0)
q_4.assert_check_passed()
model = LinearRegression().fit(X2, y)
y_pred = pd.Series(
model.predict(X2),
index=X2.index,
name='Fitted',
)
ax = y.plot(**plot_params, alpha=0.5, title="Average Sales", ylabel="items sold")
ax = y_pred.plot(ax=ax, label="Seasonal")
ax.legend();
y = store_sales.unstack(['store_nbr', 'family']).loc["2017"]
# Create training data
fourier = CalendarFourier(freq='M', order=4)
dp = DeterministicProcess(
index=y.index,
constant=True,
order=1,
seasonal=True,
additional_terms=[fourier],
drop=True,
)
X = dp.in_sample()
X['NewYear'] = (X.index.dayofyear == 1)
model = LinearRegression(fit_intercept=False)
model.fit(X, y)
y_pred = pd.DataFrame(model.predict(X), index=X.index, columns=y.columns)
STORE_NBR = '1' # 1 - 54
FAMILY = 'PRODUCE'
# Uncomment to see a list of product families
# display(store_sales.index.get_level_values('family').unique())
ax = y.loc(axis=1)['sales', STORE_NBR, FAMILY].plot(**plot_params)
ax = y_pred.loc(axis=1)['sales', STORE_NBR, FAMILY].plot(ax=ax)
ax.set_title(f'{FAMILY} Sales at Store {STORE_NBR}');
df_test = pd.read_csv(
comp_dir / 'test.csv',
dtype={
'store_nbr': 'category',
'family': 'category',
'onpromotion': 'uint32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
df_test['date'] = df_test.date.dt.to_period('D')
df_test = df_test.set_index(['store_nbr', 'family', 'date']).sort_index()
# Create features for test set
X_test = dp.out_of_sample(steps=16)
X_test.index.name = 'date'
X_test['NewYear'] = (X_test.index.dayofyear == 1)
y_submit = pd.DataFrame(model.predict(X_test), index=X_test.index, columns=y.columns)
y_submit = y_submit.stack(['store_nbr', 'family'])
y_submit = y_submit.join(df_test.id).reindex(columns=['id', 'sales'])
y_submit.to_csv('submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Set hardware accelerators
Step13: Set pre-built containers
Step14: Set machine type
Step15: Tutorial
Step16: Task.py contents
Step17: Store training script on your Cloud Storage bucket
Step18: Create and run custom training job
Step19: Prepare your command-line arguments
Step20: Run the custom training job
Step21: Load the saved model
Step22: Evaluate the model
Step23: Perform the model evaluation
Step24: Get the serving function signature
Step25: Explanation Specification
Step26: Explanation Metadata
Step27: Upload the model
Step28: Send a batch prediction request
Step29: Make the batch explanation request
Step30: Wait for completion of batch prediction job
Step31: Get the explanations
Step32: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
        feature = (feature / max).astype(np.float32)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[_], max = scale(x_train[_])
x_test[_], _ = scale(x_test[_])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[_] = scale(x_test[_])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
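As a hedged aside, the max-scaling that the `scale` helper performs can be shown on a toy feature column (the numbers here are made up for illustration):

```python
import numpy as np

# Divide a feature column by its maximum so values land in (0, 1].
feature = np.array([2.0, 5.0, 10.0])
scaled = (feature / np.max(feature)).astype(np.float32)
print(scaled)  # [0.2 0.5 1. ]
```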
local_model.evaluate(x_test, y_test)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
INPUT_METADATA = {
"input_tensor_name": serving_input,
"encoding": "BAG_OF_FEATURES",
"modality": "numeric",
"index_feature_mapping": [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
],
}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"features": input_metadata}, outputs={"medv": output_metadata}
)
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv
gcs_input_uri = BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="boston_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="jsonl",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
generate_explanation=True,
sync=False,
)
print(batch_predict_job)
if not os.getenv("IS_TESTING"):
batch_predict_job.wait()
if not os.getenv("IS_TESTING"):
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
explanation_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("explanation"):
explanation_results.append(blob.name)
tags = list()
for explanation_result in explanation_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{explanation_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
np.set_printoptions(precision=4)
pd.set_option('precision',1)
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
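As a hedged sketch of the affine-ReLU-affine forward pass that TwoLayerNet is expected to implement (the 0.1 weight scale and the shapes here are illustrative, not the class's actual initialization):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 4)                       # five inputs of dimension four
W1, b1 = 0.1 * rng.randn(4, 10), np.zeros(10)
W2, b2 = 0.1 * rng.randn(10, 3), np.zeros(3)
hidden = np.maximum(0, X.dot(W1) + b1)    # first affine layer + ReLU
scores = hidden.dot(W2) + b2              # second affine layer gives class scores
print(scores.shape)  # (5, 3)
```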
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
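The gradient check above rests on the centred-difference approximation; a minimal standalone sketch of that idea on a function whose analytic gradient is known:

```python
import numpy as np

f = lambda x: (x ** 2).sum()              # analytic gradient is 2*x
x = np.array([3.0, -2.0])
h = 1e-5
# Perturb one coordinate at a time: df/dx_i ~ (f(x + h*e_i) - f(x - h*e_i)) / (2h)
num_grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])
print(np.round(num_grad, 4))  # [ 6. -4.]
```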
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1,
reg=1e-5,
num_iters=100,
verbose=True)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend(loc='best')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
X_train.shape
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
best_val = -1
net = TwoLayerNet(input_size, hidden_size, num_classes)
# hyperparameters
learning_rates = [1e-3, 1e-4, 1e-5]
regs = [0.4, 0.5, 0.6]
learning_rate_decays = [0.5, 0.75, 0.95]
# Train the network
for lr in learning_rates:
for rate in regs:
for decay in learning_rate_decays:
print
            print("learning rate: {}, reg: {}, decay: {}".format(lr, rate, decay))
            net = TwoLayerNet(input_size, hidden_size, num_classes)
            stats = net.train(X_train, y_train, X_val, y_val,
                              num_iters=1000,
                              batch_size=200,
                              learning_rate=lr,
                              learning_rate_decay=decay,
                              reg=rate,
                              verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
            if val_acc > best_val:
                best_val = val_acc
                best_net = net
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading Financial Data
Step2: Plotting the Data
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import pandas.io.data as web
import plotly.plotly as py
import cufflinks as cf
py.sign_in('Python-Demo-Account', 'gwt101uhh0')
symbols = ['AAPL', 'MSFT', 'YHOO']
data = pd.DataFrame()
for sym in symbols:
data[sym] = web.DataReader(sym, data_source='yahoo')['Adj Close']
data.tail()
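A common preprocessing step before plotting several tickers together is rebasing each series to a common starting value; a hedged sketch on synthetic prices (the tickers and numbers below are made up):

```python
import pandas as pd

prices = pd.DataFrame({'AAPL': [100.0, 104.0, 102.0], 'MSFT': [50.0, 51.0, 53.0]})
# Divide each series by its first value so every line starts at 100,
# making relative performance directly comparable on one chart.
rebased = prices / prices.iloc[0] * 100
print(rebased.iloc[-1].round(6).to_dict())  # AAPL ends near 102, MSFT near 106
```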
data.iplot(filename='financial', world_readable=True)
data['AAPL'].iplot(filename='best_fit', bestfit=True, colors=['pink'],
bestfit_colors=['blue'], world_readable=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow-up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models, especially when gauging uncertainty is an important part of the problem.
Step2: On average, each of the 176 provided patients made 9 visits, when FVC was measured. The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, they are very different from patient to patient.
Step3: That's all for modelling!
Step4: Now, calling NumPyro's inference engine
Step5: 4. Checking the model
Step6: Looks like our model learned personalized alphas and betas for each patient!
Step7: Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy
Step8: Let's now put the predictions together with the true values, to visualize them
Step9: Finally, let's see our predictions for 3 patients
Step10: The results are exactly what we expected to see! Highlight observations
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
train = pd.read_csv('https://gist.githubusercontent.com/ucals/'
'2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/'
'43034c39052dcf97d4b894d2ec1bc3f90f3623d9/'
'osic_pulmonary_fibrosis.csv')
train.head()
def chart(patient_id, ax):
data = train[train['Patient'] == patient_id]
x = data['Weeks']
y = data['FVC']
ax.set_title(patient_id)
ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={'color':'red'})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00010637202177584971671', axes[2])
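For contrast with the hierarchical model defined next, the no-pooling alternative fits each patient's straight line independently; a hedged sketch with synthetic numbers (np.polyfit returns slope first, then intercept):

```python
import numpy as np

# One patient's visits: weeks since baseline and the measured FVC (ml).
weeks = np.array([0.0, 4.0, 8.0, 12.0])
fvc = np.array([2000.0, 1980.0, 1950.0, 1940.0])
beta, alpha = np.polyfit(weeks, fvc, deg=1)  # per-patient slope and intercept
print(round(alpha, 1), round(beta, 2))  # 1999.0 -5.25
```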
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
def model(PatientID, Weeks, FVC_obs=None):
μ_α = numpyro.sample("μ_α", dist.Normal(0., 100.))
σ_α = numpyro.sample("σ_α", dist.HalfNormal(100.))
μ_β = numpyro.sample("μ_β", dist.Normal(0., 100.))
σ_β = numpyro.sample("σ_β", dist.HalfNormal(100.))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
α = numpyro.sample("α", dist.Normal(μ_α, σ_α))
β = numpyro.sample("β", dist.Normal(μ_β, σ_β))
σ = numpyro.sample("σ", dist.HalfNormal(100.))
FVC_est = α[PatientID] + β[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train['PatientID'] = le.fit_transform(train['Patient'].values)
FVC_obs = train['FVC'].values
Weeks = train['Weeks'].values
PatientID = train['PatientID'].values
numpyro.set_host_device_count(4)
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)
rng_key = random.PRNGKey(0)
mcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)
posterior_samples = mcmc.get_samples()
import arviz as az
data = az.from_numpyro(mcmc)
az.plot_trace(data, compact=True);
pred_template = []
for i in range(train['Patient'].nunique()):
df = pd.DataFrame(columns=['PatientID', 'Weeks'])
df['Weeks'] = np.arange(-12, 134)
df['PatientID'] = i
pred_template.append(df)
pred_template = pd.concat(pred_template, ignore_index=True)
PatientID = pred_template['PatientID'].values
Weeks = pred_template['Weeks'].values
predictive = Predictive(model, posterior_samples,
return_sites=['σ', 'obs'])
samples_predictive = predictive(random.PRNGKey(0),
PatientID, Weeks, None)
df = pd.DataFrame(columns=['Patient', 'Weeks', 'FVC_pred', 'sigma'])
df['Patient'] = le.inverse_transform(pred_template['PatientID'])
df['Weeks'] = pred_template['Weeks']
df['FVC_pred'] = samples_predictive['obs'].T.mean(axis=1)
df['sigma'] = samples_predictive['obs'].T.std(axis=1)
df['FVC_inf'] = df['FVC_pred'] - df['sigma']
df['FVC_sup'] = df['FVC_pred'] + df['sigma']
df = pd.merge(df, train[['Patient', 'Weeks', 'FVC']],
how='left', on=['Patient', 'Weeks'])
df = df.rename(columns={'FVC': 'FVC_true'})
df.head()
def chart(patient_id, ax):
data = df[df['Patient'] == patient_id]
x = data['Weeks']
ax.set_title(patient_id)
ax.plot(x, data['FVC_true'], 'o')
ax.plot(x, data['FVC_pred'])
ax = sns.regplot(x, data['FVC_true'], ax=ax, ci=None,
line_kws={'color':'red'})
ax.fill_between(x, data["FVC_inf"], data["FVC_sup"],
alpha=0.5, color='#ffcd3c')
ax.set_ylabel('FVC')
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00011637202177653955184', axes[2])
y = df.dropna()
rmse = ((y['FVC_pred'] - y['FVC_true']) ** 2).mean() ** (1/2)
print(f'RMSE: {rmse:.1f} ml')
sigma_c = y['sigma'].values
sigma_c[sigma_c < 70] = 70
delta = (y['FVC_pred'] - y['FVC_true']).abs()
delta[delta > 1000] = 1000
lll = - np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)
print(f'Laplace Log Likelihood: {lll.mean():.4f}')
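The same competition metric, wrapped as a small function and applied to toy numbers; the 70 ml sigma floor and 1000 ml error cap mirror the clipping applied above:

```python
import numpy as np

def laplace_ll(y_true, y_pred, sigma):
    sigma_c = np.maximum(sigma, 70.0)                      # floor on sigma
    delta = np.minimum(np.abs(y_pred - y_true), 1000.0)    # cap on the error
    return float(np.mean(-np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)))

score = laplace_ll(np.array([2000.0]), np.array([2070.0]), np.array([70.0]))
print(round(score, 4))  # -6.0093
```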
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating and Manipulating Tensors
Step2: Vector addition
Step3: Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of the values stored in the tensor. Calling a tensor's numpy method returns the tensor's value as a NumPy array:
Step4: Tensor shapes
Step5: Broadcasting
Step6: Exercise 1: Vector operations.
Step7: Solution
Step8: Matrix multiplication
Step9: Tensor reshaping
Step10: You can also use tf.reshape to change the number of dimensions (the "rank") of a tensor.
Step11: Exercise 2: Reshape two tensors so that they can be multiplied.
Step12: Solution
Step13: Another solution is to reshape a into a 6x1 matrix and b into a 1x3 matrix, which yields a 6x3 matrix after multiplication.
Step14: To change the value of a variable, use the assign operation:
Step15: When assigning a new value to a variable, its shape must match the previous shape:
Step16: There is much more about variables that we have not covered here, such as loading and storing. To learn more, see the TensorFlow documentation.
Step17: Solution
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import tensorflow as tf
try:
tf.contrib.eager.enable_eager_execution()
print("TF imported with eager execution!")
except ValueError:
print("TF already imported with eager execution!")
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
ones = tf.ones([6], dtype=tf.int32)
print("ones:", ones)
just_beyond_primes = tf.add(primes, ones)
print("just_beyond_primes:", just_beyond_primes)
twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32)
primes_doubled = primes * twos
print("primes_doubled:", primes_doubled)
some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
print(some_matrix)
print("\nvalue of some_matrix is:\n", some_matrix.numpy())
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy())
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
one = tf.constant(1, dtype=tf.int32)
print("one:", one)
just_beyond_primes = tf.add(primes, one)
print("just_beyond_primes:", just_beyond_primes)
two = tf.constant(2, dtype=tf.int32)
primes_doubled = primes * two
print("primes_doubled:", primes_doubled)
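As an aside, NumPy follows the same broadcasting rules, so the behaviour can be sketched without TensorFlow:

```python
import numpy as np

# Broadcasting: the scalar is conceptually "stretched" to the vector's shape,
# so the element-wise operation applies without materializing copies.
primes = np.array([2, 3, 5, 7, 11, 13], dtype=np.int32)
just_beyond = primes + 1
doubled = primes * 2
print(just_beyond)  # [ 3  4  6  8 12 14]
print(doubled)      # [ 4  6 10 14 22 26]
```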
# Write your code for Task 1 here.
# Task: Square each element in the primes vector, then subtract 1.
def solution(primes):
primes_squared = tf.multiply(primes, primes)
neg_one = tf.constant(-1, dtype=tf.int32)
just_under_primes_squared = tf.add(primes_squared, neg_one)
return just_under_primes_squared
def alternative_solution(primes):
primes_squared = tf.pow(primes, 2)
one = tf.constant(1, dtype=tf.int32)
just_under_primes_squared = tf.subtract(primes_squared, one)
return just_under_primes_squared
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
just_under_primes_squared = solution(primes)
print("just_under_primes_squared:", just_under_primes_squared)
# A 3x4 matrix (2-d tensor).
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# A 4x2 matrix (2-d tensor).
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`; result is 3x2 matrix.
matrix_multiply_result = tf.matmul(x, y)
print(matrix_multiply_result)
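The shape rule for matrix multiplication (the inner dimensions must agree) can be checked with the same numbers in plain NumPy:

```python
import numpy as np

x = np.array([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]])  # shape (3, 4)
y = np.array([[2, 2], [3, 5], [4, 5], [1, 6]])                # shape (4, 2)
result = x.dot(y)  # inner dimensions (4 and 4) match, so the product is (3, 2)
print(result.shape)
print(result[0])   # [35 58]
```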
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x8_matrix = tf.reshape(matrix, [2, 8])
reshaped_4x4_matrix = tf.reshape(matrix, [4, 4])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.numpy())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.numpy())
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4])
one_dimensional_vector = tf.reshape(matrix, [16])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.numpy())
print("1-D vector:")
print(one_dimensional_vector.numpy())
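NumPy's reshape behaves the same way, including the -1 "infer this dimension" shorthand that tf.reshape also supports:

```python
import numpy as np

m = np.arange(1, 17).reshape(8, 2)    # 16 elements as an 8x2 matrix
print(m.reshape(2, 8).shape)          # (2, 8)
print(m.reshape(2, 2, 4).shape)       # (2, 2, 4)
# -1 asks for one dimension to be inferred from the total element count:
print(m.reshape(4, -1).shape)         # (4, 4)
```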
# Write your code for Task 2 here.
# Task: Reshape two tensors in order to multiply them
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
reshaped_a = tf.reshape(a, [2, 3])
reshaped_b = tf.reshape(b, [3, 1])
c = tf.matmul(reshaped_a, reshaped_b)
print("reshaped_a (2x3):")
print(reshaped_a.numpy())
print("reshaped_b (3x1):")
print(reshaped_b.numpy())
print("reshaped_a x reshaped_b (2x1):")
print(c.numpy())
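A sketch of the alternative reshaping mentioned in the description, with a as a 6x1 matrix and b as a 1x3 matrix, so the product is 6x3:

```python
import numpy as np

a = np.array([5, 3, 2, 7, 1, 4]).reshape(6, 1)
b = np.array([4, 6, 3]).reshape(1, 3)
c = a.dot(b)       # outer-product-style result
print(c.shape)     # (6, 3)
print(c[0])        # [20 30 15]
```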
# Create a scalar variable with the initial value 3.
v = tf.contrib.eager.Variable([3])
# Create a vector variable of shape [1, 4], with random initial values,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35))
print("v:", v.numpy())
print("w:", w.numpy())
v = tf.contrib.eager.Variable([3])
print(v.numpy())
tf.assign(v, [7])
print(v.numpy())
v.assign([5])
print(v.numpy())
v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]])
print(v.numpy())
try:
print("Assigning [7, 8, 9] to v")
v.assign([7, 8, 9])
except ValueError as e:
print("Exception:", e)
# Write your code for Task 3 here.
# Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix.
die1 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
die2 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
dice_sum = tf.add(die1, die2)
resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1)
print(resulting_matrix.numpy())
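The same Task 3 simulation can be sketched without TensorFlow; NumPy's integer sampler uses an exclusive upper bound just like `tf.random_uniform` with `maxval=7` (a sketch, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded generator, assumption: any seed works

# 10 throws of each die: integers in [1, 7), i.e. 1..6.
die1 = rng.integers(1, 7, size=(10, 1))
die2 = rng.integers(1, 7, size=(10, 1))

dice_sum = die1 + die2  # elementwise sum, shape (10, 1)

# Concatenate along axis 1 -> a 10x3 matrix: die1, die2, sum.
resulting_matrix = np.concatenate([die1, die2, dice_sum], axis=1)
print(resulting_matrix)
```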
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analysis of the 2016 electoral polls
Step2: We read the data file obtained from Wikipedia and parse it. We keep only those polls that report figures for every party. We could also keep the partial ones, since the Bayesian modeling could accept them.
Step3: We read the model we have defined and run the sampling.
Step4: Results
Step5: Polls and underlying variation
Step6: Current vote estimate
Step7: Below I plot the temporal evolution of the vote for each political party, including the uncertainties estimated for each poll.
Step8: Biases by polling house
Step9: Variability
Step15: Push of the web page
|
<ASSISTANT_TASK:>
Python Code:
def stan_cache(model_code, model_name=None, **kwargs):
    """Use just as you would `stan`"""
code_hash = md5(model_code.encode('ascii')).hexdigest()
if model_name is None:
cache_fn = 'cached-model-{}.pkl'.format(code_hash)
else:
cache_fn = 'cached-{}-{}.pkl'.format(model_name, code_hash)
try:
sm = pickle.load(open(cache_fn, 'rb'))
except:
sm = pystan.StanModel(model_code=model_code)
with open(cache_fn, 'wb') as f:
pickle.dump(sm, f)
else:
print("Using cached StanModel")
return sm.sampling(**kwargs)
def toenglish(s):
spanish = ['ene', 'abr', 'ago', 'dic', 'de mayo de']
english = ['jan', 'apr', 'aug', 'dec', 'may']
for (j, month) in enumerate(spanish):
s = s.replace(month, english[j])
return s
def getPercentage(s):
if (s[0] not in ['0','1','2','3','4','5','6','7','8','9']):
return 0
else:
if (s.find('%') != -1):
return float(s.split('%')[0].replace(',','.')) / 100.0
else:
return float(s.split('\n')[0].replace(',','.')) / 100.0
def getSigma(s):
left = s.find('(')
right = s.find(')')
if (s[left+1:right] in ['?', '-']):
return 0.03
else:
return 1.0 / np.sqrt(float(s[left+1:right]))
def weeksDifference(d1, d2):
monday1 = (d1 - datetime.timedelta(days=d1.weekday()))
monday2 = (d2 - datetime.timedelta(days=d2.weekday()))
return int((monday2 - monday1).days / 7)
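A quick sanity check of the week-counting helper (a sketch with hand-picked dates; it restates the same logic as `weeksDifference` above: both dates are snapped to their Monday, then whole weeks between the two Mondays are counted):

```python
import datetime

def weeks_difference(d1, d2):
    # Snap each date to the Monday of its ISO week, then count whole weeks.
    monday1 = d1 - datetime.timedelta(days=d1.weekday())
    monday2 = d2 - datetime.timedelta(days=d2.weekday())
    return int((monday2 - monday1).days / 7)

# A Wednesday and the Friday of the following week are one week apart.
print(weeks_difference(datetime.date(2015, 12, 2), datetime.date(2015, 12, 11)))  # -> 1
# Monday and Sunday of the same ISO week -> 0.
print(weeks_difference(datetime.date(2015, 12, 7), datetime.date(2015, 12, 13)))  # -> 0
```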
wb = openpyxl.load_workbook("data/sondeos.xlsx")
ws = wb.active
empresas = ['GAD3', 'Encuestamos', 'GESOP', 'Metroscopia', 'Celeste-Tel','Demoscopia Servicios', 'Simple Lógica', 'CIS',
'TNS Demoscopia', 'Invymark', 'NC Report', 'El Espanol', 'DYM', 'Sondaxe', 'Sigma Dos', 'Resultados de las elecciones']
empresaSondeoAll = []
sondeosAll = []
dateAll = []
sigmaAll = []
wbnew = openpyxl.Workbook()
# grab the active worksheet
wsnew = wbnew.active
wsnew.append(['Encuestador','Fecha','Muestra','PP','PSOE','PODEMOS','Cs','IU'])
otrosPartidos = ['F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'R']
nSondeos = 302
loopSondeos = 0
for i in range(130):
empresa = ws['A{0}'.format(i+2)].value
for (loop, emp) in enumerate(empresas):
if (empresa.find(emp) != -1):
empresaSondeo = loop
if (empresaSondeo == len(empresas)-1):
sigma = 0.0001
else:
sigma = getSigma(empresa)
PP = getPercentage(ws['C{0}'.format(i+2)].value)
PSOE = getPercentage(ws['D{0}'.format(i+2)].value)
IU = getPercentage(ws['E{0}'.format(i+2)].value)
PODEMOS = getPercentage(ws['P{0}'.format(i+2)].value)
CS = getPercentage(ws['Q{0}'.format(i+2)].value)
total = PP + PSOE + IU + PODEMOS + CS
otros = 1.0 - total
tmp = ws['B{0}'.format(i+2)].value
if (isinstance(tmp, datetime.date)):
date = tmp
else:
date = dateutil.parser.parse(toenglish(ws['B{0}'.format(i+2)].value.split('-')[-1].lower()))
tmp = date.year + (date.month-1.0) / 12.0
sondeo = [PP, PSOE, IU+PODEMOS, CS, otros]
if (0 not in sondeo):
loopSondeos += 1
sondeosAll.append(sondeo)
sigmaAll.append(sigma)
dateAll.append(date)
empresaSondeoAll.append(empresaSondeo+1)
print ("{0} - {1} {7} - s={8:4.2f} : PP={2:4.2f} - PSOE={3:4.2f} - UP={4:4.2f} - CS={5:4.2f} - Resto={6:4.2f}".format(i,
empresas[empresaSondeo], PP*100, PSOE*100, (IU+PODEMOS)*100, CS*100, otros*100, date, sigma*100))
wsnew.append([empresas[empresaSondeo],'{0}/{1}/{2}'.format(date.day,date.month,date.year-2000),1.0/sigma**2,PP*100,PSOE*100,(IU+PODEMOS)*100,CS*100,0.0])
wbnew.save('new.xlsx')
sondeosAll = np.array(sondeosAll)
nSondeos, nPartidos = sondeosAll.shape
nEmpresas = len(empresas)
print ("Número de sondeos incluidos : {0} de {1} posibles".format(loopSondeos,120))
# Compute week of every poll
weekAll = []
for i in range(nSondeos):
weekAll.append(weeksDifference(dateAll[nSondeos-1], dateAll[i]) + 1)
nDates = max(weekAll)
# Reverse all lists
sondeosAll = sondeosAll[::-1]
empresaSondeoAll.reverse()
weekAll.reverse()
sigmaAll.reverse()
dictionary = {'NPartidos': nPartidos, 'NSondeos': nSondeos, 'NEmpresas': nEmpresas-1, 'NDates': nDates, 'empresa': empresaSondeoAll, 'sondeos': sondeosAll,
'date': weekAll, 'sigmaSondeo': sigmaAll, 'alpha': np.ones(nPartidos)*1.0}
date.year-2000
f = open('model.stan', 'r')
model = f.read()
f.close()
out = stan_cache(model, model_name='elecciones', data=dictionary, chains=1)
thetaChain = out.extract('theta')['theta']
theta = np.percentile(thetaChain, [50.0, 50.0-68/2.0, 50.0+68/2], axis=0)
houseChain = out.extract('house')['house']
house = np.percentile(houseChain, [50.0, 50.0-68/2.0, 50.0+68/2], axis=0)
sigmaChain = out.extract('sigma')['sigma']
sigma = np.percentile(sigmaChain, [50.0, 50.0-68/2.0, 50.0+68/2], axis=0)
colors = ["blue", "red", "violet", "orange", "yellow"]
labelsPartidos = ['PP', 'PSOE', 'PODEMOS+IU', 'Cs', 'Otros']
labelsEmpresas = ['GAD3', 'Encuest', 'GESOP', 'Metroscopia', 'Celeste',' Demosc. S.', 'S. Lógica', 'CIS', 'TNS Demos.',
'Invymark', 'NC Rep.', 'Netquest']
f, ax = pl.subplots(figsize=(13,8))
for i in range(5):
ax.plot(weekAll, sondeosAll[:,i], '.', color=sns.xkcd_rgb[colors[i]], linewidth=2)
ax.plot(np.arange(max(weekAll))+1, theta[0,:,i], color=sns.xkcd_rgb[colors[i]], linewidth=2)
ax.fill_between(np.arange(max(weekAll))+1, theta[1,:,i], theta[2,:,i], color=sns.xkcd_rgb[colors[i]], alpha=0.3)
ax.set_xlabel('Semana')
ax.set_ylabel('Fracción de votos')
ax.axvline(49)
pl.savefig("sondeos.png")
f, ax = pl.subplots()
sns.boxplot(data=thetaChain[:,-1,:], ax=ax)
ax.set_xticks(np.arange(5))
ax.set_xticklabels(labelsPartidos, rotation=90)
boxes = ax.artists
for (i, box) in enumerate(boxes):
box.set_facecolor(sns.xkcd_rgb[colors[i]])
pl.savefig("estimacionActual.png")
labelsPartidos = ['PP', 'PSOE', 'PODEMOS+IU', 'Cs', 'Otros']
f, ax = pl.subplots(nrows=2, ncols=3, figsize=(10,8))
ax = ax.flatten()
for i in range(5):
ax[i].plot(weekAll, sondeosAll[:,i], '.', color=sns.xkcd_rgb[colors[i]], linewidth=2)
ax[i].set_ylim([0,0.35])
ax[i].errorbar(weekAll, sondeosAll[:,i], fmt='none', yerr=sigmaAll, color=sns.xkcd_rgb[colors[i]], linewidth=2, ecolor=sns.xkcd_rgb[colors[i]])
ax[i].plot(np.arange(max(weekAll))+1, theta[0,:,i], color=sns.xkcd_rgb[colors[i]], linewidth=2)
ax[i].fill_between(np.arange(max(weekAll))+1, theta[1,:,i], theta[2,:,i], color=sns.xkcd_rgb[colors[i]], alpha=0.3)
ax[i].set_xlabel('Semana')
ax[i].set_xlabel('Porcentaje de votos')
ax[i].set_title(labelsPartidos[i])
pl.tight_layout()
pl.savefig("porPartido.png")
f, ax = pl.subplots(ncols=3, nrows=2, figsize=(12,10))
ax = ax.flatten()
for i in range(5):
sns.boxplot(data=houseChain[:,:,i], ax=ax[i])
ax[i].set_xticks(np.arange(12))
ax[i].set_xticklabels(labelsEmpresas, rotation=90)
ax[i].set_title(labelsPartidos[i])
pl.tight_layout()
pl.savefig("bias.png")
f, ax = pl.subplots(figsize=(10,6))
sns.boxplot(data=sigmaChain, ax=ax)
ax.set_xticks(np.arange(5))
ax.set_xticklabels(labelsPartidos, rotation=90)
boxes = ax.artists
for (i, box) in enumerate(boxes):
box.set_facecolor(sns.xkcd_rgb[colors[i]])
pl.savefig("variabilidad.png", facecolor=f.get_facecolor(), edgecolor='none')
f = open('elecciones2015/index.html','r')
lines = f.readlines()
f.close()
fecha = datetime.datetime.now()
lines[29] = """<a id="welcome-to-github-pages" class="anchor" href="#welcome-to-github-pages" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Resultado {0}/{1}/{2}</h3>\n""".format(fecha.day,fecha.month,fecha.year)
theta = np.percentile(thetaChain, [50.0, 50.0-95/2.0, 50.0+95/2], axis=0)
lines[33] = """<td style='height: 50px; background: #02B3F4; color: white; font-size: 25px; border-radius: 1px 10px 0px 0px;'>{0:4.1f}-{1:4.1f}-{2:4.1f}</td>\n""".format(theta[1,-1,0]*100,theta[0,-1,0]*100,theta[2,-1,0]*100)
lines[34] = """<td style='height: 50px; background: #8E1744; color: white; font-size: 25px; border-radius: 1px 10px 0px 0px;'>{0:4.1f}-{1:4.1f}-{2:4.1f}</td>\n""".format(theta[1,-1,2]*100,theta[0,-1,2]*100,theta[2,-1,2]*100)
lines[35] = """<td style='height: 50px; background: #FF0202; color: white; font-size: 25px; border-radius: 1px 10px 0px 0px;'>{0:4.1f}-{1:4.1f}-{2:4.1f}</td>\n""".format(theta[1,-1,1]*100,theta[0,-1,1]*100,theta[2,-1,1]*100)
lines[36] = """<td style='height: 50px; background: #FF800E; color: white; font-size: 25px; border-radius: 1px 10px 0px 0px;'>{0:4.1f}-{1:4.1f}-{2:4.1f}</td>\n""".format(theta[1,-1,3]*100,theta[0,-1,3]*100,theta[2,-1,3]*100)
f = open('elecciones2015/index.html','w')
f.writelines(lines)
f.close()
os.system('./update.sh')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define a function that returns data to draw vectors of a vector field with values in the plane yOz or xOy (i.e. electric or magnetic field vectors, in our case).
Step3: Define data for fixed traces in each frame (tr0, tr1, tr2, described below), representing the two orthogonal planes of the electric and magnetic field, and their common line
Step4: Set the wave parameters and the interval of simulation
Step5: Define data representing the vectors of the two vector fields
Step6: The following four traces are the base traces updated by the animation frames
Step7: Define data to be plotted in each animation frame
Step8: Set the plot layout
|
<ASSISTANT_TASK:>
Python Code:
import plotly
plotly.__version__
import numpy as np
import plotly.graph_objs as go
def rot_matrix(theta):
return np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
def get_arrows(start, end, arrow_angle, plane=2, fract=0.1):
    """This function defines 3d vectors/quivers.

    - start - numpy array of x, y, z-coordinates of the arrow starting points; shape (3, m);
      start[0,:] contains x-coordinates, etc.
    - end - numpy array with the same shape as start; contains on rows the x, y and z-coords
      of the ending points of the arrows
    - the arrowhead is an isosceles triangle with the equal sides forming an angle of 2*arrow_angle radians
    - plane=0 or 2, depending on the plane where the vectors are drawn (plane=0, i.e. x=0; plane=2, z=0)
    """
start = np.asarray(start)
end = np.asarray(end)
m = start[0,:].shape[0]
arr_dir = start-end
arr_dir_norm = np.linalg.norm(arr_dir, axis=0)
arr_dir = fract*arr_dir/arr_dir_norm[None,:] # the arrowhead is a fraction fract from the unit vector
if plane == 2:
v = np.einsum('ji, im -> jm', rot_matrix(arrow_angle), arr_dir[:plane,:]) # Einstein summation # rotation to all vectors at a time
w = np.einsum('ji, im -> jm', rot_matrix(-arrow_angle), arr_dir[:plane, :])
v = np.append(v, [[0]*m], axis=0)
w = np.append(w, [[0]*m], axis=0)
elif plane == 0:
v = np.einsum('ji, im -> jm', rot_matrix(arrow_angle), arr_dir[1:,:])
w = np.einsum('ji, im -> jm', rot_matrix(-arrow_angle), arr_dir[1:, :])
v = np.append([[0]*m], v, axis=0)
w = np.append([[0]*m], w, axis=0)
else: raise ValueError('the key plane must be 0 or 2')
p = end+v
q = end+w
suppx = np.stack((start[0,:], end[0,:], np.nan*np.ones(m ))) #supp is the line segment as support for arrow
suppy = np.stack((start[1,:], end[1,:], np.nan*np.ones(m )))
suppz = np.stack((start[2,:], end[2,:], np.nan*np.ones(m )))
x = suppx.flatten('F')#Fortran type flattening
y = suppy.flatten('F')
z = suppz.flatten('F')
x = list(map(lambda u: None if np.isnan(u) else u, x))
y = list(map(lambda u: None if np.isnan(u) else u, y))
z = list(map(lambda u: None if np.isnan(u) else u, z))
#headx, heady, headz are the x, y, z coordinates of the triangle vertices
headx = np.stack((end[0,:], p[0,:], q[0,:], end[0,:], np.nan*np.ones(m)))
heady = np.stack((end[1,:], p[1,:], q[1,:], end[1,:], np.nan*np.ones(m)))
headz = np.stack((end[2,:], p[2,:], q[2,:], end[2,:], np.nan*np.ones(m)))
xx = headx.flatten('F')
yy = heady.flatten('F')
zz = headz.flatten('F')
xx = list(map(lambda u: None if np.isnan(u) else u, xx))
yy = list(map(lambda u: None if np.isnan(u) else u, yy))
zz = list(map(lambda u: None if np.isnan(u) else u, zz))
return x, y, z, xx, yy, zz
a = 2
b = 5
xblue = [-a, a, a , -a, -a]
yblue = [-b, -b, b, b, -b]
zblue = [0]*5
xred = [0]*5+[None, 0, 0, 0]
yred = [-b, -b, b, b, -b, None, -b, b]
zred = [a, -a, -a, a, a, None, 0, 0]
x_Oy = [0, 0]
y_Oy = [-b, b]
z_Oy = [0, 0]
A = 1 # wave amplitude
lambdA = 0.5 # wavelength
omega = 1 # angular frequency
t = np.arange(0., 10., 0.2)# interval of simulation
Y = np.arange(-b, b, 0.2) # a grid of an interval on Oy, where the vector fields are evaluated
X = np.zeros(Y.shape[0])
ZZe = np.zeros(Y.shape[0])
nr_frames = t.shape[0]
theta = np.pi/13 # the characteristic angle of each arrow
start = np.stack((X, Y, np.zeros(X.shape))) # the numpy array of starting points of the the two classes of vectors
start.shape
Ze = A*np.sin(Y/lambdA-omega*t[0])
end1 = np.stack((X, Y, Ze))
x1, y1, z1, xx1, yy1, zz1 = get_arrows(start, end1, theta, plane=0)
XXe = A*np.sin(Y/lambdA-omega*t[0])
end2 = np.stack((XXe, Y, ZZe))
x2, y2, z2, xx2, yy2, zz2 = get_arrows(start, end2, theta, plane=2)
tr0 = dict(type='scatter3d', # a rectangle in xOy
x=xblue,
y=yblue,
z=zblue,
mode='lines',
line=dict(width=1.5, color='blue'))
tr1 = dict(type='scatter3d',# a rectangle in yOz
x=xred,
y=yred,
z=zred,
mode='lines',
line=dict(width=1.5, color='red'))
tr2 = dict(type='scatter3d',#line of direction Oy
x=x_Oy,
y=y_Oy,
z=z_Oy,
mode='lines',
line=dict(width=1.5, color='rgb(140,140,140)'))
tr3 = dict(
type='scatter3d',
x=x1,
y=y1,
z=z1,
mode='lines',
line=dict(color='red', width=1.5),
name=''
)
tr4 = dict(
type='scatter3d',
x=xx1,
y=yy1,
z=zz1,
mode='lines',
line=dict(color='red', width=2),
name=''
)
tr5 = dict(
type='scatter3d',
x=x2,
y=y2,
z=z2,
mode='lines',
line=dict(color='blue', width=1.5),
name=''
)
tr6 = dict(
type='scatter3d',
x=xx2,
y=yy2,
z=zz2,
mode='lines',
line=dict(color='blue', width=2),
name=''
)
data = [tr0, tr1, tr2, tr3, tr4, tr5, tr6]
frames=[]
for k in range(nr_frames):
Ze = A*np.sin(Y/lambdA-omega*t[k])
end1 = np.stack((X, Y, Ze))
x1, y1, z1, xx1, yy1, zz1 = get_arrows(start, end1, theta, plane=0)
XXe = A*np.sin(Y/lambdA-omega*t[k])
end2 = np.stack((XXe, Y, ZZe))
x2, y2, z2, xx2, yy2, zz2 = get_arrows(start, end2, theta, plane=2)
frames += [dict(data=[dict(type='scatter3d',
x=x1,
y=y1,
z=z1),
dict(type='scatter3d',
x=xx1,
y=yy1,
z=zz1),
dict(type='scatter3d',
x=x2,
y=y2,
z=z2),
dict(type='scatter3d',
x=xx2,
y=yy2,
z=zz2)],
traces=[3, 4, 5, 6]
)]
title='Electromagnetic wave propagating in the positive Oy direction<br>'+\
'The electric field vectors (red) are included in the yz-plane,<br> and the magnetic field vectors (blue), in xy'
layout = dict(title=title,
font=dict(family='Balto'),
autosize=False,
width=700,
height=700,
showlegend=False,
scene=dict(camera = dict(eye=dict(x=1.22, y=0.55, z=0.3)),
aspectratio=dict(x=1, y=1, z=0.65),
xaxis_visible=False,
yaxis_visible=False,
zaxis_visible=False,
),
updatemenus=[dict(type='buttons', showactive=False,
y=0.75,
x=1.05,
xanchor='left',
yanchor='top',
pad=dict(t=0, l=10),
buttons=[dict(label='Play',
method='animate',
args=[None,
dict(frame=dict(duration=80,
redraw=True),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate'
)
]
)
]
)
]
)
import chart_studio.plotly as py
fig = go.Figure(data=data, layout=layout, frames=frames)
py.iplot(fig, filename='anim-electromagwave')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset above contains the ice cream sales, temperature, number of deaths by drowning and humidity level in a city during a timespan of 12 months.
Step2: Identify strong (i.e., correlation coefficient > 0.9) and meaningful correlations among pairs of columns in this dataset.
Step3: Clue
|
<ASSISTANT_TASK:>
Python Code:
# Run the following to import necessary packages and import dataset. Do not use any additional plotting libraries.
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
datafile = "dataset/icecream.csv"
df = pd.read_csv(datafile)
df
# Here are the correlation coefficients between pairs of columns
corr = df.corr()
corr
abs_corr = np.abs(df.corr())
indices = corr.index
corr_pairs = []
for i, idx_i in enumerate(indices):
for j, c in enumerate(abs_corr[idx_i]):
if c > .9 and i < j:
corr_pairs.append((idx_i, indices[j]))
corr_pairs
correlations = []
correlations.append(['ice_cream_sales', 'temperature'])
# do not touch
correlations.sort()
print(correlations)
# meaningful_correlation.append(['column_x', 'column_y'])
correlations_clue = []
# do not touch
correlations_clue.sort()
print(correlations_clue)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We define the model, adapted from the Keras CIFAR-10 example
Step2: We train the model using the
Step3: Now let's train the model again, using the XLA compiler.
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# Check that GPU is available: cf. https://colab.research.google.com/notebooks/gpu.ipynb
assert(tf.test.is_gpu_available())
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False) # Start with XLA disabled.
def load_data():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 256
x_test = x_test.astype('float32') / 256
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
return ((x_train, y_train), (x_test, y_test))
(x_train, y_train), (x_test, y_test) = load_data()
def generate_model():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(32, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding='same'),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(64, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10),
tf.keras.layers.Activation('softmax')
])
model = generate_model()
def compile_model(model):
opt = tf.keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = compile_model(model)
def train_model(model, x_train, y_train, x_test, y_test, epochs=25):
model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
def warmup(model, x_train, y_train, x_test, y_test):
# Warm up the JIT, we do not wish to measure the compilation time.
initial_weights = model.get_weights()
train_model(model, x_train, y_train, x_test, y_test, epochs=1)
model.set_weights(initial_weights)
warmup(model, x_train, y_train, x_test, y_test)
train_model(model, x_train, y_train, x_test, y_test)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# We need to clear the session to enable JIT in the middle of the program.
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True) # Enable XLA.
model = compile_model(generate_model())
(x_train, y_train), (x_test, y_test) = load_data()
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set file names and directories
Step2: Plot the glacier outlines based on their specific mass balance
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import matplotlib.pyplot as plt
# The two statements below are used mainly to set up a plotting
# default style that's better than the default from matplotlib
#import seaborn as sns
plt.style.use('bmh')
from shapely.geometry import Point
#import pandas as pd
import geopandas as gpd
from geopandas import GeoSeries, GeoDataFrame
file_pth = 'rgi_centralasia/13_rgi32_CentralAsia.shp'
rgi_glac = gpd.read_file(file_pth)
timeframe='[time between DEMs]'
rgi_glac.head()
# test data set-up
gdf = rgi_glac
gdf.plot()
# test data set-up
import random
my_randoms = random.sample(range(-50, 50), 15)
gdf["spec"]= my_randoms
gdf.to_file("rgi_test.shp")
f, ax = plt.subplots(1, figsize=(6, 4))
rgi_glac.plot(column='spec', scheme='fisher_jenks', k=7,
              alpha=0.9, cmap=plt.cm.Blues, legend=True, ax=ax)
plt.axis('equal')
ax.set_title('Specific Mass Balance'+timeframe)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Definitions
Step2: $\lambda_0$ and $s_0$
Step3: Case
Step4: Initialize AIM solver
Step5: Calculation necessary coefficients
Step6: The Solution
Step7: Case
Step8: Case
Step9: Case
Step10: Eigenvalues and Eigenfucntions for $\alpha = 4, \gamma = 5, \lambda = 100$
|
<ASSISTANT_TASK:>
Python Code:
# Python program to use AIM tools
from asymptotic import *
# symengine (symbolic) variables for lambda_0 and s_0
En, r = se.symbols("En, r")
r0, m, λ, γ = se.symbols("r0, m, λ, γ")
# lambda_0 and s_0
l0 = 2*r - 2*se.sqrt(λ)/r**(m+1) - (1 + m)/r
s0 = se.expand(2 + m + 2*se.sqrt(λ)/r**m + ((2*γ+1-m)*(2*γ+1+m))/(4*r**2) - En)
nr0 = o* 3
nλ = o* 1/10
nm = o* 1/2
nγ = o* 2
nα = 2* (nm + 1)
print('nα =', nα)
# parameters of lambda_0 (pl0) and s_0 (ps0)
pl0 = {λ: nλ, m:nm}
ps0 = {λ: nλ, m:nm, γ:nγ}
%%time
# pass lambda_0, s_0 and variable values to aim class
singular = aim(l0, s0, pl0, ps0)
singular.display_parameters()
singular.display_l0s0(0)
singular.parameters(En, r, nr0, nmax=51, nstep=10, dprec=1000, tol=1e-101)
%%time
# create coefficients for improved AIM
singular.c0()
singular.d0()
singular.cndn()
%%time
singular.get_arb_roots(showRoots='+r', printFormat="{:25.20f}")
nr0 = o* 3
nλ = o* 1/10
nm = o* 1
nγ = o* 0
nα = 2* (nm + 1)
print('nα =', nα)
# parameters of lambda_0 (pl0) and s_0 (ps0)
pl0 = {λ: nλ, m:nm}
ps0 = {λ: nλ, m:nm, γ:nγ}
# pass lambda_0, s_0 and variable values to aim class
singular = aim(l0, s0, pl0, ps0)
singular.display_parameters()
singular.display_l0s0(0)
singular.parameters(En, r, nr0, nmax=51, nstep=10, dprec=500, tol=1e-101)
# create coefficients for improved AIM
singular.c0()
singular.d0()
singular.cndn()
singular.get_arb_roots(showRoots='+r', printFormat="{:25.20f}")
nm = o* 1
nα = 2* (nm + 1)
print('nα =', nα)
nr0 = o* 3
for nλ in [o* 1000/10**gi for gi in range(7)]:
print('\033[91m\033[1m{linea} λ ={fnλ:>8s} {linea}\033[0m'.format(linea = '='*30, fnλ = str(nλ)))
for nγ in [o* gi for gi in range(6)]:
print('{linea} γ ={fnγ:>2s} {linea}'.format(linea = '-'*30, fnγ = str(nγ)))
# parameters of lambda_0 (pl0) and s_0 (ps0)
pl0 = {λ: nλ, m:nm}
ps0 = {λ: nλ, m:nm, γ:nγ}
# pass lambda_0, s_0 and variable values to aim class
singular = aim(l0, s0, pl0, ps0)
#singular.display_parameters()
#singular.display_l0s0(0)
singular.parameters(En, r, nr0, nmax=51, nstep=10, dprec=500, tol=1e-101)
# create coefficients for improved AIM
singular.c0()
singular.d0()
singular.cndn()
singular.get_arb_roots(showRoots='+r', printFormat="{:25.20f}")
# symengine (symbolic) variables for lambda_0 and s_0
En, r = se.symbols("En, r")
r0, α, λ, γ = se.symbols("r0, α, λ, γ")
# lambda_0 and s_0
l0 = 2*(r - (1 + γ)/r)
s0 = 2*γ + 3 + λ/r**α - En
nα = o* 1
nr0 = o* 5
for nλ in [o* 1000/10**gi for gi in range(10)]:
print('\033[91m\033[1m{linea} λ ={fnλ:>11s} {linea}\033[0m'.format(linea = '='*30, fnλ = str(nλ)))
for nγ in [o* gi for gi in range(1)]:
print('{linea} γ ={fnγ:>2s} {linea}'.format(linea = '-'*30, fnγ = str(nγ)))
# parameters of lambda_0 (pl0) and s_0 (ps0)
pl0 = {λ: nλ, γ:nγ}
ps0 = {λ: nλ, γ:nγ, α:nα}
# pass lambda_0, s_0 and variable values to aim class
singular = aim(l0, s0, pl0, ps0)
#singular.display_parameters()
#singular.display_l0s0(0)
singular.parameters(En, r, nr0, nmax=51, nstep=10, dprec=500, tol=1e-101)
# create coefficients for improved AIM
singular.c0()
singular.d0()
singular.cndn()
singular.get_arb_roots(showRoots='+r', printFormat="{:25.20f}")
!pip3 install scipy
from scipy.integrate import odeint
# https://perso.crans.org/besson/publis/notebooks/Runge-Kutta_methods_for_ODE_integration_in_Python.html
%matplotlib notebook
import scipy.optimize as so
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
V = lambda r, α, β, γ, λ: r**β + γ*(γ+1)/r**2 + λ/r**α
# potential parameters
αβγλ = 4, 2, 5, 100
# Energy Eigenvalues
Estop = 34
Espectrum = [15.54392598171621432978,
19.92653800987177727185,
24.26619272363845153429,
28.57312197406556227733,
Estop]
# Draw a line for each eigenvalue
EigenLine = []
for Ei in Espectrum:
Vr = lambda x: V(x, *αβγλ) - Ei
rL, rR = so.newton(Vr, 1e-1), so.newton(Vr, 6)
EigenLine += [[[rL, rR], [Ei, Ei]]]
# create colors for the lines
Eimin = .7 * Espectrum[0]/Espectrum[-1]
Eimax = .7
cm_subsection = np.linspace(Eimin, Eimax, len(Espectrum))
colors = [ cm.hot_r(x) for x in cm_subsection ]
Scale = 4.5
plt.figure(figsize=(Scale, Scale*1.618))
for i, Ei in enumerate(EigenLine[:-1]):
plt.axhline(Espectrum[i], color=colors[i], linestyle=':')
plt.plot(*Ei, color=colors[i])
for i, Ei in enumerate(Espectrum[0:-1]):
def pend(y, t):
return np.array([y[1], -(Ei - V(t, *αβγλ))*y[0]])
y0L = 1e-1, 1e-1
y0R = (-1)**i * 1e-1, (-1)**i * 1e-1
rL, rR = EigenLine[i][0]
rM = (rL + rR)/2
tL = np.linspace(1e-1, rM, 101)
tR = np.linspace(9, rM, 101)
solL = odeint(pend, y0L, tL)
solR = odeint(pend, y0R, tR)
WL = solL[:, 0]/abs(solL[:, 0][-1])
WR = solR[:, 0]/abs(solR[:, 0][-1])
Wmax = max(max(abs(WL)), max(abs(WR)))
plt.plot(tL, Ei + 1.6*WL/Wmax, color=colors[i])
plt.plot(tR, Ei + 1.6*WR/Wmax, color=colors[i])
rs = np.linspace(*EigenLine[-1][0], 500)
plt.plot(rs, V(rs, *αβγλ), color='blue')
plt.xlabel('r [arbitrary]', fontsize=14)
plt.ylabel('$\psi(r)$, V(r), E [all arbitrary]', fontsize=14)
#plt.grid(color='lightgray')
plt.grid(axis='x', linestyle='--', color='lightgray')
#plt.grid(linestyle='--', color='lightgray')
plt.xlim(EigenLine[-1][0])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
# GRADED FUNCTION: basic_sigmoid
import math
import numpy as np
def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + math.exp(-1 * x))
### END CODE HERE ###
return s
basic_sigmoid(3)
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
print(basic_sigmoid(x))
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
#s = np.exp(np.multiply(-1, x))
#s = np.divide(1, np.add(1, s))
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
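One way to convince yourself the analytic gradient is right is a central finite-difference check (a sketch, not part of the graded exercise; the step size 1e-6 and tolerance are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

x = np.array([1.0, 2.0, 3.0])
eps = 1e-6

# Central difference: (f(x+eps) - f(x-eps)) / (2*eps) approximates f'(x).
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic = sigmoid_derivative(x)
print(np.max(np.abs(numeric - analytic)))  # should be tiny
```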
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
    ### x has shape (n, m); each of the n rows is scaled to unit length (axis=1 below)
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
#x_norm = None
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims=True)
# print ('===================================')
# print (x.shape)
# print (x_norm.shape)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
# GRADED FUNCTION: softmax
def softmax(x):
"""
Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
loss = 0
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = (np.dot((y-yhat), (y-yhat)))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
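Both losses reduce to one vectorized numpy expression, which is the point of the exercise: no Python loop is needed. A self-contained recap on the same data, with the values worked out by hand as a check (|diffs| = .1, .2, .1, .6, .1 → L1 = 1.1; squared = .01, .04, .01, .36, .01 → L2 = 0.43):

```python
import numpy as np

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])

# L1 loss: sum of absolute differences
l1 = np.sum(np.abs(y - yhat))
# L2 loss: sum of squared differences, written as a dot product
l2 = np.dot(y - yhat, y - yhat)
```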
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exact
Step2: Grid
Step3: Laplace
Step4: ADVI
Step5: HMC
|
<ASSISTANT_TASK:>
Python Code:
# https://github.com/probml/pyprobml/blob/master/scripts/beta_binom_approx_post_pymc3.py
# 1d approixmation to beta binomial model
# https://github.com/aloctavodia/BAP
# import superimport
import pymc3 as pm
import numpy as np
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
import math
# import pyprobml_utils as pml
# data = np.repeat([0, 1], (10, 3))
data = np.repeat([0, 1], (10, 1))
h = data.sum()
t = len(data) - h
plt.figure()
x = np.linspace(0, 1, 100)
xs = x # grid
dx_exact = xs[1] - xs[0]
post_exact = stats.beta.pdf(xs, h + 1, t + 1)
post_exact = post_exact / np.sum(post_exact)
plt.plot(xs, post_exact)
plt.yticks([])
plt.title("exact posterior")
plt.savefig("bb_exact.pdf")
# Grid
def posterior_grid(heads, tails, grid_points=100):
grid = np.linspace(0, 1, grid_points)
prior = np.repeat(1 / grid_points, grid_points) # uniform prior
likelihood = stats.binom.pmf(heads, heads + tails, grid)
posterior = likelihood * prior
posterior /= posterior.sum()
# posterior = posterior * grid_points
return grid, posterior
n = 20
grid, posterior = posterior_grid(h, t, n)
dx_grid = grid[1] - grid[0]
sf = dx_grid / dx_exact # Jacobian scale factor
plt.figure()
# plt.stem(grid, posterior, use_line_collection=True)
plt.bar(grid, posterior, width=1 / n, alpha=0.2)
plt.plot(xs, post_exact * sf)
plt.title("grid approximation")
plt.yticks([])
plt.xlabel("θ")
plt.savefig("bb_grid.pdf")
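Once the grid posterior is in hand, point estimates are just weighted sums over the grid. The sketch below re-derives the grid for the same data (h = 1 head, t = 10 tails) in plain numpy, dropping constant factors since they cancel after normalization, and checks the grid posterior mean against the exact Beta(h+1, t+1) mean (h+1)/(h+t+2). This is a self-contained re-derivation, not code from the notebook.

```python
import numpy as np

# grid posterior for h = 1 head out of h + t = 11 Bernoulli trials
h, t, grid_points = 1, 10, 100
grid = np.linspace(0, 1, grid_points)
likelihood = grid**h * (1 - grid)**t       # binomial pmf up to a constant, which cancels
posterior = likelihood / likelihood.sum()  # uniform prior also cancels

post_mean = np.sum(grid * posterior)       # E[theta] under the grid approximation
exact_mean = (h + 1) / (h + t + 2)         # mean of the exact Beta(h+1, t+1) posterior
```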
# Laplace
with pm.Model() as normal_aproximation:
theta = pm.Beta("theta", 1.0, 1.0)
y = pm.Binomial("y", n=1, p=theta, observed=data) # Bernoulli
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[theta])) ** 0.5)[0]
mu = mean_q["theta"]
print([mu, std_q])
plt.figure()
plt.plot(xs, stats.norm.pdf(xs, mu, std_q), "--", label="Laplace")
post_exact = stats.beta.pdf(xs, h + 1, t + 1)
plt.plot(xs, post_exact, label="exact")
plt.title("Quadratic approximation")
plt.xlabel("θ", fontsize=14)
plt.yticks([])
plt.legend()
plt.savefig("bb_laplace.pdf");
# ADVI
with pm.Model() as mf_model:
theta = pm.Beta("theta", 1.0, 1.0)
y = pm.Binomial("y", n=1, p=theta, observed=data) # Bernoulli
mean_field = pm.fit(method="advi")
trace_mf = mean_field.sample(1000)
thetas = trace_mf["theta"]
axes = az.plot_posterior(thetas, hdi_prob=0.95)
plt.savefig("bb_mf.pdf")
plt.show()
# track mean and std
with pm.Model() as mf_model:
theta = pm.Beta("theta", 1.0, 1.0)
y = pm.Binomial("y", n=1, p=theta, observed=data) # Bernoulli
advi = pm.ADVI()
tracker = pm.callbacks.Tracker(
mean=advi.approx.mean.eval, std=advi.approx.std.eval # callable that returns mean # callable that returns std
)
approx = advi.fit(callbacks=[tracker])
trace_approx = approx.sample(1000)
thetas = trace_approx["theta"]
axes = az.plot_posterior(thetas, hdi_prob=0.95)
plt.savefig("bb_mf_kde_az.pdf")
plt.figure()
plt.plot(tracker["mean"])
plt.title("Mean")
plt.savefig("bb_mf_mean.pdf")
plt.figure()
plt.plot(tracker["std"])
plt.title("Std ")
plt.savefig("bb_mf_std.pdf")
plt.figure()
plt.plot(advi.hist)
plt.title("Negative ELBO")
plt.savefig("bb_mf_elbo.pdf")
plt.figure()
sns.kdeplot(thetas)
plt.title("KDE of posterior samples")
plt.savefig("bb_mf_kde.pdf")
plt.show()
# HMC
with pm.Model() as hmc_model:
theta = pm.Beta("theta", 1.0, 1.0)
y = pm.Binomial("y", n=1, p=theta, observed=data) # Bernoulli
trace = pm.sample(1000, random_seed=42, cores=1, chains=2)
thetas = trace["theta"]
axes = az.plot_posterior(thetas, hdi_prob=0.95)
plt.savefig("bb_hmc.pdf")
az.plot_trace(trace)
plt.savefig("bb_hmc_trace.pdf", dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Finally, we generate our training and test sets in the scikit-learn format
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Reading the dataset
df = pd.read_csv('../datasets/titanic/train.csv')
print("Original table")
df.head()
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df = df.dropna()
df['Gender'] = df['Sex'].map({'female': 0, 'male':1}).astype(int)
df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df = df.drop(['Sex', 'Embarked'], axis=1)
cols = df.columns.tolist()
cols = [cols[1]] + cols[0:1] + cols[2:]
df = df[cols]
print("Table after processing")
df.head()
dataset = {
'data': df.values[0:,2:],
'target': df.values[0:,0]
}
X = dataset['data']
y = dataset['target']
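With `X` of shape `(n_samples, n_features)` and `y` of shape `(n_samples,)`, a typical next step is a train/test split plus a baseline classifier. The sketch below uses a synthetic dataset from `make_classification` so it runs without the Titanic CSV; the 80/20 split and the logistic-regression baseline are this example's choices, not something prescribed by the notebook.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for the Titanic (X, y) arrays built above
X_demo, y_demo = make_classification(n_samples=500, n_features=7, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X_demo, y_demo, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)  # held-out accuracy
```

Swapping `X_demo, y_demo` for the real `X, y` above gives a first Titanic baseline to beat.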
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By default, JAX uses single-precision (float32) numbers
Step2: Random numbers in JAX
Step3: Forward mode vs. backward mode
Step4: Forward mode vs. backward mode
Step5: Hessian-by-vector product
|
<ASSISTANT_TASK:>
Python Code:
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)
@jax.jit
def f(x, A, b):
res = A @ x - b
return res @ res
gradf = jax.grad(f, argnums=0, has_aux=False)
n = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(gradf(x, A, b) - 2 * A.T @ (A @ x - b)))
print("Compare speed")
print("Analytical gradient")
%timeit 2 * A.T @ (A @ x - b)
print("Grad function")
%timeit gradf(x, A, b).block_until_ready()
jit_gradf = jax.jit(gradf)
print("Jitted grad function")
%timeit jit_gradf(x, A, b).block_until_ready()
hess_func = jax.jit(jax.hessian(f))
print("Check correctness", jnp.linalg.norm(2 * A.T @ A - hess_func(x, A, b)))
print("Time for hessian")
%timeit hess_func(x, A, b).block_until_ready()
print("Emulate hessian and check correctness",
jnp.linalg.norm(jax.jit(hess_func)(x, A, b) - jax.jacfwd(jax.jacrev(f))(x, A, b)))
print("Time of emulating hessian")
hess_umul_func = jax.jit(jax.jacfwd(jax.jacrev(f)))
%timeit hess_umul_func(x, A, b).block_until_ready()
fmode_f = jax.jit(jax.jacfwd(f))
bmode_f = jax.jit(jax.jacrev(f))
print("Check correctness", jnp.linalg.norm(fmode_f(x, A, b) - bmode_f(x, A, b)))
print("Forward mode")
%timeit fmode_f(x, A, b).block_until_ready()
print("Backward mode")
%timeit bmode_f(x, A, b).block_until_ready()
def fvec(x, A, b):
y = A @ x + b
return jnp.exp(y - jnp.max(y)) / jnp.sum(jnp.exp(y - jnp.max(y)))
grad_fvec = jax.jit(jax.grad(fvec))
jac_fvec = jax.jacobian(fvec)
fmode_fvec = jax.jit(jax.jacfwd(fvec))
bmode_fvec = jax.jit(jax.jacrev(fvec))
n = 1000
m = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (m, n))
b = jax.random.normal(jax.random.PRNGKey(0), (m, ))
J = jac_fvec(x, A, b)
print(J.shape)
grad_fvec(x, A, b)
print("Check correctness", jnp.linalg.norm(fmode_fvec(x, A, b) - bmode_fvec(x, A, b)))
print("Check shape", fmode_fvec(x, A, b).shape, bmode_fvec(x, A, b).shape)
print("Time forward mode")
%timeit fmode_fvec(x, A, b).block_until_ready()
print("Time backward mode")
%timeit bmode_fvec(x, A, b).block_until_ready()
n = 10
m = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (m, n))
b = jax.random.normal(jax.random.PRNGKey(0), (m, ))
print("Check correctness", jnp.linalg.norm(fmode_fvec(x, A, b) - bmode_fvec(x, A, b)))
print("Check shape", fmode_fvec(x, A, b).shape, bmode_fvec(x, A, b).shape)
print("Time forward mode")
%timeit fmode_fvec(x, A, b).block_until_ready()
print("Time backward mode")
%timeit bmode_fvec(x, A, b).block_until_ready()
def hvp(f, x, z, *args):
def g(x):
return f(x, *args)
return jax.jvp(jax.grad(g), (x,), (z,))[1]
n = 3000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
z = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(2 * A.T @ (A @ z) - hvp(f, x, z, A, b)))
print("Time for hvp by hands")
%timeit (2 * A.T @ (A @ z)).block_until_ready()
print("Time for hvp via jvp, NO jit")
%timeit hvp(f, x, z, A, b).block_until_ready()
print("Time for hvp via jvp, WITH jit")
%timeit jax.jit(hvp, static_argnums=0)(f, x, z, A, b).block_until_ready()
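The identity behind `hvp` is that a Hessian-vector product can be approximated as Hv ≈ (∇f(x + εv) − ∇f(x − εv)) / 2ε. For the quadratic f(x) = ‖Ax − b‖², the Hessian is 2AᵀA, so this finite-difference estimate should match 2AᵀAv up to floating point. The check below is written in plain numpy (rather than JAX) so it stands alone; sizes and seed are this example's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = rng.standard_normal(n)
v = rng.standard_normal(n)

grad = lambda x: 2 * A.T @ (A @ x - b)   # analytic gradient of ||Ax - b||^2

eps = 1e-4
hvp_fd = (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)  # finite-diff H @ v
hvp_exact = 2 * A.T @ (A @ v)                                 # H = 2 A^T A
rel_err = np.linalg.norm(hvp_fd - hvp_exact) / np.linalg.norm(hvp_exact)
```

Because the gradient is linear in x here, the central difference is exact up to rounding, so `rel_err` should be near machine precision.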
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the following cell does not reflect the version of tensorflow and tensor2tensor that you just installed, click "Reset Session" on the notebook so that the Python environment picks up the new packages.
Step2: Download data
Step3: Create training dataset
Step6: We do not need to generate the data beforehand -- instead, we can have Tensor2Tensor create the training dataset for us. So, in the code below, I will use only data/poetry/raw.txt -- obviously, this allows us to productionize our model better. Simply keep collecting raw data and generate the training/test data at the time of training.
Step7: Generate training data
Step8: Let's check to see the files that were output. If you see a broken pipe error, please ignore.
Step9: Provide Cloud ML Engine access to data
Step10: Train model locally on subset of data
Step11: Note
Step12: Option 1
Step13: Option 2
Step14: The job took about <b>25 minutes</b> for me and ended with these evaluation metrics
Step15: The job took about <b>10 minutes</b> for me and ended with these evaluation metrics
Step16: This job took <b>12 hours</b> for me and ended with these metrics
Step17: Batch-predict
Step18: Let's write out the odd-numbered lines. We'll compare how close our model can get to the beauty of Rumi's second lines given his first.
Step19: <b> Note </b> if you get an error about "AttributeError
Step20: Some of these are still phrases and not complete sentences. This indicates that we might need to train longer or better somehow. We need to diagnose the model ...
Step21: <table>
Step22: When I ran the above job, it took about 15 hours and finished with these as the best parameters
Step23: Take the first three lines. I'm showing the first line of the couplet provided to the model, how the AI model that we trained completes it, and how Rumi completes it
Step24: Cloud ML Engine
Step25: Kubeflow
|
<ASSISTANT_TASK:>
Python Code:
%%bash
pip freeze | grep tensor
# Choose a version of TensorFlow that is supported on TPUs
TFVERSION='1.13'
import os
os.environ['TFVERSION'] = TFVERSION
%%bash
pip install tensor2tensor==${TFVERSION} gutenberg
# install from source
#git clone https://github.com/tensorflow/tensor2tensor.git
#cd tensor2tensor
#yes | pip install --user -e .
%%bash
pip freeze | grep tensor
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# this is what this notebook is demonstrating
PROBLEM= 'poetry_line_problem'
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['PROBLEM'] = PROBLEM
#os.environ['PATH'] = os.environ['PATH'] + ':' + os.getcwd() + '/tensor2tensor/tensor2tensor/bin/'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
rm -rf data/poetry
mkdir -p data/poetry
from gutenberg.acquire import load_etext
from gutenberg.cleanup import strip_headers
import re
books = [
# bookid, skip N lines
(26715, 1000, 'Victorian songs'),
(30235, 580, 'Baldwin collection'),
(35402, 710, 'Swinburne collection'),
(574, 15, 'Blake'),
(1304, 172, 'Bulchevys collection'),
(19221, 223, 'Palgrave-Pearse collection'),
(15553, 522, 'Knowles collection')
]
with open('data/poetry/raw.txt', 'w') as ofp:
lineno = 0
for (id_nr, toskip, title) in books:
startline = lineno
text = strip_headers(load_etext(id_nr)).strip()
lines = text.split('\n')[toskip:]
# any line that is all upper case is a title or author name
# also don't want any lines with years (numbers)
for line in lines:
if (len(line) > 0
and line.upper() != line
and not re.match('.*[0-9]+.*', line)
and len(line) < 50
):
cleaned = re.sub('[^a-z\'\-]+', ' ', line.strip().lower())
ofp.write(cleaned)
ofp.write('\n')
lineno = lineno + 1
else:
ofp.write('\n')
print('Wrote lines {} to {} from {}'.format(startline, lineno, title))
!wc -l data/poetry/*.txt
with open('data/poetry/raw.txt', 'r') as rawfp,\
open('data/poetry/input.txt', 'w') as infp,\
open('data/poetry/output.txt', 'w') as outfp:
prev_line = ''
for curr_line in rawfp:
curr_line = curr_line.strip()
# poems break at empty lines, so this ensures we train only
# on lines of the same poem
if len(prev_line) > 0 and len(curr_line) > 0:
infp.write(prev_line + '\n')
outfp.write(curr_line + '\n')
prev_line = curr_line
!head -5 data/poetry/*.txt
%%bash
rm -rf poetry
mkdir -p poetry/trainer
%%writefile poetry/trainer/problem.py
import os
import tensorflow as tf
from tensor2tensor.utils import registry
from tensor2tensor.models import transformer
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.data_generators import text_problems
from tensor2tensor.data_generators import generator_utils
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
@registry.register_problem
class PoetryLineProblem(text_problems.Text2TextProblem):
"""Predict next line of poetry from the last line. From Gutenberg texts."""
@property
def approx_vocab_size(self):
return 2**13 # ~8k
@property
def is_generate_per_split(self):
# generate_data will NOT shard the data into TRAIN and EVAL for us.
return False
@property
def dataset_splits(self):
"""Splits of data to produce and number of output shards for each."""
# 10% evaluation data
return [{
"split": problem.DatasetSplit.TRAIN,
"shards": 90,
}, {
"split": problem.DatasetSplit.EVAL,
"shards": 10,
}]
def generate_samples(self, data_dir, tmp_dir, dataset_split):
with open('data/poetry/raw.txt', 'r') as rawfp:
prev_line = ''
for curr_line in rawfp:
curr_line = curr_line.strip()
# poems break at empty lines, so this ensures we train only
# on lines of the same poem
if len(prev_line) > 0 and len(curr_line) > 0:
yield {
"inputs": prev_line,
"targets": curr_line
}
prev_line = curr_line
# Smaller than the typical translate model, and with more regularization
@registry.register_hparams
def transformer_poetry():
hparams = transformer.transformer_base()
hparams.num_hidden_layers = 2
hparams.hidden_size = 128
hparams.filter_size = 512
hparams.num_heads = 4
hparams.attention_dropout = 0.6
hparams.layer_prepostprocess_dropout = 0.6
hparams.learning_rate = 0.05
return hparams
@registry.register_hparams
def transformer_poetry_tpu():
hparams = transformer_poetry()
transformer.update_hparams_for_tpu(hparams)
return hparams
# hyperparameter tuning ranges
@registry.register_ranged_hparams
def transformer_poetry_range(rhp):
rhp.set_float("learning_rate", 0.05, 0.25, scale=rhp.LOG_SCALE)
rhp.set_int("num_hidden_layers", 2, 4)
rhp.set_discrete("hidden_size", [128, 256, 512])
rhp.set_float("attention_dropout", 0.4, 0.7)
%%writefile poetry/trainer/__init__.py
from . import problem
%%writefile poetry/setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = [
'tensor2tensor'
]
setup(
name='poetry',
version='0.1',
author = 'Google',
author_email = 'training-feedback@cloud.google.com',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Poetry Line Problem',
requires=[]
)
!touch poetry/__init__.py
!find poetry
%%bash
DATA_DIR=./t2t_data
TMP_DIR=$DATA_DIR/tmp
rm -rf $DATA_DIR $TMP_DIR
mkdir -p $DATA_DIR $TMP_DIR
# Generate data
t2t-datagen \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR
!ls t2t_data | head
%%bash
DATA_DIR=./t2t_data
gsutil -m rm -r gs://${BUCKET}/poetry/
gsutil -m cp ${DATA_DIR}/${PROBLEM}* ${DATA_DIR}/vocab* gs://${BUCKET}/poetry/data
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print(response['serviceAccount'])")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
%%bash
BASE=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/subset
gsutil -m rm -r $OUTDIR
gsutil -m cp \
${BASE}/${PROBLEM}-train-0008* \
${BASE}/${PROBLEM}-dev-00000* \
${BASE}/vocab* \
$OUTDIR
%%bash
DATA_DIR=gs://${BUCKET}/poetry/subset
OUTDIR=./trained_model
rm -rf $OUTDIR
t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR --job-dir=$OUTDIR --train_steps=10
%%bash
LOCALGPU="--train_steps=7500 --worker_gpu=1 --hparams_set=transformer_poetry"
DATA_DIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model
rm -rf $OUTDIR
t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR ${LOCALGPU}
%%bash
GPU="--train_steps=7500 --cloud_mlengine --worker_gpu=1 --hparams_set=transformer_poetry"
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
yes Y | t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--output_dir=$OUTDIR \
${GPU}
%%bash
## CHANGE the job name (based on output above: You will see a line such as Launched transformer_poetry_line_problem_t2t_20190322_233159)
gcloud ml-engine jobs describe transformer_poetry_line_problem_t2t_20190323_003001
%%bash
# use one of these
TPU="--train_steps=7500 --use_tpu=True --cloud_tpu_name=laktpu --hparams_set=transformer_poetry_tpu"
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_tpu
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--output_dir=$OUTDIR \
${TPU}
%%bash
gsutil ls gs://${BUCKET}/poetry/model_tpu
%%bash
XXX This takes 3 hours on 4 GPUs. Remove this line if you are sure you want to do this.
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_full2
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR \
--train_steps=75000 --cloud_mlengine --worker_gpu=4
%%bash
gsutil ls gs://${BUCKET}/poetry/model #_modeltpu
%%writefile data/poetry/rumi.txt
Where did the handsome beloved go?
I wonder, where did that tall, shapely cypress tree go?
He spread his light among us like a candle.
Where did he go? So strange, where did he go without me?
All day long my heart trembles like a leaf.
All alone at midnight, where did that beloved go?
Go to the road, and ask any passing traveler —
That soul-stirring companion, where did he go?
Go to the garden, and ask the gardener —
That tall, shapely rose stem, where did he go?
Go to the rooftop, and ask the watchman —
That unique sultan, where did he go?
Like a madman, I search in the meadows!
That deer in the meadows, where did he go?
My tearful eyes overflow like a river —
That pearl in the vast sea, where did he go?
All night long, I implore both moon and Venus —
That lovely face, like a moon, where did he go?
If he is mine, why is he with others?
Since he’s not here, to what “there” did he go?
If his heart and soul are joined with God,
And he left this realm of earth and water, where did he go?
Tell me clearly, Shams of Tabriz,
Of whom it is said, “The sun never dies” — where did he go?
%%bash
awk 'NR % 2 == 1' data/poetry/rumi.txt | tr '[:upper:]' '[:lower:]' | sed "s/[^a-z\'-\ ]//g" > data/poetry/rumi_leads.txt
head -3 data/poetry/rumi_leads.txt
%%bash
# same as the above training job ...
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model #_tpu # or ${TOPDIR}/poetry/model_full
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry #_tpu
# the file with the input lines
DECODE_FILE=data/poetry/rumi_leads.txt
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATADIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$OUTDIR \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE
%%bash
DECODE_FILE=data/poetry/rumi_leads.txt
cat ${DECODE_FILE}.*.decodes
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/poetry/model_full'.format(BUCKET))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
%%bash
XXX This takes about 15 hours and consumes about 420 ML units. Uncomment if you wish to proceed anyway
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_hparam
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR \
--hparams_range=transformer_poetry_range \
--autotune_objective='metrics-poetry_line_problem/accuracy_per_sequence' \
--autotune_maximize \
--autotune_max_trials=4 \
--autotune_parallel_trials=4 \
--train_steps=7500 --cloud_mlengine --worker_gpu=4
%%bash
# same as the above training job ...
BEST_TRIAL=28 # CHANGE as needed.
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model_hparam/$BEST_TRIAL
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry
# the file with the input lines
DECODE_FILE=data/poetry/rumi_leads.txt
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATADIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$OUTDIR \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE \
--hparams="num_hidden_layers=4,hidden_size=512"
%%bash
DECODE_FILE=data/poetry/rumi_leads.txt
cat ${DECODE_FILE}.*.decodes
%%bash
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model_full2
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry
BEAM_SIZE=4
ALPHA=0.6
t2t-exporter \
--model=$MODEL \
--hparams_set=$HPARAMS \
--problem=$PROBLEM \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--data_dir=$DATADIR \
--output_dir=$OUTDIR
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
echo $MODEL_LOCATION
saved_model_cli show --dir $MODEL_LOCATION --tag_set serve --signature_def serving_default
%%writefile mlengine.json
description: Poetry service on ML Engine
autoScaling:
minNodes: 1 # We don't want this model to autoscale down to zero
%%bash
MODEL_NAME="poetry"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \
--model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json
%%bash
gcloud components update --quiet
gcloud components install alpha --quiet
%%bash
MODEL_NAME="poetry"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
gcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \
--model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json
!cat application/app.yaml
%%bash
cd application
#gcloud app create # if this is your first app
#gcloud app deploy --quiet --stop-previous-version app.yaml
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Features and high-dimensional spaces
Step3: 1.2 Non-linear transformations
Step4: 1.3 Projection to 2D via decomposition
Step5: 1.4 Clustering in transformed feature space
Step6: 1.5 Transforming pairwise dissimilarities between objects into feature space
Step7: 1.6. Example
Step8: Features in the transformed space -- latent variables -- represent groups of genes with similar protein expression patterns in Breast Cancer. These could be, for instance, genes involved in the same pathway.
Step9: 2. Feature Selection
Step10: 2.2 Feature selection and model selection
Step11: The disadvantage with L1 regularization is that if multiple features are correlated only one of them will have a high coefficient.
Step12: Also important is to normalize the means and variances of the features before comparing the coefficients.
Step13: 2.3 Feature selection with cross-validation
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import sklearn.metrics
import sklearn.datasets
import sklearn.manifold
import sklearn.decomposition
import sklearn.preprocessing
import sklearn.cluster
import sklearn.feature_selection
import sklearn.ensemble
import sklearn.svm
import sklearn.model_selection
# from sklearn import linear_model
%matplotlib inline
# sklearn.datasets.make_sparse_uncorrelated
X, y = sklearn.datasets.make_blobs(n_samples=30, centers=3, n_features=7, random_state=0)
print(X.shape)
sns.heatmap(X, robust=True, square=False, yticklabels=True, xticklabels=True, cbar=True)
plt.show()
# D = pairwise_distances(X, metric='euclidean', n_jobs=-1)
# sns.heatmap(D, robust=True, square=True, yticklabels=True, xticklabels=True, cbar=True)
# plt.show()
# plt.hist(np.hstack(D), 20, facecolor='orange', alpha=0.75)
# plt.xlabel('Pairwise distances')
# plt.ylabel('Frequency')
# plt.grid(True)
# plt.show()
def scatterplot_2D(R, title, labels=None):
"""
Helper function to plot data points in 2D
Requires (N, 2) numpy array shape
"""
assert(R.shape[1] == 2)
# class labels are turned into colors
if labels is None:
c = 'black'
else:
color_scale = np.linspace(0, 1, len(set(labels)))
c = [plt.cm.Set1(color_scale[i]) for i in labels]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.patch.set_facecolor('white')
ax.scatter(R[...,0], R[...,1], color=c)
ax.axis('square')
ax.set_xlabel('R1')
ax.set_ylabel('R2')
fig.suptitle(title)
plt.show()
# examples of non-linear transformations:
# NOTE THAT THERE IS NO transform(X)
R_MDS = sklearn.manifold.MDS(n_components=2).fit_transform(X)
scatterplot_2D(R_MDS, 'MDS', y)
R_ISO = sklearn.manifold.Isomap(n_components=2).fit_transform(X)
scatterplot_2D(R_ISO, 'Isomap', y)
R_TSNE = sklearn.manifold.TSNE(n_components=2, perplexity=10.0).fit_transform(X)
scatterplot_2D(R_TSNE, 'TSNE', y)
# Examples of projection to 2D via decomposition
# NOTE THAT THERE IS transform(X)
# Principal Component Analysis
R_PCA = sklearn.decomposition.PCA(n_components=2).fit_transform(X)
scatterplot_2D(R_PCA, 'PCA', y)
# Factor Analysis
R = sklearn.decomposition.FactorAnalysis(n_components=2).fit_transform(X)
scatterplot_2D(R, 'FA', y)
# Nonnegative matrix factorization
# NMF requires non-negative values
X_nonnegative = sklearn.preprocessing.MinMaxScaler().fit_transform(X)
R_NMF = sklearn.decomposition.NMF(n_components=2).fit_transform(X_nonnegative)
scatterplot_2D(R_NMF, 'NMF', y)
# from sklearn.metrics.cluster import v_measure_score, adjusted_rand_score
# Original labels:
scatterplot_2D(R_NMF, 'Original labels, 3 clusters', y)
# Clustering the original 7D dataset with KMeans
kmeans = sklearn.cluster.KMeans(n_clusters=3, random_state=0).fit(X)
scatterplot_2D(R_NMF, 'Kmeans in 7D, 3 clusters', kmeans.labels_)
print(y)
print(kmeans.labels_)
print("V measure", sklearn.metrics.cluster.v_measure_score(y, kmeans.labels_))
print("Adj. Rand score", sklearn.metrics.cluster.adjusted_rand_score(y, kmeans.labels_))
# Clustering 2D dataset
kmeans = sklearn.cluster.KMeans(n_clusters=3, random_state=0).fit(R_NMF)
scatterplot_2D(R_NMF, 'Kmeans in 2D, 3 clusters', kmeans.labels_)
print(y)
print(kmeans.labels_)
print("V measure", sklearn.metrics.cluster.v_measure_score(y, kmeans.labels_))
print("Adj. Rand score", sklearn.metrics.cluster.adjusted_rand_score(y, kmeans.labels_))
# Original data points (can be unknown)
X, y = sklearn.datasets.make_blobs(n_samples=4, centers=3, n_features=2, random_state=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.patch.set_facecolor('white')
ax.scatter(X[...,0], X[...,1], c=("red", "green", "blue", "orange"), s=120, edgecolors='none')
ax.set_autoscale_on(False)
ax.axis('square')
ax.set_xlabel('R1')
ax.set_ylabel('R2')
fig.suptitle("TRUE")
plt.show()
# Pairwise dissimilarities
D = sklearn.metrics.pairwise_distances(X, metric = 'euclidean')
####################################################################################
M = sklearn.manifold.MDS(n_components=2, n_init=1, max_iter=10000, metric=True, dissimilarity="precomputed")
K = M.fit_transform(D)
print("Stress", M.stress_)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.patch.set_facecolor('white')
ax.scatter(K[...,0], K[...,1], c=("red", "green", "blue", "orange"), s=120, edgecolors='none')
ax.set_autoscale_on(False)
ax.axis('square')
ax.set_xlabel('R1')
ax.set_ylabel('R2')
fig.suptitle("MDS on pairwise dissimilarities")
plt.show()
from pathlib import Path
import urllib.request
ICGC_API = 'https://dcc.icgc.org/api/v1/download?fn=/release_18/Projects/BRCA-US/'
# clinical_fname = 'clinical.BRCA-US.tsv.gz'
# if not Path(clinical_fname).is_file():
# urllib.request.urlretrieve(ICGC_API + 'clinical.BRCA-US.tsv.gz', clinical_fname);
expression_fname = 'protein_expression.BRCA-US.tsv.gz'
if not Path(expression_fname).is_file():
urllib.request.urlretrieve(ICGC_API + 'protein_expression.BRCA-US.tsv.gz', expression_fname);
E = pd.read_csv(expression_fname, delimiter='\t')
E.head(1)
donors = set(E['icgc_donor_id'])
genes = set(E['gene_name'])
print("Donors (data points):", len(donors))
print("Genes (features): ", len(genes))
donor2id = {donor: i for i, donor in enumerate(donors)}
id2donor = dict(zip(donor2id.values(), donor2id.keys()))
gene2id = {gene: i for i, gene in enumerate(genes)}
id2gene = dict(zip(gene2id.values(), gene2id.keys()))
data = np.zeros((len(donors), len(genes)))
for i in range(len(E)):
data[donor2id[E.loc[i, 'icgc_donor_id']], gene2id[E.loc[i, 'gene_name']]] = float(E.loc[i, 'normalized_expression_level'])
data = sklearn.preprocessing.MinMaxScaler().fit_transform(data)
# NMF DECOMPOSITION
rank = 10
nmf = sklearn.decomposition.NMF(n_components=rank).fit(data)
V = data
W = nmf.components_
H = nmf.transform(data)
print("V ~ W dot H + error")
print("Error = ", nmf.reconstruction_err_)
print("V = ", V.shape)
print("W = ", W.shape)
print("H = ", H.shape)
sns.heatmap(V, robust=True, square=False, yticklabels=False, xticklabels=False, cbar=False)
plt.show()
g, ax = plt.subplots(figsize=(6,6))
sns.heatmap(W, robust=True, square=True, yticklabels=False, xticklabels=False, cbar=False)
plt.show()
g, ax = plt.subplots(figsize=(15, 3))
sns.heatmap(H, robust=True, square=False, yticklabels=False, xticklabels=False, cbar=False)
plt.show()
# Clustering in reduced feature space
sns.clustermap(H, xticklabels=True, yticklabels=False)
plt.show()
# Show top 5 genes in each gene group
for gene_group in range(H.shape[1]):
k = 5
topk = np.argsort(np.asarray(W[gene_group, :]).flatten())[-k:]
# print("Indices of related genes", topk)
val = W[gene_group, topk]
# print("Gene weights", val)
plt.barh(np.arange(k) + .5, val, align="center")
labels = [id2gene[idx] for idx in topk]
plt.yticks(np.arange(k) + .5, labels)
plt.xlabel("Weight")
plt.ylabel("Gene group {}".format(gene_group));
plt.show()
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]])
print(X)
sel = sklearn.feature_selection.VarianceThreshold(threshold=(.8 * (1 - .8)))
X_selected = sel.fit_transform(X)
print(X_selected)
iris = sklearn.datasets.load_iris()
X, y = iris.data, iris.target
print(X.shape)
print(y)
X_new = sklearn.feature_selection.SelectKBest(sklearn.feature_selection.chi2, k=2).fit_transform(X, y)
print(X_new.shape)
# Load the digits dataset
digits = sklearn.datasets.load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE (Recursive feature elimination) object and rank each pixel
clf = sklearn.linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
rfe = sklearn.feature_selection.RFE(estimator=clf, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking) # cmap=plt.get_cmap('summer')
plt.colorbar()
plt.title("Ranking of pixels with RFE\n")
plt.show()
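Step12 above says to normalize feature means and variances before comparing coefficients, but the L1 cell fits on raw pixel values. A minimal sketch of scaling first (same digits data; `StandardScaler` standardizes each feature to mean 0, variance 1 — `solver='liblinear'` is spelled out here because newer sklearn versions require an explicit solver for the l1 penalty):

```python
import numpy as np
import sklearn.datasets
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

digits = sklearn.datasets.load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

# standardize each feature so L1 coefficient magnitudes are comparable
X_scaled = StandardScaler().fit_transform(X)
clf = LogisticRegression(C=1.0, penalty='l1', solver='liblinear').fit(X_scaled, y)
importance = np.abs(clf.coef_).mean(axis=0)  # one value per pixel
```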
randomized_logistic = sklearn.linear_model.RandomizedLogisticRegression()
randomized_logistic.fit(X, y)
ranking = 1.0 - randomized_logistic.scores_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Ranking of pixels\n")
plt.show()
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=100)
clf.fit(X, y)
ranking = clf.feature_importances_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Random Forest ranking\n")
plt.show()
# Build a classification task using 3 informative features
X, y = sklearn.datasets.make_classification(n_samples=1000, n_features=30, n_informative=5,
n_redundant=4, n_repeated=0, n_classes=4,
n_clusters_per_class=1, random_state=0)
# Create the RFE object and compute a cross-validated score.
svc = sklearn.svm.SVC(kernel="linear", class_weight="balanced")
rfecv = sklearn.feature_selection.RFECV(estimator=svc,
step=1,
cv=sklearn.model_selection.StratifiedKFold(2),
scoring='accuracy')
rfecv.fit(X, y)
print("Number of features corresponding to max CV score : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (# of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and Visualize the Data
Step2: Visualize a Batch of Training Data
Step3: View an Image in More Detail
Step4: Define the Network Architecture
Step5: Specify Loss Function and Optimizer
Step6: Train the Network
Step7: Test the Trained Network
Step8: Visualize Sample Test Results
|
<ASSISTANT_TASK:>
Python Code:
# import libraries
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data',
train=True,
download=True,
transform=transform)
test_data = datasets.MNIST(root='data',
train=False,
download=True,
transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_data,
batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(dataset=test_data,
batch_size=batch_size,
num_workers=num_workers)
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
import torch.nn as nn
import torch.nn.functional as F
## TODO: Define the NN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Linear layer (784 -> 128 hidden nodes)
self.fc1 = nn.Linear(in_features=(28 * 28),
out_features=128)
self.fc2 = nn.Linear(in_features=128,
out_features=10)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
        # return raw logits; nn.CrossEntropyLoss applies log-softmax internally
        x = self.fc2(x)
return x
# initialize the NN
model = Net()
print(model)
## TODO: Specify loss and optimization functions
# specify loss function
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = torch.optim.Adam(params=model.parameters(),
lr=0.001)
optimizer
# Number of epochs to train the model
n_epochs = 50 # suggest training between 20-50 epochs
# Prep model for training
model.train()
for epoch in range(n_epochs):
# Monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
# Clear the gradients of all optimized variables
optimizer.zero_grad()
# Forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# Calculate the loss
loss = criterion(output, target)
# Backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# Perform a single optimization step (parameter update)
optimizer.step()
# Update running training loss
train_loss += loss.item() * data.size(0)
# Print training statistics
# Calculate average loss over an epoch
train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch+1,
train_loss))
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# Prep model for *evaluation*
model.eval()
for data, target in test_loader:
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
color=("green" if preds[idx]==labels[idx] else "red"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Which employee comes last in an alphabetically sorted list? The employee number, last name, and first name should be displayed.
Step2: Which employees have worked on
Step3: Management wants to know the sum of the monthly salaries for each department. The columns department number, department name, and sum of monthly salaries should be displayed.
Step4: The average age of all employees, the age of the oldest employee, and the age of the youngest employee should be determined. An approximate value in years is sufficient.
Step5: How many employees work in department 3?
Step6: All employees in department 4 who earn more than the employee in department 5 with the
Step7: Which employees have a monthly salary lower than the average monthly salary of all employees? The last name, first name, and monthly salary of these employees should be displayed.
Step8: Which departments have more than four employees?
Step9: You want to know how much cost (hours * hourly rate) has accrued so far for the project PKR.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/personal
%%sql
select MNr, MName, MVorname from Mitarbeiter
order by MName desc limit 1;
%%sql
select Mitarbeiter.MNr, MName, Stunden, Projektname, Firma
from Mitarbeiter inner join Projektbearbeitung
on MItarbeiter.MNr = Projektbearbeitung.MNr
inner join Projekte
on Projektbearbeitung.ProjNr = Projekte.ProjektNr
inner join Kunden on Kunden.KundenNr = Projekte.KundenCode
%%sql
select Abteilung.AbtName, sum(Monatsgehalt)
from Abteilung inner join Mitarbeiter
on Mitarbeiter.AbtNr = Abteilung.AbtNr
inner join Gehalt
on Mitarbeiter.MNr = Gehalt.MNr
group by Abteilung.Abtname
%%sql
select avg(year(now()) - year(MGeburtsdatum)) as Durchschnitt,
       max(year(now()) - year(MGeburtsdatum)) as Aeltester,
       min(year(now()) - year(MGeburtsdatum)) as Juengster
from Mitarbeiter
%%sql
-- this works:
select count(MNr) from Mitarbeiter
where Mitarbeiter.AbtNr = 3;
/*
this also works: the WHERE filter runs before grouping, so only rows
with AbtNr 3 remain; the GROUP BY can then even be omitted:
select Abteilung.AbtNr, Abteilung.AbtName, count(Mitarbeiter.MNr)
from Abteilung inner join Mitarbeiter
on Abteilung.AbtNr = Mitarbeiter.AbtNr
where Abteilung.AbtNr = 3
*/
%%sql
select Mitarbeiter.MName, Monatsgehalt
from Mitarbeiter inner join Gehalt
on Mitarbeiter.MNr = Gehalt.MNr
where Mitarbeiter.AbtNr = 4
and Monatsgehalt >
(
select max(Monatsgehalt) from Gehalt inner join Mitarbeiter
on Mitarbeiter.MNr = Gehalt.MNr
where Mitarbeiter.AbtNr = 5
)
%%sql
select Mitarbeiter.MName, Monatsgehalt
from Mitarbeiter inner join Gehalt
on Mitarbeiter.MNr = Gehalt.MNr
where Monatsgehalt <
(
select avg(Monatsgehalt) from Gehalt
)
%%sql
select AbtNr, count(*) as Anzahl from Mitarbeiter M
group by AbtNr
having Anzahl > 4
%%sql
select sum(Stundensatz * Stunden), ProjNr
from Projektbearbeitung inner join Mitarbeiter
on Projektbearbeitung.MNr = Mitarbeiter.MNr
inner join Stundensatz
on Mitarbeiter.StundensatzNr = Stundensatz.StundensatzNr
where ProjNr = 'A1'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Crawl all album information (album_by_artist.py)
Step2: http
Step3: Crawl all song information based on the album information (music_by_album.py)
Step4: Crawl the comment counts for each song (comments_by_music.py)
Step5: Implementing pagination
|
<ASSISTANT_TASK:>
Python Code:
import requests
from bs4 import BeautifulSoup
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, sdch',
'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'Cookie': '_ntes_nnid=7eced19b27ffae35dad3f8f2bf5885cd,1476521011210; _ntes_nuid=7eced19b27ffae35dad3f8f2bf5885cd; usertrack=c+5+hlgB7TgnsAmACnXtAg==; Province=025; City=025; NTES_PASSPORT=6n9ihXhbWKPi8yAqG.i2kETSCRa.ug06Txh8EMrrRsliVQXFV_orx5HffqhQjuGHkNQrLOIRLLotGohL9s10wcYSPiQfI2wiPacKlJ3nYAXgM; P_INFO=hourui93@163.com|1476523293|1|study|11&12|jis&1476511733&mail163#jis&320100#10#0#0|151889&0|g37_client_check&mailsettings&mail163&study&blog|hourui93@163.com; NTES_SESS=Fa2uk.YZsGoj59AgD6tRjTXGaJ8_1_4YvGfXUkS7C1NwtMe.tG1Vzr255TXM6yj2mKqTZzqFtoEKQrgewi9ZK60ylIqq5puaG6QIaNQ7EK5MTcRgHLOhqttDHfaI_vsBzB4bibfamzx1.fhlpqZh_FcnXUYQFw5F5KIBUmGJg7xdasvGf_EgfICWV; S_INFO=1476597594|1|0&80##|hourui93; NETEASE_AUTH_SOURCE=space; NETEASE_AUTH_USERNAME=hourui93; _ga=GA1.2.1405085820.1476521280; JSESSIONID-WYYY=cbd082d2ce2cffbcd5c085d8bf565a95aee3173ddbbb00bfa270950f93f1d8bb4cb55a56a4049fa8c828373f630c78f4a43d6c3d252c4c44f44b098a9434a7d8fc110670a6e1e9af992c78092936b1e19351435ecff76a181993780035547fa5241a5afb96e8c665182d0d5b911663281967d675ff2658015887a94b3ee1575fa1956a5a%3A1476607977016; _iuqxldmzr_=25; __utma=94650624.1038096298.1476521011.1476595468.1476606177.8; __utmb=94650624.20.10.1476606177; __utmc=94650624; __utmz=94650624.1476521011.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
'DNT': '1',
'Host': 'music.163.com',
'Pragma': 'no-cache',
'Referer': 'http://music.163.com/',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
}
group_id = 1001
initial = 67
params = {'id': group_id, 'initial': initial}
r = requests.get('http://music.163.com/discover/artist/cat', params=params, headers=headers)
# parse the page HTML
soup = BeautifulSoup(r.content.decode(), 'html.parser')
body = soup.body
hotartist_dic = {}
hot_artists = body.find_all('a', attrs={'class': 'msk'})
for artist in hot_artists:
artist_id = artist['href'].replace('/artist?id=', '').strip()
artist_name = artist['title'].replace('的音乐', '')
try:
hotartist_dic[artist_id] = artist_name
except Exception as e:
        # log the error
print(e)
artist_dic = {}
artists = body.find_all('a', attrs={'class': 'nm nm-icn f-thide s-fc0'})
for artist in artists:
artist_id = artist['href'].replace('/artist?id=', '').strip()
artist_name = artist['title'].replace('的音乐', '')
try:
artist_dic[artist_id] = artist_name
except Exception as e:
        # log the error
print(e)
artist_dic
def save_artist(group_id, initial, hot_artist_dic, artist_dic):
params = {'id': group_id, 'initial': initial}
r = requests.get('http://music.163.com/discover/artist/cat', params=params)
    # parse the page HTML
soup = BeautifulSoup(r.content.decode(), 'html.parser')
body = soup.body
hot_artists = body.find_all('a', attrs={'class': 'msk'})
artists = body.find_all('a', attrs={'class': 'nm nm-icn f-thide s-fc0'})
for artist in hot_artists:
artist_id = artist['href'].replace('/artist?id=', '').strip()
artist_name = artist['title'].replace('的音乐', '')
try:
hot_artist_dic[artist_id] = artist_name
except Exception as e:
            # log the error
print(e)
for artist in artists:
artist_id = artist['href'].replace('/artist?id=', '').strip()
artist_name = artist['title'].replace('的音乐', '')
try:
artist_dic[artist_id] = artist_name
except Exception as e:
            # log the error
print(e)
#return artist_dic, hot_artist_dic
gg = 1001
initial = 67
artist_dic = {}
hot_artist_dic = {}
save_artist(gg, initial, hot_artist_dic, artist_dic )
artist_dic
artist_dic = {}
hot_artist_dic = {}
for i in range(65, 91):
print(i)
save_artist(gg, i, hot_artist_dic, artist_dic )
len(hot_artist_dic)
len(artist_dic)
list(hot_artist_dic.keys())[0]
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, sdch',
'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'Cookie': '_ntes_nnid=7eced19b27ffae35dad3f8f2bf5885cd,1476521011210; _ntes_nuid=7eced19b27ffae35dad3f8f2bf5885cd; usertrack=c+5+hlgB7TgnsAmACnXtAg==; Province=025; City=025; _ga=GA1.2.1405085820.1476521280; NTES_PASSPORT=6n9ihXhbWKPi8yAqG.i2kETSCRa.ug06Txh8EMrrRsliVQXFV_orx5HffqhQjuGHkNQrLOIRLLotGohL9s10wcYSPiQfI2wiPacKlJ3nYAXgM; P_INFO=hourui93@163.com|1476523293|1|study|11&12|jis&1476511733&mail163#jis&320100#10#0#0|151889&0|g37_client_check&mailsettings&mail163&study&blog|hourui93@163.com; JSESSIONID-WYYY=189f31767098c3bd9d03d9b968c065daf43cbd4c1596732e4dcb471beafe2bf0605b85e969f92600064a977e0b64a24f0af7894ca898b696bd58ad5f39c8fce821ec2f81f826ea967215de4d10469e9bd672e75d25f116a9d309d360582a79620b250625859bc039161c78ab125a1e9bf5d291f6d4e4da30574ccd6bbab70b710e3f358f%3A1476594130342; _iuqxldmzr_=25; __utma=94650624.1038096298.1476521011.1476588849.1476592408.6; __utmb=94650624.11.10.1476592408; __utmc=94650624; __utmz=94650624.1476521011.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
'DNT': '1',
'Host': 'music.163.com',
'Pragma': 'no-cache',
'Referer': 'http://music.163.com/',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
}
def save_albums(artist_id, albume_dic):
params = {'id': artist_id, 'limit': '200'}
    # fetch the artist's album list page
r = requests.get('http://music.163.com/artist/album', headers=headers, params=params)
    # parse the page HTML
soup = BeautifulSoup(r.content.decode(), 'html.parser')
body = soup.body
    albums = body.find_all('a', attrs={'class': 'tit s-fc0'})  # all albums on the page
for album in albums:
albume_id = album['href'].replace('/album?id=', '')
albume_dic[albume_id] = artist_id
albume_dic = {}
save_albums('2116', albume_dic)
albume_dic
def save_music(album_id, music_dic):
params = {'id': album_id}
    # fetch the album's page
r = requests.get('http://music.163.com/album', headers=headers, params=params)
    # parse the page HTML
soup = BeautifulSoup(r.content.decode(), 'html.parser')
body = soup.body
    musics = body.find('ul', attrs={'class': 'f-hide'}).find_all('li')  # all tracks on the album
for music in musics:
music = music.find('a')
music_id = music['href'].replace('/song?id=', '')
music_name = music.getText()
music_dic[music_id] = [music_name, album_id]
list(albume_dic.keys())[0]
music_dic = {}
save_music('6423', music_dic)
music_dic
params = {
'csrf_token': ''
}
data = {
'params': '5L+s/X1qDy33tb2sjT6to2T4oxv89Fjg1aYRkjgzpNPR6hgCpp0YVjNoTLQAwWu9VYvKROPZQj6qTpBK+sUeJovyNHsnU9/StEfZwCOcKfECFFtAvoNIpulj1TDOtBir',
'encSecKey': '59079f3e07d6e240410018dc871bf9364f122b720c0735837d7916ac78d48a79ec06c6307e6a0e576605d6228bd0b377a96e1a7fc7c7ddc8f6a3dc6cc50746933352d4ec5cbe7bddd6dcb94de085a3b408d895ebfdf2f43a7c72fc783512b3c9efb860679a88ef21ccec5ff13592be450a1edebf981c0bf779b122ddbd825492'
}
offset = 0
music_id = '65337'
url = 'http://music.163.com/api/v1/resource/comments/R_SO_4_' + music_id + '?limit=20&offset=' + str(offset)
print(url)
response = requests.post(url, headers=headers, data=data)
cj = response.json()
cj.keys()
cj['total'],len(cj['comments']), len(cj['hotComments']), len(cj['topComments'])
cj['comments'][0]
from Crypto.Cipher import AES
import base64
import requests
import json
import time
# headers
headers = {
'Host': 'music.163.com',
'Connection': 'keep-alive',
'Content-Length': '484',
'Cache-Control': 'max-age=0',
'Origin': 'http://music.163.com',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': '*/*',
'DNT': '1',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6,zh-TW;q=0.4',
'Cookie': 'JSESSIONID-WYYY=b66d89ed74ae9e94ead89b16e475556e763dd34f95e6ca357d06830a210abc7b685e82318b9d1d5b52ac4f4b9a55024c7a34024fddaee852404ed410933db994dcc0e398f61e670bfeea81105cbe098294e39ac566e1d5aa7232df741870ba1fe96e5cede8372ca587275d35c1a5d1b23a11e274a4c249afba03e20fa2dafb7a16eebdf6%3A1476373826753; _iuqxldmzr_=25; _ntes_nnid=7fa73e96706f26f3ada99abba6c4a6b2,1476372027128; _ntes_nuid=7fa73e96706f26f3ada99abba6c4a6b2; __utma=94650624.748605760.1476372027.1476372027.1476372027.1; __utmb=94650624.4.10.1476372027; __utmc=94650624; __utmz=94650624.1476372027.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
}
# build the doubly AES-encrypted params
def get_params(first_param, forth_param):
iv = "0102030405060708"
first_key = forth_param
second_key = 16 * 'F'
h_encText = AES_encrypt(first_param, first_key.encode(), iv.encode())
h_encText = AES_encrypt(h_encText.decode(), second_key.encode(), iv.encode())
return h_encText.decode()
# return the precomputed encSecKey
def get_encSecKey():
encSecKey = "257348aecb5e556c066de214e531faadd1c55d814f9be95fd06d6bff9f4c7a41f831f6394d5a3fd2e3881736d94a02ca919d952872e7d0a50ebfa1769a7a62d512f5f1ca21aec60bc3819a9c3ffca5eca9a0dba6d6f7249b06f5965ecfff3695b54e1c28f3f624750ed39e7de08fc8493242e26dbc4484a01c76f739e135637c"
return encSecKey
# AES-CBC encrypt and base64-encode
def AES_encrypt(text, key, iv):
pad = 16 - len(text) % 16
text = text + pad * chr(pad)
encryptor = AES.new(key, AES.MODE_CBC, iv)
encrypt_text = encryptor.encrypt(text.encode())
encrypt_text = base64.b64encode(encrypt_text)
return encrypt_text
# POST the request and return the raw JSON body
def get_json(url, data):
response = requests.post(url, headers=headers, data=data)
return response.content
# build the url and encrypted POST data for the comments API
def crypt_api(id, offset):
url = "http://music.163.com/weapi/v1/resource/comments/R_SO_4_%s/?csrf_token=" % id
first_param = "{rid:\"\", offset:\"%s\", total:\"true\", limit:\"20\", csrf_token:\"\"}" % offset
forth_param = "0CoJUm6Qyw8W8jud"
params = get_params(first_param, forth_param)
encSecKey = get_encSecKey()
data = {
"params": params,
"encSecKey": encSecKey
}
return url, data
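The manual `chr(pad)` padding inside `AES_encrypt` is PKCS#7 padding to the 16-byte AES block size. In isolation (a pure-Python sketch, no Crypto import needed) it behaves like this:

```python
def pkcs7_pad(text, block=16):
    # pad so the length is a multiple of `block`; each pad char encodes the pad length
    pad = block - len(text) % block
    return text + pad * chr(pad)

def pkcs7_unpad(padded):
    # strip as many chars as the last char's ordinal says
    return padded[:-ord(padded[-1])]
```

Note that an already-aligned input still gains a full 16-char block, which is what lets the unpad step stay unambiguous.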
offset = 0
id = '516997458'
url, data = crypt_api(id, offset)
json_text = get_json(url, data)
json_dict = json.loads(json_text.decode("utf-8"))
comments_sum = json_dict['total']
comments_sum
len(json_dict['comments'])
json_dict['comments'][0]
json_dict['comments'][4]
offset = 20
id = '516997458'
url, data = crypt_api(id, offset)
json_text = get_json(url, data)
json_dict = json.loads(json_text.decode("utf-8"))
comments_sum = json_dict['total']
json_dict['comments'][0]
offset = 40
id = '516997458'
url, data = crypt_api(id, offset)
json_text = get_json(url, data)
json_dict = json.loads(json_text.decode("utf-8"))
comments_sum = json_dict['total']
json_dict['comments'][0]
800/1018*460
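The three cells above fetch offsets 0, 20, 40 by hand; the pagination asked for in Step5 generalizes to a loop. A minimal sketch (the `fake_page` source here is a stand-in; against the real API, `fetch_page` would wrap `crypt_api` plus `get_json` and return `(page['comments'], page['total'])`):

```python
import time

def fetch_all_pages(fetch_page, limit=20, pause=0.0):
    # fetch_page(offset) -> (items, total); loop until `total` items are collected
    items, offset, total = [], 0, None
    while total is None or offset < total:
        page_items, total = fetch_page(offset)
        items.extend(page_items)
        offset += limit
        if pause:
            time.sleep(pause)  # be polite to the server
    return items

# stand-in page source: 45 fake comments served 20 at a time
data = list(range(45))
def fake_page(offset):
    return data[offset:offset + 20], len(data)

all_comments = fetch_all_pages(fake_page)  # → 45 items over 3 "requests"
```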
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Custom ConvnetBuilder
Step3: NOTE
Step4: So, the numbers in the top layers are different from Yannet's, but the Conv layer params are exactly the same. Pretty sure this is because when you finetune a model, the FC layer/s you stack on top are randomly initialized, but the Conv layers coming from a pretrained network (PyTorch VGG 16) are necessarily exactly the same, given you got the same pretrained weights.
Step5: Here's code to do cross-validation
Step6: How to train is a W.I.P. --- great..
Step7: Predictions
Step8: This'll actually take forever on an i5 CPU, but it does start training so, that's good.
Step9: Alrighty, that gave me some idea of how a custom model is added to FastAI.
|
<ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
import pandas as pd
import numpy as np
path = 'data/gloc/'
model_path = path + 'results/'
# to override the fastai vgg16 function
from torchvision.models import vgg16
# Creates a ConvnetBuilder with all pretrained layers from vgg16 but the last fully connected layer
class ConvnetBuilderVGG():
    """Class representing a convolutional network.

    Arguments:
        c (int): size of the last layer
        is_multi (bool): is multilabel classification
        is_reg (bool): is a regression
        ps (float): dropout parameter for last layer
    """
def __init__(self, c, is_multi, is_reg, ps=None):
self.c, self.is_multi, self.is_reg = c, is_multi, is_reg
self.ps = ps or 0.5
vgg = vgg16(True) # NOTE: okay so I need to study how PyTorch does this
self.lr_cut = 30
layers = children(vgg.features)
layers += [Flatten()] + children(vgg.classifier)[:5]
#self.nf = 4096
# here top model is everything but the last layer
self.top_model = nn.Sequential(*layers) # NOTE: I need to find out what the fn(*arg) syntax is
fc_layers = self.create_fc_layer(4096, c, p=None)
self.n_fc = len(fc_layers)
self.fc_model = to_gpu(nn.Sequential(*fc_layers))
apply_init(self.fc_model, kaiming_normal)
self.model = to_gpu(nn.Sequential(*(layers+fc_layers)))
def create_fc_layer(self, ni, nf, p, actn=None):
res=[]
if p: res.append(nn.Dropout(p=p))
res.append(nn.Linear(in_features=ni, out_features=nf))
if actn: res.append(actn())
return res
@property # NOTE: I also need to learn Python Static Method syntax --> https://stackoverflow.com/questions/400739/what-does-asterisk-mean-in-python
def name(self): return "vgg16"
def get_layer_groups(self, do_fc=False):
if do_fc:
m, idxs = self.fc_model, []
else:
m, idxs = self.model, [self.lr_cut, -self.n_fc]
lgs = list(split_by_idxs(children(m), idxs))
return lgs
bs=32; sz=224
f_model = vgg16
n = 7637
val_idxs = get_cv_idxs(n, 0, val_pct=0.2)
tfms = tfms_from_model(f_model, sz) # NOTE: how would it know, if this is a custom/PyTorch model?
data = ImageClassifierData.from_csv(path, 'train', f'{path}train.csv', bs, tfms,
val_idxs=val_idxs, continuous=True)
# note precompute=False
models = ConvnetBuilderVGG(data.c, data.is_multi, data.is_reg)
models.model
class ConvLearnerVGG(ConvLearner):
# rewriting pretrained
@classmethod
def pretrained(cls, data, ps=None, **kwargs):
models = ConvnetBuilderVGG(data.c, data.is_multi, data.is_reg, ps=ps)
return cls(data, models, **kwargs)
# redefining freeze to freeze everything but last layer
def freeze(self):
layers = children(self.model)
n = len(layers)
for λ in layers:
λ.trainable=False
for p in λ.parameters(): p.requires_grad=False
λ = layers[n-1]
λ.trainable=True
for p in λ.parameters(): p.requires_grad=True
def unfreeze_prev_layer(self):
layers = children(self.model)
λ = layers[35]
λ.trainable=True
for p in λ.parameters(): p.requires_grad=True
bs = 32; sz = 224
f_model = vgg16
n = 7637
val_idxs = get_cv_idxs(n, 0, val_pct=0.2)
tfms = tfms_from_model(f_model, sz)
data = ImageClassifierData.from_csv(path, 'train', f'{path}train.csv', bs, tfms,
val_idxs=val_idxs, continuous=True)
learn = ConvLearnerVGG.pretrained(data, ps=0.0, precompute=False)
m = learn.models.model
trainable_params_(m)
learn.unfreeze_prev_layer()
trainable_params_(m)
bs=32; sz=224
n = 7637
transforms_basic = [RandomRotateXY(10), RandomDihedralXY()]
transforms_basic = [RandomRotateXY(10)]
def get_model_i(i=0):
val_idxs = get_cv_idxs(n, i, val_pct=0.1)
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_basic, max_zoom=1.05)
data = ImageClassifierData.from_csv(path, 'train', f'{path}train.csv', bs, tfms,
val_idxs=val_idxs, suffix='.jpg', continuous=True)
learn = ConvLearnerVGG.pretrained(data, ps=0.0, precompute=False)
return learn
def fit_and_predict(learn):
learn.fit(1e-3, 3)
learn.fit(1e-4, 4)
print("unfreezing")
learn.unfreeze_prev_layer()
#learn.fit(1e-5, 3, cycle_len=1, cycle_mult=2)
learn.fit(1e-5, 3)
return learn.TTA()
preds = []
for i in range(11):
print("iteration ", i)
learn = get_model_i(i)
preds.append(fit_and_predict(learn))
def reshape_preds(preds):
predictions = [preds[i][0] for i in range(11)]
y = [preds[i][1] for i in range(11)]
pp = np.vstack(predictions)
yy = np.vstack(y)
print(yy.shape)
pp = np.maximum(pp, 0.0)
err = np.abs(pp - yy).mean()
print("err", err)
reshape_preds(preds)
df = pd.read_csv(path + 'train.csv')
len(df['id'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Cleaning
Step2: Change the second and third columns.
Step3: The second and third columns, Have you seen any of the 6 films in the Star Wars franchise? and Do you consider yourself to be a fan of the Star Wars film franchise? respectively, take the values Yes, No, and NaN. We want to map them to True or False.
Step4: Cleaning the columns from index 3 to 9.
Step5: Rename the columns from index 3 to 9 for better readability.
Step6: Cleaning the columns from index 9 to 15.
Step7: Renaming column names.
Step8: Cleaning the columns from index 15 to 29.
Step9: Data Analysis
Step10: From the data above, we find that the most-seen movie is Episode V.
Step11: Finding The Highest Ranked Movie.
Step12: The highest-ranked movie is ranking_5, i.e. Episode V (it has the lowest mean rank, since a rank of 1 means favorite).
Step13: Let's break down data by Gender.
Step14: The number of movies seen.
Step15: The ranking of movies.
Step16: From the charts above, we do not find a significant difference between genders.
Step17: Convert fav to pivot table.
Step18: Who Shot First?
|
<ASSISTANT_TASK:>
Python Code:
star_wars = pd.read_csv('star_wars.csv', encoding="ISO-8859-1")
star_wars.head()
star_wars.columns
star_wars = star_wars.dropna(subset=['RespondentID'])
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].isnull().value_counts()
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].value_counts()
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'] = star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].map({'Yes': True, 'No': False})
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'] = star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].map({'Yes': True, 'No': False})
for col in star_wars.columns[3:9]:
star_wars[col] = star_wars[col].apply(lambda x: False if pd.isnull(x) else True)
star_wars.rename(columns={'Which of the following Star Wars films have you seen? Please select all that apply.': 'seen_1', \
'Unnamed: 4': 'seen_2', \
'Unnamed: 5': 'seen_3', \
'Unnamed: 6': 'seen_4', \
'Unnamed: 7': 'seen_5', \
'Unnamed: 8': 'seen_6'}, inplace=True)
star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)
star_wars.rename(columns={'Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.': 'ranking_1', \
'Unnamed: 10': 'ranking_2', \
'Unnamed: 11': 'ranking_3', \
'Unnamed: 12': 'ranking_4', \
'Unnamed: 13': 'ranking_5', \
'Unnamed: 14': 'ranking_6'}, inplace=True)
star_wars.rename(columns={'Please state whether you view the following characters favorably, unfavorably, or are unfamiliar with him/her.': 'Luck Skywalker', \
'Unnamed: 16': 'Han Solo', \
'Unnamed: 17': 'Princess Leia Oragana', \
'Unnamed: 18': 'Obi Wan Kenobi', \
'Unnamed: 19': 'Yoda', \
'Unnamed: 20': 'R2-D2', \
'Unnamed: 21': 'C-3P0', \
'Unnamed: 22': 'Anakin Skywalker', \
'Unnamed: 23': 'Darth Vader', \
'Unnamed: 24': 'Lando Calrissian', \
'Unnamed: 25': 'Padme Amidala', \
'Unnamed: 26': 'Boba Fett', \
'Unnamed: 27': 'Emperor Palpatine', \
'Unnamed: 28': 'Jar Jar Binks'}, inplace=True)
seen_sum = star_wars[['seen_1', 'seen_2', 'seen_3', 'seen_4', 'seen_5', 'seen_6']].sum()
seen_sum
seen_sum.idxmax()
ax = seen_sum.plot(kind='bar')
for p in ax.patches:
ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.01))
plt.show()
ranking_mean = star_wars[['ranking_1', 'ranking_2', 'ranking_3', 'ranking_4', 'ranking_5', 'ranking_6']].mean()
ranking_mean
ranking_mean.idxmin()
ranking_mean.plot(kind='bar')
plt.show()
males = star_wars[star_wars['Gender'] == 'Male']
females = star_wars[star_wars['Gender'] == 'Female']
males[males.columns[3:9]].sum().plot(kind='bar', title='male seen')
plt.show()
females[females.columns[3:9]].sum().plot(kind='bar', title='female seen')
plt.show()
males[males.columns[9:15]].mean().plot(kind='bar', title='Male Ranking')
plt.show()
females[females.columns[9:15]].mean().plot(kind='bar', title='Female Ranking')
plt.show()
star_wars['Luck Skywalker'].value_counts()
star_wars[star_wars.columns[15:29]].head()
fav = star_wars[star_wars.columns[15:29]].dropna()
fav.head()
fav_df_list = []
for col in fav.columns.tolist():
row = fav[col].value_counts()
d1 = pd.DataFrame(data={'favorably': row[0] + row[1], \
'neutral': row[2], \
'unfavorably': row[4] + row[5], \
'Unfamiliar': row[3]}, \
index=[col], \
columns=['favorably', 'neutral', 'unfavorably', 'Unfamiliar'])
fav_df_list.append(d1)
fav_pivot = pd.concat(fav_df_list)
fav_pivot
fig = plt.figure()
ax = plt.subplot(111)
fav_pivot.plot(kind='barh', stacked=True, figsize=(10,10), ax=ax)
# Shrink current axis's height by 10% on the bottom
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
# Put a legend below current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=5)
plt.show()
shot_first = star_wars['Which character shot first?'].value_counts()
shot_first
shot_sum = shot_first.sum()
shot_first = shot_first.apply(lambda x: x / shot_sum * 100)
shot_first
ax = shot_first.plot(kind='barh')
for p in ax.patches:
ax.annotate(str("{0:.2f}%".format(round(p.get_width(),2))), (p.get_width() * 1.005, p.get_y() + p.get_height() * 0.5))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Generation
Step2: Plotting
Step3: Models and Training
Step4: The loss function for the discriminator is
Step5: The loss function for the generator is
Step6: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
Step7: Plot Results
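The loss functions referenced in Steps 4 and 5 (and implemented with `log_sigmoid` in the code below) are the standard binary cross-entropy GAN objectives; a sketch in LaTeX:

```latex
L_D(\phi,\theta) = -\mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D_\phi(x)\right]
                   -\mathbb{E}_{z \sim p(z)}\left[\log\bigl(1 - D_\phi(G_\theta(z))\bigr)\right],
\qquad
L_G(\phi,\theta) = -\mathbb{E}_{z \sim p(z)}\left[\log D_\phi(G_\theta(z))\right]
```

Note the generator uses the non-saturating form (maximize log D on fakes) rather than minimizing log(1 − D).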
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q flax
from typing import Sequence
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
import flax.linen as nn
from flax.training import train_state
import optax
import functools
import scipy as sp
import math
rng = jax.random.PRNGKey(0)
@functools.partial(jax.jit, static_argnums=(1,))
def real_data(rng, batch_size):
mog_mean = jnp.array(
[
[1.50, 1.50],
[1.50, 0.50],
[1.50, -0.50],
[1.50, -1.50],
[0.50, 1.50],
[0.50, 0.50],
[0.50, -0.50],
[0.50, -1.50],
[-1.50, 1.50],
[-1.50, 0.50],
[-1.50, -0.50],
[-1.50, -1.50],
[-0.50, 1.50],
[-0.50, 0.50],
[-0.50, -0.50],
[-0.50, -1.50],
]
)
temp = jnp.tile(mog_mean, (batch_size // 16 + 1, 1))
mus = temp[0:batch_size, :]
return mus + 0.02 * jax.random.normal(rng, shape=(batch_size, 2))
def plot_on_ax(ax, values, contours=None, bbox=None, xlabel="", ylabel="", title="", cmap="Blues"):
kernel = sp.stats.gaussian_kde(values.T)
ax.axis(bbox)
ax.set_aspect(abs(bbox[1] - bbox[0]) / abs(bbox[3] - bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks([])
ax.set_yticks([])
xx, yy = jnp.mgrid[bbox[0] : bbox[1] : 300j, bbox[2] : bbox[3] : 300j]
positions = jnp.vstack([xx.ravel(), yy.ravel()])
f = jnp.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap=cmap)
if contours is not None:
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
cx, cy = jnp.meshgrid(x, y)
new_set = ax.contour(
cx, cy, contours.squeeze().reshape(cx.shape), levels=20, colors="k", linewidths=0.8, alpha=0.5
)
ax.set_title(title)
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, x):
for feat in self.features[:-1]:
x = jax.nn.relu(nn.Dense(features=feat)(x))
x = nn.Dense(features=self.features[-1])(x)
return x
@jax.jit
def discriminator_step(disc_state, gen_state, latents, real_examples):
def loss_fn(disc_params):
fake_examples = gen_state.apply_fn(gen_state.params, latents)
real_logits = disc_state.apply_fn(disc_params, real_examples)
fake_logits = disc_state.apply_fn(disc_params, fake_examples)
disc_real = -jax.nn.log_sigmoid(real_logits)
# log(1 - sigmoid(x)) = log_sigmoid(-x)
disc_fake = -jax.nn.log_sigmoid(-fake_logits)
return jnp.mean(disc_real + disc_fake)
disc_loss, disc_grad = jax.value_and_grad(loss_fn)(disc_state.params)
disc_state = disc_state.apply_gradients(grads=disc_grad)
return disc_state, disc_loss
@jax.jit
def generator_step(disc_state, gen_state, latents):
def loss_fn(gen_params):
fake_examples = gen_state.apply_fn(gen_params, latents)
fake_logits = disc_state.apply_fn(disc_state.params, fake_examples)
disc_fake = -jax.nn.log_sigmoid(fake_logits)
return jnp.mean(disc_fake)
gen_loss, gen_grad = jax.value_and_grad(loss_fn)(gen_state.params)
gen_state = gen_state.apply_gradients(grads=gen_grad)
return gen_state, gen_loss
@jax.jit
def train_step(disc_state, gen_state, latents, real_examples):
disc_state, disc_loss = discriminator_step(disc_state, gen_state, latents, real_examples)
gen_state, gen_loss = generator_step(disc_state, gen_state, latents)
return disc_state, gen_state, disc_loss, gen_loss
batch_size = 512
latent_size = 32
discriminator = MLP(features=[25, 25, 1])
generator = MLP(features=[25, 25, 2])
# Initialize parameters for the discriminator and the generator
latents = jax.random.normal(rng, shape=(batch_size, latent_size))
real_examples = real_data(rng, batch_size)
disc_params = discriminator.init(rng, real_examples)
gen_params = generator.init(rng, latents)
# Plot real examples
bbox = [-2, 2, -2, 2]
plot_on_ax(plt.gca(), real_examples, bbox=bbox, title="Data")
plt.tight_layout()
plt.savefig("gan_gmm_data.pdf")
plt.show()
# Create train states for the discriminator and the generator
lr = 0.05
disc_state = train_state.TrainState.create(
apply_fn=discriminator.apply, params=disc_params, tx=optax.sgd(learning_rate=lr)
)
gen_state = train_state.TrainState.create(apply_fn=generator.apply, params=gen_params, tx=optax.sgd(learning_rate=lr))
# x and y grid for plotting discriminator contours
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
X, Y = jnp.meshgrid(x, y)
pairs = jnp.stack((X, Y), axis=-1)
pairs = jnp.reshape(pairs, (-1, 2))
# Latents for testing generator
test_latents = jax.random.normal(rng, shape=(batch_size * 10, latent_size))
num_iters = 20001
n_save = 2000
draw_contours = False
history = []
for i in range(num_iters):
rng_iter = jax.random.fold_in(rng, i)
data_rng, latent_rng = jax.random.split(rng_iter)
# Sample minibatch of examples
real_examples = real_data(data_rng, batch_size)
# Sample minibatch of latents
latents = jax.random.normal(latent_rng, shape=(batch_size, latent_size))
# Update both the generator
disc_state, gen_state, disc_loss, gen_loss = train_step(disc_state, gen_state, latents, real_examples)
if i % n_save == 0:
print(f"i = {i}, Discriminator Loss = {disc_loss}, " + f"Generator Loss = {gen_loss}")
# Generate examples using the test latents
fake_examples = gen_state.apply_fn(gen_state.params, test_latents)
if draw_contours:
real_logits = disc_state.apply_fn(disc_state.params, pairs)
disc_contour = -real_logits + jax.nn.log_sigmoid(real_logits)
else:
disc_contour = None
history.append((i, fake_examples, disc_contour, disc_loss, gen_loss))
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
plt.gca(),
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
plt.tight_layout()
plt.savefig(f"gan_gmm_iter_{iter}.pdf")
plt.show()
cols = 3
rows = math.ceil((len(history) + 1) / cols)
bbox = [-2, 2, -2, 2]
fig, axs = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3), dpi=200)
axs = axs.flatten()
# Plot real examples
plot_on_ax(axs[0], real_examples, bbox=bbox, title="Data")
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
axs[i + 1],
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
# Remove extra plots from the figure
for i in range(len(history) + 1, len(axs)):
axs[i].remove()
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: In order to use this in a strategy, we should wrap our momentum calculator in a function
Step3: Now we implement the strategy described in the paper
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
k = 30
start = '2014-01-01'
end = '2015-01-01'
pricing = get_pricing('PEP', fields='price', start_date=start, end_date=end)
fundamentals = init_fundamentals()
num_shares = get_fundamentals(query(fundamentals.earnings_report.basic_average_shares,)
.filter(fundamentals.company_reference.primary_symbol == 'PEP',), end)
x = np.log(pricing)
v = x.diff()
m = get_pricing('PEP', fields='volume', start_date=start, end_date=end)/num_shares.values[0,0]
p0 = pd.rolling_sum(v, k)
p1 = pd.rolling_sum(m*v, k)
p2 = p1/pd.rolling_sum(m, k)
p3 = pd.rolling_mean(v, k)/pd.rolling_std(v, k)
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(p0)
ax1.plot(p1)
ax1.plot(p2)
ax1.plot(p3)
ax1.set_title('Momentum of PEP')
ax1.legend(['p(0)', 'p(1)', 'p(2)', 'p(3)'], bbox_to_anchor=(1.1, 1))
ax2.plot(p0)
ax2.plot(p1)
ax2.plot(p2)
ax2.plot(p3)
ax2.axis([0, 300, -0.005, 0.005])
ax2.set_xlabel('Time');
def get_p(prices, m, d, k):
"""Returns the dth-degree rolling momentum of data using lookback window length k."""
x = np.log(prices)
v = x.diff()
m = np.array(m)
if d == 0:
return pd.rolling_sum(v, k)
elif d == 1:
return pd.rolling_sum(m*v, k)
elif d == 2:
return pd.rolling_sum(m*v, k)/pd.rolling_sum(m, k)
elif d == 3:
return pd.rolling_mean(v, k)/pd.rolling_std(v, k)
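The `pd.rolling_*` helpers used above were removed in later pandas releases; below is a self-contained modern-pandas sketch of the same calculation (synthetic data; the name `get_p_modern` is hypothetical) for a quick sanity check:

```python
import numpy as np
import pandas as pd

def get_p_modern(prices, m, d, k):
    """Modern-pandas port of get_p: dth-degree rolling momentum, window length k."""
    v = np.log(prices).diff()
    m = pd.Series(np.asarray(m), index=prices.index)
    if d == 0:
        return v.rolling(k).sum()
    elif d == 1:
        return (m * v).rolling(k).sum()
    elif d == 2:
        return (m * v).rolling(k).sum() / m.rolling(k).sum()
    elif d == 3:
        return v.rolling(k).mean() / v.rolling(k).std()

# Exponential price path => constant log-return of 1/49 per step
prices = pd.Series(np.exp(np.linspace(0, 1, 50)))
turnover = pd.Series(np.full(50, 0.01))
p0 = get_p_modern(prices, turnover, 0, 10)  # rolling sum of log-returns
```

For this synthetic path `p0` settles at 10/49 once the rolling window is full.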
# Load the assets we want to trade
start = '2010-01-01'
end = '2015-01-01'
assets = sorted(['STX', 'WDC', 'CBI', 'JEC', 'VMC', 'PG', 'AAPL', 'PEP', 'AON', 'DAL'])
data = get_pricing(assets, start_date='2010-01-01', end_date='2015-01-01').loc['price', :, :]
# Get turnover rate for the assets
fundamentals = init_fundamentals()
num_shares = get_fundamentals(query(fundamentals.earnings_report.basic_average_shares,)
.filter(fundamentals.company_reference.primary_symbol.in_(assets),), end)
turnover = get_pricing(assets, fields='volume', start_date=start, end_date=end)/num_shares.values[0]
# Plot the prices just for fun
data.plot(figsize=(10,7), colors=['r', 'g', 'b', 'k', 'c', 'm', 'orange',
'chartreuse', 'slateblue', 'silver'])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Price')
plt.xlabel('Time');
# Calculate all the rolling momenta for the data and compute daily ranking of assets by momentum
lookback = 30
ps = np.array([np.array(get_p(data, turnover, j, lookback).T) for j in range(4)])
orders = [[ps[j].T[i].argsort() for i in range(len(ps[0,0]))] for j in range(4)]
ranks = [[orders[j][i].argsort() for i in range(len(orders[1]))] for j in range(4)]
# Cast data to numpy array for easier manipulation
data_array = np.array(data)
# Simulate going long on high-momentum stocks and short low-momentum stocks
# Our first 2*lookback - 2 values will be NaN since we used 2 lookback windows, so start on day 2*lookback
tots = [[0]*4 for j in range(len(data) - 2*lookback)]
for t in range(2*lookback, len(ranks[0]) - 2*lookback):
tots[t] = list(tots[t-1])
# Only update portfolio every 2*lookback days
if t%(2*lookback):
continue
# Go long top quintile of stocks and short bottom quintile
shorts = np.array([[int(x < 2)for x in ranks[j][t]] for j in range(4)])
longs = np.array([[int(x > 7) for x in ranks[j][t]] for j in range(4)])
# How many shares of each stock are in $1000
shares_in_1k = 1000/data_array[t]
# Go long and short $1000 each in the specified stocks, then clear holdings in 2*lookback days
returns = (data_array[t+2*lookback]*shares_in_1k - [1000]*len(assets))*(longs - shorts)
tots[t] += np.sum(returns, 1)
# Adjust so that tots[t] is actually money on day t
tots = [[0,0,0,0]]*2*lookback + tots
# Plot total money earned using the 3 different momentum definitions
plt.plot(tots)
plt.title('Cash in portfolio')
plt.legend(['p(0)', 'p(1)', 'p(2)', 'p(3)'], loc=4)
plt.xlabel('Time')
plt.ylabel('$');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Heat Transfer Rate
Step2: Evaporation Rate
Step3: Time for total Evaporation
Step4: Finally, the amount of time that it takes for all 0.68kg of LOX to boil off is
|
<ASSISTANT_TASK:>
Python Code:
# Import packages here:
import math as m
import numpy as np
from IPython.display import Image
import matplotlib.pyplot as plt
# Properties of Materials (engineeringtoolbox.com, Cengel, Tian)
# Conductivity
Kair = 0.026 # w/mk
Kptfe = 0.25 # w/mk
Kcf = 0.8 # transverse conductivity 0.5 -0.8 w/mk
# Fluid Properties
rhoLox = 1141 # kg/m^3
TLox = -183 # *C
# Latent Heat of Evaporation
heOxy = 214000 # j/kg
# Layer Dimensions:
r1 = 0.0381 # meters (1.5")
r2 = 0.0396 # m
r3 = 0.0399 # m
r4 = 0.0446 # m
r5 = 0.0449 # m
L = 0.13081 # meters (5.15")
# Environmental Properties:
Ts = 38 # *C
T1 = -183 #*C
Rptfe = m.log(r2/r1)/(2*m.pi*L*Kptfe)
Rcf1 = m.log(r3/r2)/(2*m.pi*L*Kcf)
Rair = m.log(r4/r3)/(2*m.pi*L*Kair)
Rcf2 = m.log(r5/r4)/(2*m.pi*L*Kcf)
Rtot = Rptfe + Rcf1 + Rair + Rcf2
print('Total Thermal Resistance equals: ', "%.2f" % Rtot, 'K/W')
#Heat transfer rate:
Qrate = (Ts - T1)/Rtot
print('Calculated Heat Transfer rate equals: ',"%.2f" % Qrate, 'W')
EvapRate = Qrate/heOxy
print ('The rate of evaporation is', "%.6f" % EvapRate, 'kg/s')
VLox = m.pi*(r1)**2*L
mLox = rhoLox*VLox
print('The mass of the liquid oxygen in tank is: ', "%.2f" % mLox, 'kg')
Tboiloff = mLox/EvapRate/60
print('%.2f' % Tboiloff, 'minutes' )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize data
Step2: value of slope and intercept from linear regression
Step3: model
Step4: a)
Step5: posterior of red speed
Step6: Blue / A
Step7: posterior of blue speed
Step8: b)
Step9: C)
Step10: shared Speed
Step11: Transition time
Step12: d)
Step13: blue speed
Step14: red speed
Step15: transition time
Step16: coupled case with first 100 data points
Step17: shared speed
|
<ASSISTANT_TASK:>
Python Code:
loc_data = pd.read_csv('location_data_hw9.csv')
loc_data.head()
fig, axes = plt.subplots(2,2, figsize=[6, 4])
ylabels = [['red_pos_X', 'red_pos_Y'], ['blue_pos_X', 'blue_pos_Y']]
for i in range(2):
for j in range(2):
axes[i,j].plot(loc_data['t'], loc_data[ylabels[i][j]])
axes[i,j].set_xlabel('t')
axes[i,j].set_ylabel(ylabels[i][j])
plt.tight_layout()
from scipy import stats
slopes, intercepts = {}, {}
columns = ['red_pos_X', 'red_pos_Y', 'blue_pos_X', 'blue_pos_Y']
for var in columns:
a, b, _, _, _ = stats.linregress(loc_data['t'], loc_data[var])
slopes[var] = a
intercepts[var] = b
print(slopes)
print(intercepts)
import pymc3 as pm
import theano.tensor as tt
with pm.Model() as model_red:
t = loc_data['t']
xname, yname = 'red_pos_X', 'red_pos_Y'
obs_x, obs_y = loc_data[xname], loc_data[yname]
# vx --> a, x0 --> b
# we choose the range of vx and x0 to be around the values obtained from linear regression
vx = pm.Uniform('vx', slopes[xname]/5, slopes[xname]*5)
x0 = pm.Uniform('x0', intercepts[xname]-1, intercepts[xname]+1)
# vy --> a, y0 --> b
# note that vy is negative
vy = pm.Uniform('vy', slopes[yname]*5, slopes[yname]/5)
y0 = pm.Uniform('y0', intercepts[yname]-1, intercepts[yname]+1)
# speed
v = pm.Deterministic('v', tt.sqrt(vx**2 + vy**2))
sigma_x = pm.Uniform('sigma_x', 0, 1)
sigma_y = pm.Uniform('sigma_y', 0, 1)
mu_x = pm.Deterministic('mu_x', vx*t+x0)
mu_y = pm.Deterministic('mu_y', vy*t+y0)
with model_red:
mc_x = pm.Normal("mc_x", mu_x, sigma_x, observed=obs_x)
mc_y = pm.Normal("mc_y", mu_y, sigma_y, observed=obs_y)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
red_burned_trace = trace[50000::5]
pm.plots.plot_posterior(trace=red_burned_trace["v"], round_to=6, alpha_level=0.05)
with pm.Model() as model_blue:
t = loc_data['t']
xname, yname = 'blue_pos_X', 'blue_pos_Y'
obs_x, obs_y = loc_data[xname], loc_data[yname]
# vx --> a, x0 --> b
vx = pm.Uniform('vx', slopes[xname]/5, slopes[xname]*5)
x0 = pm.Uniform('x0', intercepts[xname]-1, intercepts[xname]+1)
# vy --> a, y0 --> b
# note that vy is negative
# we choose the range of vx and x0 to be around the values obtained from linear regression
vy = pm.Uniform('vy', slopes[yname]/5, slopes[yname]*5)
y0 = pm.Uniform('y0', intercepts[yname]-1, intercepts[yname]+1)
# speed
v = pm.Deterministic('v', tt.sqrt(vx**2 + vy**2))
sigma_x = pm.Uniform('sigma_x', 0, 1)
sigma_y = pm.Uniform('sigma_y', 0, 1)
mu_x = pm.Deterministic('mu_x', vx*t+x0)
mu_y = pm.Deterministic('mu_y', vy*t+y0)
with model_blue:
mc_x = pm.Normal("mc_x", mu_x, sigma_x, observed=obs_x)
mc_y = pm.Normal("mc_y", mu_y, sigma_y, observed=obs_y)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
blue_burned_trace = trace[50000::5]
pm.plots.plot_posterior(trace=blue_burned_trace["v"],round_to=6, alpha_level=0.05)
import numpy as np
t_red = -red_burned_trace["y0"]/red_burned_trace["vy"]
t_blue = -blue_burned_trace["y0"]/blue_burned_trace["vy"]
tau = np.max(np.array([t_red, t_blue]), axis=0)
pm.plots.plot_posterior(trace=tau, alpha_level=0.05, round_to=2)
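The transition time above is the elementwise posterior maximum of each walker's y = 0 crossing time, t = -y0/vy; a minimal numpy sketch with made-up sample values:

```python
import numpy as np

# Hypothetical posterior samples of each walker's y = 0 crossing time,
# t = -y0 / vy (vy < 0 for the downward walker, so t comes out positive).
t_red_demo = np.array([2.0, 3.0, 2.2])
t_blue_demo = np.array([2.5, 1.0, 2.1])

# The transition is complete only once BOTH walkers have crossed,
# hence the elementwise maximum over the two sample arrays.
tau_demo = np.max(np.array([t_red_demo, t_blue_demo]), axis=0)  # -> [2.5, 3.0, 2.2]
```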
with pm.Model() as model_coupled:
t = loc_data['t']
xname_blue, yname_blue = 'blue_pos_X', 'blue_pos_Y'
xname_red, yname_red = 'red_pos_X', 'red_pos_Y'
obs_x_blue, obs_y_blue = loc_data[xname_blue], loc_data[yname_blue]
obs_x_red, obs_y_red = loc_data[xname_red], loc_data[yname_red]
# vx --> a, x0 --> b
vx_blue = pm.Uniform('vx_blue', slopes[xname_blue]/5, slopes[xname_blue]*5)
x0_blue = pm.Uniform('x0_blue', intercepts[xname_blue]-1, intercepts[xname_blue]+1)
vx_red = pm.Uniform('vx_red', slopes[xname_red]/5, slopes[xname_red]*5)
x0_red = pm.Uniform('x0_red', intercepts[xname_red]-1, intercepts[xname_red]+1)
# vy --> a, y0 --> b
# we choose the range of vy and y0 to be around the values obtained from linear regression
vy_blue = pm.Uniform('vy_blue', slopes[yname_blue]/5, slopes[yname_blue]*5)
y0_blue = pm.Uniform('y0_blue', intercepts[yname_blue]-1, intercepts[yname_blue]+1)
y0_red = pm.Uniform('y0_red', intercepts[yname_red]-1, intercepts[yname_red]+1)
# speed
v = pm.Deterministic('v', tt.sqrt(vx_blue**2 + vy_blue**2))
# vy_red is not independent anymore !!!
# vy_red is negative
vy_red = pm.Deterministic('vy_red', -tt.sqrt(v**2 - vx_red**2))
sigma_x_blue = pm.Uniform('sigma_x_blue', 0, 1)
sigma_y_blue = pm.Uniform('sigma_y_blue', 0, 1)
sigma_x_red = pm.Uniform('sigma_x_red', 0, 1)
sigma_y_red = pm.Uniform('sigma_y_red', 0, 1)
mu_x_blue = pm.Deterministic('mu_x_blue', vx_blue * t + x0_blue)
mu_y_blue = pm.Deterministic('mu_y_blue', vy_blue * t + y0_blue)
mu_x_red = pm.Deterministic('mu_x_red', vx_red * t + x0_red)
mu_y_red = pm.Deterministic('mu_y_red', vy_red * t + y0_red)
with model_coupled:
mc_x_blue = pm.Normal("mc_x_blue", mu_x_blue, sigma_x_blue, observed=obs_x_blue)
mc_y_blue = pm.Normal("mc_y_blue", mu_y_blue, sigma_y_blue, observed=obs_y_blue)
mc_x_red = pm.Normal("mc_x_red", mu_x_red, sigma_x_red, observed=obs_x_red)
mc_y_red = pm.Normal("mc_y_red", mu_y_red, sigma_y_red, observed=obs_y_red)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
coupled_burned_trace = trace[50000::5]
pm.plots.plot_posterior(trace=coupled_burned_trace["v"], round_to=6, alpha_level=0.05)
t_red = -coupled_burned_trace["y0_red"]/coupled_burned_trace["vy_red"]
t_blue = -coupled_burned_trace["y0_blue"]/coupled_burned_trace["vy_blue"]
tau = np.max(np.array([t_red, t_blue]), axis=0)
pm.plots.plot_posterior(trace=tau, alpha_level=0.05, round_to=2)
# We only use the first 100 data point
N = 100
from scipy import stats
slopes, intercepts = {}, {}
columns = ['red_pos_X', 'red_pos_Y', 'blue_pos_X', 'blue_pos_Y']
for var in columns:
a, b, _, _, _ = stats.linregress(loc_data['t'][:N], loc_data[var][:N])
slopes[var] = a
intercepts[var] = b
print(slopes)
print(intercepts)
with pm.Model() as model_decoupled_small:
t = loc_data['t'][:N]
xname_blue, yname_blue = 'blue_pos_X', 'blue_pos_Y'
xname_red, yname_red = 'red_pos_X', 'red_pos_Y'
obs_x_blue, obs_y_blue = loc_data[xname_blue][:N], loc_data[yname_blue][:N]
obs_x_red, obs_y_red = loc_data[xname_red][:N], loc_data[yname_red][:N]
# vx --> a, x0 --> b
vx_blue = pm.Uniform('vx_blue', slopes[xname_blue]/5, slopes[xname_blue]*5)
x0_blue = pm.Uniform('x0_blue', intercepts[xname_blue]-1, intercepts[xname_blue]+1)
vx_red = pm.Uniform('vx_red', slopes[xname_red]/5, slopes[xname_red]*5)
x0_red = pm.Uniform('x0_red', intercepts[xname_red]-1, intercepts[xname_red]+1)
# vy --> a, y0 --> b
# we choose the range of vx and x0 to be around the values obtained from linear regression
vy_blue = pm.Uniform('vy_blue', slopes[yname_blue]/5, slopes[yname_blue]*5)
y0_blue = pm.Uniform('y0_blue', intercepts[yname_blue]-1, intercepts[yname_blue]+1)
# notice that vy_red is negative
vy_red = pm.Uniform('vy_red', slopes[yname_red]*5, slopes[yname_red]/5)
y0_red = pm.Uniform('y0_red', intercepts[yname_red]-1, intercepts[yname_red]+1)
# speed
v_blue = pm.Deterministic('v_blue', tt.sqrt(vx_blue**2 + vy_blue**2))
v_red = pm.Deterministic('v_red', tt.sqrt(vx_red**2 + vy_red**2))
sigma_x_blue = pm.Uniform('sigma_x_blue', 0, 1)
sigma_y_blue = pm.Uniform('sigma_y_blue', 0, 1)
sigma_x_red = pm.Uniform('sigma_x_red', 0, 1)
sigma_y_red = pm.Uniform('sigma_y_red', 0, 1)
mu_x_blue = pm.Deterministic('mu_x_blue', vx_blue * t + x0_blue)
mu_y_blue = pm.Deterministic('mu_y_blue', vy_blue * t + y0_blue)
mu_x_red = pm.Deterministic('mu_x_red', vx_red * t + x0_red)
mu_y_red = pm.Deterministic('mu_y_red', vy_red * t + y0_red)
with model_decoupled_small:
mc_x_blue = pm.Normal("mc_x_blue", mu_x_blue, sigma_x_blue, observed=obs_x_blue)
mc_y_blue = pm.Normal("mc_y_blue", mu_y_blue, sigma_y_blue, observed=obs_y_blue)
mc_x_red = pm.Normal("mc_x_red", mu_x_red, sigma_x_red, observed=obs_x_red)
mc_y_red = pm.Normal("mc_y_red", mu_y_red, sigma_y_red, observed=obs_y_red)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
decoupled_small_burned_trace = trace[50000::5]
pm.plots.plot_posterior(trace=decoupled_small_burned_trace["v_blue"], round_to=6, alpha_level=0.05)
pm.plots.plot_posterior(trace=decoupled_small_burned_trace["v_red"], round_to=6, alpha_level=0.05)
t_red = -decoupled_small_burned_trace["y0_red"]/decoupled_small_burned_trace["vy_red"]
t_blue = -decoupled_small_burned_trace["y0_blue"]/decoupled_small_burned_trace["vy_blue"]
tau = np.max(np.array([t_red, t_blue]), axis=0)
pm.plots.plot_posterior(trace=tau, alpha_level=0.05, round_to=2)
with pm.Model() as model_coupled_small:
t = loc_data['t'][:N]
xname_blue, yname_blue = 'blue_pos_X', 'blue_pos_Y'
xname_red, yname_red = 'red_pos_X', 'red_pos_Y'
obs_x_blue, obs_y_blue = loc_data[xname_blue][:N], loc_data[yname_blue][:N]
obs_x_red, obs_y_red = loc_data[xname_red][:N], loc_data[yname_red][:N]
# vx --> a, x0 --> b
vx_blue = pm.Uniform('vx_blue', slopes[xname_blue]/5, slopes[xname_blue]*5)
x0_blue = pm.Uniform('x0_blue', intercepts[xname_blue]-1, intercepts[xname_blue]+1)
vx_red = pm.Uniform('vx_red', slopes[xname_red]/5, slopes[xname_red]*5)
x0_red = pm.Uniform('x0_red', intercepts[xname_red]-1, intercepts[xname_red]+1)
# vy --> a, y0 --> b
# we choose the range of vy and y0 to be around the values obtained from linear regression
vy_blue = pm.Uniform('vy_blue', slopes[yname_blue]/5, slopes[yname_blue]*5)
y0_blue = pm.Uniform('y0_blue', intercepts[yname_blue]-1, intercepts[yname_blue]+1)
y0_red = pm.Uniform('y0_red', intercepts[yname_red]-1, intercepts[yname_red]+1)
# speed
v = pm.Deterministic('v', tt.sqrt(vx_blue**2 + vy_blue**2))
# vy_red is not independent anymore !!!
# vy_red is negative
vy_red = pm.Deterministic('vy_red', -tt.sqrt(v**2 - vx_red**2))
sigma_x_blue = pm.Uniform('sigma_x_blue', 0, 1)
sigma_y_blue = pm.Uniform('sigma_y_blue', 0, 1)
sigma_x_red = pm.Uniform('sigma_x_red', 0, 1)
sigma_y_red = pm.Uniform('sigma_y_red', 0, 1)
mu_x_blue = pm.Deterministic('mu_x_blue', vx_blue * t + x0_blue)
mu_y_blue = pm.Deterministic('mu_y_blue', vy_blue * t + y0_blue)
mu_x_red = pm.Deterministic('mu_x_red', vx_red * t + x0_red)
mu_y_red = pm.Deterministic('mu_y_red', vy_red * t + y0_red)
with model_coupled_small:
mc_x_blue = pm.Normal("mc_x_blue", mu_x_blue, sigma_x_blue, observed=obs_x_blue)
mc_y_blue = pm.Normal("mc_y_blue", mu_y_blue, sigma_y_blue, observed=obs_y_blue)
mc_x_red = pm.Normal("mc_x_red", mu_x_red, sigma_x_red, observed=obs_x_red)
mc_y_red = pm.Normal("mc_y_red", mu_y_red, sigma_y_red, observed=obs_y_red)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
coupled_small_burned_trace = trace[50000::5]
pm.plots.plot_posterior(trace=coupled_small_burned_trace["v"], round_to=6, alpha_level=0.05)
t_red = -coupled_small_burned_trace["y0_red"]/coupled_small_burned_trace["vy_red"]
t_blue = -coupled_small_burned_trace["y0_blue"]/coupled_small_burned_trace["vy_blue"]
tau = np.max(np.array([t_red, t_blue]), axis=0)
pm.plots.plot_posterior(trace=tau, alpha_level=0.05, round_to=2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
dataFrame = dataFrame.drop(row)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Snatching asteroids. A hijacking guide to the galaxy!
Step7: Output format
|
<ASSISTANT_TASK:>
Python Code:
class MPCORB:
"""Class for accessing MPCORB database records."""
def __init__(self, file='MPCORB.DAT'):
self.file = file
class MPOrbit:
"""Parse and process MPCORB entries.
http://www.minorplanetcenter.org/iau/info/MPOrbitFormat.html
"""
def __init__(self, line):
"""Parse line and fill the self.data dictionary with fields according
to the header line in the MPCORB.DAT file. Fixes MPCORB format issues:
- adds the missing header for uncertainty parameter U
"""
headerline = "Des'n H G Epoch M Peri. Node Incl. e n a "\
"Reference #Obs #Opp Arc rms Perts Computer"
self.headers = headerline.split()
# fill missing Uncertainty header
self.headers.insert(11, 'U')
data = line.split()
# join space separated Perts argument
data[17:19] = [data[17]+' '+data[18]]
self.data = OrderedDict()
for i, h in enumerate(self.headers):
self.data[h] = data[i]
# dict-type access methods: mporbit['h']
def __getitem__(self, name):
return self.data.__getitem__(name)
def __setitem__(self, name, value):
return self.data.__setitem__(name, value)
def find_semiminor(major, e):
Given semimajor and eccentricity, return semiminor
return float(major)*math.sqrt(1-float(e)**2)
def find_diameter(absmag, albedo=0.15):
Return asteroid diameter in km
http://www.physics.sfasu.edu/astro/asteroids/sizemagnitude.html
return (10**(-0.2*float(absmag)))*1329/math.sqrt(albedo)
def norm_radius(minr, maxr, minv, maxv, v):
Given minimal/maximum radius, min/max value and value,
return radius.
# assumed implementation: linear min-max interpolation
return minr + (maxr - minr) * (float(v) - minv) / (maxv - minv)
table = []
headers = []
for idx, line in enumerate(open('MPCORB.DAT', 'r')):
if idx == 38:
headers = line.split()
headers.insert(11, 'U') # fill missing Uncertainty header
#print(len(headers))
if 40 < idx < 42+100:
data = line.split()
data[17:19] = [data[17]+' '+data[18]]
entry = MPOrbit(line)
#entry = dict()
for i, h in enumerate(headers):
if h == 'a':
h = 'semimajor'
if h == 'Des\'n':
h = 'id'
entry[h] = data[i]
entry['semiminor'] = find_semiminor(entry['semimajor'], entry['e'])
entry['size'] = find_diameter(entry['H'])
table.append(entry)
# compare numerically, not lexicographically (semimajor is stored as a string)
minv = min(float(e['semimajor']) for e in table)
maxv = max(float(e['semimajor']) for e in table)
asteroids = []
aheaders = 'id', 'size'
print('// id size distance platinum fuel terraforming orbit')
tojson = []
seed(3.14)
for e in table:
a = OrderedDict()
a['id'] = e['id']
a['size'] = e['size']
a['distance'] = float(e['semimajor'])
a['period'] = 360/float(e['n']) # full orbit is 360 degrees; n = Mean daily motion (degrees per day)
a['---'] = ''
a['platinum'] = random()
a['fuel'] = random()
a['terraforming'] = random()
a['orbit'] = random()
tojson.append(a)
import json
print('')
#e[radius] = norm_radius()
#print('{id} {size} {distance}'.format(**a))
print(json.dumps(tojson, indent=4))
with open('dataset.js', 'w') as fw:
fw.write('var dataset = ')
fw.write(json.dumps(tojson, indent=4))
<END_TASK>
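The `find_diameter` helper above encodes the standard H-magnitude size relation D = (1329 / sqrt(albedo)) * 10^(-H/5) km. A quick standalone sanity check of the same relation, using approximate Ceres values as the assumed inputs:

```python
import math

def diameter_km(absmag, albedo=0.15):
    # Same size-magnitude relation as find_diameter above
    return (10 ** (-0.2 * float(absmag))) * 1329 / math.sqrt(albedo)

# Ceres: H ~ 3.34, geometric albedo ~ 0.09; the relation should give
# a diameter in the ~900-1000 km range (Ceres is ~940 km across)
d_ceres = diameter_km(3.34, albedo=0.09)
print(d_ceres)
```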
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, trigonometric functions are not loaded by default. To use them we have to import them from the math library
Step2: Variables
Step3: Exercise
Step4: Run the test below to find out whether you wrote the code correctly
Step5: Lists
Step6: But if we try to multiply this data by a number, it will not behave as expected.
Step7: Functions
Step8: This line of code is equivalent to defining a mathematical function as follows
Step9: This notation is very useful for mathematical functions, but it forces us to think of definitions in a functional way, which is not always the best solution (especially in a language with an object-oriented programming paradigm).
Step10: With the same results
Step11: Exercise
Step12: And to test it, try converting some data
Step13: Control loops
Step14: or append it to a new list
Step15: Exercise
Step16: Run the tests below
Step17: Matrices
Step18: Exercise
|
<ASSISTANT_TASK:>
Python Code:
2 + 3
2*3
2**3
sin(pi)
from math import sin, pi
sin(pi)
a = 10
a
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
from nose.tools import assert_equal
assert_equal(_, c)
print("Sin errores")
A = [2, 4, 8, 10]
A
A*2
f = lambda x: x**2 + 1
f(2)
def g(x):
y = x**2 + 1
return y
g(2)
def cel_a_faren(grados_cel):
# YOUR CODE HERE
raise NotImplementedError()
return grados_faren
cel_a_faren(-1)
from nose.tools import assert_equal
assert_equal(cel_a_faren(10), 50)
assert_equal(cel_a_faren(50), 122)
print("Sin errores")
for dato in A:
print(dato*2)
B = []
for dato in A:
B.append(dato*2)
B
# YOUR CODE HERE
raise NotImplementedError()
C
# YOUR CODE HERE
raise NotImplementedError()
D
from numpy.testing import assert_array_equal
print("Sin errores")
from numpy import matrix
A = matrix([[1, 2], [3, 4]])
A
v1 = matrix([[1], [2]])
v1
# Depending on the Python version installed on your computer,
# this operation may not work; in that case just
# change @ to *
A@v1
# The following line will not work. Why?
v1@A
from numpy import sin, cos, pi
τ = 2*pi
# YOUR CODE HERE
raise NotImplementedError()
vec_rot
from numpy.testing import assert_array_equal
assert_array_equal(vec_rot, matrix([[2*(cos(τ/12)-sin(τ/12))], [2*(cos(τ/12)+sin(τ/12))]]))
print("Sin errores")
<END_TASK>
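The final exercise above multiplies a rotation matrix by a vector with `numpy.matrix`; a minimal sketch of the same 2D rotation using plain arrays (the more idiomatic modern choice), for reference:

```python
import numpy as np

def rot2d(theta):
    # Standard counterclockwise 2D rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

v = np.array([1.0, 0.0])
v_rot = rot2d(np.pi / 2) @ v   # rotate 90 degrees: (1, 0) maps to (0, 1)
print(v_rot)
```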
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
# source_text.split() with empty string to get all words without the '\n'
# source_text.split('\n') to get all sentences
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids_helper(text, vocab_to_int, appendEOS=False):
sentences = [sentence if not appendEOS else sentence + ' <EOS>' for sentence in text.split('\n')]
ids = [[vocab_to_int[word] for word in sentence.split()] for sentence in sentences]
return ids
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = text_to_ids_helper(source_text, source_vocab_to_int)
target_id_text = text_to_ids_helper(target_text, target_vocab_to_int, True)
return (source_id_text, target_id_text)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for dencoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
id_GO = target_vocab_to_int['<GO>']
decoder_input = tf.concat([tf.fill([batch_size, 1], id_GO), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, encoder_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return encoder_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Per discussion on Slack, dropout could be applied to encoder and train decoder.
# https://nd101.slack.com/archives/C3SEUBC5C/p1492633046150410
drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=drop,
decoder_fn=train_decoder_fn,
inputs=dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = \
tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn=output_fn,
encoder_state=encoder_state,
embeddings=dec_embeddings,
start_of_sequence_id=start_of_sequence_id,
end_of_sequence_id=end_of_sequence_id,
maximum_length=maximum_length-1,
num_decoder_symbols=vocab_size)
infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(cell=dec_cell,
decoder_fn=infer_decoder_fn,
scope=decoding_scope)
return infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
tf.reset_default_graph()
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
with tf.variable_scope('decoding') as decoding_scope:
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
output_fn = lambda x: tf.contrib.layers.fully_connected(inputs=x,
num_outputs=vocab_size,
activation_fn=None,
scope=decoding_scope)
train_logits = \
decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = \
decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
decoder_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, decoder_input)
train_logits, infer_logits = \
decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length,
rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 1
# Embedding Size: according to unique words (Is this assumption valid?)
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.70
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
id_UNK = vocab_to_int['<UNK>']
ids = [vocab_to_int.get(word, id_UNK) for word in sentence.lower().split()]
return ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
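The `get_accuracy` helper above zero-pads the shorter of the two sequences before comparing targets against `argmax` predictions. The padding-and-compare step can be exercised on its own with tiny arrays (assumed shapes: targets `(batch, time)`, logits `(batch, time, vocab)`):

```python
import numpy as np

target = np.array([[1, 2, 3]])     # (1, 3)
logits = np.zeros((1, 5, 4))       # (1, 5, 4): argmax picks class 0 everywhere

# Pad the target out to the logits' time dimension, as get_accuracy does
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
    target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')

# target is now [[1, 2, 3, 0, 0]]; predictions are all 0, so 2 of 5 positions match
acc = np.mean(np.equal(target, np.argmax(logits, 2)))
print(acc)
```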
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import matplotlib (and other useful libraries)
Step2: Exploring relationships between quantitative variables
Step3: Line plot
Step4: Line plot with confidence intervals
Step5: Line plot with two y axes
Step6: Examining distributions
Step7: Overlaid histogram
Step8: Density plot
Step9: Between group comparisons
Step10: There are several options when there is more than one level to how we are grouping our data. To expand on the example above, we can divide check outs into those by registered and casual users. So we now have two grouping levels
Step11: Grouped bar plot
Step12: Box plot
|
<ASSISTANT_TASK:>
Python Code:
# Download data, unzip, etc.
import pandas as pd
import urllib
import tempfile
import shutil
import zipfile
temp_dir = tempfile.mkdtemp()
data_source = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00275/Bike-Sharing-Dataset.zip'
zipname = temp_dir + '/Bike-Sharing-Dataset.zip'
urllib.urlretrieve(data_source, zipname)
zip_ref = zipfile.ZipFile(zipname, 'r')
zip_ref.extractall(temp_dir)
zip_ref.close()
daily_path = temp_dir + '/day.csv'
daily_data = pd.read_csv(daily_path)
daily_data['dteday'] = pd.to_datetime(daily_data['dteday'])
drop_list = ['instant', 'season', 'yr', 'mnth', 'holiday', 'workingday', 'weathersit', 'atemp', 'hum']
daily_data.drop(drop_list, inplace = True, axis = 1)
shutil.rmtree(temp_dir)
daily_data.head()
from __future__ import division, print_function
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
# In a notebook environment, display the plots inline
%matplotlib inline
# Set some parameters to apply to all plots. These can be overridden in each plot if desired
import matplotlib
# Plot size to 14" x 7"
matplotlib.rc('figure', figsize = (14, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top = False, right = False)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set backgound color to white
matplotlib.rc('axes', facecolor = 'white')
# Define a function to create the scatterplot. This makes it easy to reuse code within and across notebooks
def scatterplot(x_data, y_data, x_label, y_label, title):
# Create the plot object
_, ax = plt.subplots()
# Plot the data, set the size (s), color and transparency (alpha) of the points
ax.scatter(x_data, y_data, s = 30, color = '#539caf', alpha = 0.75)
# Label the axes and provide a title
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
# Call the function to create plot
scatterplot(x_data = daily_data['temp']
, y_data = daily_data['cnt']
, x_label = 'Normalized temperature (C)'
, y_label = 'Check outs'
, title = 'Number of Check Outs vs Temperature')
# Perform linear regression
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import summary_table
x = sm.add_constant(daily_data['temp'])
y = daily_data['cnt']
regr = sm.OLS(y, x)
res = regr.fit()
# Get fitted values from model to plot
st, data, ss2 = summary_table(res, alpha=0.05)
fitted_values = data[:,2]
# Define a function for the line plot
def lineplot(x_data, y_data, x_label, y_label, title):
# Create the plot object
_, ax = plt.subplots()
# Plot the best fit line, set the linewidth (lw), color and transparency (alpha) of the line
ax.plot(x_data, y_data, lw = 2, color = '#539caf', alpha = 1)
# Label the axes and provide a title
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
# Call the function to create plot
lineplot(x_data = daily_data['temp']
, y_data = fitted_values
, x_label = 'Normalized temperature (C)'
, y_label = 'Check outs'
, title = 'Line of Best Fit for Number of Check Outs vs Temperature')
# Get the confidence intervals of the model
predict_mean_ci_low, predict_mean_ci_upp = data[:,4:6].T
# Data for regions where we want to shade to indicate the intervals has to be sorted by the x axis to display correctly
CI_df = pd.DataFrame(columns = ['x_data', 'low_CI', 'upper_CI'])
CI_df['x_data'] = daily_data['temp']
CI_df['low_CI'] = predict_mean_ci_low
CI_df['upper_CI'] = predict_mean_ci_upp
CI_df.sort_values('x_data', inplace = True)
# Define a function for the line plot with intervals
def lineplotCI(x_data, y_data, sorted_x, low_CI, upper_CI, x_label, y_label, title):
# Create the plot object
_, ax = plt.subplots()
# Plot the data, set the linewidth, color and transparency of the line, provide a label for the legend
ax.plot(x_data, y_data, lw = 1, color = '#539caf', alpha = 1, label = 'Fit')
# Shade the confidence interval
ax.fill_between(sorted_x, low_CI, upper_CI, color = '#539caf', alpha = 0.4, label = '95% CI')
# Label the axes and provide a title
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
# Display legend
ax.legend(loc = 'best')
# Call the function to create plot
lineplotCI(x_data = daily_data['temp']
, y_data = fitted_values
, sorted_x = CI_df['x_data']
, low_CI = CI_df['low_CI']
, upper_CI = CI_df['upper_CI']
, x_label = 'Normalized temperature (C)'
, y_label = 'Check outs'
, title = 'Line of Best Fit for Number of Check Outs vs Temperature')
# Define a function for a plot with two y axes
def lineplot2y(x_data, x_label, y1_data, y1_color, y1_label, y2_data, y2_color, y2_label, title):
# Each variable will actually have its own plot object but they will be displayed in just one plot
# Create the first plot object and draw the line
_, ax1 = plt.subplots()
ax1.plot(x_data, y1_data, color = y1_color)
# Label axes
ax1.set_ylabel(y1_label, color = y1_color)
ax1.set_xlabel(x_label)
ax1.set_title(title)
# Create the second plot object, telling matplotlib that the two objects have the same x-axis
ax2 = ax1.twinx()
ax2.plot(x_data, y2_data, color = y2_color)
ax2.set_ylabel(y2_label, color = y2_color)
# Show right frame line
ax2.spines['right'].set_visible(True)
# Call the function to create plot
lineplot2y(x_data = daily_data['dteday']
, x_label = 'Day'
, y1_data = daily_data['cnt']
, y1_color = '#539caf'
, y1_label = 'Check outs'
, y2_data = daily_data['windspeed']
, y2_color = '#7663b0'
, y2_label = 'Normalized windspeed'
, title = 'Check Outs and Windspeed Over Time')
# Define a function for a histogram
def histogram(data, x_label, y_label, title):
_, ax = plt.subplots()
ax.hist(data, color = '#539caf')
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
# Call the function to create plot
histogram(data = daily_data['registered']
, x_label = 'Check outs'
, y_label = 'Frequency'
, title = 'Distribution of Registered Check Outs')
# Define a function for an overlaid histogram
def overlaid_histogram(data1, data1_name, data1_color, data2, data2_name, data2_color, x_label, y_label, title):
# Set the bounds for the bins so that the two distributions are fairly compared
max_nbins = 10
data_range = [min(min(data1), min(data2)), max(max(data1), max(data2))]
binwidth = (data_range[1] - data_range[0]) / max_nbins
bins = np.arange(data_range[0], data_range[1] + binwidth, binwidth)
# Create the plot
_, ax = plt.subplots()
ax.hist(data1, bins = bins, color = data1_color, alpha = 1, label = data1_name)
ax.hist(data2, bins = bins, color = data2_color, alpha = 0.75, label = data2_name)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
ax.legend(loc = 'best')
# Call the function to create plot
overlaid_histogram(data1 = daily_data['registered']
, data1_name = 'Registered'
, data1_color = '#539caf'
, data2 = daily_data['casual']
, data2_name = 'Casual'
, data2_color = '#7663b0'
, x_label = 'Check outs'
, y_label = 'Frequency'
, title = 'Distribution of Check Outs By Type')
# We must first create a density estimate from our data
from scipy.stats import gaussian_kde
data = daily_data['registered']
density_est = gaussian_kde(data)
density_est.covariance_factor = lambda : .25 # This controls the 'smoothness' of the estimate. Higher values give smoother estimates.
density_est._compute_covariance()
x_data = np.arange(min(data), max(data), 200)
# Define a function for a density plot
def densityplot(x_data, density_est, x_label, y_label, title):
_, ax = plt.subplots()
ax.plot(x_data, density_est(x_data), color = '#539caf', lw = 2)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
# Call the function to create plot
densityplot(x_data = x_data
, density_est = density_est
, x_label = 'Check outs'
, y_label = 'Frequency'
, title = 'Distribution of Registered Check Outs')
# Calculate the mean and standard deviation for number of check outs each day
mean_total_co_day = daily_data[['weekday', 'cnt']].groupby('weekday').agg([np.mean, np.std])
mean_total_co_day.columns = mean_total_co_day.columns.droplevel()
# Define a function for a bar plot
def barplot(x_data, y_data, error_data, x_label, y_label, title):
_, ax = plt.subplots()
# Draw bars, position them in the center of the tick mark on the x-axis
ax.bar(x_data, y_data, color = '#539caf', align = 'center')
# Draw error bars to show standard deviation, set ls to 'none' to remove line between points
ax.errorbar(x_data, y_data, yerr = error_data, color = '#297083', ls = 'none', lw = 2, capthick = 2)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
# Call the function to create plot
barplot(x_data = mean_total_co_day.index.values
, y_data = mean_total_co_day['mean']
, error_data = mean_total_co_day['std']
, x_label = 'Day of week'
, y_label = 'Check outs'
, title = 'Total Check Outs By Day of Week (0 = Sunday)')
mean_by_reg_co_day = daily_data[['weekday', 'registered', 'casual']].groupby('weekday').mean()
mean_by_reg_co_day
# Calculate the mean number of check outs for each day by registration status
mean_by_reg_co_day = daily_data[['weekday', 'registered', 'casual']].groupby('weekday').mean()
# Calculate proportion of each category of user for each day
mean_by_reg_co_day['total'] = mean_by_reg_co_day['registered'] + mean_by_reg_co_day['casual']
mean_by_reg_co_day['reg_prop'] = mean_by_reg_co_day['registered'] / mean_by_reg_co_day['total']
mean_by_reg_co_day['casual_prop'] = mean_by_reg_co_day['casual'] / mean_by_reg_co_day['total']
# Define a function for a stacked bar plot
def stackedbarplot(x_data, y_data_list, y_data_names, colors, x_label, y_label, title):
_, ax = plt.subplots()
# Draw bars, one category at a time
for i in range(0, len(y_data_list)):
if i == 0:
ax.bar(x_data, y_data_list[i], color = colors[i], align = 'center', label = y_data_names[i])
else:
# For each category after the first, the bottom of the bar will be the top of the last category
ax.bar(x_data, y_data_list[i], color = colors[i], bottom = y_data_list[i - 1], align = 'center', label = y_data_names[i])
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
ax.legend(loc = 'upper right')
# Call the function to create plot
stackedbarplot(x_data = mean_by_reg_co_day.index.values
, y_data_list = [mean_by_reg_co_day['reg_prop'], mean_by_reg_co_day['casual_prop']]
, y_data_names = ['Registered', 'Casual']
, colors = ['#539caf', '#7663b0']
, x_label = 'Day of week'
, y_label = 'Proportion of check outs'
, title = 'Check Outs By Registration Status and Day of Week (0 = Sunday)')
# Define a function for a grouped bar plot
def groupedbarplot(x_data, y_data_list, y_data_names, colors, x_label, y_label, title):
_, ax = plt.subplots()
# Total width for all bars at one x location
total_width = 0.8
# Width of each individual bar
ind_width = total_width / len(y_data_list)
    # This centers each cluster of bars about the x tick mark
    # (offset by half a bar width so the bar centers, not their edges, are symmetric)
    alteration = np.arange(-(total_width/2) + ind_width/2, total_width/2, ind_width)
# Draw bars, one category at a time
for i in range(0, len(y_data_list)):
# Move the bar to the right on the x-axis so it doesn't overlap with previously drawn ones
ax.bar(x_data + alteration[i], y_data_list[i], color = colors[i], label = y_data_names[i], width = ind_width)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
ax.legend(loc = 'upper right')
# Call the function to create plot
groupedbarplot(x_data = mean_by_reg_co_day.index.values
, y_data_list = [mean_by_reg_co_day['registered'], mean_by_reg_co_day['casual']]
, y_data_names = ['Registered', 'Casual']
, colors = ['#539caf', '#7663b0']
, x_label = 'Day of week'
, y_label = 'Check outs'
, title = 'Check Outs By Registration Status and Day of Week (0 = Sunday)')
# Unlike with bar plots, there is no need to aggregate the data before plotting
# However the data for each group (day) needs to be defined
days = np.unique(daily_data['weekday'])
bp_data = []
for day in days:
bp_data.append(daily_data[daily_data['weekday'] == day]['cnt'].values)
# Define a function to create a boxplot:
def boxplot(x_data, y_data, base_color, median_color, x_label, y_label, title):
_, ax = plt.subplots()
# Draw boxplots, specifying desired style
ax.boxplot(y_data
# patch_artist must be True to control box fill
, patch_artist = True
# Properties of median line
, medianprops = {'color': median_color}
# Properties of box
, boxprops = {'color': base_color, 'facecolor': base_color}
# Properties of whiskers
, whiskerprops = {'color': base_color}
               # Properties of whisker caps
, capprops = {'color': base_color})
# By default, the tick label starts at 1 and increments by 1 for each box drawn. This sets the labels to the ones we want
ax.set_xticklabels(x_data)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
# Call the function to create plot
boxplot(x_data = days
, y_data = bp_data
, base_color = '#539caf'
, median_color = '#297083'
, x_label = 'Day of week'
, y_label = 'Check outs'
, title = 'Total Check Outs By Day of Week (0 = Sunday)')
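As a side note, `bar` can draw the error bars itself via its `yerr` argument, so the separate `errorbar` call used in `barplot` above can be folded into a single call. A minimal sketch with made-up data (the values are hypothetical, not the bike-share data):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# hypothetical per-day means and standard deviations
x = np.arange(7)
y = np.array([3.0, 4.0, 5.0, 6.0, 5.0, 4.0, 3.0])
err = np.full(7, 0.5)

fig, ax = plt.subplots()
# bar() draws the error bars itself when given yerr
ax.bar(x, y, yerr=err, color='#539caf', align='center', capsize=4)
ax.set_xlabel('Day of week')
ax.set_ylabel('Check outs')
```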
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A variable holds a value. You can change the value of a variable at any point.
Step2: Naming rules
Step3: Let's look through this error message. First, we see it is a NameError. Then we see the file that caused the error, and a green arrow shows us what line in that file caused the error. Then we get some more specific feedback, that "name 'mesage' is not defined".
Step4: In case you didn't know, Guido van Rossum created the Python language over 20 years ago, and he is considered Python's Benevolent Dictator for Life. Guido still signs off on all major changes to the core Python language.
Step5: This lets us make strings that contain quotations.
Step6: Changing case
Step7: It is often good to store data in lower case, and then change the case as you want to for presentation. This catches some typos. It also makes sure that 'eric', 'Eric', and 'ERIC' are not considered three different people.
Step8: You will see this syntax quite often, where a variable name is followed by a dot and then the name of an action, followed by a set of parentheses. The parentheses may be empty, or they may contain some values.
Step9: The plus sign combines two strings into one, which is called "concatenation". You can use as many plus signs as you want in composing messages. In fact, many web pages are written as giant strings which are put together through a long series of string concatenations.
Step10: If you don't know who Ada Lovelace is, you might want to go read what Wikipedia or the Computer History Museum have to say about her. Her life and her work are also the inspiration for the Ada Initiative, which supports women who are involved in technical fields.
Step11: The combination "\n" makes a newline appear in a string. You can use newlines anywhere you like in a string.
Step12: Stripping whitespace
Step13: It's hard to see exactly what is happening, so maybe the following will make it a little more clear
Step14: <a id="Exercises-strings"></a>
Step15: You can use parenthesis to modify the standard order of operations.
Step16: Floating-Point numbers
Step17: However, sometimes you will get an answer with an unexpectedly long decimal part
Step18: This happens because of the way computers represent numbers internally; this has nothing to do with Python itself. Basically, we are used to working in powers of ten, where one tenth plus two tenths is just three tenths. But computers work in powers of two. So your computer has to represent 0.1 in a power of two, and then 0.2 as a power of two, and express their sum as a power of two. There is no exact representation for 0.3 in powers of two, and we see that in the answer to 0.1+0.2.
Step19: Integers in Python 2.7
Step20: Division in Python 3.3
Step21: If you are getting numerical results that you don't expect, or that don't make sense, check if the version of Python you are using is treating integers differently than you expect.
Step22: What makes a good comment?
Step23: There is a lot here. Let's just take a few lines, and see what they mean for you as a new programmer.
|
<ASSISTANT_TASK:>
Python Code:
message = "Hello Python world!"
print(message)
###highlight=[5,6]
message = "Hello Python world!"
print(message)
message = "Python is my favorite language!"
print(message)
message = "Thank you for sharing Python with the world, Guido!"
print(mesage)
###highlight=[3]
message = "Thank you for sharing Python with the world, Guido!"
print(message)
my_string = "This is a double-quoted string."
my_string = 'This is a single-quoted string.'
quote = "Linus Torvalds once said, 'Any program is only as good as it is useful.'"
first_name = 'eric'
print(first_name)
print(first_name.title())
###highlight=[6,8,9]
first_name = 'eric'
print(first_name)
print(first_name.title())
print(first_name.upper())
first_name = 'Eric'
print(first_name.lower())
first_name = 'ada'
last_name = 'lovelace'
full_name = first_name + ' ' + last_name
print(full_name.title())
###highlight=[6,7,8]
first_name = 'ada'
last_name = 'lovelace'
full_name = first_name + ' ' + last_name
message = full_name.title() + ' ' + "was considered the world's first computer programmer."
print(message)
print("Hello everyone!")
print("\tHello everyone!")
print("Hello \teveryone!")
print("Hello everyone!")
print("\nHello everyone!")
print("Hello \neveryone!")
print("\n\n\nHello everyone!")
name = ' eric '
print(name.lstrip())
print(name.rstrip())
print(name.strip())
name = ' eric '
print('-' + name.lstrip() + '-')
print('-' + name.rstrip() + '-')
print('-' + name.strip() + '-')
print(3+2)
print(3-2)
print(3*2)
print(3/2)
print(3**2)
standard_order = 2+3*4
print(standard_order)
my_order = (2+3)*4
print(my_order)
print(0.1+0.1)
print(0.1+0.2)
print(3*0.1)
# Python 2.7
print 4/2
# Python 2.7
print 3/2
# Python 3.3
print(4/2)
# Python 3.3
print(3/2)
# This line is a comment.
print("This line is not a comment, it is code.")
import this
# I learned how to strip whitespace from strings.
name = '\t\teric'
print("I can strip tabs from my name: " + name.strip())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Questions
Step5: ORM - Object-Relational Mapping
Step6: We need to add our application to INSTALLED_APPS in settings.py
Step7: Let's create a migration for our new models
Step8: The admin site
Step9: python manage.py runserver
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute(
CREATE TABLE employees (
id int unsigned NOT NULL,
first_name string NOT NULL,
last_name string NOT NULL,
department_id int unsigned,
PRIMARY KEY (id)
))
c.execute(
CREATE TABLE departments (
id int unsigned NOT NULL,
title string NOT NULL,
PRIMARY KEY (id)
))
conn.commit()
c.execute(
INSERT INTO `employees` (`id`, `first_name`, `last_name`, `department_id`) VALUES
('1', 'Darth', 'Vader', 1),
('2', 'Darth', 'Maul', 1),
('3', 'Kylo', 'Ren', 1),
('4', 'Magister', 'Yoda', 2),
('5', 'Leia', 'Organa', 2),
('6', 'Luke', 'Skywalker', 2),
('7', 'Jar Jar', 'Binks', NULL)
)
c.execute(
INSERT INTO `departments` (`id`, `title`) VALUES
('1', 'Dark Side Inc.'),
('2', 'Light Side Ltd.'),
('3', 'Rebels'),
('4', 'Wookie')
)
conn.commit()
c.execute("SELECT emp.last_name AS Surname, d.title AS Department FROM departments d LEFT JOIN employees emp ON (d.id = emp.department_id)")
print(c.fetchall())
# hello/models.py
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
INSTALLED_APPS = [
    'hello', # <---- add it here, for example
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
import django
django.setup()
from django.utils import timezone # time zone support
from hello.models import Question, Choice
Question.objects.all() # return all objects from the database
q = Question(question_text="Чёкак?", pub_date=timezone.now()) # create an object
q.save() # save the object to the database
q.question_text = "Чёкаво?"
q.save()
str(Question.objects.all().query) # peek at the SQL behind a queryset
Question.objects.filter(question_text__startswith='Чё') # filter on the text field
current_year = timezone.now().year
Question.objects.get(pub_date__year=current_year) # filter by year
Question.objects.get(id=2)
q.choice_set.all() # all answer choices for this question
c = q.choice_set.create(choice_text='Кто бы знал', votes=0) # create a related object
c.delete() # delete the object
# hello/admin.py
from django.contrib import admin
from .models import Question
admin.site.register(Question)
import asyncio
import aiohttp
from aioes import Elasticsearch
from datetime import datetime
es = Elasticsearch(['localhost:9200'])
URL = "https://ghibliapi.herokuapp.com/species/603428ba-8a86-4b0b-a9f1-65df6abef3d3"
async def create_db():
async with aiohttp.ClientSession() as session:
async with session.get(URL) as resp:
films_urls = (await resp.json())["films"]
for i, film_url in enumerate(films_urls):
async with session.get(film_url) as resp:
res = await es.index(
index="coding-index",
doc_type='film',
id=i,
body=await resp.json()
)
print(res['created'])
loop = asyncio.get_event_loop()
loop.run_until_complete(create_db())
# https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html
async def get_by_id(key):
return await es.get(index='coding-index', doc_type='film', id=key)
async def search_by_director(director):
return await es.search(index='coding-index', body={"query": {"match": {'director': director}}})
async def search_in_description(sentence):
return await es.search(index='coding-index', body={"query": {"match": {'description': sentence}}})
# loop.run_until_complete(get_by_id(0))
# loop.run_until_complete(search_by_director("Hayao Miyazaki"))
loop.run_until_complete(search_in_description("cat"))
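Raw SQL like the inserts at the top of this task is usually written with `?` parameter placeholders rather than values inlined into the string, letting `sqlite3` handle the escaping. A small self-contained sketch (in-memory database; the table and rows mirror the departments example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
# '?' placeholders let sqlite3 escape the values for us
rows = [(1, "Dark Side Inc."), (2, "Light Side Ltd.")]
c.executemany("INSERT INTO departments (id, title) VALUES (?, ?)", rows)
conn.commit()
c.execute("SELECT title FROM departments ORDER BY id")
titles = [r[0] for r in c.fetchall()]
print(titles)
```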
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ''The experiments I am about to relate ... may be repeated with great ease,
Step2: According to the figure, we can write $\Delta = r_2 - r_1$ as $\Delta = a\sin(\theta)$, where $a$ is the separation between the slits. If this angle is small (which means that the distance between the sources and the observation screen is large compared to the separation between the sources), we can simplify this expression,
Step3: As we can see, the maxima are equally spaced (the same happens with the minima), the distance between two consecutive maxima being
Step4: Questions
Step5: As can be observed, in the case of white light each of the wavelengths composing it forms a fringe system with its maxima located at different positions and with a different fringe spacing. This makes it very hard to visualize the interference, and will lead us to define the concept of coherent and incoherent light.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename="YoungTwoSlitExperiment.JPG")
from IPython.display import Image
Image(filename="ExperimentoYoung.jpg")
from matplotlib.pyplot import *
from numpy import *
%matplotlib inline
style.use('fivethirtyeight')
###################################################################################
# PARAMETERS. THEIR VALUES CAN BE MODIFIED
###################################################################################
Lambda = 400e-9 # in meters, wavelength of the radiation
D = 4.5 # in meters, distance between the plane containing the sources and the observation screen
a = 0.003 # in meters, separation between the sources
###################################################################################
interfranja = Lambda*D/a # fringe spacing calculation
k = 2.0*pi/Lambda
x = linspace(-5*interfranja,5*interfranja,500)
I1 = 1 # We consider irradiances normalized to a certain value.
I2 = 0.01
X,Y = meshgrid(x,x)
delta = k*a*X/D
Itotal = I1 + I2 + 2.0*sqrt(I1*I2)*cos(delta)
figure(figsize=(14,5))
subplot(121)
pcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4)
xlabel("x (mm)"); ylabel("y (mm)")
subplot(122)
plot(x*1e3,Itotal[x.shape[0]//2,:]) # integer index (// keeps it valid in Python 3)
xlabel("x (mm)"); ylabel("Normalized total irradiance")
interfranja = Lambda*D/a # fringe spacing calculation
C = (Itotal.max() - Itotal.min())/(Itotal.max() + Itotal.min()) # contrast calculation
print("a =", a*1e3, "mm ", "D =", D, "m ", "Wavelength =", Lambda*1e9, "nm") # parameter values
print("Fringe spacing =", interfranja*1e3, "mm") # show the fringe spacing in mm
print("Contrast =", C) # show the contrast value
from IPython.display import Image
Image(filename="FranjasYoungWhiteLight.jpg")
from IPython.display import YouTubeVideo
YouTubeVideo("B34bAGtQL9A")
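As a quick numeric sanity check of the fringe-spacing relation $i = \lambda D / a$ used above, evaluated with the same parameter values as the simulation:

```python
# fringe spacing i = lambda * D / a, with the simulation's parameters
Lambda = 400e-9   # m, wavelength
D = 4.5           # m, distance to the observation screen
a = 0.003         # m, slit separation
i = Lambda * D / a
print(i * 1e3)    # fringe spacing in mm (about 0.6 mm)
```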
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Load the data from the advertising.csv file into a pandas DataFrame object. Data source.
Step2: Look at the first 5 records and at the feature statistics in this dataset.
Step3: Create NumPy arrays X from the TV, Radio and Newspaper columns and y from the Sales column. Use the values attribute of the pandas DataFrame object.
Step4: Scale the columns of the matrix X by subtracting the mean of the corresponding column from each value and dividing the result by the standard deviation. For definiteness, use the mean and std methods of NumPy vectors (the std implementation in Pandas may differ). Note that in numpy a call to .mean() without parameters returns the mean over all elements of the array, not column-wise as in pandas. To compute it column-wise, the axis parameter must be specified.
Step5: Add a column of ones to the matrix X, using the hstack, ones and reshape methods of the NumPy library. The vector of ones is needed so that the coefficient $w_0$ of the linear regression does not have to be handled separately.
Step6: 2. Implement the mserror function - the mean squared error of the prediction. It takes two arguments - the Series objects y (target feature values) and y_pred (predicted values). Do not use loops in this function - that would make it computationally inefficient.
Step7: What is the mean squared error of predicting the Sales values if we always predict the median value of Sales over the original sample? Write the answer to the file '1.txt'.
Step8: 3. Implement the normal_equation function, which, given matrices (NumPy arrays) X and y, computes the weight vector $w$ according to the normal equation of linear regression.
Step9: What sales are predicted by the linear model with the weights found via the normal equation, in the case of average advertising investment in TV, radio and newspapers? (that is, at zero values of the scaled features TV, Radio and Newspaper). Write the answer to the file '2.txt'.
Step10: 4. Write the linear_prediction function, which takes a matrix X and a weight vector w of the linear model as input, and returns a vector of predictions as a linear combination of the columns of the matrix X with the weights w.
Step11: What is the mean squared error of predicting the Sales values with the linear model whose weights were found via the normal equation? Write the answer to the file '3.txt'.
Step12: 5. Write the stochastic_gradient_step function, implementing one step of stochastic gradient descent for linear regression. The function takes the matrix X, the vectors y and w, the number train_ind - the index of the training-sample object (row of the matrix X) used to compute the weight update, and the number $\eta$ (eta) - the gradient-descent step size (eta=0.01 by default). The result is the vector of updated weights. Our implementation of the function is written explicitly for data with 3 features, but it is easy to modify it for any number of features - you can do that.
Step13: 6. Write the stochastic_gradient_descent function, implementing stochastic gradient descent for linear regression. The function takes the following arguments as input
Step14: Run $10^5$ iterations of stochastic gradient descent. Specify an initial weight vector w_init consisting of zeros. Leave the parameters eta and seed equal to their default values (eta=0.01, seed=42 - this is important for checking the answers).
Step15: Let's look at the error over the first 50 iterations of stochastic gradient descent. We see that the error does not necessarily decrease at each iteration.
Step16: Now let's look at the error as a function of the iteration number over $10^5$ iterations of stochastic gradient descent. We see that the algorithm converges.
Step17: Let's look at the weight vector the method converged to.
Step18: Let's look at the mean squared error at the last iteration.
Step19: What is the mean squared error of predicting the Sales values with the linear model whose weights were found via gradient descent? Write the answer to the file '4.txt'.
|
<ASSISTANT_TASK:>
Python Code:
def write_answer_to_file(answer, filename):
with open(filename, 'w') as f_out:
f_out.write(str(round(answer, 3)))
import pandas as pd
adver_data = pd.read_csv('advertising.csv')
adver_data.head(5)
adver_data.describe()
import numpy as np
X = np.array(adver_data.values[:,0:3])
y = np.array(adver_data.values[:,3])
means, stds = np.mean(X,axis=0), np.std(X, axis=0)
X = (X-means)/stds
n = np.shape(X)[0]
ones = np.reshape(np.ones(n),(n,1))
X = np.hstack((X,ones))
def mserror(y, y_pred):
return np.mean((y-y_pred)**2)
y_med = np.median(y)
answer1 = mserror(y,y_med)
print(answer1)
write_answer_to_file(answer1, '1.txt')
def normal_equation(X, y):
return np.linalg.solve(np.dot(X.transpose(),X),np.dot(X.transpose(),y))
norm_eq_weights = normal_equation(X, y)
print(norm_eq_weights)
answer2 = np.sum([0., 0., 0., 1.]*norm_eq_weights)
print(answer2)
write_answer_to_file(answer2, '2.txt')
def linear_prediction(X, w):
return np.dot(X,w)
lin_pred = linear_prediction(X,norm_eq_weights)
answer3 = mserror(y,lin_pred)
print(answer3)
write_answer_to_file(answer3, '3.txt')
def stochastic_gradient_step(X, y, w, train_ind, eta=0.01):
return w + 2 * eta/X.shape[0] * X[train_ind] * (y[train_ind] - linear_prediction(X[train_ind], w))
def stochastic_gradient_descent(X, y, w_init, eta=1e-2, max_iter=1e4,
min_weight_dist=1e-8, seed=42, verbose=False):
    # Initialize the distance between the weight vectors on neighboring
    # iterations with a large number.
    weight_dist = np.inf
    # Initialize the weight vector
    w = w_init
    # The error at each iteration will be recorded here
    errors = []
    # Iteration counter
    iter_num = 0
    # We will generate pseudo-random numbers
    # (the index of the object used to update the weights); for reproducibility
    # of this pseudo-random sequence we use seed.
    np.random.seed(seed)
    # Main loop
    while weight_dist > min_weight_dist and iter_num < max_iter:
        # generate a pseudo-random index
        # of a training-sample object
random_ind = np.random.randint(X.shape[0])
w_new = stochastic_gradient_step(X, y, w, random_ind, eta)
weight_dist = np.linalg.norm(w-w_new)
w = w_new
errors.append(mserror(y, linear_prediction(X, w)))
iter_num += 1
return w, errors
%%time
stoch_grad_desc_weights, stoch_errors_by_iter = stochastic_gradient_descent(X, y, np.zeros(X.shape[1]),max_iter=1e5)
%pylab inline
plot(range(50), stoch_errors_by_iter[:50])
xlabel('Iteration number')
ylabel('MSE')
%pylab inline
plot(range(len(stoch_errors_by_iter)), stoch_errors_by_iter)
xlabel('Iteration number')
ylabel('MSE')
stoch_grad_desc_weights
stoch_errors_by_iter[-1]
answer4 = mserror(y, linear_prediction(X, stoch_grad_desc_weights))
print(answer4)
write_answer_to_file(answer4, '4.txt')
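As a self-contained sanity check (synthetic data, not the advertising set), the normal-equation solver should recover known weights essentially exactly on noiseless data:

```python
import numpy as np

def normal_equation(X, y):
    # w = (X^T X)^{-1} X^T y, via a linear solve instead of an explicit inverse
    return np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

rng = np.random.RandomState(42)
X = np.hstack((rng.rand(100, 3), np.ones((100, 1))))  # 3 features + intercept column
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = np.dot(X, true_w)          # noiseless targets
w = normal_equation(X, y)
print(np.allclose(w, true_w))  # True
```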
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Do some calculations to try and match Phil's analysis
Step4: Can we get a noise estimate?
Step5: Noise estimates
Step6: Maybe we should consider just part of the MCP. Let's get the min/max X and min/max Y where there are counts and use just that area. This will increase the cts/pixel.
Step7: The counts look to extend all the way to the edges in X-Y.
Step8: Run again, allowing some uncertainty on the witness and foil areas
|
<ASSISTANT_TASK:>
Python Code:
import itertools
from pprint import pprint
from operator import getitem
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import numpy as np
import spacepy.plot as spp
import pymc as mc
import tqdm
from MCA_file_viewer_v001 import GetMCAfile
def plot_box(x, y, c='r', lw=0.6, ax=None):
    # draw the four edges of the box spanned by x=(xmin, xmax), y=(ymin, ymax)
    if ax is None:
        ax = plt.gca()
    ax.plot((x[0], x[0]), (y[0], y[1]), lw=lw, c=c)
    ax.plot((x[1], x[1]), (y[0], y[1]), lw=lw, c=c)
    ax.plot((x[0], x[1]), (y[0], y[0]), lw=lw, c=c)
    ax.plot((x[0], x[1]), (y[1], y[1]), lw=lw, c=c)
ZZ, XX, YY = GetMCAfile('16090203.mca')
# It is believed as of 2016-09-19 that the MCA records 2 counts for each count.
# This means all data are even and all the data can be divided by 2 to give the
# right number of counts. Per emails Larsen-Fernandes 2016-09-17
# These data are integres and care muct be taken to assure that /2 does not
# lead to number that are not representable in float
ZZ = ZZ.astype(float)
ZZ /= 2
XX = XX.astype(np.uint16) # as they all should be integers anyway
xind = (986, 1003)
yind = (492, 506)
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
pc = ax1.pcolormesh(XX, YY, ZZ, norm=LogNorm())
plt.colorbar(pc, ax=ax1)
plot_box(xind, yind, ax=ax1)
ax2.hist(ZZ.flatten(), 20)
ax2.set_yscale('log')
ax3.hist(ZZ.flatten(), 20, normed=True)
ax3.set_yscale('log')
total_cnts = ZZ.sum()
print('Total counts:{0} -- Phil got {1} -- remember /2'.format(total_cnts, 4570/2)) # remember we did a /2
# Is the witness hole at x=1000, y=500?
XX.shape, YY.shape, ZZ.shape
print(ZZ[yind[0]:yind[1], xind[0]:xind[1]])
plt.figure()
plt.pcolormesh(XX[xind[0]:xind[1]], YY[yind[0]:yind[1]], ZZ[yind[0]:yind[1], xind[0]:xind[1]] , norm=LogNorm())
plt.colorbar()
witness_counts = ZZ[yind[0]:yind[1], xind[0]:xind[1]].sum()
print('Witness counts: {0}, Phil got {1}/2={2}'.format(witness_counts, 658, 658/2))
wit_pixels = 46
print('There are {0} pixels in the witness peak'.format(wit_pixels))
total_counts = ZZ.sum()
print("There are a total of {0} counts".format(total_counts))
def neighbor_inds(x, y, xlim=(0,1023), ylim=(0,1023), center=False, mask=False):
given an x and y index return the 8 neighbor indices
if center also return the center index
if mask return a boolean mask over the whole 2d array
xi = np.clip([x + v for v in [-1, 0, 1]], xlim[0], xlim[1])
yi = np.clip([y + v for v in [-1, 0, 1]], ylim[0], ylim[1])
ans = [(i, j) for i, j in itertools.product(xi, yi)]
if not center:
ans.remove((x,y))
if mask:
out = np.zeros((np.diff(xlim)+1, np.diff(ylim)+1), dtype=np.bool)
for c in ans:
out[c] = True
else:
out = ans
return np.asarray(out)
print(neighbor_inds(2,2))
print(neighbor_inds(2,2, mask=True))
print(ZZ[neighbor_inds(500, 992, mask=True)])
def get_alone_pixels(dat):
loop over all the data and store the value of all lone pixels
ans = []
for index, x in tqdm.tqdm_notebook(np.ndenumerate(dat)):
if (np.sum([ZZ[i, j] for i, j in neighbor_inds(index[0], index[1])]) == 0) and x != 0:
ans.append((index, x))
return ans
# print((neighbor_inds(5, 4)))
alone = get_alone_pixels(ZZ)
pprint(alone)
# ZZ[neighbor_inds(5, 4)[0]].shape
# print((neighbor_inds(5, 4))[0])
# print(ZZ[(neighbor_inds(5, 4))[0]].shape)
# ZZ[4,3]
ZZ[(965, 485)]
print(neighbor_inds(4,3)[0])
print(ZZ[neighbor_inds(4,3)[0]])
print(ZZ[3,2])
ni = neighbor_inds(4,3)[0]
print(ZZ[ni[0], ni[1]])
(ZZ % 2).any() # not all even any longer
n_noise = np.sum([v[1] for v in alone])
n_pixels = 1024*1024
noise_pixel = n_noise/n_pixels
print("There were a total of {0} random counts over {1} pixels, {2} cts/pixel".format(n_noise, n_pixels, noise_pixel))
minx_tmp = ZZ.sum(axis=0)
minx_tmp.shape
print(minx_tmp)
miny_tmp = ZZ.sum(axis=1)
miny_tmp.shape
print(miny_tmp)
Aw = np.pi*(0.2/2)**2 # mm**2
Af = 182.75 # mm**2 this is the area of the foil
W_F_ratio = Aw/Af
print(Aw, Af, W_F_ratio)
C = wit_pixels/n_pixels
D = (n_pixels-wit_pixels)/n_pixels
print('C', C, 'D', D)
nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number
obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)
witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number
obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)
realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number
obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)
@mc.deterministic(plot=True)
def open_area(realc=realc, witc=witc):
return realc*Aw/witc/Af
model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area])
model.sample(200000, burn=100, thin=30, burn_till_tuned=True)
mc.Matplot.plot(model)
# 1000, burn=100, thin=30 0.000985 +/- 0.000058
# 10000, burn=100, thin=30 0.000982 +/- 0.000061
# 100000, burn=100, thin=30 0.000984 +/- 0.000059
# 200000, burn=100, thin=30 0.000986 +/- 0.000059
# 1000000, burn=100, thin=30 0.000985 +/- 0.000059
print("Foil 1 \n")
witc_mean = np.mean(witc.trace()[...])
witc_std = np.std(witc.trace()[...])
print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))
realc_mean = np.mean(realc.trace()[...])
realc_std = np.std(realc.trace()[...])
print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))
nbkg_mean = np.mean(nbkg.trace()[...])
nbkg_std = np.std(nbkg.trace()[...])
print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))
OA_median = np.median(open_area.trace()[...])
OA_mean = np.mean(open_area.trace()[...])
OA_std = np.std(open_area.trace()[...])
print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 ))
print("Phil got {0} for 1 measurement\n".format(0.00139))
print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean))
_Aw = np.pi*(0.2/2)**2 # mm**2
_Af = 182.75 # mm**2 this is the area of the foil
Aw = mc.Normal('Aw', _Aw, (_Aw*0.2)**-2) # 20%
Af = mc.Normal('Af', _Af, (_Af*0.1)**-2) # 10%
print(_Aw, _Af)
C = wit_pixels/n_pixels
D = (n_pixels-wit_pixels)/n_pixels
print('C', C, 'D', D)
nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number
obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)
witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number
obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)
realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number
obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)
@mc.deterministic(plot=True)
def open_area(realc=realc, witc=witc, Aw=Aw, Af=Af):
return realc*Aw/witc/Af
model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area, Af, Aw])
model.sample(200000, burn=100, thin=30, burn_till_tuned=True)
mc.Matplot.plot(nbkg)
mc.Matplot.plot(witc)
mc.Matplot.plot(realc)
# mc.Matplot.plot(open_area)
mc.Matplot.plot(Aw)
_ = spp.plt.hist(open_area.trace(), 20)
print("Foil 1 \n")
witc_mean = np.mean(witc.trace()[...])
witc_std = np.std(witc.trace()[...])
print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))
realc_mean = np.mean(realc.trace()[...])
realc_std = np.std(realc.trace()[...])
print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))
nbkg_mean = np.mean(nbkg.trace()[...])
nbkg_std = np.std(nbkg.trace()[...])
print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))
OA_median = np.median(open_area.trace()[...])
OA_mean = np.mean(open_area.trace()[...])
OA_std = np.std(open_area.trace()[...])
print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 ))
print("Phil got {0} for 1 measurement\n".format(0.00139))
print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean))
mc.Matplot.plot(Aw)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's first organize the results from the previous notebooks
Step2: Which is the best model before hyperparameter tuning?
Step3: Before hyperparameter tuning, it seems like the linear regression is doing better in all the predictions. Clearly, as the number of days ahead grows, the r2 value drops and the mre goes up.
Step4: Build the hyperparameters DataFrame
Step5: Ahead days = 1
Step6: Ahead days = 7
Step7: Ahead days = 14
Step8: Ahead days = 28
Step9: Ahead days = 56
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import utils.misc as misc
res_df = pd.read_pickle('../../data/results_ahead1_linear_df.pkl')
res_df.head()
RELEVANT_COLUMNS = ['base_days',
'train_days',
'r2',
'mre',
'ahead_days',
'train_val_time',
'step_days',
'GOOD_DATA_RATIO',
'SAMPLES_GOOD_DATA_RATIO',
'x_filename',
'y_filename']
best_params_df = res_df[RELEVANT_COLUMNS].loc[np.argmin(res_df['mre']),:]
best_params_df['model'] = 'linear'
best_params_df
test_df = pd.DataFrame()
test_df.append(best_params_df, ignore_index=True)
RELEVANT_COLUMNS = ['base_days',
'train_days',
'r2',
'mre',
'ahead_days',
'train_val_time',
'step_days',
'GOOD_DATA_RATIO',
'SAMPLES_GOOD_DATA_RATIO',
'x_filename',
'y_filename']
ahead_days_list = [1, 7, 14, 28, 56]
models_list = ['linear', 'random_forest']
results_df = pd.DataFrame()
for ahead_days in ahead_days_list:
for model in models_list:
res_df = pd.read_pickle('../../data/results_ahead{}_{}_df.pkl'.format(ahead_days, model))
best_params_df = res_df[RELEVANT_COLUMNS].loc[np.argmax(res_df['r2']),:]
best_params_df['ahead_days'] = ahead_days
best_params_df['model'] = model
results_df = results_df.append(best_params_df, ignore_index=True)
results_df
results_df.to_pickle('../../data/best_dataset_params_raw_df.pkl')
def keep_max_r2(record):
return record.loc[np.argmax(record['r2']),:]
best_r2_df = results_df.groupby('ahead_days').apply(keep_max_r2)
best_r2_df
best_r2_df[['mre', 'r2']].plot()
initial_performance_df = results_df[results_df['model']=='random_forest']
initial_performance_df.set_index('ahead_days', inplace=True)
initial_performance_df
initial_performance_df.loc[14, 'base_days']
n_estimators = [10, 50, 100, 200]
max_depth = [None, 5, 10, 15]
hyper_df = pd.DataFrame([(x, y) for x in n_estimators for y in max_depth], columns=['n_estimators', 'max_depth'])
hyper_df['n_jobs'] = -1
hyper_df
params_df = initial_performance_df.loc[1]
params_df
AHEAD_DAYS = 1
# Get the normal parameters set
params_df = initial_performance_df.loc[AHEAD_DAYS].copy()
params_df['ahead_days'] = AHEAD_DAYS
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor_class = RandomForestPredictor
step_eval_days = 60 # The step to move between training/validation pairs
# Build the params list
params = {'params_df': params_df,
'step_eval_days': step_eval_days,
'eval_predictor_class': eval_predictor_class}
results_df = misc.parallelize_dataframe(hyper_df, misc.search_mean_score_eval, params)
# Some postprocessing... -----------------------------------------------------------
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/hyper_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['r2'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
# -----------------------------------------------------------------------------------
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
AHEAD_DAYS = 7
# Get the normal parameters set
params_df = initial_performance_df.loc[AHEAD_DAYS].copy()
params_df['ahead_days'] = AHEAD_DAYS
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor_class = RandomForestPredictor
step_eval_days = 60 # The step to move between training/validation pairs
# Build the params list
params = {'params_df': params_df,
'step_eval_days': step_eval_days,
'eval_predictor_class': eval_predictor_class}
results_df = misc.parallelize_dataframe(hyper_df, misc.search_mean_score_eval, params)
# Some postprocessing... -----------------------------------------------------------
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/hyper_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['r2'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
# -----------------------------------------------------------------------------------
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
AHEAD_DAYS = 14
# Get the normal parameters set
params_df = initial_performance_df.loc[AHEAD_DAYS].copy()
params_df['ahead_days'] = AHEAD_DAYS
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor_class = RandomForestPredictor
step_eval_days = 60 # The step to move between training/validation pairs
# Build the params list
params = {'params_df': params_df,
'step_eval_days': step_eval_days,
'eval_predictor_class': eval_predictor_class}
results_df = misc.parallelize_dataframe(hyper_df, misc.search_mean_score_eval, params)
# Some postprocessing... -----------------------------------------------------------
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/hyper_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['r2'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
# -----------------------------------------------------------------------------------
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
AHEAD_DAYS = 28
# Get the normal parameters set
params_df = initial_performance_df.loc[AHEAD_DAYS].copy()
params_df['ahead_days'] = AHEAD_DAYS
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor_class = RandomForestPredictor
step_eval_days = 60 # The step to move between training/validation pairs
# Build the params list
params = {'params_df': params_df,
'step_eval_days': step_eval_days,
'eval_predictor_class': eval_predictor_class}
results_df = misc.parallelize_dataframe(hyper_df, misc.search_mean_score_eval, params)
# Some postprocessing... -----------------------------------------------------------
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/hyper_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['r2'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
# -----------------------------------------------------------------------------------
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
AHEAD_DAYS = 56
# Get the normal parameters set
params_df = initial_performance_df.loc[AHEAD_DAYS].copy()
params_df['ahead_days'] = AHEAD_DAYS
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor_class = RandomForestPredictor
step_eval_days = 60 # The step to move between training/validation pairs
# Build the params list
params = {'params_df': params_df,
'step_eval_days': step_eval_days,
'eval_predictor_class': eval_predictor_class}
results_df = misc.parallelize_dataframe(hyper_df, misc.search_mean_score_eval, params)
# Some postprocessing... -----------------------------------------------------------
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/hyper_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['r2'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
# -----------------------------------------------------------------------------------
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple exponential basis
|
<ASSISTANT_TASK:>
Python Code:
def AofT(time,T, ai, taui):
return ai*np.exp(-time/taui)/(1.+np.exp(-T/(2*taui)))
from SimPEG import *
import sys
sys.path.append("./DoubleLog/")
from plotting import mapDat
class LinearSurvey(Survey.BaseSurvey):
nD = None
def __init__(self, time, **kwargs):
self.time = time
self.nD = time.size
def projectFields(self, u):
return u
class LinearProblem(Problem.BaseProblem):
surveyPair = LinearSurvey
def __init__(self, mesh, G, **kwargs):
Problem.BaseProblem.__init__(self, mesh, **kwargs)
self.G = G
def fields(self, m, u=None):
return self.G.dot(m)
def Jvec(self, m, v, u=None):
return self.G.dot(v)
def Jtvec(self, m, v, u=None):
return self.G.T.dot(v)
# time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10)])
M = 41
tau = np.logspace(-5.1, -1, M)
N = time.size
A = np.zeros((N, M))
for j in range(M):
A[:,j] = np.exp(-time/tau[j])/tau[j]  # true division; '//' would floor the result and break the basis
mtrue = np.zeros(M)
np.random.seed(1)
inds = np.random.randint(0, M, size=5)  # randint keeps indices in [0, M); random_integers(0, 41) is deprecated and could return 41, out of range for mtrue
mtrue[inds] = np.r_[-3, 2, 1, 4, 5]
out = np.dot(A,mtrue)
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
for i, ind in enumerate(inds):
temp, dum, dum = mapDat(mtrue[inds][i]*np.exp(-time/tau[ind])/tau[ind], 1e-1, stretch=2)  # tau[ind], not the stale loop variable j
plt.semilogx(time, temp, 'k', alpha = 0.5)
outmap, ticks, tickLabels = mapDat(out, 1e-1, stretch=2)
ax.semilogx(time, outmap, 'k', lw=2)
ax.set_yticks(ticks)
ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
# from pymatsolver import MumpsSolver
abs(survey.dobs).min()
mesh = Mesh.TensorMesh([M])
prob = LinearProblem(mesh, A)
survey = LinearSurvey(time)
survey.pair(prob)
survey.makeSyntheticData(mtrue, std=0.01)
# survey.dobs = out
reg = Regularization.BaseRegularization(mesh)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.Wd = 1./(0.05*abs(survey.dobs)+0.05*100.)
opt = Optimization.ProjectedGNCG(maxIter=20)
# opt = Optimization.InexactGaussNewton(maxIter=20)
opt.lower = -1e-10
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
invProb.beta = 1e-4
beta = Directives.BetaSchedule()
beta.coolingFactor = 2
target = Directives.TargetMisfit()
inv = Inversion.BaseInversion(invProb, directiveList=[beta, target])
m0 = np.zeros_like(survey.mtrue)
mrec = inv.run(m0)
plt.semilogx(tau, mtrue, '.')
plt.semilogx(tau, mrec, '.')
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
obsmap, ticks, tickLabels = mapDat(survey.dobs, 1e0, stretch=2)
predmap, dum, dum = mapDat(invProb.dpred, 1e0, stretch=2)
ax.loglog(time, survey.dobs, 'k', lw=2)
ax.loglog(time, invProb.dpred, 'k.', lw=2)
# ax.set_yticks(ticks)
# ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
N = time.size
A = np.zeros((N, M))
for j in range(M):
A[:, j] = np.exp(-time/tau[j])/tau[j]
mfund = mtrue.copy()
mfund[mfund<0.] = 0.
obs = np.dot(A, mtrue)
fund = np.dot(A, mfund)
pred = np.dot(A, mrec)
ip = obs-fund
ipobs = obs-pred
plt.loglog(time, obs, 'k.-', lw=2)
plt.loglog(time, -obs, 'k--', lw=2)
plt.loglog(time, fund, 'b.', lw=2)
plt.loglog(time, pred, 'b-', lw=2)
plt.loglog(time, -ip, 'r--', lw=2)
plt.loglog(time, abs(ipobs), 'r.', lw=2)
plt.ylim(abs(obs).min(), abs(obs).max())
plt.xlim(time.min(), time.max())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A slide with just text
Step2: Here we show some code
|
<ASSISTANT_TASK:>
Python Code:
attend = sns.load_dataset("attention").query("subject <= 12")
g = sns.FacetGrid(attend, col="subject", col_wrap=4, size=2, ylim=(0, 10))
g.map(sns.pointplot, "solutions", "score", color=".3", ci=None);
tips = sns.load_dataset("tips")
with sns.axes_style("white"):
g = sns.FacetGrid(tips, row="sex", col="smoker", margin_titles=True, size=2.5)
g.map(plt.scatter, "total_bill", "tip", alpha=0.5);
g.set_axis_labels("Total bill (US Dollars)", "Tip");
g.set(xticks=[10, 30, 50], yticks=[2, 6, 10]);
g.fig.subplots_adjust(wspace=.02, hspace=.02);
g = sns.FacetGrid(tips, col="day", size=4, aspect=.5)
g.map(sns.barplot, "sex", "total_bill");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: PrettyTensor version
Step4: Load Data
Step5: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.
Step6: Class numbers
Step7: Helper-function for creating random training-sets
Step8: Check that the shape of the combined arrays is correct.
Step9: Size of the combined data-set.
Step10: Define the size of the training-set used for each neural network. You can try and change this.
Step11: We do not use a validation-set during training, but this is the size it would have.
Step12: Helper-function for splitting the combined data-set into a random training- and validation-set.
Step13: Data Dimensions
Step14: Helper-function for plotting images
Step15: Plot a few images to see if data is correct
Step16: TensorFlow Graph
Step17: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step18: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step19: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step20: Neural Network
Step21: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Step22: Optimization Method
Step23: Performance Measures
Step24: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step25: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step26: Saver
Step27: This is the directory used for saving and retrieving the data.
Step28: Create the directory if it does not exist.
Step29: This function returns the save-path for the data-file with the given network number.
Step30: TensorFlow Run
Step31: Initialize variables
Step32: Helper-function to create a random training batch.
Step33: Function for selecting a random training-batch of the given size.
Step34: Helper-function to perform optimization iterations
Step35: Create ensemble of neural networks
Step36: Number of optimization iterations for each neural network.
Step37: Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later.
Step38: Helper-functions for calculating and predicting classifications
Step39: Calculate a boolean array whether the predicted classes for the images are correct.
Step40: Calculate a boolean array whether the images in the test-set are classified correctly.
Step41: Calculate a boolean array whether the images in the validation-set are classified correctly.
Step42: Helper-functions for calculating the classification accuracy
Step43: Calculate the classification accuracy on the test-set.
Step44: Calculate the classification accuracy on the original validation-set.
Step45: Results and analysis
Step46: Summarize the classification accuracies on the test-set for the neural networks in the ensemble.
Step47: The predicted labels of the ensemble is a 3-dim array, the first dim is the network-number, the second dim is the image-number, the third dim is the classification vector.
Step48: Ensemble predictions
Step49: The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.
Step50: Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.
Step51: Negate the boolean array so we can use it to lookup incorrectly classified images.
Step52: Best neural network
Step53: The index of the neural network with the highest classification accuracy.
Step54: The best neural network's classification accuracy on the test-set.
Step55: Predicted labels of the best neural network.
Step56: The predicted class-number.
Step57: Boolean array whether the best neural network classified each image in the test-set correctly.
Step58: Boolean array whether each image is incorrectly classified.
Step59: Comparison of ensemble vs. the best single network
Step60: The number of images in the test-set that were correctly classified by the best neural network.
Step61: Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.
Step62: Number of images in the test-set where the ensemble was better than the best single network
Step63: Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.
Step64: Number of images in the test-set where the best single network was better than the ensemble.
Step65: Helper-functions for plotting and printing comparisons
Step66: Function for printing the predicted labels.
Step67: Function for printing the predicted labels for the ensemble of neural networks.
Step68: Function for printing the predicted labels for the best single network.
Step69: Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.
Step70: Examples
Step71: The ensemble's predicted labels for the first of these images (top left image)
Step72: The best network's predicted labels for the first of these images
Step73: The predicted labels of all the networks in the ensemble, for the first of these images
Step74: Examples
Step75: The ensemble's predicted labels for the first of these images (top left image)
Step76: The best single network's predicted labels for the first of these images
Step77: The predicted labels of all the networks in the ensemble, for the first of these images
Step78: Close TensorFlow Session
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
tf.__version__
pt.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)
combined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)
print(combined_images.shape)
print(combined_labels.shape)
combined_size = len(combined_images)
combined_size
train_size = int(0.8 * combined_size)
train_size
validation_size = combined_size - train_size
validation_size
def random_training_set():
# Create a randomized index into the full / combined training-set.
idx = np.random.permutation(combined_size)
# Split the random index into training- and validation-sets.
idx_train = idx[0:train_size]
idx_validation = idx[train_size:]
# Select the images and labels for the new training-set.
x_train = combined_images[idx_train, :]
y_train = combined_labels[idx_train, :]
# Select the images and labels for the new validation-set.
x_validation = combined_images[idx_validation, :]
y_validation = combined_labels[idx_validation, :]
# Return the new training- and validation-sets.
return x_train, y_train, x_validation, y_validation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, # Images to plot, 2-d array.
cls_true, # True class-no for images.
ensemble_cls_pred=None, # Ensemble predicted class-no.
best_cls_pred=None): # Best-net predicted class-no.
assert len(images) == len(cls_true)
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if ensemble_cls_pred is None:
hspace = 0.3
else:
hspace = 1.0
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# For each of the sub-plots.
for i, ax in enumerate(axes.flat):
# There may not be enough images for all sub-plots.
if i < len(images):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if ensemble_cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
msg = "True: {0}\nEnsemble: {1}\nBest Net: {2}"
xlabel = msg.format(cls_true[i],
ensemble_cls_pred[i],
best_cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver(max_to_keep=100)
save_dir = 'checkpoints/'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
def get_save_path(net_number):
return save_dir + 'network' + str(net_number)
session = tf.Session()
def init_variables():
session.run(tf.global_variables_initializer())  # initialize_all_variables() is deprecated; matches the initializer used during training below
train_batch_size = 64
def random_batch(x_train, y_train):
# Total number of images in the training-set.
num_images = len(x_train)
# Create a random index into the training-set.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random images and labels.
x_batch = x_train[idx, :] # Images.
y_batch = y_train[idx, :] # Labels.
# Return the batch.
return x_batch, y_batch
def optimize(num_iterations, x_train, y_train):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch(x_train, y_train)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if i % 100 == 0:
# Calculate the accuracy on the training-batch.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Status-message for printing.
msg = "Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
num_networks = 5
num_iterations = 10000
if True:
# For each of the neural networks.
for i in range(num_networks):
print("Neural network: {0}".format(i))
# Create a random training-set. Ignore the validation-set.
x_train, y_train, _, _ = random_training_set()
# Initialize the variables of the TensorFlow graph.
session.run(tf.global_variables_initializer())
# Optimize the variables using this training-set.
optimize(num_iterations=num_iterations,
x_train=x_train,
y_train=y_train)
# Save the optimized variables to disk.
saver.save(sess=session, save_path=get_save_path(i))
# Print newline.
print()
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_labels(images):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted labels which
# will be calculated in batches and filled into this array.
pred_labels = np.zeros(shape=(num_images, num_classes),
dtype=np.float)
# Now calculate the predicted labels for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images between index i and j.
feed_dict = {x: images[i:j, :]}
# Calculate the predicted labels using TensorFlow.
pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
return pred_labels
def correct_prediction(images, labels, cls_true):
# Calculate the predicted labels.
pred_labels = predict_labels(images=images)
# Calculate the predicted class-number for each image.
cls_pred = np.argmax(pred_labels, axis=1)
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct
def test_correct():
return correct_prediction(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
def validation_correct():
return correct_prediction(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
return correct.mean()
def test_accuracy():
# Get the array of booleans whether the classifications are correct
# for the test-set.
correct = test_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
correct = validation_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
def ensemble_predictions():
# Empty list of predicted labels for each of the neural networks.
pred_labels = []
# Classification accuracy on the test-set for each network.
test_accuracies = []
# Classification accuracy on the validation-set for each network.
val_accuracies = []
# For each neural network in the ensemble.
for i in range(num_networks):
# Reload the variables into the TensorFlow graph.
saver.restore(sess=session, save_path=get_save_path(i))
# Calculate the classification accuracy on the test-set.
test_acc = test_accuracy()
# Append the classification accuracy to the list.
test_accuracies.append(test_acc)
# Calculate the classification accuracy on the validation-set.
val_acc = validation_accuracy()
# Append the classification accuracy to the list.
val_accuracies.append(val_acc)
# Print status message.
msg = "Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}"
print(msg.format(i, val_acc, test_acc))
# Calculate the predicted labels for the images in the test-set.
# This is already calculated in test_accuracy() above but
# it is re-calculated here to keep the code a bit simpler.
pred = predict_labels(images=data.test.images)
# Append the predicted labels to the list.
pred_labels.append(pred)
return np.array(pred_labels), \
np.array(test_accuracies), \
np.array(val_accuracies)
pred_labels, test_accuracies, val_accuracies = ensemble_predictions()
print("Mean test-set accuracy: {0:.4f}".format(np.mean(test_accuracies)))
print("Min test-set accuracy: {0:.4f}".format(np.min(test_accuracies)))
print("Max test-set accuracy: {0:.4f}".format(np.max(test_accuracies)))
pred_labels.shape
ensemble_pred_labels = np.mean(pred_labels, axis=0)
ensemble_pred_labels.shape
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)
ensemble_cls_pred.shape
ensemble_correct = (ensemble_cls_pred == data.test.cls)
ensemble_incorrect = np.logical_not(ensemble_correct)
test_accuracies
best_net = np.argmax(test_accuracies)
best_net
test_accuracies[best_net]
best_net_pred_labels = pred_labels[best_net, :, :]
best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)
best_net_correct = (best_net_cls_pred == data.test.cls)
best_net_incorrect = np.logical_not(best_net_correct)
np.sum(ensemble_correct)
np.sum(best_net_correct)
ensemble_better = np.logical_and(best_net_incorrect,
ensemble_correct)
ensemble_better.sum()
best_net_better = np.logical_and(best_net_correct,
ensemble_incorrect)
best_net_better.sum()
def plot_images_comparison(idx):
plot_images(images=data.test.images[idx, :],
cls_true=data.test.cls[idx],
ensemble_cls_pred=ensemble_cls_pred[idx],
best_cls_pred=best_net_cls_pred[idx])
def print_labels(labels, idx, num=1):
# Select the relevant labels based on idx.
labels = labels[idx, :]
# Select the first num labels.
labels = labels[0:num, :]
# Round numbers to 2 decimal points so they are easier to read.
labels_rounded = np.round(labels, 2)
# Print the rounded labels.
print(labels_rounded)
def print_labels_ensemble(idx, **kwargs):
print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)
def print_labels_best_net(idx, **kwargs):
print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)
def print_labels_all_nets(idx):
for i in range(num_networks):
print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)
plot_images_comparison(idx=ensemble_better)
print_labels_ensemble(idx=ensemble_better, num=1)
print_labels_best_net(idx=ensemble_better, num=1)
print_labels_all_nets(idx=ensemble_better)
plot_images_comparison(idx=best_net_better)
print_labels_ensemble(idx=best_net_better, num=1)
print_labels_best_net(idx=best_net_better, num=1)
print_labels_all_nets(idx=best_net_better)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
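The entry above averages each network's predicted label-probabilities over the ensemble and then takes the argmax per image. A minimal NumPy sketch of just that averaging step, using made-up prediction arrays rather than the TensorFlow session above:

```python
import numpy as np

# Hypothetical predicted label-probabilities from 3 networks,
# each over 4 test images and 2 classes (rows sum to 1).
pred_labels = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]],  # network 0
    [[0.8, 0.2], [0.6, 0.4], [0.6, 0.4], [0.1, 0.9]],  # network 1
    [[0.7, 0.3], [0.3, 0.7], [0.8, 0.2], [0.3, 0.7]],  # network 2
])

# Average the probabilities over the ensemble (axis 0 = networks) ...
ensemble_pred = pred_labels.mean(axis=0)

# ... then pick the class with the highest averaged probability.
ensemble_cls = np.argmax(ensemble_pred, axis=1)
print(ensemble_cls)  # one predicted class per image: [0 1 0 1]
```

Averaging the probabilities before the argmax (rather than majority-voting the argmaxes) lets a confident network outweigh several unsure ones, which is the behavior the ensemble code above relies on.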
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Slicing ranges
Step2: Watch out... this is a rather inconsistent use of the indexing method over rows. That's why .loc is said to provide a more consistent interface.
Step3: The .loc attribute is the primary access method. Valid inputs include a single label, a list or array of labels, a slice object with labels (both endpoints inclusive), a boolean array, and a callable.
Step4: Boolean accessing
Step5: Accessing by position by .iloc
Step6: Selection by callable
Step7: Selection by isin
Step8: The where() Method
Step9: Applying Functions
Step10: Over dataframes
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
df["A"] #indexing
df.A #attribute
type(df.A)
df.A[0]
df[["A","B"]]
type(df[["A","B"]])
s = df["A"]
s[:5]
s[::2]
s[::-1]
df[:3] # for convenience as it is a common use
df["A"]
df.loc[:,"B":]
df.loc[4:,"C":] = 0
df
df.loc[:,"A"]>0
df.loc[df.loc[:,"A"]>0]
df.iloc[0,0]
df.iloc[3:,2:]
df.iloc[[0,1,3],[1,3]]
df.loc[:,lambda df: df.columns == "A"]
df["X"] = range(0, df.shape[0])
df
df[df["X"].isin([0,2])]
df.where(df["A"]>0)
df.where(df["A"]>0,100)
s * 2
s.max()
np.max(s)
s.apply(np.max)
def multiply_by_2(x):
return x*2
s.apply(multiply_by_2)
s.apply(lambda x: x*2)
s.map(lambda x: x*2)
mydict={2:"a"}
df["X"].map(mydict)
df.apply(np.max,axis=0)
df.apply(np.max,axis=1)
df.applymap(lambda x: x*2)
<END_TASK>
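The main pitfall in the entry above is the label/position distinction between .loc and .iloc. A small self-contained sketch (the frame and names here are illustrative, not the df from the entry) showing that label slices are inclusive while positional slices are half-open:

```python
import pandas as pd

# Small frame with a non-default index, so labels and positions differ.
df = pd.DataFrame({"A": [10, 20, 30], "B": [1, 2, 3]},
                  index=["x", "y", "z"])

# .loc selects by label; label slices INCLUDE both endpoints.
by_label = df.loc["x":"y", "A"]

# .iloc selects by integer position; slices are half-open, as in Python.
by_position = df.iloc[0:2, 0]

# Both expressions select the same two values here.
print(by_label.tolist())     # [10, 20]
print(by_position.tolist())  # [10, 20]
```

With the default integer index the two accessors look interchangeable, which is why the notebook's df[4:] / df.loc[4:] cells can be misleading; a non-default index makes the difference explicit.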
|