| Unnamed: 0 (int64, 0-15.9k) | cleaned_code (string, 67-124k chars) | cleaned_prompt (string, 168-30.3k chars) |
|---|---|---|
4,600
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# Loading airline data
import numpy as np
data = np.load('airline.npz')
X_train, Y_train = data['X_train'], data['Y_train']
D = Y_train.shape[1]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('Time (years)')
ax.set_ylabel('Airline passengers ($10^3$)')
ax.plot(X_train.flatten(),Y_train.flatten(), c='b')
ax.set_xticklabels([1949, 1952, 1955, 1958, 1961, 1964])
plt.tight_layout()
from gpflow.kernels import RBF, Cosine, Linear, Bias, Matern52
from gpflow import transforms
from gpflow.gpr import GPR
Q = 10 # nr of terms in the sum
max_iters = 1000
# Builds a GPR model with a spectral mixture kernel, given an ndarray of 2Q values: Q frequencies followed by Q RBF lengthscales
def create_model(hypers):
f = np.clip(hypers[:Q], 0, 5)
weights = np.ones(Q) / Q
lengths = hypers[Q:]
kterms = []
for i in range(Q):
rbf = RBF(D, lengthscales=lengths[i], variance=1./Q)
rbf.lengthscales.transform = transforms.Exp()
cos = Cosine(D, lengthscales=f[i])
kterms.append(rbf * cos)
k = np.sum(kterms) + Linear(D) + Bias(D)
m = GPR(X_train, Y_train, kern=k)
return m
X_test, X_complete = data['X_test'], data['X_complete']
def plotprediction(m):
# Perform prediction
mu, var = m.predict_f(X_complete)
# Plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('Time (years)')
ax.set_ylabel('Airline passengers ($10^3$)')
ax.set_xticklabels([1949, 1952, 1955, 1958, 1961, 1964, 1967, 1970, 1973])
ax.plot(X_train.flatten(),Y_train.flatten(), c='b')
ax.plot(X_complete.flatten(), mu.flatten(), c='g')
lower = mu - 2*np.sqrt(var)
upper = mu + 2*np.sqrt(var)
ax.plot(X_complete, upper, 'g--', X_complete, lower, 'g--', lw=1.2)
ax.fill_between(X_complete.flatten(), lower.flatten(), upper.flatten(),
color='g', alpha=.1)
plt.tight_layout()
m = create_model(np.ones((2*Q,)))
m.optimize(maxiter=max_iters)
plotprediction(m)
from gpflowopt.domain import ContinuousParameter
from gpflowopt.objective import batch_apply
# Objective function for our optimization
# Input: N x 2Q ndarray, output: N x 1.
# returns the negative log likelihood obtained by training with given frequencies and rbf lengthscales
# Applies some tricks for stability similar to GPy's jitchol
@batch_apply
def objectivefx(freq):
m = create_model(freq)
for i in [0] + [10**exponent for exponent in range(6,1,-1)]:
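        # jitchol-style stabilisation: the first pass (i == 0) sets the
        # likelihood variance to 1; each retry inflates it to
        # 1 + mean(diag(K)) * i for i = 1e6, 1e5, ..., 1e2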
try:
mean_diag = np.mean(np.diag(m.kern.compute_K_symm(X_train)))
m.likelihood.variance = 1 + mean_diag * i
m.optimize(maxiter=max_iters)
return -m.compute_log_likelihood()
except:
pass
    raise RuntimeError("Frequency combination failed indefinitely.")
# Setting up optimization domain.
lower = [0.]*Q
upper = [5.]*int(Q)
df = np.sum([ContinuousParameter('freq{0}'.format(i), l, u) for i, l, u in zip(range(Q), lower, upper)])
lower = [1e-5]*Q
upper = [300]*int(Q)
dl = np.sum([ContinuousParameter('l{0}'.format(i), l, u) for i, l, u in zip(range(Q), lower, upper)])
domain = df + dl
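# Summing parameters (and the two sub-domains df and dl) concatenates them, so
# `domain` is a single 2Q-dimensional box: Q frequencies, then Q lengthscales.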
domain
from gpflowopt.design import LatinHyperCube
from gpflowopt.acquisition import ExpectedImprovement
from gpflowopt import optim, BayesianOptimizer
design = LatinHyperCube(6, domain)
X = design.generate()
Y = objectivefx(X)
m = GPR(X, Y, kern=Matern52(domain.size, ARD=False))
ei = ExpectedImprovement(m)
opt = optim.StagedOptimizer([optim.MCOptimizer(domain, 5000), optim.SciPyOptimizer(domain)])
optimizer = BayesianOptimizer(domain, ei, optimizer=opt)
with optimizer.silent():
result = optimizer.optimize(objectivefx, n_iter=24)
m = create_model(result.x[0,:])
m.optimize()
plotprediction(m)
f, axes = plt.subplots(1, 1, figsize=(7, 5))
f = ei.data[1][:,0]
axes.plot(np.arange(0, ei.data[0].shape[0]), np.minimum.accumulate(f))
axes.set_ylabel('fmin')
axes.set_xlabel('Number of evaluated points');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data set
Step2: Modeling
Step3: In total, a lot of hyperparameters must be optimized. Furthermore, the optimization surface of the spectral mixture is highly multimodal. Starting from the default hyperparameter values, the optimized GP is able to pick up the linear trend, and the RBF kernels perform local interpolation. However, the kernel is not able to extrapolate away from the data. In sum, with this starting point, the likelihood optimization ends in a local minimum.
Step4: High-dimensional Bayesian optimization is tricky, although the complexity of the problem is significantly reduced due to symmetry in the optimization domain (interchanging frequencies does not make a difference) and because we still optimize the likelihood given the starting point. Therefore, getting near a mode is sufficient. Furthermore, we disable ARD in the kernel of the model approximating the objective function, to avoid optimizing a lot of lengthscales with little data. We then use EI to pick new candidate starting points and evaluate our objective.
Step5: Clearly, the optimization point identified with BO is a lot better than the default values. We now obtain a proper forecast. By inspecting the evolution of the best likelihood value obtained so far, we see that the solution is identified quickly.
|
4,601
|
<ASSISTANT_TASK:>
Python Code:
import gachon_autograder_client as g_autograder
THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_NAME = "linear_algebra_basic_I" # the trailing "I" is an uppercase letter i
g_autograder.get_assignment(THE_TEMLABIO_ID , PASSWORD, ASSIGNMENT_NAME)
def vector_size_check(*vector_variables):
return None
# Execution result
print(vector_size_check([1,2,3], [2,3,4], [5,6,7])) # Expected value: True
print(vector_size_check([1, 3], [2,4], [6,7])) # Expected value: True
print(vector_size_check([1, 3, 4], [4], [6,7])) # Expected value: False
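# For reference, a minimal sketch of one possible one-line solution; the helper
# name is hypothetical so the graded stub above stays untouched:
def _vector_size_check_reference(*vector_variables):
    return len(set(len(vector) for vector in vector_variables)) == 1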
def vector_addition(*vector_variables):
return None
# Execution result
print(vector_addition([1, 3], [2, 4], [6, 7])) # Expected value: [9, 14]
print(vector_addition([1, 5], [10, 4], [4, 7])) # Expected value: [15, 16]
print(vector_addition([1, 3, 4], [4], [6,7])) # Expected value: ArithmeticError
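# A hedged sketch for vector addition under the same hypothetical naming; it
# raises ArithmeticError on mismatched sizes, matching the expected output above:
def _vector_addition_reference(*vector_variables):
    if len(set(len(vector) for vector in vector_variables)) != 1:
        raise ArithmeticError
    return [sum(elements) for elements in zip(*vector_variables)]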
def vector_subtraction(*vector_variables):
if vector_size_check(*vector_variables) == False:
raise ArithmeticError
return None
# Execution result
print(vector_subtraction([1, 3], [2, 4])) # Expected value: [-1, -1]
print(vector_subtraction([1, 5], [10, 4], [4, 7])) # Expected value: [-13, -6]
def scalar_vector_product(alpha, vector_variable):
return None
# Execution result
print (scalar_vector_product(5,[1,2,3])) # Expected value: [5, 10, 15]
print (scalar_vector_product(3,[2,2])) # Expected value: [6, 6]
print (scalar_vector_product(4,[1])) # Expected value: [4]
def matrix_size_check(*matrix_variables):
return None
# Execution result
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print (matrix_size_check(matrix_x, matrix_y, matrix_z)) # Expected value: False
print (matrix_size_check(matrix_y, matrix_z)) # Expected value: True
print (matrix_size_check(matrix_x, matrix_w)) # Expected value: True
def is_matrix_equal(*matrix_variables):
return None
# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
print (is_matrix_equal(matrix_x, matrix_y, matrix_y, matrix_y)) # Expected value: False
print (is_matrix_equal(matrix_x, matrix_x)) # Expected value: True
def matrix_addition(*matrix_variables):
if matrix_size_check(*matrix_variables) == False:
raise ArithmeticError
return None
# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print (matrix_addition(matrix_x, matrix_y)) # Expected value: [[4, 7], [4, 3]]
print (matrix_addition(matrix_x, matrix_y, matrix_z)) # Expected value: [[6, 11], [9, 6]]
def matrix_subtraction(*matrix_variables):
if matrix_size_check(*matrix_variables) == False:
raise ArithmeticError
return None
# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print (matrix_subtraction(matrix_x, matrix_y)) # Expected value: [[0, -3], [0, 1]]
print (matrix_subtraction(matrix_x, matrix_y, matrix_z)) # Expected value: [[-2, -7], [-5, -2]]
def matrix_transpose(matrix_variable):
return None
# Execution result
matrix_w = [[2, 5], [1, 1], [2, 2]]
matrix_transpose(matrix_w)
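# A hedged one-line transpose sketch using zip (hypothetical helper name):
def _matrix_transpose_reference(matrix_variable):
    return [list(row) for row in zip(*matrix_variable)]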
def scalar_matrix_product(alpha, matrix_variable):
return None
# Execution result
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print(scalar_matrix_product(3, matrix_x)) #Expected value: [[6, 6], [6, 6], [6, 6]]
print(scalar_matrix_product(2, matrix_y)) #Expected value: [[4, 10], [4, 2]]
print(scalar_matrix_product(4, matrix_z)) #Expected value: [[8, 16], [20, 12]]
print(scalar_matrix_product(3, matrix_w)) #Expected value: [[6, 15], [3, 3], [6, 6]]
def is_product_availability_matrix(matrix_a, matrix_b):
return None
# Execution result
matrix_x= [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(is_product_availability_matrix(matrix_y, matrix_z)) # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_x)) # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_w)) # Expected value: False // matrix_w does not exist
print(is_product_availability_matrix(matrix_x, matrix_x)) # Expected value: True
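# A hedged sketch (hypothetical helper name): a product A*B is defined exactly
# when the column count of A equals the row count of B:
def _is_product_availability_reference(matrix_a, matrix_b):
    return len(matrix_a[0]) == len(matrix_b)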
def matrix_product(matrix_a, matrix_b):
if is_product_availability_matrix(matrix_a, matrix_b) == False:
raise ArithmeticError
return None
# Execution result
matrix_x= [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(matrix_product(matrix_y, matrix_z)) # Expected value: [[9, 13], [10, 14]]
print(matrix_product(matrix_z, matrix_x)) # Expected value: [[8, 14], [13, 28], [5, 8]]
print(matrix_product(matrix_x, matrix_x)) # Expected value: [[9, 15], [3, 6]]
print(matrix_product(matrix_z, matrix_w)) # Expected value: False
import gachon_autograder_client as g_autograder
THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "linear_algebra_basic_I.ipynb"
g_autograder.submit_assignment(THE_TEMLABIO_ID, PASSWORD, ASSIGNMENT_FILE_NAME)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you enter the source code above into a .py file or a jupyter notebook and run it with Python, the file "linear_algebra_basic_I.ipynb" will be created. You can then open it with jupyter notebook, or move to the folder containing the file in a console window (cmd) and run it by entering the command shown below.
Step2: Problem #2 - vector_addition (one line code available)
Step3: Problem #3 - vector_subtraction (one line code available)
Step4: Problem #4 - scalar_vector_product (one line code available)
Step5: Problem #5 - matrix_size_check (one line code available)
Step6: Problem #6 - is_matrix_equal (one line code available)
Step7: Problem #7 - matrix_addition (one line code available)
Step8: Problem #8 - matrix_subtraction (one line code available)
Step9: Problem #9 - matrix_transpose (one line code available)
Step10: Problem #10 - scalar_matrix_product (one line code available)
Step11: Problem #11 - is_product_availability_matrix (one line code available)
Step12: Problem #12 - matrix_product (one line code available)
Step13: Submitting the results
|
4,602
|
<ASSISTANT_TASK:>
Python Code:
maketimeseries() # This function is defined at the bottom of the notebook; load it first, then this cell plots the figure.
# General libraries
%matplotlib inline
import pandas as pd
import warnings
warnings.simplefilter(action = "ignore", category = FutureWarning) # Suppress some meaningless warnings.
#from tabulate import tabulate
from collections import Counter
import seaborn as sns
import numpy as np
from itertools import combinations
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 7 # Make figures a little bigger
# Load up this cell to enable the keywordcooccurrence function.
def cooccurrence(column):
'''
Input: a dataframe column containing keywords that are separated by semicolons
Example: df.DE (Web of Science) or df[Author keywords] (Scopus)
Output: A list of co-occurring keywords that can be ranked with the Counter function
'''
cooccurrencelist = []
for keyword in column:
k = str(keyword)
keywordsperarticle = k.split('; ')
keywordsperarticle = [word.lower() for word in keywordsperarticle] # Lowers all keywords in each list.
cooccurrence = list(combinations(keywordsperarticle, 2))
for c in cooccurrence:
cooccurrencelist.append(c)
return(cooccurrencelist)
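# Note: combinations() keeps the keyword order within each article, so
# ('a', 'b') and ('b', 'a') count as distinct pairs; sorting each pair first
# would merge them, e.g.:
# Counter(tuple(sorted(pair)) for pair in cooccurrence(df.DE)).most_common(10)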
# This function returns a list of journals that can be easily counted
def frequentjournals(column):
'''
Input: a dataframe column containing journal names.
Example: df.SO (Web of Science) or df[Source title] (Scopus)
Output: A list of journal names that can be ranked with the Counter function
'''
journallist = []
for journal in column:
#print(len(journal))
journallist.append(journal.lower()) # Lower names. Looks bad, but computes well.
return(journallist)
# faulty data: use error_bad_lines=False to skip unparsable rows
df = pd.read_csv('.data/WoS549recs20170121.tsv', sep="\t", encoding='utf-8') # Input: web of science tsv file, utf-8 encoding.
df.head(3)
# Print this for explanation of WoS columns
woskeyfile = open('woskeys.txt')
woskeys = woskeyfile.read()
#print(woskeys)
dfTC = df.sort_values('TC', ascending=False) # Order dataframe by times cited
dfTC[['AU', 'PY', 'TI', 'SO', 'TC', 'DI']].head(10) # Ten most cited articles.
publicationyears = sns.factorplot('PY', data=df, kind='count', size=10, aspect=2)
# Read all keywords into a list.
allkeywords = []
for keyword in df.DE:
k = str(keyword)
keywordsperarticle = k.split('; ')
for word in keywordsperarticle:
allkeywords.append(word.lower()) # make all lower case for better string matching.
print("Total number of keywords: " + str(len(allkeywords)))
# Find the most common keywords
commonkeywords = Counter(allkeywords).most_common(10) # Increase if you want more results
for word in commonkeywords:
if word[0] != "nan": # Clean out empty fields.
print(word[0] + "\t" + str(word[1]))
Counter(cooccurrence(df.DE)).most_common(10)
keywordsDF = pd.DataFrame(commonkeywords, columns=["keyword", "freq"])
# Plot figure while excluding "nan" values in Dataframe
keywordsWoS = sns.factorplot(x='keyword', y='freq', kind="bar", data=keywordsDF[keywordsDF.keyword.str.contains("nan") == False], size=8, aspect=2)
keywordsWoS.set_xticklabels(rotation=45)
# To get only journal articles, select df.SO[df['PT'] == 'J']
for journal in Counter(frequentjournals(df.SO[df['PT'] == 'J'] )).most_common(10):
print(journal)
df2 = pd.read_csv('.data/scopusRecursionOne769recs20170120.csv', encoding="utf-8")
# Print this for Scopus column names
#for header in list(df2.columns.values):
# print(header)
#df2['Document Type']
df2TC = df2.sort_values('Cited by', ascending=False) # Order dataframe by times cited
df2TC.tail(3)
# NOTE: there is a cryptic character (most likely a UTF-8 byte-order mark, \ufeff)
# in front of the Authors column header, so 'Authors' sometimes fails to match,
# depending on how the file was exported and on system locale settings.
df2TC[['Authors', 'Year', 'Title', 'Source title', 'Cited by', 'DOI']].head(10) # Ten most cited articles.
# Create a time series of the publications. Some data cleaning is needed:
df2TCdropna = df2TC.Year.dropna() # Drop empty values in years
df2TCyears = pd.DataFrame(df2TCdropna.astype(int)) # Convert existing years to integers, make new dataframe
publicationyearsScopus = sns.factorplot('Year', data=df2TCyears, kind='count', size=8, aspect=2)
# Read all keywords into a list.
allscopuskeywords = []
for keyword in df2['Author Keywords']:
k = str(keyword)
keywordsperarticle = k.split('; ')
for word in keywordsperarticle:
allscopuskeywords.append(word.lower()) # make all lower case for better string matching.
print("Total number of keywords: " + str(len(allkeywords)))
# Find the most common keywords
commonscopuskeywords = Counter(allscopuskeywords).most_common(20) # Increase if you want more results
for word in commonscopuskeywords:
if word[0] != "nan": # Clean out empty fields.
print(word[0] + "\t" + str(word[1]))
# Get co-occurrences
Counter(cooccurrence(df2['Author Keywords'])).most_common(10)
keywordsScopusDF = pd.DataFrame(commonscopuskeywords, columns=["keyword", "freq"])
# Plot figure while excluding "nan" values in Dataframe
keywordsScP = sns.factorplot(x='keyword', y='freq', kind="bar", data=keywordsScopusDF[keywordsScopusDF.keyword.str.contains("nan") == False], size=6, aspect=2)
keywordsScP.set_xticklabels(rotation=45)
keywordsScP.fig.text(0.65, 0.7, "Scopus - Recursion 1:\nSearchstring: \
'open science'\nMost frequent keywords\nN=769", ha ='left', fontsize = 15)
# For journal articles only: df2['Source title'][df2['Document Type'] == 'Article']
for journal in Counter(frequentjournals(df2['Source title'][df2['Document Type'] == 'Article'])).most_common(10):
print(journal)
df3 = pd.read_csv('.data/scopusRecursionTwo14146recs20170120.csv')
df3.tail(3) # Verify all data is there.
# Create a time series of the publications. Some data cleaning is needed:
df3dropna = df3.Year.dropna() # Drop empty values in years
df3years = pd.DataFrame(df3dropna.astype(int)) # Convert existing years to integers, make new dataframe
publicationyearsScopus = sns.factorplot('Year', data=df3years, kind='count', size=8, aspect=2)
publicationyearsScopus.set_xticklabels(rotation=45)
for journal in Counter(frequentjournals(df3['Source title'][df3['Document Type'] == "Article"])).most_common(10):
print(journal)
WoSyears = []
for year in df.PY.dropna():
if year > 1990.0:
WoSyears.append(int(year))
#print(sorted(WoSyears))
Scopusyears = []
for year in df2['Year'].dropna():
if year > 1990.0:
Scopusyears.append(year)
dfWoSyears = pd.DataFrame.from_dict(Counter(WoSyears), orient='index', dtype=None)
dfsorted = pd.DataFrame.sort_index(dfWoSyears)
dfsorted.head()
dfScopusyears = pd.DataFrame.from_dict(Counter(Scopusyears), orient='index', dtype=None)
dfSsorted = pd.DataFrame.sort_index(dfScopusyears)
dfSsorted.head()
def maketimeseries():
plt.title('"Open science" - Published articles and proceedings, 1990-2016', fontsize=16)
plt.xlabel('Year \n', fontsize=16)
plt.ylabel('Records', fontsize=16)
plt.ylim([0, 150])
plt.xlim(1990,2016)
# Line styles: http://matplotlib.org/1.3.1/examples/pylab_examples/line_styles.html
plt.plot(dfsorted, linestyle='--', marker='D', label="Web of Science")
plt.plot(dfSsorted, linestyle='-', marker='o', label="Scopus")
# legend guide: http://matplotlib.org/1.3.1/users/legend_guide.html
plt.legend(loc=2, borderaxespad=0., fontsize=16)
plt.savefig(".data/fig1.png")
maketimeseries()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A. The semantic connections
Step2: A. Web of Science - Recursion 1.
Step3: Keyword analysis
Step4: Journal analysis
Step5: Scopus - Recursive search
Step6: Journal analysis
Step7: C. Scopus Recursive search
Step8: Journal analysis
|
4,603
|
<ASSISTANT_TASK:>
Python Code:
# import python modules
import GPy
import numpy as np
from matplotlib import pyplot as plt
# call matplotlib with the inline command to make plots appear within the browser
%matplotlib inline
# The documentation to use the RBF function. There are several advanced options such as useGPU which are
# important for practical applications. The "?" symbol can be used with any function or class to view its
# documentation
GPy.kern.RBF?
# input dimension
d = 1
# variance
var = 1.
# lengthscale
length = 0.2
# define the kernel
k = GPy.kern.RBF(d, variance=var, lengthscale=length)
# view the parameters of the covariance function
print k
# plot the covariance function
k.plot()
# by default, all the parameters are set to 1. for the RBF kernel
k = GPy.kern.RBF(d)
# we experiment with different length scale parameter values here
theta = np.asarray([0.2,0.5,1.,2.,4.])
# create an instance of a figure
fig = plt.figure()
ax = plt.subplot(111)
# iterate over the lengthscales
for t in theta:
k.lengthscale=t
# plot in the same figure with a different color
k.plot(ax=ax, color=np.random.rand(3,), plot_limits=[-10.0,10.0])
plt.legend(theta)
# by default, all the parameters are set to 1. for the RBF kernel
k = GPy.kern.RBF(d)
# we experiment with different variance parameter values here
var = np.asarray([0.2,0.5,1.,2.,4.])
# create an instance of a figure
fig = plt.figure()
ax = plt.subplot(111)
# iterate over the variances
for v in var:
k.variance = v
# plot in the same figure with a different color
k.plot(ax=ax, color=np.random.rand(3,))
plt.legend(var)
# look for the kernel documentation
GPy.kern.Matern32?
# input dim
d = 1
# create the Matern32 kernel
k = GPy.kern.Matern32(d)
# view the kernel
print k
k.plot()
# input data: 50*2 matrix of iid uniform samples on [0, 1)
X = np.random.rand(50,2)
# create the matern52 kernel
k = GPy.kern.Matern52(input_dim=2)
# compute the kernel matrix
C = k.K(X,X)
# computes eigenvalues of matrix
eigvals = np.linalg.eigvals(C)
# plot the eigen values
plt.bar(np.arange(len(eigvals)), eigvals)
plt.title('Eigenvalues of the Matern 5/2 Covariance')
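# A valid covariance matrix is positive semi-definite, so the eigenvalues
# plotted above should all be non-negative (up to floating-point round-off).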
# define rbf and matern52 kernels
kern1 = GPy.kern.RBF(1, variance=1., lengthscale=2.)
kern2 = GPy.kern.Matern52(1, variance=2., lengthscale=4.)
# combine both kernels
kern = kern1 + kern2
print kern
kern.plot(plot_limits=[-7,9])
kern = kern1*kern2
print kern
kern.plot(plot_limits=[-6,8])
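# Both compositions are valid in general: sums and products of positive
# semi-definite kernels are themselves positive semi-definite kernels.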
# define RBF kernel
k = GPy.kern.RBF(input_dim=1,lengthscale=0.2)
# define X to be 500 points evenly spaced over [0,1]
X = np.linspace(0.,1.,500)
# make the numpy array to 2D array
X = X[:,None]
# set mean function i.e. 0 everywhere
mu = np.zeros((500))
# compute covariance matrix associated with inputs X
C = k.K(X,X)
# Generate 20 separate sample paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,20)
# open a new plotting window
fig = plt.figure()
for i in range(20):
plt.plot(X[:],Z[i,:])
plt.matshow(C)
# Define input points and mean function
X = np.atleast_2d(np.linspace(0.,1.,500)).T
mu = np.zeros((500))
# sample paths for RBF kernel with different parameters
k = GPy.kern.RBF(input_dim=1,lengthscale=0.05)
C = k.K(X,X)
# Generate 10 separate sample paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,10)
# open a new plotting window
fig = plt.figure()
for i in range(10):
plt.plot(X[:],Z[i,:])
# sample paths for a periodic Matern 5/2 kernel with a short lengthscale
k = GPy.kern.PeriodicMatern52(input_dim=1, lengthscale=0.01, period=1)
C = k.K(X,X)
# Generate 10 separate sample paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,10)
# open a new plotting window
fig = plt.figure()
for i in range(10):
plt.plot(X[:],Z[i,:])
# input data points
X = np.linspace(0.05,0.95,10)[:,None]
# generate observations through function f
Y = -np.cos(np.pi*X) + np.sin(4*np.pi*X) + np.random.normal(loc=0.0, scale=0.1, size=(10,1))
# plot the generated data
plt.figure()
plt.plot(X,Y,'kx',mew=1.5)
# create instance of kernel
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
# create instance of GP regression model
m = GPy.models.GPRegression(X,Y,k)
# view model parameters
print m
# visualize posterior mean and variances
m.plot()
# obtain 5 test points
Xstar = np.linspace(0.01,0.99,5)[:,None]
# predict the output for the test points
Ystar, Vstar = m.predict(Xstar)
# print results
print Ystar
# set the noise parameter of the model using desired SNR
SNR = 10.0
m.Gaussian_noise.variance = m.rbf.variance/SNR
# check the model parameters and plot the model
print m
m.plot()
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
m.constrain_positive()
m.optimize()
m.plot()
print m
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
# create instance of kernel and GP Regression model
k = GPy.kern.Matern32(input_dim=1, variance=1., lengthscale=1.)
m = GPy.models.GPRegression(X,Y,k)
# optimize the model
m.constrain_positive()
m.optimize()
# view model parameters
print m
m.plot()
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
# load marathon timing dataset
data = np.genfromtxt('marathon.csv', delimiter=',')
# set the input and output data
X = data[:, 0:1]
Y = data[:, 1:2]
# plot the timings
plt.plot(X, Y, 'bx')
plt.xlabel('year')
plt.ylabel('marathon pace min/km')
# create the covariance function
kern = GPy.kern.RBF(1) + GPy.kern.Bias(1)
# create the GP Regression model
model = GPy.models.GPRegression(X, Y, kern)
# optimize the model
model.optimize()
model.plot()
# create the covariance function and set the desired parameter values
kern2 = GPy.kern.RBF(1, variance=0.5) + GPy.kern.Bias(1)
model2 = GPy.models.GPRegression(X, Y, kern2)
# optimize and plot the model
model2.optimize()
model2.plot()
# compare the log likelihoods
print model.log_likelihood(), model2.log_likelihood()
# create the covariance function and set the desired parameter values
kern3 = GPy.kern.RBF(1, lengthscale=80.0) + GPy.kern.Matern32(1, lengthscale=10.0) + GPy.kern.Bias(1)
model3 = GPy.models.GPRegression(X, Y, kern3)
# optimize and plot the model
model3.optimize()
model3.plot()
# compare the log likelihoods
print model.log_likelihood(), model3.log_likelihood()
# create the covariance function and set the desired parameter values
kern4 = GPy.kern.RBF(1, lengthscale=20.0) + GPy.kern.Matern32(1, lengthscale=20.0) + GPy.kern.Bias(1)
model4 = GPy.models.GPRegression(X, Y, kern4)
# optimize and plot the model
model4.optimize()
model4.plot()
# compare the log likelihoods
print model3.log_likelihood(), model4.log_likelihood()
# create the covariance function and set the desired parameter values
kern5 = GPy.kern.RBF(1, lengthscale=5.0, variance=5.0)*GPy.kern.Linear(1)
model5 = GPy.models.GPRegression(X, Y, kern5)
# optimize and plot the model
model5.optimize()
model5.plot()
# get the model parameters
print model5
# compare the log likelihoods
print model.log_likelihood(), model5.log_likelihood()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Covariance Functions
Step2: A summary of the kernel can be obtained using the command print k.
Step3: It is also possible to plot the kernel as a function of one of its inputs (whilst fixing the other) with k.plot(). Use "?" to view the properties of the plot function for kern instance.
Step4: Setting Covariance Function Parameters
Step5: Exercise 1
Step6: c) Instead of rbf, try constructing and plotting the following covariance functions
Step7: Computing the kernel matrix given input data, $\mathbf{X}$
Step8: Combining Covariance Functions
Step9: It is also possible to multiply two kernel functions
Step10: 2 Sampling from a Gaussian Process
Step11: We can see the structure of the covariance matrix we are plotting from if we visualize C.
Step12: Exercise 2
Step13: b) Can you tell the covariance structures that have been used for generating the sample paths shown above?
Step14: A GP regression model is defined by first specifying the covariance function for analysis. Then an instance of the model is generated with a default set of parameters. Then it is possible to view the parameters using print m and visualize the posterior mean prediction and variances using m.plot
Step15: The actual predictions of the model for a set of points Xstar can be computed using m.predict(Xstar)
Step16: Exercise 3
Step17: c) Random sample paths from the conditional GP can be obtained using np.random.multivariate_normal(mu[:,0], C, nPaths)
Step18: Covariance Function Parameter Estimation
Step19: We can optimize the hyperparameters of the model using the m.optimize() method.
Step20: Exercise 4
Step21: b) Modify the kernel used for building the model to investigate its influence on the model
Step22: 4 A Running Example
Step23: Exercise 5
Step24: b) Fit the same model, but this time intialize the length scale of the RBF kernel to 0.5. What has happened? Which of model has the higher log likelihood, this one or the one from (a)?
Step25: c) Modify your model by including two covariance functions. Initialize a covariance function with an exponentiated quadratic part, a Matern 3/2 part and a bias covariance. Set the initial lengthscale of the exponentiated quadratic to 80 years and the initial lengthscale of the Matern 3/2 to 10 years. Optimize the new model and plot the fit again. How does it compare with the previous model?
Step26: d) Repeat part c) but now initialize both of the covariance functions' lengthscales to 20 years. Check the model parameters, what happens now?
Step27: e) Now model the data with a product of an exponentiated quadratic covariance function and a linear covariance function. Fit the covariance function parameters. Why are the variance parameters of the linear part so small? How could this be fixed?
|
4,604
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_context('talk')
sns.set_style('darkgrid')
iris = sns.load_dataset('iris')
iris.head()
irisplot = sns.pairplot(iris, hue="species", palette='Set2', diag_kind="kde", size=2.5)
irisplot.fig.suptitle('Scatter Plots and Kernel Density Estimate of Iris Data by Species', fontsize = 18)
irisplot.fig.subplots_adjust(top=.9)
from microscopes.models import niw as normal_inverse_wishart
mvn5 = normal_inverse_wishart(5)
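# A hedged note: normal_inverse_wishart(d) is the conjugate prior for a
# d-dimensional multivariate normal with unknown mean and covariance, which is
# what makes it a natural model for the real-valued iris measurements.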
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Iris Flower Dataset is a standard machine learning data set dating back to the 1930s. It contains measurements from 150 flowers, 50 from each of the following species
Step2: In the case of the iris dataset, plotting the data shows that individual species exhibit a typical range of measurements
Step3: If we wanted to learn these underlying species' measurements, we would use these real valued measurements and make assumptions about the structure of the data.
|
4,605
|
<ASSISTANT_TASK:>
Python Code:
from gensim.sklearn_api import LdaTransformer
from gensim.corpora import Dictionary
texts = [
['complier', 'system', 'computer'],
['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
['graph', 'flow', 'network', 'graph'],
['loading', 'computer', 'system'],
['user', 'server', 'system'],
['tree', 'hamiltonian'],
['graph', 'trees'],
['computer', 'kernel', 'malfunction', 'computer'],
['server', 'system', 'computer']
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
model = LdaTransformer(num_topics=2, id2word=dictionary, iterations=20, random_state=1)
model.fit(corpus)
model.transform(corpus)
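# transform() returns a dense (num_docs, num_topics) array of per-document
# topic proportions, which is what lets this wrapper slot into sklearn pipelines.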
import numpy as np
from gensim import matutils
from gensim.models.ldamodel import LdaModel
from sklearn.datasets import fetch_20newsgroups
from gensim.sklearn_api.ldamodel import LdaTransformer
rand = np.random.mtrand.RandomState(1) # set seed for getting same result
cats = ['rec.sport.baseball', 'sci.crypt']
data = fetch_20newsgroups(subset='train', categories=cats, shuffle=True)
data_texts = [_.split() for _ in data.data]
id2word = Dictionary(data_texts)
corpus = [id2word.doc2bow(i.split()) for i in data.data]
obj = LdaTransformer(id2word=id2word, num_topics=5, iterations=20)
lda = obj.fit(corpus)
from sklearn.model_selection import GridSearchCV
obj = LdaTransformer(id2word=id2word, num_topics=2, iterations=5, scorer='u_mass') # here 'scorer' can be 'perplexity' or 'u_mass'
parameters = {'num_topics': (2, 3, 5, 10), 'iterations': (1, 20, 50)}
# set `scoring` as `None` to use the inbuilt score function of `SklLdaModel` class
model = GridSearchCV(obj, parameters, cv=3, scoring=None)
model.fit(corpus)
model.best_params_
from gensim.models.coherencemodel import CoherenceModel
# supplying a custom scoring function
def scoring_function(estimator, X, y=None):
goodcm = CoherenceModel(model=estimator.gensim_model, texts=data_texts, dictionary=estimator.gensim_model.id2word, coherence='c_v')
return goodcm.get_coherence()
obj = LdaTransformer(id2word=id2word, num_topics=5, iterations=5)
parameters = {'num_topics': (2, 3, 5, 10), 'iterations': (1, 20, 50)}
# set `scoring` as your custom scoring function
model = GridSearchCV(obj, parameters, cv=2, scoring=scoring_function)
model.fit(corpus)
model.best_params_
from sklearn.pipeline import Pipeline
from sklearn import linear_model
def print_features_pipe(clf, vocab, n=10):
''' Better printing for sorted list '''
coef = clf.named_steps['classifier'].coef_[0]
print coef
print 'Positive features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[::-1][:n] if coef[j] > 0]))
print 'Negative features: %s' % (' '.join(['%s:%.2f' % (vocab[j], coef[j]) for j in np.argsort(coef)[:n] if coef[j] < 0]))
id2word = Dictionary([_.split() for _ in data.data])
corpus = [id2word.doc2bow(i.split()) for i in data.data]
model = LdaTransformer(num_topics=15, id2word=id2word, iterations=10, random_state=37)
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline([('features', model,), ('classifier', clf)])
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
from gensim.sklearn_api import LsiTransformer
model = LsiTransformer(num_topics=15, id2word=id2word)
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline([('features', model,), ('classifier', clf)])
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
from gensim.sklearn_api import RpTransformer
model = RpTransformer(num_topics=2)
np.random.mtrand.RandomState(1) # set seed for getting same result
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline([('features', model,), ('classifier', clf)])
pipe.fit(corpus, data.target)
print_features_pipe(pipe, id2word.values())
print(pipe.score(corpus, data.target))
from gensim.sklearn_api import LdaSeqTransformer
test_data = data.data[0:2]
test_target = data.target[0:2]
id2word_ldaseq = Dictionary(map(lambda x: x.split(), test_data))
corpus_ldaseq = [id2word_ldaseq.doc2bow(i.split()) for i in test_data]
model = LdaSeqTransformer(id2word=id2word_ldaseq, num_topics=2, time_slice=[1, 1, 1], initialize='gensim')
clf = linear_model.LogisticRegression(penalty='l2', C=0.1) # l2 penalty used
pipe = Pipeline([('features', model,), ('classifier', clf)])
pipe.fit(corpus_ldaseq, test_target)
print_features_pipe(pipe, id2word_ldaseq.values())
print(pipe.score(corpus_ldaseq, test_target))
from gensim.sklearn_api import W2VTransformer
w2v_texts = [
['calculus', 'is', 'the', 'mathematical', 'study', 'of', 'continuous', 'change'],
['geometry', 'is', 'the', 'study', 'of', 'shape'],
['algebra', 'is', 'the', 'study', 'of', 'generalizations', 'of', 'arithmetic', 'operations'],
['differential', 'calculus', 'is', 'related', 'to', 'rates', 'of', 'change', 'and', 'slopes', 'of', 'curves'],
['integral', 'calculus', 'is', 'realted', 'to', 'accumulation', 'of', 'quantities', 'and', 'the', 'areas', 'under', 'and', 'between', 'curves'],
['physics', 'is', 'the', 'natural', 'science', 'that', 'involves', 'the', 'study', 'of', 'matter', 'and', 'its', 'motion', 'and', 'behavior', 'through', 'space', 'and', 'time'],
['the', 'main', 'goal', 'of', 'physics', 'is', 'to', 'understand', 'how', 'the', 'universe', 'behaves'],
['physics', 'also', 'makes', 'significant', 'contributions', 'through', 'advances', 'in', 'new', 'technologies', 'that', 'arise', 'from', 'theoretical', 'breakthroughs'],
['advances', 'in', 'the', 'understanding', 'of', 'electromagnetism', 'or', 'nuclear', 'physics', 'led', 'directly', 'to', 'the', 'development', 'of', 'new', 'products', 'that', 'have', 'dramatically', 'transformed', 'modern', 'day', 'society']
]
model = W2VTransformer(size=10, min_count=1)
model.fit(w2v_texts)
class_dict = {'mathematics': 1, 'physics': 0}
train_data = [
('calculus', 'mathematics'), ('mathematical', 'mathematics'), ('geometry', 'mathematics'), ('operations', 'mathematics'), ('curves', 'mathematics'),
('natural', 'physics'), ('nuclear', 'physics'), ('science', 'physics'), ('electromagnetism', 'physics'), ('natural', 'physics')
]
train_input = list(map(lambda x: x[0], train_data))
train_target = list(map(lambda x: class_dict[x[1]], train_data))
clf = linear_model.LogisticRegression(penalty='l2', C=0.1)
clf.fit(model.transform(train_input), train_target)
text_w2v = Pipeline([('features', model,), ('classifier', clf)])
score = text_w2v.score(train_input, train_target)
print(score)
from gensim.sklearn_api import AuthorTopicTransformer
from sklearn import cluster
atm_texts = [
['complier', 'system', 'computer'],
['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
['graph', 'flow', 'network', 'graph'],
['loading', 'computer', 'system'],
['user', 'server', 'system'],
['tree', 'hamiltonian'],
['graph', 'trees'],
['computer', 'kernel', 'malfunction', 'computer'],
['server', 'system', 'computer'],
]
atm_dictionary = Dictionary(atm_texts)
atm_corpus = [atm_dictionary.doc2bow(text) for text in atm_texts]
author2doc = {'john': [0, 1, 2, 3, 4, 5, 6], 'jane': [2, 3, 4, 5, 6, 7, 8], 'jack': [0, 2, 4, 6, 8], 'jill': [1, 3, 5, 7]}
model = AuthorTopicTransformer(id2word=atm_dictionary, author2doc=author2doc, num_topics=10, passes=100)
model.fit(atm_corpus)
# create and train clustering model
clstr = cluster.MiniBatchKMeans(n_clusters=2)
authors_full = ['john', 'jane', 'jack', 'jill']
clstr.fit(model.transform(authors_full))
# stack together the two models in a pipeline
text_atm = Pipeline([('features', model,), ('cluster', clstr)])
author_list = ['jane', 'jack', 'jill']
ret_val = text_atm.predict(author_list)
print(ret_val)
from gensim.sklearn_api import D2VTransformer
from gensim.models import doc2vec
d2v_sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(w2v_texts)]
model = D2VTransformer(min_count=1)
model.fit(d2v_sentences)
class_dict = {'mathematics': 1, 'physics': 0}
train_data = [
(['calculus', 'mathematical'], 'mathematics'), (['geometry', 'operations', 'curves'], 'mathematics'),
(['natural', 'nuclear'], 'physics'), (['science', 'electromagnetism', 'natural'], 'physics')
]
train_input = list(map(lambda x: x[0], train_data))
train_target = list(map(lambda x: class_dict[x[1]], train_data))
clf = linear_model.LogisticRegression(penalty='l2', C=0.1)
clf.fit(model.transform(train_input), train_target)
text_d2v = Pipeline([('features', model,), ('classifier', clf)])
score = text_d2v.score(train_input, train_target)
print(score)
from gensim.sklearn_api import Text2BowTransformer
text2bow_model = Text2BowTransformer()
lda_model = LdaTransformer(num_topics=2, passes=10, minimum_probability=0, random_state=np.random.seed(0))
clf = linear_model.LogisticRegression(penalty='l2', C=0.1)
text_t2b = Pipeline([('bow_model', text2bow_model), ('ldamodel', lda_model), ('classifier', clf)])
text_t2b.fit(data.data, data.target)
score = text_t2b.score(data.data, data.target)
print(score)
from gensim.sklearn_api import TfIdfTransformer
tfidf_model = TfIdfTransformer()
tfidf_model.fit(corpus)
lda_model = LdaTransformer(num_topics=2, passes=10, minimum_probability=0, random_state=np.random.seed(0))
clf = linear_model.LogisticRegression(penalty='l2', C=0.1)
text_tfidf = Pipeline((('tfidf_model', tfidf_model), ('ldamodel', lda_model), ('classifier', clf)))
text_tfidf.fit(corpus, data.target)
score = text_tfidf.score(corpus, data.target)
print(score)
from gensim.sklearn_api import HdpTransformer
model = HdpTransformer(id2word=id2word)
clf = linear_model.LogisticRegression(penalty='l2', C=0.1)
text_hdp = Pipeline([('features', model,), ('classifier', clf)])
text_hdp.fit(corpus, data.target)
score = text_hdp.score(corpus, data.target)
print(score)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we will create a dummy set of texts and convert it into a corpus
Step2: Then to run the LdaModel on it
Step3: Integration with Sklearn
Step4: Next, we use the loaded data to create our dictionary and corpus.
Step5: Next, we just need to fit corpus and id2word to our Lda wrapper.
Step6: Example for Using Grid Search
Step7: The inbuilt score function of Lda wrapper class provides two modes
Step8: You can also supply a custom scoring function of your choice using the scoring parameter of GridSearchCV function. The example shown below uses c_v mode of CoherenceModel class for computing the scores of the candidate models.
Step9: Example of Using Pipeline
Step10: LSI Model
Step11: Example of Using Pipeline
Step12: Random Projections Model
Step13: Example of Using Pipeline
Step14: LDASeq Model
Step15: Example of Using Pipeline
Step16: Word2Vec Model
Step17: Example of Using Pipeline
Step18: AuthorTopic Model
Step19: Example of Using Pipeline
Step20: Doc2Vec Model
Step21: Example of Using Pipeline
Step22: Text2Bow Model
Step23: Example of Using Pipeline
Step24: TfIdf Model
Step25: Example of Using Pipeline
Step26: HDP Model
Step27: Example of Using Pipeline
|
4,606
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
reviews[:10]
from collections import Counter
counter = Counter(words)
#print(counter.most_common())
vocabs = set(counter.keys())
vocab_size = len(vocabs)
print(vocab_size)
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {}
for i,vocab in enumerate(vocabs):
#if (i <100): print(i, vocab)
vocab_to_int[vocab] = i+1
#vocab_to_int
# Convert the reviews to integers, same shape as reviews list, but with integers
#reviews[:10]
reviews_ints = []
for review in reviews:
review_ints = []
for word in review.split(" "):
if word not in ' ':
review_ints.append(vocab_to_int[word])
reviews_ints.append(review_ints)
reviews_ints[0]
print(reviews[0])
print(len(reviews[0].split(' ')))
print(len(reviews_ints[0]))
print(reviews_ints[:2])
def get_target_for_label(label):
if(label == 'positive'):
return 1
else:
return 0
# Convert labels to 1s and 0s for 'positive' and 'negative'
#print(labels[:100])
#([c for c in reviews if c not in punctuation])
text_labels = labels.split('\n')
labels_ints = []
for text_label in text_labels:
    if text_label: # skip the empty string left by the file's trailing newline
        labels_ints.append(get_target_for_label(text_label))
#labels_ints[:11]
labels = np.array(labels_ints) # as an array, so batches support y[:, None] indexing later
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
reviews_ints = [review for review in reviews_ints if len(review) > 0]
seq_len = 200
reviews_ints_optimized = []
for review_ints in reviews_ints:
if len(review_ints) > seq_len:
truncated = review_ints[0:seq_len]
reviews_ints_optimized.append(truncated)
else:
padded = [0] * (seq_len - len(review_ints)) + review_ints
reviews_ints_optimized.append(padded)
features = np.array(reviews_ints_optimized)
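# An equivalent sketch using a preallocated array: left-pad with zeros and
# truncate long reviews in one step (numpy slice clipping keeps the negative
# index safe even when len(row) > seq_len).
features_alt = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features_alt[i, -len(row):] = np.array(row)[:seq_len]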
len(reviews_ints_optimized[1130])
features[9,:100]
features[:10,:100]
split_frac = 0.8
def split_data(data, frac):
first_group_size = int(round(len(data) * frac))
group_a, group_b = data[:first_group_size], data[first_group_size:]
return group_a, group_b
train_x, val_x = split_data(features, split_frac)
train_y, val_y = split_data(labels, split_frac)
val_x, test_x = split_data(val_x, 0.5)
val_y, test_y = split_data(val_y, 0.5)
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocabs)
print(n_words)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') # batches are (batch_size, seq_len)
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels') # batches are (batch_size, 1)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_words, embed_size], minval=-1, maxval=1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs_)# use tf.nn.embedding_lookup to get the hidden layer output
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
#lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
#drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
#cell = tf.nn.rnn_cell.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) # run the RNN over the embedded inputs, not the raw token ids
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) # match the save path used during training
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
4,607
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
la = pyemu.Schur("pest.jco",verbose=False)
la.drop_prior_information()
jco_ord = la.jco.get(la.pst.obs_names,la.pst.par_names)
ord_base = "pest_ord"
jco_ord.to_binary(ord_base + ".jco")
pv_names = []
predictions = ["pd_ten", "c_obs10_2"]
for pred in predictions:
pv = jco_ord.extract(pred).T
pv_name = pred + ".vec"
pv.to_ascii(pv_name)
pv_names.append(pv_name)
prior_uncfile = "pest.unc"
la.parcov.to_uncfile(prior_uncfile,covmat_file=None)
post_mat = "post.cov"
post_unc = "post.unc"
args = [ord_base + ".pst","1.0",prior_uncfile,
post_mat,post_unc,"1"]
pd7_in = "predunc7.in"
f = open(pd7_in,'w')
f.write('\n'.join(args)+'\n')
f.close()
out = "pd7.out"
pd7 = os.path.join("exe","i64predunc7.exe")
os.system(pd7 + " <" + pd7_in + " >"+out)
for line in open(out).readlines():
print line,
post_pd7 = pyemu.Cov()
post_pd7.from_ascii(post_mat)
la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions)
post_pyemu = la_ord.posterior_parameter
#post_pyemu = post_pyemu.get(post_pd7.row_names)
delta = (post_pd7 - post_pyemu).x
(post_pd7 - post_pyemu).to_ascii("delta.cov")
print delta.sum()
print delta.max(),delta.min()
args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"]
pd1_in = "predunc1.in"
pd1 = os.path.join("exe", "i64predunc1.exe")
pd1_results = {}
for pv_name in pv_names:
args[3] = pv_name
f = open(pd1_in, 'w')
f.write('\n'.join(args) + '\n')
f.close()
out = "predunc1" + pv_name + ".out"
os.system(pd1 + " <" + pd1_in + ">" + out)
f = open(out,'r')
for line in f:
if "pre-cal " in line.lower():
pre_cal = float(line.strip().split()[-2])
elif "post-cal " in line.lower():
post_cal = float(line.strip().split()[-2])
f.close()
pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal]
pyemu_results = {}
for pname in la_ord.prior_prediction.keys():
pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]),
np.sqrt(la_ord.posterior_prediction[pname])]
f = open("predunc1_textable.dat",'w')
for pname in pd1_results.keys():
print pname
f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\
.format(pd1_results[pname][0],pyemu_results[pname][0],
pd1_results[pname][1],pyemu_results[pname][1]))
print "prior",pname,pd1_results[pname][0],pyemu_results[pname][0]
print "post",pname,pd1_results[pname][1],pyemu_results[pname][1]
f.close()
f = open("pred_list.dat",'w')
out_files = []
for pv in pv_names:
out_name = pv+".predvar1b.out"
out_files.append(out_name)
f.write(pv+" "+out_name+"\n")
f.close()
args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"]
for i in xrange(36):
args.append(str(i))
args.append('')
args.append("n") #no for most parameters
args.append("y") #yes for mult
f = open("predvar1b.in", 'w')
f.write('\n'.join(args) + '\n')
f.close()
os.system("exe\\predvar1b.exe <predvar1b.in")
pv1b_results = {}
for out_file in out_files:
pred_name = out_file.split('.')[0]
f = open(out_file,'r')
for _ in xrange(3):
f.readline()
arr = np.loadtxt(f)
pv1b_results[pred_name] = arr
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
omitted_parameters="mult1",
verbose=False)
df = la_ord_errvar.get_errvar_dataframe(np.arange(36))
df
fig = plt.figure(figsize=(6,6))
max_idx = 15
idx = np.arange(max_idx)
for ipred,pred in enumerate(predictions):
arr = pv1b_results[pred][:max_idx,:]
first = df[("first", pred)][:max_idx]
second = df[("second", pred)][:max_idx]
third = df[("third", pred)][:max_idx]
ax = plt.subplot(len(predictions),1,ipred+1)
#ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5)
#ax.plot(first,color='b')
#ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(second,color='g')
#ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(third,color='r')
ax.scatter(idx,arr[:,1],marker='x',s=40,color='g',
label="PREDVAR1B - first term")
ax.scatter(idx,arr[:,2],marker='x',s=40,color='b',
label="PREDVAR1B - second term")
ax.scatter(idx,arr[:,3],marker='x',s=40,color='r',
label="PREVAR1B - third term")
ax.scatter(idx,first,marker='o',facecolor='none',
s=50,color='g',label='pyEMU - first term')
ax.scatter(idx,second,marker='o',facecolor='none',
s=50,color='b',label="pyEMU - second term")
ax.scatter(idx,third,marker='o',facecolor='none',
s=50,color='r',label="pyEMU - third term")
ax.set_ylabel("forecast variance")
ax.set_title("forecast: " + pred)
if ipred == len(predictions) -1:
ax.legend(loc="lower center",bbox_to_anchor=(0.5,-0.75),
scatterpoints=1,ncol=2)
ax.set_xlabel("singular values")
#break
plt.savefig("predvar1b_ver.eps")
cmd_args = [os.path.join("exe","i64identpar.exe"),ord_base,"5",
"null","null","ident.out","/s"]
cmd_line = ' '.join(cmd_args)+'\n'
print(cmd_line)
print(os.getcwd())
os.system(cmd_line)
identpar_df = pd.read_csv("ident.out",delim_whitespace=True)
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
verbose=False)
df = la_ord_errvar.get_identifiability_dataframe(5)
df
fig = plt.figure()
ax = plt.subplot(111)
axt = plt.twinx()
ax.plot(identpar_df["identifiability"])
ax.plot(df["ident"])
ax.set_xlim(-10,600)
diff = identpar_df["identifiability"].values - df["ident"].values
#print(diff)
axt.plot(diff)
axt.set_ylim(-1,1)
ax.set_xlabel("parmaeter")
ax.set_ylabel("identifiability")
axt.set_ylabel("difference")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate the pyemu object and drop the prior information. Then reorder the Jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the Jacobian.
Step2: extract and save the forecast sensitivity vectors
Step3: save the prior parameter covariance matrix as an uncertainty file
Step4: PRECUNC7
Step5: load the posterior matrix written by predunc7
Step6: The cumulative difference between the two posterior matrices
Step7: PREDUNC1
Step8: organize the pyemu results into a structure for comparison
Step9: compare the results
Step10: PREDVAR1b
Step11: now for pyemu
Step12: generate some plots to verify
Step13: Identifiability
Step14: cheap plot to verify
|
4,608
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
import random
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
print("All imports are fine")
# defining some useful utils
def randindex(items):
    '''Returns a random element from a sequence'''
return items[random.randint(0, len(items) -1)]
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress."""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
from IPython.display import Image, display
rootdir = "notMNIST_large"
for letter in os.listdir(rootdir):
if ".pickle" in letter:
continue
images = os.listdir(os.path.join(rootdir, letter))
image = images[random.randint(0, len(images)-1)]
image = os.path.join(rootdir, letter, image)
display(Image(filename=image))
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
stats = {}
cols = 5
rows = 10 // cols  # integer division so subplot indexing works in Python 3 too
f, grid = plt.subplots(rows, cols)
counter = 0
for picklefile in train_datasets:
with open(picklefile, 'rb') as f:  # pickles must be opened in binary mode
dataset = pickle.load(f)
L = picklefile.split("/")[-1].replace(".pickle", "")
stats[L]= len(dataset)
grid[counter // cols][counter % cols].imshow(dataset[random.randint(0, len(dataset) - 1)])
counter += 1
print(stats)
plt.bar(range(len(stats)), stats.values(), align='center')
plt.xticks(range(len(stats)), stats.keys())
plt.show()
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
print("Shuffled")
cols = 5
rows = 10 // cols
for h, ds in {'Train': train_dataset, 'Test':test_dataset, 'Validation': valid_dataset}.items():
print(h)
_, grid = plt.subplots(rows, cols)
counter = 0
for i in range(10):
grid[counter // cols][counter % cols].imshow(randindex(ds))
counter += 1
plt.show()
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
# Hash is computed using Zobrist Hashing Algorithm https://en.wikipedia.org/wiki/Zobrist_hashing
import uuid
min_val = 0
max_val = 255
tot_vals = max_val - min_val + 1 # possible entries for each pixel
ZB = np.zeros(shape=(image_size, image_size, tot_vals), dtype=object) # Zobrist Board
for i in range(image_size):
for j in range(image_size):
for k in range(tot_vals):
randmbits = uuid.uuid4().int  # already an int; long() was Python 2 only
ZB[i][j][k] = randmbits
print("Zobrist Board initialized")
def hashfunc(img):
h = 0
for i in range(image_size):
for j in range(image_size):
k = img[i][j]
# color is in range of [-1.0, 1.0];
# converting to [0, 255]
k = int(k * 127) + 128
assert k >= min_val
assert k <= max_val
h ^= ZB[i][j][k]
return h
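# Note on the hash above: because XOR is commutative and associative, this
# Zobrist hash is invariant to the order in which pixels are visited, and two
# images with identical pixel values always collide -- exactly what we want
# for duplicate detection, while distinct images collide with negligible
# probability given the 128-bit random entries.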
def aggr_dictval_reptn(d): # Finds the total count of repeated values
return sum(map(lambda x: x - 1, filter(lambda x: x > 1, d.values())))
def find_overlap(ds):
d = {}
percent = len(ds) // 100
i = 0
for img in ds:
h = hashfunc(img)
d[h] = d.get(h, 0) + 1
if i % percent == 0:
print("%d%%.. " % (i/percent), end=""),
i += 1
tot = len(ds)
reptn = aggr_dictval_reptn(d)
ovrlp = reptn / float(tot)
return ovrlp, d
def find_overlap_hash(ds1, ds2):
ovrlp1, d1 = find_overlap(ds1)
print("overlap in ds1 = %f" % ovrlp1)
ovrlp2, d2 = find_overlap(ds2)
print("overlap in ds2 = %f" % ovrlp2)
# finding duplication across datasets
d = d1 # not making another copy!
for h, c in d2.items():
d[h] = d.get(h, 0) + c
reptn = aggr_dictval_reptn(d)
ovrlp = reptn / float(len(ds1) + len(ds2))
print("repeatation across datasets = %d" % reptn)
print("overlap across datasets = %f" % ovrlp)
print("Starting")
find_overlap_hash(train_dataset, test_dataset)
print("Done")
lg_model = LogisticRegression(C=1e5)
X_data = np.array(list(map(lambda x: x.flatten(), train_dataset)))
lg_model = lg_model.fit(X_data, train_labels)
print("Fitting Done")
pickle_file = 'logistic_regr_model.pickle'
with open(pickle_file, 'wb') as handle:
pickle.dump(lg_model, handle)
print("Model dumped")
statinfo = os.stat(pickle_file)
print('Model pickle size:', statinfo.st_size)
error = 0
tds = np.array(list(map(lambda x: x.flatten(), test_dataset)))
Y = lg_model.predict(tds)
Y_ = test_labels
assert len(Y) == len(Y_) == len(test_dataset)
for i in range(len(Y)):
if Y[i] != Y_[i]:
error += 1
print("Error = %d out of %d, i.e. %.2f%%" % (error, len(test_dataset), 100.0 * error/len(test_dataset)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19000. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
Step5: Problem 1
Step7: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
Step8: Problem 2
Step9: Problem 3
Step10: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Step11: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step12: Problem 4
Step13: Finally, let's save the data for later reuse
Step14: Problem 5
Step15: Problem 6
|
4,609
|
<ASSISTANT_TASK:>
Python Code:
ph_sel_name = "all-ph"
data_id = "7d"
# ph_sel_name = "all-ph"
# data_id = "7d"
from fretbursts import *
init_notebook()
from IPython.display import display
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
d = loader.photon_hdf5(filename=files_dict[data_id])
d.ph_times_t, d.det_t
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d
d.time_max
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
def hsm_mode(s):
"""Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
"""
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
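# Quick sanity check (illustrative, not part of the analysis): for a unimodal
# sample the HSM should land near the peak, e.g.
# hsm_mode(np.random.normal(0, 1, 10000)) should be close to 0.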
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
ds_fret.fit_E_m(weights='size')
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
sample = data_id
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Step5: Laser alternation selection
Step6: We need to define some parameters
Step7: We should check if everithing is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
Step10: Or check the measurements duration
Step11: Compute background
Step12: Burst search and selection
Step14: Donor Leakage fit
Step15: Gaussian Fit
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables
|
4,610
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from bqplot import Figure, LinearScale, ColorScale, Color, Axis, HeatMap, ColorAxis
from ipywidgets import Layout
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
color = np.cos(X ** 2 + Y ** 2)
x_sc, y_sc, col_sc = LinearScale(), LinearScale(), ColorScale(scheme="RdYlBu")
heat = HeatMap(x=x, y=y, color=color, scales={"x": x_sc, "y": y_sc, "color": col_sc})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation="vertical")
ax_c = ColorAxis(scale=col_sc)
fig = Figure(
marks=[heat],
axes=[ax_x, ax_y, ax_c],
title="Cosine",
layout=Layout(width="650px", height="650px"),
min_aspect_ratio=1,
max_aspect_ratio=1,
padding_y=0,
)
fig
from scipy.misc import ascent
Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]
col_sc = ColorScale(scheme="Greys", reverse=True)
scales = {"color": col_sc}
ascent = HeatMap(color=Z, scales=scales)
img = Figure(
title="Ascent",
marks=[ascent],
layout=Layout(width="650px", height="650px"),
min_aspect_ratio=aspect_ratio,
max_aspect_ratio=aspect_ratio,
padding_y=0,
)
img
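# The marks are live widgets: assigning new data to `ascent.color` (or
# `heat.color` above) re-renders the displayed figure in place -- a sketch,
# assuming the figures are displayed:
# ascent.color = Z.T  # e.g. show the transposed image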
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Input
Step2: Plotting a 2-dimensional function
Step3: Displaying an image
|
4,611
|
<ASSISTANT_TASK:>
Python Code:
%%javascript
delete requirejs.s.contexts._.defined.CustomViewModule;
define('CustomViewModule', ['jquery', 'widgets/js/widget'], function($, widget) {
var CustomView = widget.DOMWidgetView.extend({
});
return {CustomView: CustomView};
});
from IPython.html.widgets import DOMWidget
from IPython.display import display
from IPython.utils.traitlets import Unicode
class CustomWidget(DOMWidget):
_view_module = Unicode('CustomViewModule', sync=True)
_view_name = Unicode('CustomView', sync=True)
display(CustomWidget())
answer('2_1.js')
answer('2_1.py')
from IPython.html.widgets import DOMWidget
from IPython.display import display
from IPython.utils.traitlets import Unicode
class ColorWidget(DOMWidget):
_view_module = Unicode('ColorViewModule', sync=True)
_view_name = Unicode('ColorView', sync=True)
%%javascript
delete requirejs.s.contexts._.defined.ColorViewModule;
define('ColorViewModule', ['jquery', 'widgets/js/widget'], function($, widget) {
var ColorView = widget.DOMWidgetView.extend({
});
return {ColorView: ColorView};
});
answer('2_2.py')
answer('2_2_1.js')
answer('2_2_2.js')
answer('2_2.js')
w = ColorWidget()
display(w)
display(w)
w.value = '#00FF00'
w.value
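# Once a `value` trait is declared with sync=True (part of the exercise),
# assignments like the one above are pushed to every displayed view, and
# edits made in the browser flow back into `w.value` in the kernel.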
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the template below, make a color picker widget. This can be done in a few steps
|
4,612
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Laura Gwilliams <laura.gwilliams@nyu.edu>
# Jean-Remi King <jeanremi.king@gmail.com>
# Alex Barachant <alexandre.barachant@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, create_info, events_from_annotations
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# Extract information from the raw file
sfreq = raw.info['sfreq']
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
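# e.g. with n_cycles = 10 and max(freqs) = 25 Hz, window_spacing
# = 10 / 25 / 2 = 0.2 s, so windows are centered every 0.2 s in [tmin, tmax)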
# Instantiate label encoder
le = LabelEncoder()
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
X = epochs.get_data()
# Save mean scores over folds for each frequency and time window
freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
# Roll covariance, csp and lda over time
for t, w_time in enumerate(centered_w_times):
# Center the min and max of the window
w_tmin = w_time - w_size / 2.
w_tmax = w_time + w_size / 2.
# Crop data into time-window of interest
X = epochs.copy().crop(w_tmin, w_tmax).get_data()
# Save mean scores over folds for each frequency and time window
tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
Step2: Loop through frequencies, apply classifier and save scores
Step3: Plot frequency results
Step4: Loop through frequencies and time, apply classifier and save scores
Step5: Plot time-frequency results
|
4,613
|
<ASSISTANT_TASK:>
Python Code:
mu, sigma = 64, 8
popn = np.random.normal(loc=mu,scale=sigma, size=100000)
truemu, truesigma = np.mean(popn), np.std(popn)
s = """\
For the population of interest, the true mean is {}
and the true standard deviation is {}"""
print(s.format(truemu,truesigma))
plt.hist(popn, bins=50, color='gray', alpha=0.75, histtype='stepfilled')
plt.xlabel("X")
plt.ylabel("Frequency")
pass
sample = np.random.choice(popn, size=60, replace=False)
s = """\
For the population of interest, the point estimates of the
mean and standard deviation are {} and {}, respectively"""
print(s.format(np.mean(sample),np.std(sample,ddof=1)))
plt.hist(sample, color='steelblue', alpha=0.75,
histtype='stepfilled',label='sample')
plt.xlabel("X")
plt.ylabel("Frequency")
pass
plt.hist(popn, normed=True, bins=50, label='population',
color='gray', alpha=0.75, histtype='stepfilled')
plt.hist(sample, normed=True, label='sample',
color='steelblue', alpha=0.75, histtype='stepfilled')
plt.xlabel("X")
plt.ylabel("Density")
plt.legend(loc="best")
pass
smeans = []
for i in range(1000):
rsample = np.random.choice(popn, size=60, replace=False)
smeans.append(np.mean(rsample))
plt.hist(smeans, normed=True, bins=30, label='simulated\ndistn of\n means',
color='firebrick', alpha=0.75, histtype='stepfilled')
plt.xlabel("mean(X)")
plt.ylabel("Frequency")
pass
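# Sanity check against the CLT: the spread of the simulated means should be
# close to the standard error sigma / sqrt(n) = 8 / sqrt(60) ~ 1.03
print("SD of sample means:", np.std(smeans))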
plt.hist(popn, normed=True, bins=50, label='population',
color='gray', alpha=0.75, histtype='stepfilled')
plt.hist(sample, normed=True, label='sample',
color='steelblue', alpha=0.75, histtype='stepfilled')
plt.hist(smeans, normed=True, bins=50, label='simulated\ndistn of\n means',
color='firebrick', alpha=0.75, histtype='stepfilled')
plt.xlabel("X")
plt.ylabel("Density")
plt.legend(loc="best")
plt.ylim(0,0.06) # comment out this line to remove truncation
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Population distribution
Step2: This is what the population distribution looks like when represented as a frequency histogram.
Step4: Sample distribution
Step5: Here is what the frequency histogram of the sample looks like.
Step6: Here are density histograms of the population (gray) and the sample (blue) drawn together.
Step7: Sampling distribution of the mean
Step8: Here is the frequency histogram of the sampling distribution of the mean.
Step9: Here are the density histograms of the population (grey), our first sample (blue), and the sampling distribution of the mean(red), all drawn in the same plot.
|
4,614
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpld3 import plugins, utils
import geopandas as gp
import pandas as pd
from shapely.wkt import loads
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
#Modules developed for this project
from transport_network_modeling import network_visualization as net_v
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual, IntSlider
import ipywidgets as widgets
#import criticality results
result_df_loc = r'./criticality_results/result_interdiction_1107noz2_v03.csv'
result_df = pd.read_csv(result_df_loc)
#import district shapefile for background
district_gdf_loc = r'./model_input_data/BGD_adm1.shp'
district_gdf = gp.read_file(district_gdf_loc)
#alter the 'geometry' string of the dataframe into geometry object
result_df['geometry'] = result_df['geometry'].apply(loads)
#create geodataframe from criticality results dataframe
crs = {'init': 'epsg:4326'}
result_gdf = gp.GeoDataFrame(result_df, crs=crs, geometry=result_df['geometry'])
#record all metrics in a list
all_metric = ['m1_01', 'm1_02', 'm2_01', 'm2_02', 'm3_01', 'm3_02', 'm4_01', 'm4_02', 'm5_01', 'm6_01',
'm7_01', 'm7_02', 'm7_03', 'm8_01', 'm8_02', 'm8_03', 'm9_01', 'm10']
#create ranking columns for each metric
for metric in all_metric:
result_gdf[metric + '_rank'] = result_gdf[metric].rank(ascending=False)
#create special colormap for the visualization
cmap = plt.get_cmap('YlOrRd')
new_cmap1 = net_v.truncate_colormap(cmap, 0.3, 1)
cmap = plt.get_cmap('Blues')
new_cmap2 = net_v.truncate_colormap(cmap, 0.3, 1)
widgets.interact_manual(net_v.plot_interactive, rank=widgets.IntSlider(min=50, max=500, step=10, value=50),
metric=widgets.Dropdown(options=all_metric, value='m1_01'),
show_division=widgets.Checkbox(value=False), result_gdf=fixed(result_gdf),
cmaps=fixed([new_cmap1, new_cmap2]), district_gdf=fixed(district_gdf));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Interactive visualization
Step2: There are three elements that can be adjusted in this interactive visualization
|
4,615
|
<ASSISTANT_TASK:>
Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
def calculate_initial_centers(dataset, k):
"""Initialize the starting centroids arbitrarily
Arguments:
dataset -- data set - [m,n]
k -- desired number of centroids
Returns:
centroids -- list of computed centroids - [k,n]
"""
#### CODE HERE ####
### END OF CODE ###
return centroids
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
def euclidean_distance(a, b):
"""Compute the Euclidean distance between points a and b
Arguments:
a -- a point in space - [1,n]
b -- a point in space - [1,n]
Returns:
distance -- Euclidean distance between the points
"""
#### CODE HERE ####
### END OF CODE ###
return distance
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distância calculada corretamente!")
else:
print("Função de distância incorreta")
def nearest_centroid(a, centroids):
"""Compute the index of the centroid nearest to point a
Arguments:
a -- a point in space - [1,n]
centroids -- list of centroids - [k,n]
Returns:
nearest_index -- index of the nearest centroid
"""
#### CODE HERE ####
### END OF CODE ###
return nearest_index
# Select a random point from the dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Use the function to find the nearest centroid
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plot the data ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plot the chosen random point in a different color
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plot the nearest centroid in a different color
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Draw a line from the chosen point to the selected centroid
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
[a[1], centroids[idx_nearest_centroid,1]],c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],))
plt.show()
def all_nearest_centroids(dataset, centroids):
"""Compute the index of the nearest centroid for each
point in the dataset
Arguments:
dataset -- data set - [m,n]
centroids -- list of centroids - [k,n]
Returns:
nearest_indexes -- indexes of the nearest centroids - [m,1]
"""
#### CODE HERE ####
### END OF CODE ###
return nearest_indexes
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
def inertia(dataset, centroids, nearest_indexes):
"""Sum of squared distances of the samples to the
nearest cluster center.
Arguments:
dataset -- data set - [m,n]
centroids -- list of centroids - [k,n]
nearest_indexes -- indexes of the nearest centroids - [m,1]
Returns:
inertia -- total sum of squared distances between the
data of a cluster and its centroid
"""
#### CODE HERE ####
### END OF CODE ###
return inertia
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
print("Inertia calculada corretamente!")
else:
print("Função de inertia incorreta!")
# Use the function to check the inertia of your clusters
inertia(dataset, centroids, nearest_indexes)
def update_centroids(dataset, centroids, nearest_indexes):
"""Update the centroids
Arguments:
dataset -- data set - [m,n]
centroids -- list of centroids - [k,n]
nearest_indexes -- indexes of the nearest centroids - [m,1]
Returns:
centroids -- list of updated centroids - [k,n]
"""
#### CODE HERE ####
### END OF CODE ###
return centroids
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plot the clusters ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
centroids = update_centroids(dataset, centroids, nearest_indexes)
class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Initialize the centroids
self.cluster_centers_ = [None]
# Compute the cluster of each sample
self.labels_ = [None]
# Compute the initial inertia
old_inertia = [None]
for index in [None]:
#### CODE HERE ####
### END OF CODE ###
return self
def predict(self, X):
return [None]
kmeans = KMeans(n_clusters=3)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
#### CODE HERE ####
#### CODE HERE ####
#### CODE HERE ####
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. Implement the K-means algorithm
Step3: Test the function you created and visualize the computed centroids.
Step5: 1.2 Define the clusters
Step6: Test the function you created.
Step8: 1.2.2 Compute the nearest centroid
Step9: Test the function you created
Step11: 1.2.3 Compute the nearest centroid for each point in the dataset
Step12: Test the function by visualizing the resulting clusters.
Step14: 1.3 Evaluation metric
Step15: Test the coded function by running the code below.
Step17: 1.4 Update the clusters
Step18: Visualize the resulting clusters
Step19: Run the update function and visualize the resulting clusters again
Step20: 2. K-means
Step21: Check the result of the algorithm below!
Step22: 2.2 Compare with the Scikit-Learn algorithm
Step23: 3. Elbow method
Step24: 4. Real dataset
|
4,616
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
_X = np.array([[1,2,3], [4,5,6]])
X = tf.convert_to_tensor(_X)
out = tf.cumsum(X, axis=1)
print(out.eval())
_out = np.cumsum(_X, axis=1)
assert np.array_equal(out.eval(), _out) # tf.cumsum == np.cumsum
_X = np.array([[1,2,3], [4,5,6]])
X = tf.convert_to_tensor(_X)
out = tf.cumprod(X, axis=1)
print(out.eval())
_out = np.cumprod(_X, axis=1)
assert np.array_equal(out.eval(), _out) # tf.cumprod == np.cumprod
_X = np.array(
[[1,2,3,4],
[-1,-2,-3,-4],
[-10,-20,-30,-40],
[10,20,30,40]])
X = tf.convert_to_tensor(_X)
out = tf.segment_sum(X, [0, 0, 1, 1])
print(out.eval())
_X = np.array(
[[1,2,3,4],
[1,1/2,1/3,1/4],
[1,2,3,4],
[-1,-1,-1,-1]])
X = tf.convert_to_tensor(_X)
out = tf.segment_prod(X, [0, 0, 1, 1])
print(out.eval())
_X = np.array(
[[1,4,5,7],
[2,3,6,8],
[1,2,3,4],
[-1,-2,-3,-4]])
X = tf.convert_to_tensor(_X)
out = tf.segment_min(X, [0, 0, 1, 1])
print(out.eval())
_X = np.array(
[[1,4,5,7],
[2,3,6,8],
[1,2,3,4],
[-1,-2,-3,-4]])
X = tf.convert_to_tensor(_X)
out = tf.segment_max(X, [0, 0, 1, 1])
print(out.eval())
_X = np.array(
[[1,2,3,4],
[5,6,7,8],
[-1,-2,-3,-4],
[-5,-6,-7,-8]])
X = tf.convert_to_tensor(_X)
out = tf.segment_mean(X, [0, 0, 1, 1])
print(out.eval())
_X = np.array(
[[1,2,3,4],
[-1,-2,-3,-4],
[-10,-20,-30,-40],
[10,20,30,40]])
X = tf.convert_to_tensor(_X)
out = tf.unsorted_segment_sum(X, [1, 0, 1, 0], 2)
print(out.eval())
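# Unlike tf.segment_sum above, the segment ids here need not be sorted;
# instead the number of segments (2) must be passed explicitly.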
_X = np.random.permutation(10).reshape((2, 5))
print("_X =", _X)
X = tf.convert_to_tensor(_X)
out1 = tf.argmax(X, axis=1)
out2 = tf.argmin(X, axis=1)
print(out1.eval())
print(out2.eval())
_out1 = np.argmax(_X, axis=1)
_out2 = np.argmin(_X, axis=1)
assert np.allclose(out1.eval(), _out1)
assert np.allclose(out2.eval(), _out2)
# tf.argmax == np.argmax
# tf.argmin == np.argmin
_x = np.array([0, 1, 2, 5, 0])
_y = np.array([0, 1, 4])
x = tf.convert_to_tensor(_x)
y = tf.convert_to_tensor(_y)
out = tf.setdiff1d(x, y)[0]
print(out.eval())
_out = np.setdiff1d(_x, _y)
assert np.array_equal(out.eval(), _out)
# Note that tf.setdiff1d returns a tuple of (out, idx),
# whereas np.setdiff1d returns out only.
_X = np.arange(1, 10).reshape(3, 3)
X = tf.convert_to_tensor(_X)
out = tf.where(X < 4, X, X*10)
print(out.eval())
_out = np.where(_X < 4, _X, _X*10)
assert np.array_equal(out.eval(), _out) # tf.where == np.where
_x = np.array([1, 2, 6, 4, 2, 3, 2])
x = tf.convert_to_tensor(_x)
out, indices = tf.unique(x)
print(out.eval())
print(indices.eval())
_out, _indices = np.unique(_x, return_inverse=True)
print("sorted unique elements =", _out)
print("indices =", _indices)
# Note that tf.unique keeps the original order, whereas
# np.unique sorts the unique members.
# Check the documentation on tf.SparseTensor if you are not
# comfortable with sparse tensors.
hypothesis = tf.SparseTensor(
[[0, 0],[0, 1],[0, 2],[0, 4]],
["a", "b", "c", "a"],
(1, 5))
# Note that this is equivalent to the dense tensor.
# [["a", "b", "c", 0, "a"]]
truth = tf.SparseTensor(
[[0, 0],[0, 2],[0, 4]],
["a", "c", "b"],
(1, 6))
# This is equivalent to the dense tensor.
# [["a", 0, "c", 0, "b", 0]]
out1 = tf.edit_distance(hypothesis, truth, normalize=False)
out2 = tf.edit_distance(hypothesis, truth, normalize=True)
print(out1.eval()) # 2 <- one deletion ("b") and one substitution ("a" to "b")
print(out2.eval()) # 0.6666 <- 2 / 6
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE on notation
Step2: Q2. Compute the cumulative product of X along the second axis.
Step3: Segmentation
Step4: Q4. Compute the product along the first two elements and the last two elements of X separately.
Step5: Q5. Compute the minimum along the first two elements and the last two elements of X separately.
Step6: Q6. Compute the maximum along the first two elements and the last two elements of X separately.
Step7: Q7. Compute the mean along the first two elements and the last two elements of X separately.
Step8: Q8. Compute the sum along the second and fourth and
Step9: Sequence Comparison and Indexing
Step10: Q10. Find the unique elements of x that are not present in y.
Step11: Q11. Return the elements of X, if X < 4, otherwise X*10.
Step12: Q12. Get unique elements and their indices from x.
Step13: Q13. Compute the edit distance between hypothesis and truth.
|
4,617
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from builtins import range
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from singa import tensor
from singa import optimizer
from singa import loss
from singa import layer
#from singa.proto import model_pb2
# generate the boundary
f = lambda x: (5 * x + 1)
bd_x = np.linspace(-1., 1, 200)
bd_y = f(bd_x)
# generate the training data
x = np.random.uniform(-1, 1, 400)
y = f(x) + 2 * np.random.randn(len(x))
# convert training data to 2d space
label = np.asarray([5 * a + 1 > b for (a, b) in zip(x, y)])
data = np.array([[a,b] for (a, b) in zip(x, y)], dtype=np.float32)
plt.plot(bd_x, bd_y, 'k', label = 'boundary')
plt.plot(x[label], y[label], 'ro', ms=7)
plt.plot(x[~label], y[~label], 'bo', ms=7)
plt.legend(loc='best')
plt.show()
# create layers
layer.engine = 'singacpp'
dense = layer.Dense('dense', 2, input_sample_shape=(2,))
p = dense.param_values()
print(p[0].shape, p[1].shape)
# init parameters
p[0].gaussian(0, 0.1) # weight matrix
p[1].set_value(0) # bias
# setup optimizer and loss func
opt = optimizer.SGD(lr=0.05)
lossfunc = loss.SoftmaxCrossEntropy()
tr_data = tensor.from_numpy(data)
tr_label = tensor.from_numpy(label.astype(int))
# plot the classification results using the current model parameters
def plot_status(w, b, title='origin'):
global bd_x, bd_y, data
pr = np.add(np.dot(data, w), b)
lbl = pr[:, 0] < pr[:, 1]
plt.figure(figsize=(6,3));
plt.plot(bd_x, bd_y, 'k', label='truth line')
plt.plot(data[lbl, 0], data[lbl, 1], 'ro', ms=7)
plt.plot(data[~lbl, 0], data[~lbl, 1], 'bo', ms=7)
plt.legend(loc='best')
plt.title(title)
plt.xlim(-1, 1);
plt.ylim(data[:, 1].min()-1, data[:, 1].max()+1)
# sgd
for i in range(1000):
act = dense.forward(True, tr_data)
lvalue = lossfunc.forward(True, act, tr_label)
dact = lossfunc.backward()
dact /= tr_data.shape[0]
_, dp = dense.backward(True, dact)
# update the parameters
opt.apply(i, dp[0], p[0], 'w')
opt.apply(i, dp[1], p[1], 'b')
if (i%100 == 0):
print('training loss = %f' % lvalue.l1())
plot_status(tensor.to_numpy(p[0]), tensor.to_numpy(p[1]),title='epoch %d' % i)
#train(dat, label)
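# Note on the loop above: the loss gradient `dact` is divided by the batch
# size before the backward pass, so the SGD updates correspond to the mean
# (not the sum) of the per-sample gradients.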
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To import PySINGA modules
Step2: Task is to train a MLP model to classify 2-d points into the positive and negative categories.
Step3: We generate the datapoints by adding a random noise to the data points on the boundary line
Step4: We label the data points above the boundary line as positive points with label 1 and other data points with label 0 (negative).
Step5: Create the MLP model
Step6: Each layer is created with a layer name and other meta data, e.g., the dimension size for the dense layer. The last argument is the shape of a single input sample of this layer.
|
4,618
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import torch
import matplotlib.pyplot as plt
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# get the training datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
# prepare data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (3,3))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Discriminator, self).__init__()
# define all layers
def forward(self, x):
# flatten image
# pass x through all layers
# apply leaky relu activation to all hidden layers
return x
class Generator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Generator, self).__init__()
# define all layers
def forward(self, x):
# pass x through all layers
# final layer should have tanh applied
return x
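# Note: a tanh output in [-1, 1] matches the real images, which are rescaled
# from [0, 1) to [-1, 1) in the training loop before being fed to the
# discriminator.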
# Discriminator hyperparams
# Size of input image to discriminator (28*28)
input_size =
# Size of discriminator output (real or fake)
d_output_size =
# Size of *last* hidden layer in the discriminator
d_hidden_size =
# Generator hyperparams
# Size of latent vector to give to generator
z_size =
# Size of discriminator output (generated image)
g_output_size =
# Size of *first* hidden layer in the generator
g_hidden_size =
# instantiate discriminator and generator
D = Discriminator(input_size, d_hidden_size, d_output_size)
G = Generator(z_size, g_hidden_size, g_output_size)
# check that they are as you expect
print(D)
print()
print(G)
# Calculate losses
def real_loss(D_out, smooth=False):
# compare logits to real labels
# smooth labels if smooth=True
loss =
return loss
def fake_loss(D_out):
# compare logits to fake labels
loss =
return loss
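# Hint (one common choice, stated as an assumption rather than the required
# solution): with raw logits D_out, nn.BCEWithLogitsLoss can compare against
# torch.ones(batch_size) * 0.9 for smoothed real labels and against
# torch.zeros(batch_size) for fake labels.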
import torch.optim as optim
# learning rate for optimizers
lr = 0.002
# Create optimizers for the discriminator and generator
d_optimizer =
g_optimizer =
import pickle as pkl
# training hyperparams
num_epochs = 40
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 400
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
D.train()
G.train()
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
## Important rescaling step ##
real_images = real_images*2 - 1 # rescale input images from [0,1) to [-1, 1)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
# 1. Train with real images
# Compute the discriminator losses on real images
# use smoothed labels
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
fake_images = G(z)
# Compute the discriminator losses on fake images
# add up real and fake losses and perform backprop
d_loss =
# =========================================
# TRAIN THE GENERATOR
# =========================================
# 1. Train with fake images and flipped labels
# Generate fake images
# Compute the discriminator losses on fake images
# using flipped labels!
# perform backprop
g_loss =
# Print some loss stats
if batch_i % print_every == 0:
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# generate and save sample, fake images
G.eval() # eval mode for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to train mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
# -1 indicates final epoch's samples (the last in the list)
view_samples(-1, samples)
rows = 10 # split epochs into 10, so 100/10 = every 10 epochs
cols = 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
img = img.detach()
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
# randomly generated, new latent vectors
sample_size=16
rand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
rand_z = torch.from_numpy(rand_z).float()
G.eval() # eval mode
# generated samples
rand_images = G(rand_z)
# 0 indicates the first set of samples in the passed in list
# and we only have one batch of samples, here
view_samples(0, [rand_images])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the data
Step2: Define the Model
Step3: Generator
Step4: Model hyperparameters
Step5: Build complete network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs.
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
|
4,619
|
<ASSISTANT_TASK:>
Python Code:
training_data = pd.read_csv('training_set_values.csv', index_col=0)
training_label = pd.read_csv('training_set_labels.csv', index_col=0)
test_data = pd.read_csv('test_set_values.csv', index_col=0)
# Merge test and training data to apply the same data-management operations to both
data = training_data.append(test_data).sort_index()
# Lots of waterpoints are missing a value for amount_tsh. For that field the
# missing values are replaced by the mean so that less data is dropped before the model fit
imp = preprocessing.Imputer(missing_values=0, strategy='mean')
imp.fit(data['amount_tsh'].values.reshape(-1, 1))
data['water_amount'] = imp.transform(data['amount_tsh'].values.reshape(-1, 1)).ravel()
imp = preprocessing.Imputer(missing_values=0, strategy='median')
imp.fit(data['construction_year'].values.reshape(-1, 1))
data['construction_year'] = imp.transform(data['construction_year'].values.reshape(-1, 1)).ravel()
imp = preprocessing.Imputer(missing_values=0, strategy='mean')
imp.fit(data['gps_height'].values.reshape(-1, 1))
data['height'] = imp.transform(data['gps_height'].values.reshape(-1, 1)).ravel()
# Recode missing data as NaN
for field in ('longitude', 'latitude'):
data[field] = data[field].map(lambda x: x if x else pd.np.nan)
def group_installer(data):
def gather_installer(x):
installer_map = {
'organisation' : ('bank', 'msf', 'wwf', 'unicef', 'unisef', 'oxfam', 'oxfarm', 'club', 'care', 'without', 'faim', 'rain', 'red', 'angels', 'fundat', 'foundation'),
'church' : ('church', 'churc', 'rcchurch', 'roman', 'missionsry', 'lutheran', 'islamic', 'islam', 'vision'),
'private' : ('consulting', 'engineer', 'private', 'ltd', 'co.ltd', 'contractor', 'enterp', 'enterpr', 'company', 'contract'),
'community' : ('village', 'community', 'communit', 'district', 'council', 'commu', 'villigers', 'villagers'),
'government' : ('government', 'gov', 'govt', 'gover', 'gove', 'governme', 'ministry'),
'other' : ('0', 'nan', 'known', 'other', 'unknown'), # Group 'unknown' data with 'other' as finally this means the same for interpretation
'danida' : ('danida', 'danid'),
'foreign government' : ('netherlands', 'germany', 'european')
}
for substr in x.split():
for subsubstr in substr.split('/'):
for key in installer_map:
if subsubstr in installer_map[key]:
return key
return x
lower_data = data.map(lambda x: str(x).lower())
tmp_data = lower_data.map(gather_installer)
top10 = list(tmp_data.value_counts()[:10].index)
return tmp_data.map(lambda x: x if x in top10 else 'other')
data['installer'] = group_installer(data.installer)
data['funder'] = group_installer(data.funder)
clean_data = (data.iloc[training_data.index]
.join(training_label['status_group'])
.dropna())
# Create two columns one collapsing 'functional' and 'functional needs repair'
# and the other one collapsing 'non functional' and 'functional needs repair'
clean_data['functional'] = clean_data['status_group'].map({'functional' : 1,
'functional needs repair' : 1,
'non functional' : 0})
clean_data['no_repairs'] = clean_data['status_group'].map({'functional' : 1,
'functional needs repair' : 0,
'non functional' : 0})
# Extract predictors and convert categorical variables in dichotomic variables
predictors_name = ['water_amount', 'height', 'longitude', 'latitude',
'basin', 'region', 'population', 'public_meeting', 'management_group',
'permit', 'construction_year', 'extraction_type_class', 'payment_type',
'quality_group', 'quantity_group', 'source_type', 'waterpoint_type_group',
'installer', 'funder']
categorical_predictors = ('basin', 'region', 'management_group', 'extraction_type_class',
'payment_type', 'quality_group', 'quantity_group',
'source_type', 'waterpoint_type_group', 'installer', 'funder')
process_data = pd.DataFrame()
for name in predictors_name:
if name in categorical_predictors:
classes = data[name].unique()
deployed_categories = preprocessing.label_binarize(data[name], classes=classes)
# Avoid class name collision
classe_names = list()
for c in classes:
if c in process_data.columns:
classe_names.append('_'.join((c, name)))
else:
classe_names.append(c)
tmp_df = pd.DataFrame(deployed_categories,
columns=classe_names,
index=data.index)
process_data = process_data.join(tmp_df)
else:
process_data[name] = data[name]
predictors_columns = process_data.columns
deployed_data = (process_data.iloc[training_data.index]
.join(training_label['status_group'])
.dropna())
# Create two columns one collapsing 'functional' and 'functional needs repair'
# and the other one collapsing 'non functional' and 'functional needs repair'
deployed_data['functional'] = deployed_data['status_group'].map({'functional' : 1,
'functional needs repair' : 1,
'non functional' : 0})
deployed_data['no_repairs'] = deployed_data['status_group'].map({'functional' : 1,
'functional needs repair' : 0,
'non functional' : 0})
predictors = deployed_data[predictors_columns]
# fit an Extra Trees model to the data and look at the first 15 important fields
model = ExtraTreesClassifier()
model.fit(predictors, deployed_data['status_group'])
# display the relative importance of each attribute
cm = sns.light_palette("yellow", as_cmap=True)
display(pd.Series(model.feature_importances_, index=predictors.columns, name='importance')
.sort_values(ascending=False)
.to_frame()
.iloc[:15])
display_markdown("> Table 1 : The 15 most important features in the dataset.", raw=True)
# Extract predictors and convert categorical variables in dichotomic variables
predictors_name = ['height', 'longitude', 'latitude', 'population',
'permit', 'construction_year', 'extraction_type_class', 'payment_type',
'quantity_group', 'waterpoint_type_group']
categorical_predictors = ('extraction_type_class', 'payment_type', 'quantity_group',
'waterpoint_type_group')
process_data = pd.DataFrame()
for name in predictors_name:
if name in categorical_predictors:
classes = data[name].unique()
deployed_categories = preprocessing.label_binarize(data[name], classes=classes)
# Avoid class name collision
classe_names = list()
for c in classes:
if c in process_data.columns or c == 'other':
classe_names.append('_'.join((c, name)))
else:
classe_names.append(c)
tmp_df = pd.DataFrame(deployed_categories,
columns=classe_names,
index=data.index)
process_data = process_data.join(tmp_df)
else:
process_data[name] = data[name]
predictors_columns = process_data.columns
deployed_data = (process_data.iloc[training_data.index]
.join(training_label['status_group'])
.dropna())
# Create two columns one collapsing 'functional' and 'functional needs repair'
# and the other one collapsing 'non functional' and 'functional needs repair'
deployed_data['functional'] = deployed_data['status_group'].map({'functional' : 1,
'functional needs repair' : 1,
'non functional' : 0})
deployed_data['no_repairs'] = deployed_data['status_group'].map({'functional' : 1,
'functional needs repair' : 0,
'non functional' : 0})
predictors = deployed_data[predictors_columns]
pd.set_option('display.float_format', lambda x: '{:.5g}'.format(x))
quantitative_var = dict()
for field in ('gps_height', 'latitude', 'longitude', 'construction_year', 'population'):
if field == 'gps_height':
field_name = 'height'
else:
field_name = ' '.join(field.split('_'))
clean_field = training_data[field].map(lambda x: x if abs(x)>1e-8 else pd.np.nan)
clean_field = clean_field.dropna()
quantitative_var[field_name] = clean_field.describe()
(pd.DataFrame(quantitative_var)
.loc[['count', 'mean', 'std', 'min', 'max']]
.T)
fig, axes = plt.subplots(3, 2,
sharey=True,
gridspec_kw=dict(hspace=0.285),
figsize=(10, 16.5))
axes = axes.ravel()
for i, field in enumerate(('extraction_type_class', 'waterpoint_type_group', 'payment_type',
'quantity_group', 'permit')):
field_name = ' '.join(field.split('_'))
var_analysis = clean_data[['status_group', 'functional', 'no_repairs', field]]
ax = sns.barplot(x=field, y='functional', data=var_analysis, ci=None, ax=axes[i])
ax.set_xlabel(field_name)
if i % 2 == 0:
ax.set_ylabel('functional vs non functional')
else:
ax.set_ylabel('')
lbls = ['\n'.join(l.get_text().split()) for l in ax.get_xticklabels()]
if len(lbls) > 5:
ax.set_xticklabels(lbls, rotation=60)
axes[5].set_visible(False)
fig.suptitle('Functional waterpoint proportion per categorical field', fontsize=14)
plt.subplots_adjust(top=0.97)
plt.show();
fig, axes = plt.subplots(2, 2,
sharey=True,
gridspec_kw=dict(hspace=0.12),
figsize=(10, 11))
axes = axes.ravel()
for i, field in enumerate(('gps_height', 'longitude', 'construction_year', 'population')):
if field == 'gps_height':
field_name = 'height'
else:
field_name = ' '.join(field.split('_'))
var_analysis = clean_data[['status_group', 'functional', 'no_repairs']]
clean_field = clean_data[field].map(lambda x: x if abs(x)>1e-8 else pd.np.nan)
var_analysis = var_analysis.join(clean_field).dropna()
var_analysis[field+'grp2'] = pd.qcut(var_analysis[field],
2,
labels=["50th%tile",
"100th%tile"])
# 4,
# labels=["25th%tile", "50th%tile",
# "75th%tile", "100th%tile"])
ax = sns.barplot(x=field+'grp2', y='functional', data=var_analysis, ci=None, ax=axes[i])
ax.set_xlabel(field_name)
if i % 2 == 0:
ax.set_ylabel('functional vs non functional')
else:
ax.set_ylabel('')
fig.suptitle('Functional waterpoint proportion per quantitative field half (median split)', fontsize=14)
plt.subplots_adjust(top=0.95)
plt.show();
pd.np.random.seed(12345)
pred_train, pred_test, tar_train, tar_test = train_test_split(predictors,
deployed_data['status_group'],
test_size=.4)
trees=range(1, 31)
accuracy=pd.np.zeros(len(trees))
for idx in trees:
classifier=RandomForestClassifier(n_estimators=idx)
classifier=classifier.fit(pred_train,tar_train)
predictions=classifier.predict(pred_test)
accuracy[idx-1]=accuracy_score(tar_test, predictions)
plt.plot(trees, accuracy)
plt.xlabel("Number of trees")
plt.ylabel("Accuracy score")
plt.show();
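# Optional sanity check (a sketch; reuses the last classifier fitted in the
# loop above, i.e. the 30-tree forest):
from sklearn.metrics import confusion_matrix
print(confusion_matrix(tar_test, classifier.predict(pred_test)))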
model = RandomForestClassifier(n_estimators=25)
model = model.fit(predictors, deployed_data['status_group'])
clean_test_data = process_data.iloc[test_data.index].dropna()
predictions = model.predict(clean_test_data)  # reuse the frame computed above instead of rebuilding it
pred = pd.Series(predictions, index=clean_test_data.index, name='status_group')
missing_index = list()
for i in test_data.index:
if i not in clean_test_data.index:
missing_index.append(i)
data_list = list()
pd.np.random.seed(12345)
for rnd in pd.np.random.rand(len(missing_index)):
if rnd < 0.072677:
data_list.append('functional needs repair')
elif rnd < 0.384242 + 0.072677:
data_list.append('non functional')
else:
data_list.append('functional')
fill = pd.Series(data_list, index=missing_index)
pred = pred.append(fill)
to_file = pred[test_data.index]
to_file.to_csv('randomForest.csv', index_label='id', header=('status_group',))
!jupyter nbconvert --to html --template full_nice WaterPumpsPrediction.ipynb
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measures
Step2: Analyses
Step3: Descriptive Statistics
Step4: Bivariate analyses
Step5: To visualize the influence of the quantitative variables on the functional status
Step6: Random Forest Test
Step7: So I ran a random forest with 25 trees on all the training data and submitted the resulting predictions to DrivenData.org; this earned an accuracy score of 76.86%.
Step8: Conclusion
|
4,620
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# Creating a function
def verificaPar(num):
if num % 2 == 0:
return True
else:
return False
# Calling the function and passing a number as a parameter. It returns
# False if the number is odd and True if it is even.
verificaPar(35)
lista = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
lista
filter(verificaPar, lista)
list(filter(verificaPar, lista))
list(filter(lambda x: x%2==0, lista))
list(filter(lambda num: num > 8, lista))
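# For comparison, the same filtering expressed as a list comprehension,
# which is often the more idiomatic choice in Python:
[num for num in lista if num > 8]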
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter
|
4,621
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("3Md5KCCQX-0")
import numpy as np
from scipy import linalg
# https://en.wikipedia.org/wiki/Hermitian_matrix
A = np.matrix('2, 2+1j, 4; 2-1j, 3, 1j; 4, -1j, 1')
assert (A == A.H).all() # expect True
print("A", A, sep='\n')
print("A.H", A.H, sep='\n')
YouTubeVideo("11dNghWC4HI")
A = np.array(
[[-1, 2, 2],
[2, 2, -1],
[2, -1, 2]]) # ordinary numpy Array
M_A = np.matrix(A) # special matrix version
la, v = linalg.eig(A) # get eigenvalues la and eigenvectors v
l1, l2, l3 = list(map(lambda c: c.real, la))
print("Eigenvalues :", l1, l2, l3)
print("Eigenvector 1:", v[:,0])
print("Eigenvector 2:", v[:,1])
print("Eigenvector 3:", v[:,2])
eigen1 = v[:,0].reshape(3, 1)
print("Scaling E1", (M_A * eigen1)/eigen1, sep="\n") # show the scale factor
eigen2 = v[:,1].reshape(3, 1)
print("Scaling E2", (M_A * eigen2)/eigen2, sep="\n") # show the scale factor
eigen3 = v[:,2].reshape(3, 1)
print("Scaling E3", (M_A * eigen3)/eigen3, sep="\n") # show the scale factor
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Yes, you may embed YouTube videos in your IPython notebooks, meaning you may follow up on a presentation with some example interactive code (or static code for display purposes).
Step2: Now let's return to Khan's example. He actually starts his solution in an earlier video, defining matrix A and seeking eigenvalues as a first step....
Step3: Of course the SciPy docs comes with it's own documentation on how the eigen-stuff is found.
|
4,622
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from utils import plot_samples, plot_curves
import time
import numpy as np
# force random seed for results to be reproducible
SEED = 4242
np.random.seed(SEED)
from keras.datasets import mnist
from keras.utils import np_utils
# Load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Display some of the samples
plot_samples(X_train)
X_train.shape
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
# in the first layer we need to specify the input shape
model.add(Dense(10, input_shape=(784,)))
model.add(Activation('softmax'))
model.summary()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
nb_classes = 10
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
y_train, Y_train
from keras.optimizers import SGD
lr = 0.01
# For now we will not decrease the learning rate
decay = 0
optim = SGD(lr=lr, decay=decay, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=optim,
metrics=['accuracy'])
batch_size = 32
nb_epoch = 20
verbose = 2
t = time.time()
history = model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=verbose,validation_data=(X_test, Y_test))
print (time.time() - t, "seconds.")
plot_curves(history,nb_epoch)
score = model.evaluate(X_test, Y_test, verbose=0)
print ("Loss: %f"%(score[0]))
print ("Accuracy: %f"%(score[1]))
import numpy as np
np.random.seed(SEED)
# MODEL DEFINITION
model = Sequential()
# One plausible completion of the exercise (an assumption, not the notebook's
# official solution): a single hidden ReLU layer before the softmax output
model.add(Dense(128, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
# COMPILE & TRAIN: reuse the compile/fit calls from the cell above
import numpy as np
np.random.seed(SEED)
from keras.layers import Dropout
dratio = 0.2
H_DIM = 128
model = Sequential()
# One plausible completion (an assumption): a hidden ReLU layer of size H_DIM,
# regularized with dropout
model.add(Dense(H_DIM, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(dratio))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=optim,
metrics=['accuracy'])
t = time.time()
history = model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=verbose,validation_data=(X_test, Y_test))
print (time.time() - t, "seconds.")
score = model.evaluate(X_test, Y_test, verbose=0)
print ("-"*10)
print ("Loss: %f"%(score[0]))
print ("Accuracy: %f"%(score[1]))
plot_curves(history,nb_epoch)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Multiclass softmax
Step3: Exercise
Step4: Categories need to be converted to one-hot vectors for training
Step5: We are now ready to train. Let's define the optimizer
Step6: In Keras, we need to compile the model to define the loss and the optimizer we want to use. Since we are dealing with a classification problem, we will use the cross entropy loss, which is already defined in keras. Additionally, we will incorporate the accuracy as an additional metric to compute at the end of each epoch
Step7: Now let's train the model. model.fit() will do the training loop for us. We just need to pass the training data X_train and labels Y_train as input, specify the batch_size and the number of epochs nb_epoch we want to do. We also pass the test set (X_test,Y_Test) as validation data, which will allow us to see how the model performs on the test data as training progresses. Let's run it
Step8: We can plot the loss and accuracy curves with the history object returned by model.fit(). The function plot_curves, which is defined in utils.py will do this for us.
Step9: The curve trend indicates that the model may be able to improve if we train it for longer, but for now let's leave it here.
Step10: Adding a hidden layer
Step11: Exercise
Step12: Dropout
|
4,623
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# SOD PR-AUC vs SNN (ionosphere)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=3, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([7, 7], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([0, 101])
ax.set_title('SOD PR-AUC vs SNN (ionosphere)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (shuttle)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=10, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([772, 772], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([24, 1501])
ax.set_title('SOD PR-AUC vs SNN (shuttle)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (breast_cancer)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=17, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([9, 9], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([0, 101])
ax.set_title('SOD PR-AUC vs SNN (breast_cancer)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (satellite)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=24, nrows=7, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([59, 59], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([0, 501])
ax.set_title('SOD PR-AUC vs SNN (satellite)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (mouse)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=32, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([10, 10], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([0, 101])
ax.set_title('SOD PR-AUC vs SNN (mouse)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (kddcup99_5000)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([100, 100], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([24, 401])
ax.set_title('SOD PR-AUC vs SNN (kddcup99 5000)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (kddcup99_10000)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=46, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([200, 200], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([24, 401])
ax.set_title('SOD PR-AUC vs SNN (kddcup99 10000)', fontsize=16)
plt.show()
# SOD PR-AUC vs SNN (kddcup99_20000)
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=53, nrows=5, usecols=[2,4])
fig = plt.figure(figsize=(5,3))
ax = fig.add_axes([0.12, 0.15, 0.8, 0.7])
ax.plot(df[2].values, df[4].values)
ax.plot([400, 400], [0, 1.1])
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('PR-AUC', fontsize=14)
ax.set_ylim([0.0, 1.1])
ax.set_xlim([24, 425])
ax.set_title('SOD PR-AUC vs SNN (kddcup99 20000)', fontsize=16)
plt.show()
# SOD running time (s) vs snn
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5])
fig = plt.figure(figsize=(7,3))
ax = fig.add_axes([0.1, 0.15, 0.63, 0.7])
ax.plot(df[2].values[0:5], df[5].values[0:5], label="5000 pts")
ax.plot(df[2].values[7:12], df[5].values[7:12], label="10000 pts")
ax.plot(df[2].values[14:19], df[5].values[14:19], label="20000 pts")  # rows 14-18 hold the 20000-pt runs (same 7-row stride as below)
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([24, 401])
ax.set_title('SOD running time (s) vs SNN (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.44, 0.75), prop={'family': 'monospace'})
plt.show()
# SOD running time (s) vs # datapoints
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5])
fig = plt.figure(figsize=(7.5,3))
ax = fig.add_axes([0.1, 0.15, 0.69, 0.7])
ax.plot([5000,10000,20000], df[5].values[0::7], label="snn=25")
ax.plot([5000,10000,20000], df[5].values[1::7], label="snn=50")
ax.plot([5000,10000,20000], df[5].values[2::7], label="snn=100")
ax.plot([5000,10000,20000], df[5].values[3::7], label="snn=200")
ax.plot([5000,10000,20000], df[5].values[4::7], label="snn=400")
ax.set_xlabel('# datapoints', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([4900, 21000])
ax.set_title('SOD running time (s) vs #pts (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.32, 0.85), prop={'family': 'monospace'})
plt.show()
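# Rough check of the quadratic-scaling claim (a sketch; assumes the 7-row
# stride per dataset size implied by the [0::7] slicing above): if SOD is
# O(n^2), doubling the point count should roughly quadruple the running time.
for k, snn in zip(range(5), [25, 50, 100, 200, 400]):
    t5, t10, t20 = df[5].values[k], df[5].values[k + 7], df[5].values[k + 14]
    print("snn=%d: t10k/t5k=%.2f, t20k/t10k=%.2f" % (snn, t10 / t5, t20 / t10))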
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import auc
%matplotlib inline
# process SVM PR curves
datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'mouse', 'kddcup99_5000', 'kddcup99_10000']
for name in datasets:
if name == 'kddcup99_5000':
dirname = 'kddcup99'
subdirbase = "outputsvm_5000"
elif name == 'kddcup99_10000':
dirname = 'kddcup99'
subdirbase = "outputsvm_10000"
else:
dirname = name
subdirbase = "outputsvm"
eta = pd.read_csv('%s/%s_eta/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
eta_no_gamma_tuning = pd.read_csv('%s/%s_eta_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
robust = pd.read_csv('%s/%s_robust/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
robust_no_gamma_tuning = pd.read_csv('%s/%s_robust_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
one_class = pd.read_csv('%s/%s_one_class/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
one_class_no_gamma_tuning = pd.read_csv('%s/%s_one_class_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
eta_auc = auc(eta[0], eta[1])
eta_no_gamma_tuning_auc = auc(eta_no_gamma_tuning[0], eta_no_gamma_tuning[1])
robust_auc = auc(robust[0], robust[1])
robust_no_gamma_tuning_auc = auc(robust_no_gamma_tuning[0], robust_no_gamma_tuning[1])
one_class_auc = auc(one_class[0], one_class[1])
one_class_no_gamma_tuning_auc = auc(one_class_no_gamma_tuning[0], one_class_no_gamma_tuning[1])
fig = plt.figure(figsize=(12,5))
ax = fig.add_axes([0.045, 0.1, 0.6, 0.8])
ax.plot(eta[0].values, eta[1].values, label='eta AUC=%f' % eta_auc, lw=2)
ax.plot(eta_no_gamma_tuning[0].values, eta_no_gamma_tuning[1].values, label='eta_noauto AUC=%f' % eta_no_gamma_tuning_auc, lw=2)
ax.plot(robust[0].values, robust[1].values, label='robust AUC=%f' % robust_auc, lw=2)
ax.plot(robust_no_gamma_tuning[0].values, robust_no_gamma_tuning[1].values, label='robust_noauto AUC=%f' % robust_no_gamma_tuning_auc, lw=2)
ax.plot(one_class[0].values, one_class[1].values, label='one_class AUC=%f' % one_class_auc, lw=2)
ax.plot(one_class_no_gamma_tuning[0].values, one_class_no_gamma_tuning[1].values, label='one_class_noauto AUC=%f' % one_class_no_gamma_tuning_auc, lw=2)
ax.set_xlabel('Recall', fontsize=14)
ax.set_ylabel('Precision', fontsize=14)
ax.set_ylim([0.0, 1.05])
ax.set_xlim([0.0, 1.0])
ax.set_title('SVM Precision-Recall (%s)' % name, fontsize=16)
ax.legend(bbox_to_anchor=(1.6, 0.7), prop={'family': 'monospace'})
plt.show()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import auc
%matplotlib inline
# process SVM PR curves vs SOD (optimal) PR curves
datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'kddcup99_5000', 'kddcup99_10000']
for name in datasets:
if name == 'ionosphere':
dirname = name
svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_10snn"
elif name == 'shuttle':
dirname = name
svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_1000snn"
elif name == 'breast_cancer_wisconsin_diagnostic':
dirname = name
svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_25snn"
elif name == 'satellite':
dirname = name
svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_250snn"
elif name == 'mouse':
dirname = name
svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_10snn"
elif name == 'kddcup99_5000':
dirname = 'kddcup99'
svmsubdirbase = "outputsvm_5000_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_5000_100snn"
elif name == 'kddcup99_10000':
dirname = 'kddcup99'
svmsubdirbase = "outputsvm_10000_one_class_no_gamma_tuning"
sodsubdirbase = "outputsod_5000_200snn"
one_class_no_gamma_tuning = pd.read_csv('%s/%s/pr.txt' % (dirname, svmsubdirbase), header=None, index_col=False, skiprows=1)
sod = pd.read_csv('%s/%s/pr-curve.txt' % (dirname, sodsubdirbase), header=None, index_col=False, skiprows=2, sep=' ')
one_class_no_gamma_tuning_auc = auc(one_class_no_gamma_tuning[0], one_class_no_gamma_tuning[1])
sod_auc = auc(sod[0], sod[1])
fig = plt.figure(figsize=(12,5))
ax = fig.add_axes([0.045, 0.1, 0.6, 0.8])
ax.plot(sod[0].values, sod[1].values, label='SOD (optimal) AUC=%f' % sod_auc, lw=2)
ax.plot(one_class_no_gamma_tuning[0].values, one_class_no_gamma_tuning[1].values, label='one_class_noauto AUC=%f' % one_class_no_gamma_tuning_auc, lw=2)
ax.set_xlabel('Recall', fontsize=14)
ax.set_ylabel('Precision', fontsize=14)
ax.set_ylim([0.0, 1.05])
ax.set_xlim([0.0, 1.0])
ax.set_title('Optimal SOD and SVM Precision-Recall (%s)' % name, fontsize=16)
ax.legend(bbox_to_anchor=(1.6, 0.7), prop={'family': 'monospace'})
plt.show()
# SOD optimal running time (s) compared to one-class SVM running time
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[5,12])
fig = plt.figure(figsize=(7.5,3))
ax = fig.add_axes([0.1, 0.15, 0.69, 0.7])
ax.plot([5000,10000,20000], [df[5].values[2], df[5].values[10], df[5].values[18]], label="optimal SOD")
ax.plot([5000,10000,20000], [df[12].values[5], df[12].values[12], df[12].values[14]], label="one-class SVM")
ax.set_xlabel('# datapoints', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([4900, 21000])
ax.set_title('Comparison of SOD and SVM running time (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.5, 0.65), prop={'family': 'monospace'})
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Discussion
Step2: Likewise, we need to know how running time is affected by increasing the size of the dataset. Below we plot several curves with fixed $\text{snn}$ and varying data size. It turns out that the running time grows quadratically $\text{O}(n^2)$ in each case.
Step3: Discussion
Step4: Discussion
Step5: Head-to-head time comparison
|
4,624
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import SDSS
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import copy
# We want to select galaxies, and then are only interested in their positions on the sky.
data = pd.read_csv("downloads/SDSSobjects.csv",usecols=['ra','dec','u','g',\
'r','i','size'])
# Filter out objects with bad magnitude or size measurements:
data = data[(data['u'] > 0) & (data['g'] > 0) & (data['r'] > 0) & (data['i'] > 0) & (data['size'] > 0)]
# Make size cuts, to exclude stars and nearby galaxies, and magnitude cuts, to get good galaxy detections:
data = data[(data['size'] > 0.8) & (data['size'] < 10.0) & (data['i'] > 17) & (data['i'] < 22)]
# Drop the things we're not so interested in:
del data['u'], data['g'], data['r'], data['i'],data['size']
data.head()
Ngals = len(data)
ramin,ramax = np.min(data['ra']),np.max(data['ra'])
decmin,decmax = np.min(data['dec']),np.max(data['dec'])
print(Ngals, "galaxy-like objects in (ra,dec) range (", ramin, ":", ramax, ",", decmin, ":", decmax, ")")
# !pip install --upgrade TreeCorr
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : decmin + (decmax-decmin)*np.random.rand(Ngals)})
print(len(random), type(random))
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')
ax[0].set_xlabel('RA / deg')
ax[0].set_ylabel('Dec. / deg')
data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')
ax[1].set_xlabel('RA / deg')
ax[1].set_ylabel('Dec. / deg')
import treecorr
random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')
data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')
# Set up some correlation function estimator objects:
sep_units='arcmin'
min_sep=0.5
max_sep=10.0
N = 7
bin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N)
dd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
rr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
# Process the data:
dd.process(data_cat)
rr.process(random_cat)
# Combine into a correlation function and its variance:
xi, varxi = dd.calculateXi(rr)
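# A common alternative worth comparing (not plotted below): the Landy-Szalay
# estimator, which also uses data-random cross pair counts. A sketch using
# TreeCorr's calculateXi with a dr argument:
dr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep,
                            sep_units=sep_units, bin_slop=0.05/bin_size)
dr.process(data_cat, random_cat)
xi_ls, varxi_ls = dd.calculateXi(rr, dr)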
plt.figure(figsize=(15,8))
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2)
# plt.xscale('log')
plt.xlabel('$\\theta / {\\rm arcmin}$',fontsize=20)
plt.ylabel('$\\xi(\\theta)$',fontsize=20)
plt.ylim([-0.1,0.2])
plt.grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Correlation Function
Step2: Random Catalogs
Step3: Now let's plot both catalogs, and compare.
Step4: Estimating $\xi(\theta)$
|
4,625
|
<ASSISTANT_TASK:>
Python Code:
from regraph import NXGraph, Neo4jHierarchy, Rule
from regraph import plot_graph, plot_instance, plot_rule
%matplotlib inline
# Define graph G
g = NXGraph()
g.add_nodes_from(["protein", "binding", "region", "compound"])
g.add_edges_from([("region", "protein"), ("protein", "binding"), ("region", "binding"), ("compound", "binding")])
# Define graph T
t = NXGraph()
t.add_nodes_from(["action", "agent"])
t.add_edges_from([("agent", "agent"), ("agent", "action")])
# Create a hierarchy
simple_hierarchy = Neo4jHierarchy(uri="bolt://localhost:7687", user="neo4j", password="admin")
# If you run this notebooks multiple times, you need to clear the graph in the db
simple_hierarchy._clear()
simple_hierarchy.add_graph("G", g, {"name": "Simple protein interaction"})
simple_hierarchy.add_graph("T", t, {"name": "Agent interaction"})
simple_hierarchy.add_typing(
"G", "T",
{"protein": "agent",
"region": "agent",
"compound": "agent",
"binding": "action",
}
)
print(simple_hierarchy)
type(simple_hierarchy.get_graph("T"))
simple_hierarchy.get_typing("G", "T")
lhs = NXGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([("a", "b")])
p = NXGraph()
p.add_nodes_from(["a", "b"])
p.add_edges_from([])
rhs = NXGraph()
rhs.add_nodes_from(["a", "b", "c"])
rhs.add_edges_from([("c", "a"), ("c", "b")])
# By default if `p_lhs` and `p_rhs` are not provided
# to a rule, it tries to construct this homomorphisms
# automatically by matching the names. In this case we
# have defined lhs, p and rhs in such a way that that
# the names of the matching nodes correspond
rule = Rule(p, lhs, rhs)
plot_rule(rule)
instances = simple_hierarchy.find_matching("G", rule.lhs)
print("Instances: ", instances)
instance = {
"a": "protein",
"b": "binding"
}
try:
rhs_instance = simple_hierarchy.rewrite("G", rule, instance, strict=True)
except Exception as e:
print("Error message: ", e)
print("Type: ", type(e))
rhs_typing = {
"T": {"c": "agent"}
}
rhs_instance = simple_hierarchy.rewrite(
"G", rule, instance, rhs_typing=rhs_typing, strict=True)
print("Instance of the RHS in G", rhs_instance)
lhs = NXGraph()
lhs.add_nodes_from(["agent"])
rule = Rule.from_transform(lhs)
_, rhs_clone = rule.inject_clone_node("agent")
plot_rule(rule)
instance = {
"agent": "agent"
}
try:
rhs_instance = simple_hierarchy.rewrite("T", rule, instance, strict=True)
except Exception as e:
print("Error message: ", e)
print("Type: ", type(e))
p_typing = {
"G": {
'protein': 'agent',
'region': 'agent',
'compound': rhs_clone,
'c': 'agent'
}
}
rhs_instance = simple_hierarchy.rewrite("T", rule, instance, p_typing=p_typing, strict=True)
print("Instance of the RHS in G", rhs_instance)
simple_hierarchy.relabel_graph_node('T', rhs_instance['agent'], 'organic_agent')
simple_hierarchy.relabel_graph_node('T', rhs_instance[rhs_clone], 'non_organic_agent')
print(simple_hierarchy.get_typing("G", "T"))
hierarchy = Neo4jHierarchy(uri="bolt://localhost:7687", user="neo4j", password="admin")
hierarchy._clear()
colors = NXGraph()
colors.add_nodes_from([
"green", "red"
])
colors.add_edges_from([
("red", "green"),
("red", "red"),
("green", "green")
])
hierarchy.add_graph("colors", colors)
shapes = NXGraph()
shapes.add_nodes_from(["circle", "square"])
shapes.add_edges_from([
("circle", "square"),
("square", "circle"),
("circle", "circle")
])
hierarchy.add_graph("shapes", shapes)
quality = NXGraph()
quality.add_nodes_from(["good", "bad"])
quality.add_edges_from([
("bad", "bad"),
("bad", "good"),
("good", "good")
])
hierarchy.add_graph("quality", quality)
g1 = NXGraph()
g1.add_nodes_from([
"red_circle",
"red_square",
])
g1.add_edges_from([
("red_circle", "red_square"),
("red_circle", "red_circle"),
("red_square", "red_circle")
])
g1_colors = {
"red_circle": "red",
"red_square": "red",
}
g1_shapes = {
"red_circle": "circle",
"red_square": "square",
}
hierarchy.add_graph("g1", g1)
hierarchy.add_typing("g1", "colors", g1_colors)
hierarchy.add_typing("g1", "shapes", g1_shapes)
g2 = NXGraph()
g2.add_nodes_from([
"good_circle",
"good_square",
"bad_circle",
])
g2.add_edges_from([
("good_circle", "good_square"),
("good_square", "good_circle"),
("bad_circle", "good_circle"),
("bad_circle", "bad_circle"),
])
g2_shapes = {
"good_circle": "circle",
"good_square": "square",
"bad_circle": "circle"
}
g2_quality = {
"good_circle": "good",
"good_square": "good",
"bad_circle": "bad",
}
hierarchy.add_graph("g2", g2)
hierarchy.add_typing("g2", "shapes", g2_shapes)
hierarchy.add_typing("g2", "quality", g2_quality)
g3 = NXGraph()
g3.add_nodes_from([
"good_red_circle",
"bad_red_circle",
"good_red_square",
])
g3.add_edges_from([
("bad_red_circle", "good_red_circle"),
("good_red_square", "good_red_circle"),
("good_red_circle", "good_red_square")
])
g3_g1 = {
"good_red_circle": "red_circle",
"bad_red_circle": "red_circle",
"good_red_square": "red_square"
}
g3_g2 = {
"good_red_circle": "good_circle",
"bad_red_circle": "bad_circle",
"good_red_square": "good_square",
}
hierarchy.add_graph("g3", g3)
hierarchy.add_typing("g3", "g1", g3_g1)
hierarchy.add_typing("g3", "g2", g3_g2)
print(hierarchy)
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
print("Node types in G3:\n")
for node in hierarchy.get_graph("g3").nodes():
print(node, hierarchy.node_type("g3", node))
lhs = NXGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([
("a", "b"),
("b", "a")
])
p = NXGraph()
p.add_nodes_from(["a", "a1", "b"])
p.add_edges_from([
("a", "b"),
("a1", "b")
])
rhs = NXGraph.copy(p)
rule = Rule(
p, lhs, rhs,
{"a": "a", "a1": "a", "b": "b"},
{"a": "a", "a1": "a1", "b": "b"},
)
plot_rule(rule)
instances = hierarchy.find_matching("shapes", lhs)
print("Instances:")
for instance in instances:
print(instance)
rhs_instances = hierarchy.rewrite("shapes", rule, {"a": "circle", "b": "square"})
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
pattern = NXGraph()
pattern.add_nodes_from(["a", "b"])
rule = Rule.from_transform(pattern)
rhs_node = rule.inject_merge_nodes(["a", "b"])
rule.inject_add_node("c")
rule.inject_add_edge("c", rhs_node)
instance = {
"a": "good_circle",
"b": "bad_circle",
}
rhs_typing = {
"shapes": {
"c": "circle"
}
}
rhs_instance = hierarchy.rewrite("g2", rule, instance, rhs_typing=rhs_typing)
print(rhs_instance)
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
hierarchy_json = hierarchy.to_json()
import json
print(json.dumps(hierarchy_json, indent=" "))
# Clear the DB for the previous hierarchy
hierarchy._clear()
# Load json-data back to the DB
hierarchy = Neo4jHierarchy.from_json(
uri="bolt://localhost:7687", user="neo4j", password="admin", json_data=hierarchy_json)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Creating and modifying a hierarchy object
Step2: The method get_graph returns the graph object corresponding to the provided graph id.
Step3: The method get_typing returns the dictionary object corresponding to the provided hierarchy edge and representing the associated graph homomorphism.
Step4: 2. Rewriting of objects in a hierarchy
Step5: The created rule removes the edge a->b, adds the new node c, and adds two edges c->a and c->b.
Step6: Let us fix the desired instance
Step7: We try to apply the rule to the selected instance as is in the strict rewriting mode.
Step8: We have failed to rewrite G, because we have not specified typing for the newly added node c. Let us try again, but this time we will provide such typing.
Step9: We will now create a rule, applied to T, that clones the node agent into two nodes.
Step10: We try to apply the created rule to the graph T in the strict mode.
Step11: We have failed to rewrite T, because we have not specified typing for instances of agent in $p$. Let us try again, but this time we will provide such typing.
Step12: Let us relabel nodes in T.
Step13: 2.2. Rewriting and propagation
Step14: Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below
Step15: 2.3. Rewriting and propagation
Step16: We have created a rule that clones the node a and reconnects the edges between a and b.
Step17: We rewrite the graph shapes with the fixed instances (so, the node circle is cloned).
Step18: Observe the following plots: the cloning of circle was propagated to all the ancestors of shapes, because we didn't specify how to retype instances of circle for these ancestors using the p_typing parameter. This is an example of the previously mentioned backward propagation.
Step19: Let us now consider a small example of forward propagation. We will create a rule that performs some additions and merges of nodes.
Step20: Application of this rule will merge nodes bad_circle and good_circle in the graph g2. It will then add a new node and connect it with an edge to the merged node. Let us specify some typings of the new node in the RHS
Step21: Observe the following graphs: as the result of forward propagation, nodes good and bad were merged in the graph quality. In addition, a new node typing the node c in the rule was added to the graph quality.
Step22: 3. Dumping hierarchies with JSON
|
4,626
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
a1 = pi / 2 + (L / 2 - alpha1)/R
x = (R + alpha3 + ga * cos(gv * a1)) * cos(a1)
y = alpha2
z = (R + alpha3 + ga * cos(gv * a1)) * sin(a1)
r = x*N.i + y*N.j + z*N.k
R1=r.diff(alpha1)
R2=r.diff(alpha2)
R3=r.diff(alpha3)
trigsimp(R1)
R2
R3
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
R_1
R_2
R_3
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = A**-1
trigsimp(A_inv[0,0])
trigsimp(A.det())
g11=R1.dot(R1)
g12=R1.dot(R2)
g13=R1.dot(R3)
g21=R2.dot(R1)
g22=R2.dot(R2)
g23=R2.dot(R3)
g31=R3.dot(R1)
g32=R3.dot(R2)
g33=R3.dot(R3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
g_11=R_1.dot(R_1)
g_12=R_1.dot(R_2)
g_13=R_1.dot(R_3)
g_21=R_2.dot(R_1)
g_22=R_2.dot(R_2)
g_23=R_2.dot(R_3)
g_31=R_3.dot(R_1)
g_32=R_3.dot(R_2)
g_33=R_3.dot(R_3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
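# Consistency check: the contravariant metric built from the reciprocal basis
# should coincide with the inverse of the covariant metric
simplify(G_con - G_inv)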
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
Q=E*B
Q=simplify(Q)
Q
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p)
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2)
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
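# Sanity check: in the flat-plate limit K -> 0 the curvature coupling terms
# of the Timoshenko stiffness should vanish
simplify(W_M.subs(K, 0))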
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Corrugated cylindrical coordinates
Step2: Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
Step3: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
Step4: Jacobi matrix
Step5: Metric tensor
Step6: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step7: Derivatives of vectors
Step8: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
Step9: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step10: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step11: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Step12: Covariant derivatives of the displacement vector, e.g. $ \nabla_1 u^1 = \frac{\partial u^1}{\partial \alpha_1} + \frac{K}{q(\alpha_3)}\, u^3 $ and $ \nabla_2 u^2 = \frac{\partial u^2}{\partial \alpha_2} $
Step13: Deformation tensor
Step14: Timoshenko theory
Step15: Elasticity tensor (stiffness tensor)
Step16: Include symmetry
Step17: Isotropic material
Step18: Orthotropic material
Step19: Orthotropic material in shell coordinates
Step20: Physical coordinates
Step21: Stiffness tensor
Step22: Timoshenko
Step23: Area of the segment
Step24: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
Step25: Virtual work
Step26: Isotropic material in physical coordinates - Timoshenko
|
4,627
|
<ASSISTANT_TASK:>
Python Code:
def divide(numerator, denominator):
result = numerator/denominator
print("result = %f" % result)
divide(1.0, 0)
def divide1(numerator, denominator):
try:
GARBAGE
result = numerator/denominator
print("result = %f" % result)
except (ZeroDivisionError, NameError) as err:
import pdb; pdb.set_trace()
print("You can't divide by 0! or use GARBAGE.")
divide1(1.0, 'a')
print(err)  # NameError: 'err' was local to divide1's except block and is not defined here
divide1(1.0, 2)
divide1("x", 2)
def divide2(numerator, denominator):
try:
result = numerator / denominator
print("result = %f" % result)
except (ZeroDivisionError, TypeError) as err:
print("Got an exception: %s" % err)
divide2(1, "X")
#divide2("x, 2)
# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3
def divide3(numerator, denominator):
try:
result = numerator/denominator
except ZeroDivisionError:
result = numerator/SMALL_NUMBER
print("result = %f" % result)
except Exception as err:
print("Different error than division by zero:", err)
divide3(1,0)
divide3("1",0)
import pandas as pd
def validateDF(df):
"
:param pd.DataFrame df: should have a column named "hours"
if not "hours" in df.columns:
raise ValueError("DataFrame should have a column named 'hours'.")
df = pd.DataFrame({'hours': range(10) })
validateDF(df)
class SeattleCrimeError(Exception):
pass
b = False
if not b:
raise SeattleCrimeError("There's been a crime!")
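# A custom exception is caught like any other (minimal usage sketch):
try:
    raise SeattleCrimeError("There's been a crime!")
except SeattleCrimeError as err:
    print("Handled:", err)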
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why didn't we catch this SyntaxError?
Step3: What do you do when you get an exception?
|
4,628
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Denis Engemannn <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id, copy=False)
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50)
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
# Normally you would read in estimates across several subjects and morph
# them to the same cortical space (e.g. fsaverage). For example purposes,
# we will simulate this by just having each "subject" have the same
# response (just noisy in source space) here.
# we'll only consider the left hemisphere in this example.
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
# It's a good idea to spatially smooth the data, and for visualization
# purposes, let's morph these to fsaverage, which is a grade 5 source space
# with vertices 0:10242 for each hemisphere. Usually you'd have to morph
# each subject's data separately (and you might want to use morph_data
# instead), but here since all estimates are on 'sample' we can use one
# morph matrix for all the heavy lifting.
fsave_vertices = [np.arange(10242), np.array([], int)] # right hemi is empty
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
# Now we need to prepare the group matrix for the ANOVA statistic.
# To make the clustering function work correctly with the
# ANOVA function X needs to be a list of multi-dimensional arrays
# (one per condition) of shape: samples (subjects) x time x space
X = np.transpose(X, [2, 1, 0, 3]) # First we permute dimensions
# finally we split the array into a list a list of conditions
# and discard the empty dimension resulting from the split using numpy squeeze
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
# As our ANOVA function is a multi-purpose tool we need to apply a few
# modifications to integrate it with the clustering function. This
# includes reshaping data, setting default arguments and processing
# the return values. For this reason we'll write a tiny dummy function.
# We will tell the ANOVA how to interpret the data matrix in terms of
# factors. This is done via the factor levels argument which is a list
# of the number factor levels for each factor.
factor_levels = [2, 2]
# Finally we will pick the interaction effect by passing 'A:B'.
# (this notation is borrowed from the R formula language)
effects = 'A:B' # Without this also the main effects will be returned.
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
# A stat_fun must deal with a variable number of input arguments.
def stat_fun(*args):
# Inside the clustering function each condition will be passed as
# flattened array, necessitated by the clustering procedure.
# The ANOVA however expects an input array of dimensions:
# subjects X conditions X observations (optional).
# The following expression catches the list input
# and swaps the first and the second dimension, and finally calls ANOVA.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
# get f-values only.
# Note. for further details on this ANOVA function consider the
# corresponding time frequency example.
# To use an algorithm optimized for spatio-temporal clustering, we
# just pass the spatial connectivity matrix (instead of spatio-temporal)
source_space = grade_to_tris(5)
# as we only have one hemisphere we need only need half the connectivity
lh_source_space = source_space[source_space[:, 0] < 10242]
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(lh_source_space)
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, colormap='mne',
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('cluster-lh.png')
brain.show_view('medial')
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: Prepare function for arbitrary contrast
Step6: Compute clustering statistic
Step7: Visualize the clusters
Step8: Finally, let's investigate interaction effect by reconstructing the time
|
4,629
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model.predict(sales_noCC),'-')
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
plt.plot(sales_nohighend['CrimeRate'],sales_nohighend['HousePrice'],'.',
sales_nohighend['CrimeRate'],crime_model.predict(sales_nohighend),'-')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some house value vs. crime rate data
Step2: Exploring the data
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Step9: Above
Step10: Do the coefficients change much?
Step11: Above
|
4,630
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels) # Your one-hot encoded labels array here
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)# output layer logits
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)
cost = tf.reduce_mean(cross_entropy)# cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost)# training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
Return a generator that yields batches from arrays x and y.
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
saver = tf.train.Saver()
batches = get_batches(train_x, train_y)
with tf.Session() as sess:
cost, accuracy et optimizer ont été défini avant, ils utilisent
la librairie tf
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
for x, y in get_batches(train_x, train_y):
feed = {
inputs: x,
labels: y
}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step11: Training
Step12: Testing
Step13: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
4,631
|
<ASSISTANT_TASK:>
Python Code:
%%bash
cd /tmp
rm -rf playground #remove if it exists
git clone https://github.com/dsondak/playground.git
%%bash
ls -a /tmp/playground
%%bash
cd /tmp/playground
git log
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
cat .git/config
%%bash
cd /tmp/playground
cat .gitignore
%%bash
cd /tmp/playground
echo '# Hello world!' > world.md
git status
%%bash
cd /tmp/playground
git add world.md
git status
%%bash
cd /tmp/playground
git commit -m "Hello world file to make sure things are working."
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
git branch -av
%%bash
cd /tmp/playground
git push
git status
%%bash
cd /tmp/playground
git remote add course https://github.com/IACS-CS-207/playground.git
cat .git/config
%%bash
cd /tmp/playground
git fetch course
%%bash
cd /tmp/playground
git branch -avv
%%bash
cd /tmp/playground
git merge course/master
git status
%%bash
cd /tmp/playground
git log -3
%%bash
cd /tmp/playground
git push
git status
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Poking around
Step2: Each one of these "commits" is an SHA hash. It uniquely identifies all actions that have happened to this repository previously.
Step3: Pay close attention to the text above. It says we are on the master branch of our local repository, and that this branch is up-to-date with the master branch of the upstream repository or remote named origin. We know this as clone brings down a copy of the remote branch
Step4: Notice that this file tells us about a remote called origin which is simply the github repository we cloned from. So the process of cloning left us with a remote. The file also tells us about a branch called master, which "tracks" a remote branch called master at origin.
Step5: Making changes
Step6: We've created a file in the working directory, but it hasn't been staged yet.
Step7: Now our file is in the staging area (Index) waiting to be committed. The file is still not even in our local repository.
Step8: The git commit -m... version is just a way to specify a commit message without opening a text editor. The ipython notebook can't handle text editors. Don't worry, you'll get to use a text editor in your homework. If you use a text editor you just say git commit.
Step9: We see that our branch, "master", has one more commit than the "origin/master" branch, the local copy of the branch that came from the upstream repository (nicknamed "origin" in this case). Let's push the changes.
Step10: You can go to your remote repo and see the changes!
Step11: Notice that the master branch only tracks the same branch on the origin remote. We havent set up any connection with the course remote as yet.
Step12: A copy of a new remote branch has been made. To see this, provide the -avv argument to git branch.
Step13: Indeed, the way git works is by creating copies of remote branches locally. Then it just compares to these "copy" branches to see what changes have been made.
Step14: We seem to be ahead of our upstream-tracking repository by 2 commits..why?
Step15: Aha
|
4,632
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([10000], -3, 3))
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([10000]))
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neural Network
Step2: Initialize Weights
Step3: As you can see the accuracy is close to guessing for both zeros and ones, around 10%.
Step4: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Step5: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
Step6: We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Step7: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
Step8: The range we found and $y=1/\sqrt{n}$ are really close.
Step9: Let's compare the normal distribution against the previous uniform distribution.
Step10: The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution.
Step11: Again, let's compare the previous results with the previous distribution.
Step12: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations.
|
4,633
|
<ASSISTANT_TASK:>
Python Code:
df = unpickle_object("final_dataframe_for_analysis.pkl") #dataframe we got from webscraping and cleaning!
#see other notebooks for more info.
df.dtypes # there are all our features. Our target variable is Box_office
df.shape
df['Month'] = df['Month'].astype(object)
df['Year'] = df['Year'].astype(object)
del df['Rank_in_genre']
df.reset_index(inplace=True)
del df['index']
percentage_missing(df)
df.hist(layout=(4,2), figsize=(50,50))
plot_corr_matrix(df)
X = unpickle_object("X_features_selection.pkl") #all features from the suffled dataframe. Numpy array
y = unpickle_object("y_variable_selection.pkl") #target variable from shuffled dataframe. Numpy array
final_df = unpickle_object("analysis_dataframe.pkl") #this is the shuffled dataframe!
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 0) #train on 75% of data
sc_X_train = StandardScaler()
sc_y_train = StandardScaler()
sc_X_train.fit(X_train[:,:6])#only need to learn fit of first 6 - rest are dummies
sc_y_train.fit(y_train)
X_train[:,:6] = sc_X_train.transform(X_train[:,:6]) #only need to transform first 6 columns - rest are dummies
X_test[:,:6] = sc_X_train.transform(X_test[:,:6]) #same as above
y_train = sc_y_train.transform(y_train)
y_test = sc_y_train.transform(y_test)
baseline_model(X_train, X_test, y_train, y_test)
holdout_results = holdout_grid(["Ridge", "Lasso", "Elastic Net"], X_train, X_test, y_train, y_test)
pickle_object(holdout_results, "holdout_model_results")
sc_X = StandardScaler()
sc_y = StandardScaler()
sc_X.fit(X[:,:6])#only need to learn fit of first 6 - rest are dummies
sc_y.fit(y)
X[:,:6] = sc_X.transform(X[:,:6]) #only need to transform first 6 columns - rest are dummies
y = sc_y.transform(y)
no_holdout_results = regular_grid(["Ridge", "Lasso", "Elastic Net"], X, y)
pickle_object(no_holdout_results, "no_holdout_model_results")
extract_model_comparisons(holdout_results, no_holdout_results, "Ridge")
extract_model_comparisons(holdout_results, no_holdout_results, "Lasso")
extract_model_comparisons(holdout_results, no_holdout_results, "Elastic Net")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Upon further thought, it doesnt make sense to have rank_in_genre as a predictor variable for box office budget. When the movie is release, it is not ranked immeadiately. The ranks assigned often occur many years after the movie is released and so it not related to the amount of money accrued at the box office. We will drop this variable.
Step2: From the above plots, we see that we have heavy skewness in all of our features and our target variable.
Step3: Baseline Model and Cross-Validation with Holdout Sets
Step4: Baseline Model
Step5: Ridge, Lasso and Elastic Net regression - Holdouts
Step6: Cross-Validation - No Holdout Sets
Step7: Analysis of Results!
Step8: Lasso Analysis
Step9: Elastic Net Analysis
|
4,634
|
<ASSISTANT_TASK:>
Python Code:
data_original = np.loadtxt('stanford_dl_ex/ex1/housing.data')
data = np.insert(data_original, 0, 1, axis=1)
np.random.shuffle(data)
train_X = data[:400, :-1]
train_y = data[:400, -1]
m, n = train_X.shape
theta = np.random.rand(n)
def cost_function(theta, X, y):
squared_errors = (X.dot(theta) - y) ** 2
J = 0.5 * squared_errors.sum()
return J
def gradient(theta, X, y):
errors = X.dot(theta) - y
return errors.dot(X)
epsilon = 1e-4
mask = np.identity(theta.size)
theta_plus = theta + epsilon * mask
theta_minus = theta - epsilon * mask
diffs = np.empty_like(theta)
for i in range(theta_plus.shape[0]):
gradient_def = (
(cost_function(theta_plus[i], train_X, train_y) - cost_function(theta_minus[i], train_X, train_y)) /
(2 * epsilon)
)
gradient_lin_reg = gradient(theta, train_X, train_y)[i]
diffs[i] = np.absolute(gradient_def - gradient_lin_reg)
diffs
assert all(np.less(diffs, 1e-4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define some necessary functions.
Step2: Gradient Checking
Step3: Prepare theta step values (making use of numpy broadcasting).
Step4: Compute diffs between theta's gradient as mathematically defined and the gradient as defined by our function above.
Step5: Lookin' good! The smaller the values, the better.<br>
|
4,635
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import requests
import os
# GET A CSV OF ALL STARBUCKS LOCATIONS
# If this link is ever broken, use the link above to get a new one
fname = 'All_Starbucks_Locations_in_the_World.csv'
if not(os.path.isfile(fname)):
print 'Getting file from Socrata portal'
r = requests.get('https://opendata.socrata.com/api/views/xy4y-c4mk/rows.csv?accessType=DOWNLOAD')
f = open(fname, 'w')
f.write(r.text.encode('utf-8'))
f.close()
df = pd.read_csv(fname)
# LET'S GET SOME SUMMARY STATISTICS BY COUNTRY
by_country = pd.DataFrame(df.groupby(['Country'])['Store ID'].count())
by_country.sort('Store ID', ascending=False, inplace=True)
by_country.columns = ['count']
by_country['percentage'] = by_country['count'] / by_country['count'].sum()
by_country.head()
# DRILL DOWN BY STATES
filter = df['Country'] == 'US'
usa = pd.DataFrame(df[filter])
by_state = pd.DataFrame(usa.groupby(['Country Subdivision'])['Store ID'].count())
by_state.sort('Store ID', ascending=False, inplace=True)
by_state.columns = ['count']
by_state['percentage'] = by_state['count'] / by_state['count'].sum()
by_state.head()
# FOCUS ON LOS ANGELES
cfilter = df['Country'] == 'US'
sfilter = df['Country Subdivision'] == 'CA'
lafilter = df['City'] == 'Los Angeles'
filter = cfilter & sfilter & lafilter
la = df[filter].copy()
# HOW MANY ROWS AND COLUMNS?
la.shape
# CAN YOU FIND YOUR FAVORITE?
la[['Street 1', 'Street 2']]
co_series = la['Ownership Type']=='CO'
co_series.head()
~co_series.head()
co_series.tolist()
la.sort('Postal Code', inplace=True)
la.head()
la.index
la.index = np.arange(la.shape[0])
la.index
la.head()
la.drop('Brand', axis=1, inplace=True)
cols = la.columns.tolist()
cols[0] = 'store_id'
la.columns = cols
la.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Copy is a gotcha
Step2: A few Pandas features used in this workshop
Step3: Indexes
Step4: Column renaming and dropping
|
4,636
|
<ASSISTANT_TASK:>
Python Code:
# A dependency of the preprocessing for BERT inputs
!pip install -q --user tensorflow-text
!pip install -q --user tf-models-official
import os
import shutil
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization # to create AdamW optmizer
import matplotlib.pyplot as plt
tf.get_logger().setLevel('ERROR')
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
#TODO
#Set a path to a folder outside the git repo. This is important so data won't get indexed by git on Jupyter lab
path = #example: '/home/jupyter/'
dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,
untar=True, cache_dir=path,
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
train_dir = os.path.join(dataset_dir, 'train')
# remove unused folders to make it easier to load the data
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = tf.keras.preprocessing.text_dataset_from_directory(
path+'aclImdb/test',
batch_size=batch_size)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(f'Review: {text_batch.numpy()[i]}')
label = label_batch.numpy()[i]
print(f'Label : {label} ({class_names[label]})')
bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8'
@param ["bert_en_uncased_L-12_H-768_A-12",
"bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12",
"small_bert/bert_en_uncased_L-2_H-128_A-2",
"small_bert/bert_en_uncased_L-2_H-256_A-4",
"small_bert/bert_en_uncased_L-2_H-512_A-8",
"small_bert/bert_en_uncased_L-2_H-768_A-12",
"small_bert/bert_en_uncased_L-4_H-128_A-2",
"small_bert/bert_en_uncased_L-4_H-256_A-4",
"small_bert/bert_en_uncased_L-4_H-512_A-8",
"small_bert/bert_en_uncased_L-4_H-768_A-12",
"small_bert/bert_en_uncased_L-6_H-128_A-2",
"small_bert/bert_en_uncased_L-6_H-256_A-4",
"small_bert/bert_en_uncased_L-6_H-512_A-8",
"small_bert/bert_en_uncased_L-6_H-768_A-12",
"small_bert/bert_en_uncased_L-8_H-128_A-2",
"small_bert/bert_en_uncased_L-8_H-256_A-4",
"small_bert/bert_en_uncased_L-8_H-512_A-8",
"small_bert/bert_en_uncased_L-8_H-768_A-12",
"small_bert/bert_en_uncased_L-10_H-128_A-2",
"small_bert/bert_en_uncased_L-10_H-256_A-4",
"small_bert/bert_en_uncased_L-10_H-512_A-8",
"small_bert/bert_en_uncased_L-10_H-768_A-12",
"small_bert/bert_en_uncased_L-12_H-128_A-2",
"small_bert/bert_en_uncased_L-12_H-256_A-4",
"small_bert/bert_en_uncased_L-12_H-512_A-8",
"small_bert/bert_en_uncased_L-12_H-768_A-12",
"albert_en_base", "electra_small",
"electra_base",
"experts_pubmed",
"experts_wiki_books",
"talking-heads_base"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')
bert_preprocess_model = #TODO: your code goes here
text_test = ['this is such an amazing movie!']
text_preprocessed = #TODO: Code goes here
#This print box will help you inspect the keys in the pre-processed dictionary
print(f'Keys : {list(text_preprocessed.keys())}')
# 1. input_word_ids is the ids for the words in the tokenized sentence
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
#2. input_mask is the tokens which we are masking (masked language model)
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
#3. input_type_ids is the sentence id of the input sentence.
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}')
bert_model = hub.KerasLayer(tfhub_handle_encoder)
bert_results = bert_model(text_preprocessed)
print(f'Loaded BERT: {tfhub_handle_encoder}')
print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}')
print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}')
print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}')
print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}')
def build_classifier_model():
# TODO: define your model here
return tf.keras.Model(text_input, net)
#Let's check that the model runs with the output of the preprocessing model.
classifier_model = build_classifier_model()
bert_raw_result = classifier_model(tf.constant(text_test))
print(tf.sigmoid(bert_raw_result))
tf.keras.utils.plot_model(classifier_model)
loss = #TODO: your code goes here
metrics = #TODO: your code goes here
epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
#TODO: Model compile code goes here
print(f'Training model with {tfhub_handle_encoder}')
history = #TODO: model fit code goes here
loss, accuracy = classifier_model.evaluate(test_ds)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}')
history_dict = history.history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'r', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
dataset_name = 'imdb'
saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))
#TODO: your code goes here
reloaded_model = tf.saved_model.load(saved_model_path)
def print_my_examples(inputs, results):
result_for_printing = \
[f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'
for i in range(len(inputs))]
print(*result_for_printing, sep='\n')
print()
examples = [
'this is such an amazing movie!', # this is the same sentence tried earlier
'The movie was great!',
'The movie was meh.',
'The movie was okish.',
'The movie was terrible...'
]
reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))
original_results = tf.sigmoid(classifier_model(tf.constant(examples)))
print('Results from the saved model:')
print_my_examples(examples, reloaded_results)
print('Results from the model in memory:')
print_my_examples(examples, original_results)
serving_results = reloaded_model \
.signatures['serving_default'](tf.constant(examples))
serving_results = tf.sigmoid(serving_results['classifier'])
print_my_examples(examples, serving_results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You will use the AdamW optimizer from tensorflow/models.
Step2: To check if you have a GPU attached. Run the following.
Step3: Sentiment Analysis
Step4: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.
Step5: Let's take a look at a few reviews.
Step7: Loading models from TensorFlow Hub
Step8: The preprocessing model
Step9: Let's try the preprocessing model on some text and see the output
Step10: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_words_id, input_mask and input_type_ids).
Step11: The BERT models return a map with 3 important keys
Step12: The output is meaningless, of course, because the model has not been trained yet.
Step13: Model training
Step14: Optimizer
Step15: Loading the BERT model and training
Step16: Note
Step17: Evaluate the model
Step18: Plot the accuracy and loss over time
Step19: In this plot, the red lines represents the training loss and accuracy, and the blue lines are the validation loss and accuracy.
Step20: Let's reload the model so you can try it side by side with the model that is still in memory.
Step21: Here you can test your model on any sentence you want, just add to the examples variable below.
Step22: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows
|
4,637
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_content_image.shape)
print('Content Image Shape:', preprocessed_style_image.shape)
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
# Run style transform on preprocessed style image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracts from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Artistic Style Transfer with TensorFlow Lite
Step2: Download the content and style images, and the pre-trained TensorFlow Lite models.
Step3: Pre-process the inputs
Step4: Visualize the inputs
Step5: Run style transfer with TensorFlow Lite
Step6: Style transform
Step7: Style blending
|
4,638
|
<ASSISTANT_TASK:>
Python Code:
N = 1000
alpha = 1.0
mu = 10.0
x, t = SSA(100,N,a=alpha,mu=mu)
x = x.astype(int) # path data supposed to be integers.
path_data = {
'N' : N,
't' : t,
'x' : x
}
print 'Simulated up to time T =', round(t[N-1])
# Setup STAN :
model_description =
data{
int<lower=0> N; ## number of time steps
vector[N] t; ## time value at each time step
int<lower=0> x[N]; ## population value at each time step
}
transformed data{
int<lower=0> x0; ## starting population value
x0 <- x[1];
}
parameters{
real<lower=0> alpha; ## birth rate parameter
real<lower=0> mu; ## death rate parameter
}
transformed parameters{
real<lower=0> gamma;
gamma <- alpha/mu;
}
model {
x ~ poisson(gamma+(x0-gamma)*exp(-mu*t));
}
fit = stan(model_code=model_description,
data=path_data, chains=1,iter=1000)
print fit
N = 300
Npath = 10
alpha = 1.0
mu = 10.0
x = np.zeros([Npath,N])
t = np.zeros([Npath,N])
for i in xrange(Npath):
x[i,], t[i,] = SSA(100,N,a=alpha,mu=mu)
x = x.astype(int) # path data supposed to be integers.
x = x.flatten()
t = t.flatten()
Ndata = N*Npath
path_data = {
'N' : Ndata,
't' : t,
'x' : x
}
fit_trans = stan(model_code=model_description,
data=path_data, chains=1,iter=2000)
print fit_trans
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inferring from long versus short-time regimes
Step2: Above we can see the inference from STAN. Remember that $a=1$ and $\mu=10$, which gives $\gamma=a/\mu=0.1$. Looking at the values above, we can see that STAN didn't quite hit the mark with $a$, whereas it did better at $\mu$ and $\gamma$.
Step3: Now that we have data that are mostly from the transition regime, we run our inference again.
|
4,639
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.read_csv("thanksgiving.csv", encoding = 'Latin-1')
data.head()
data.columns
data['Do you celebrate Thanksgiving?'].value_counts()
filter_yes = data['Do you celebrate Thanksgiving?'] == "Yes"
data = data.loc[filter_yes]
data
data['What is typically the main dish at your Thanksgiving dinner?'].value_counts()
filter_tofurkey = data['What is typically the main dish at your Thanksgiving dinner?'] == 'Tofurkey'
tofurkey = data.loc[filter_tofurkey]
tofurkey['Do you typically have gravy?'].value_counts()
apple_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Apple'].isnull()
pumpkin_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pumpkin'].isnull()
pecan_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pecan'].isnull()
ate_pies = apple_isnull & pumpkin_isnull & pecan_isnull
ate_pies.value_counts()
proportion = 876/(876+182)
proportion
import numpy as np
def convert_age(value):
if pd.isnull(value):
return None
str2 = value.split(' ')[0]
str2 = str2.replace('+',' ')
return int(str2)
data['int_age'] = data['Age'].apply(convert_age)
np.mean(data['int_age'])
import numpy as np
def convert_income(value):
if pd.isnull(value):
return None
if value == 'Prefer not to answer':
return None
str2 = value.split(' ')[0]
str2 = str2.replace('$', '')
str2 = str2.replace(',', '')
return int(str2)
data['int_income'] = data['How much total combined money did all members of your HOUSEHOLD earn last year?'].apply(convert_income)
data_income = data['int_income'].dropna()
data_income.describe()
less_than_50k = data['int_income'] < 50000
filter_less_50k = data[less_than_50k]
filter_less_50k['How far will you travel for Thanksgiving?'].value_counts()
over_150k = data['int_income'] > 150000
filter_over_150k = data[over_150k]
filter_over_150k['How far will you travel for Thanksgiving?'].value_counts()
proportion_home_50k = 106/(106 + 92 + 64 + 16)
proportion_home_150k = 49/(49 + 25 + 16 + 12)
print(proportion_home_50k)
print(proportion_home_150k)
thanksgiving_meet_friends = 'Have you ever tried to meet up with hometown friends on Thanksgiving night?'
thanksgiving_friendsgiving = 'Have you ever attended a "Friendsgiving?"'
data.pivot_table(index = thanksgiving_meet_friends, columns = thanksgiving_friendsgiving, values = "int_age")
import re
desserts = 'Which of these desserts do you typically have at Thanksgiving dinner?'
column_headings = data.columns
# Select all dessert columns by header prefix (re.escape because the
# question text contains a literal '?', a regex metacharacter).
desserts_cols = [x for x in column_headings if re.match(re.escape(desserts), x)]
counts = data[desserts_cols].count()
counts.sort_values(ascending = False)
region_counts = data['US Region'].value_counts().sort_index()
print(region_counts)
data["US Region"] = data["US Region"].astype("category")
data["US Region"].cat.set_categories(['East North Central', 'East South Central', 'Middle Atlantic',
'Mountain','New England','Pacific','South Atlantic',
'West North Central','West South Central'],inplace=True)
data["What is typically the main dish at your Thanksgiving dinner?"] = data["What is typically the main dish at your Thanksgiving dinner?"].astype("category")
data["What is typically the main dish at your Thanksgiving dinner?"].cat.set_categories(['Chicken', 'I don\'t know','Other (please specify)', 'Roast beef', 'Tofurkey', 'Turducken', 'Turkey', 'Ham/Pork'],inplace=True)
main_dish = pd.pivot_table(data,index = [ "What is typically the main dish at your Thanksgiving dinner?"], columns =['US Region'], values=["RespondentID"], aggfunc=lambda x: len(x.unique()), fill_value = 0, margins = True )
main_dish_normalized = main_dish.div( main_dish.iloc[-1,:], axis=1 )
main_dish_normalized
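# Aside (a sketch, assuming a pandas version with crosstab's `normalize`
# option): the same region-by-dish proportions in one call, without the
# explicit margins division above.
main_dish_ct = pd.crosstab(
data['What is typically the main dish at your Thanksgiving dinner?'],
data['US Region'],
normalize='columns')  # each US Region column sums to 1
main_dish_ct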
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to filter out the people who didn't celebrate Thanksgiving.
Step2: What main dishes do people eat at Thanksgiving?
Step3: How many people ate apple, pumpkin or pecan pie at Thanksgiving?
Step4: 876 people ate at least one of apple, pumpkin or pecan pie (the combined isnull check was False), and 182 people didn't.
Step5: 82% of people ate pies.
Step6: We used the bottom of each age interval, so our mean would be an underestimate of the mean age.
Step7: Using the bottom of each income range would likely underestimate the mean. Also, we don't know where within each range the typical respondent falls.
Step8: A greater proportion of people making higher income had Thanksgiving in their homes.
Step9: People who have met up with friends are generally younger (i.e. a mean age of 34 vs. 41 for people who haven't).
Step10: The most common dessert is ice cream followed by cookies.
|
4,640
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import dismalpy as dp
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas.io.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
# sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
HMRMT_growth = HMRMT.diff() / HMRMT.shift()
sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# Fill in the recent entries (1997 onwards)
sales[CMRMT.index] = CMRMT
# Backfill the previous entries (pre 1997)
idx = sales.ix[:'1997-01-01'].index
for t in range(len(idx)-1, 0, -1):
month = idx[t]
prev_month = idx[t-1]
sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.ix[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
# Get the endogenous data
endog = dta.ix['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = dp.ssm.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params)
print(res.summary(separate_params=False))
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
res.plot_coefficients_of_determination(figsize=(8,2));
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
factor_mean = np.dot(W1, dta.ix['1979-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
coincident_index *= (usphci.ix['1992-07-01'] / coincident_index.ix['1992-07-01'])
return coincident_index
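# Note (standard steady-state Kalman-filter result, stated here for
# reference): the weight matrix computed above is
#     W(1) = [I - (I - K Z) T]^{-1} K,
# with T the transition matrix, Z the design matrix and K the
# steady-state Kalman gain; it converts the mean of the observed
# growth rates into the implied mean of the factor.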
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
from dismalpy.ssm import tools
class ExtendedDFM(dp.ssm.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Reset the number of AR parameters
self.parameters['factor_transition'] = 2
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
self._ar_offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
offset = self._ar_offset
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Remove the two autoregressive terms we don't need
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
params = super(ExtendedDFM, self).start_params
offset = self._ar_offset
return np.r_[params[:offset+2], params[offset+4:], 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
param_names = super(ExtendedDFM, self).param_names
offset = self._ar_offset
return param_names[:offset+2] + param_names[offset+4:] + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
offset = self._ar_offset
constrained = super(ExtendedDFM, self).transform_params(
np.r_[unconstrained[:offset+2], 0, 0, unconstrained[offset+2:-3]])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained[:offset+2], constrained[offset+4:], unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
offset = self._ar_offset
unconstrained = super(ExtendedDFM, self).untransform_params(
np.r_[constrained[:offset+2], 0, 0, constrained[offset+2:-3]])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained[:offset+2], unconstrained[offset+4:], constrained[-3:]]
def update(self, params, transformed=True):
# Peform the transformation, if required
if not transformed:
params = self.transform_params(params)
# Now perform the usual DFM update, but exclude our new parameters
offset = self._ar_offset
super(ExtendedDFM, self).update(np.r_[params[:offset+2], 0, 0, params[offset+2:-3]], transformed=True)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(method='powell', disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, maxiter=1000)
print(extended_res.summary(separate_params=False))
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
Step3: Dynamic factors
Step4: Estimates
Step5: Estimated factors
Step6: Post-estimation
Step7: Coincident Index
Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
Step9: Appendix 1
Step10: So what did we just do?
Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures, which penalize the additional three parameters.
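For reference, the information criteria referred to here are the standard definitions (with $\hat{L}$ the maximized likelihood, $k$ the number of estimated parameters and $n$ the number of observations):
$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}$$
so the three extra loadings must raise $\ln\hat{L}$ by more than the corresponding penalty before the extended model is preferred.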
|
4,641
|
<ASSISTANT_TASK:>
Python Code:
#@title Imports & Utils
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
import warnings
warnings.filterwarnings("ignore")
!wget -O silica_train.npz https://www.dropbox.com/s/3dojk4u4di774ve/silica_train.npz?dl=0
!wget https://github.com/google/jax-md/blob/master/examples/models/si_gnn.pickle?raw=true
import numpy as onp
from jax import device_put
box_size = 10.862
with open('silica_train.npz', 'rb') as f:
files = onp.load(f)
qm_positions, qm_energies, qm_forces = [device_put(x) for x in (files['arr_3'], files['arr_4'], files['arr_5'])]
qm_positions = qm_positions[:300]
qm_energies = qm_energies[:300]
qm_forces = qm_forces[:300]
!pip install jax-md
print(f'Box Size = {box_size}')
print(qm_positions.shape)
print(qm_energies.shape)
print(qm_forces.shape)
from jax_md.colab_tools import renderer
renderer.render(box_size,
{
'atom': renderer.Sphere(qm_positions[0]),
},
resolution=[400, 400])
from jax_md import space
displacement_fn, shift_fn = space.periodic(box_size, wrapped=False)
displacement_fn(qm_positions[0, 0], qm_positions[0, 3])
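# Under space.periodic, displacement_fn applies the minimum-image
# convention: it returns the shortest displacement vector between the
# two points given the periodic box.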
import jax.numpy as np
shift_fn(qm_positions[0, 0],
np.array([1.0, 0.0, 0.0]))
from jax_md import energy
init_fn, energy_fn = energy.graph_network(displacement_fn, r_cutoff=3.0)
import pickle
with open('si_gnn.pickle?raw=true', 'rb') as f:
params = pickle.load(f)
print(f'Predicted E = {energy_fn(params, qm_positions[0])}')
print(f'Actual E = {qm_energies[0]}')
import functools
energy_fn = functools.partial(energy_fn, params)
from jax import vmap
vectorized_energy_fn = vmap(energy_fn)
plt.plot(qm_energies, vectorized_energy_fn(qm_positions), 'o')
plt.show()
from jax_md import quantity
force_fn = quantity.force(energy_fn)
predicted_forces = force_fn(qm_positions[1])
plt.plot(qm_forces[1].reshape((-1,)),
predicted_forces.reshape((-1,)), 'o')
plt.show()
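# quantity.force(energy_fn) is automatic differentiation under the hood;
# an equivalent sketch using jax.grad directly would be:
#   from jax import grad
#   force_fn_manual = lambda R: -grad(energy_fn)(R)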
from jax_md.simulate import nvt_nose_hoover
K_B = 8.617e-5
dt = 5e-3
kT = K_B * 300
Si_mass = 2.91086E-3
init_fn, step_fn = nvt_nose_hoover(energy_fn, shift_fn, dt, kT, tau=1.0)
from jax import jit
step_fn = jit(step_fn)
from jax import random
key = random.PRNGKey(0)
state = init_fn(key, qm_positions[0], Si_mass)
positions = []
for i in range(5000):
state = step_fn(state)
if i % 25 == 0:
positions += [state.position]
positions = np.stack(positions)
renderer.render(box_size,
{
'atom': renderer.Sphere(positions),
},
resolution=[400, 400])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demo
Step2: Data from a quantum mechanical simulation of Silicon.
Step3: Visualize states inside colab.
Step4: Every simulation starts by defining a space.
Step5: The displacement_fn computes displacement between points
Step6: The shift_fn moves points
Step7: Load a pretrained Graph Neural Network
Step8: Using the network in a simulation
|
4,642
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import onsager.crystal as crystal
import onsager.OnsagerCalc as onsager
from scipy.constants import physical_constants
kB = physical_constants['Boltzmann constant in eV/K'][0]
import h5py, json
a0 = 0.343
Ni = crystal.Crystal.FCC(a0, chemistry="Ni")
print(Ni)
chemistry = 0 # only one sublattice anyway
Nthermo = 2
NiSi = onsager.VacancyMediated(Ni, chemistry, Ni.sitelist(chemistry),
Ni.jumpnetwork(chemistry, 0.75*a0), Nthermo)
print(NiSi)
NiSidata={
"v:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [1., -0.108],
"s:+0.000,+0.000,+0.000-v:-1.000,-1.000,+1.000": [1., +0.004],
"s:+0.000,+0.000,+0.000-v:+1.000,-2.000,+0.000": [1., +0.037],
"s:+0.000,+0.000,+0.000-v:+0.000,-2.000,+0.000": [1., -0.008],
"omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,-1.000": [4.8, 1.074],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+0.000,+0.000^v:-1.000,+1.000,-1.000": [5.2, 1.213-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^v:+0.000,+0.000,-1.000": [5.2, 1.003-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000^v:+0.000,+2.000,-2.000": [4.8, 1.128-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,-1.000": [5.2, 1.153-0.108],
"omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,-1.000^v:+1.000,+0.000,-2.000": [4.8, 1.091+0.004],
"omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+1.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [5.1, 0.891-0.108]
}
NiSi2013data={
"v:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000": [1., 0.],
"s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [1., -0.100],
"s:+0.000,+0.000,+0.000-v:-1.000,-1.000,+1.000": [1., +0.011],
"s:+0.000,+0.000,+0.000-v:+1.000,-2.000,+0.000": [1., +0.045],
"s:+0.000,+0.000,+0.000-v:+0.000,-2.000,+0.000": [1., 0],
"omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,-1.000": [4.8, 1.074],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+0.000,+0.000^v:-1.000,+1.000,-1.000": [5.2, 1.213-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^v:+0.000,+0.000,-1.000": [5.2, 1.003-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000^v:+0.000,+2.000,-2.000": [4.8, 1.128-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,-1.000": [5.2, 1.153-0.100],
"omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,-1.000^v:+1.000,+0.000,-2.000": [4.8, 1.091+0.011],
"omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+1.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,-1.000": [5.1, 0.891-0.100]
}
print(json.dumps(NiSi2013data, sort_keys=True, indent=4))
preenedict = NiSi.tags2preene(NiSi2013data)
preenedict
print("#T #Lss #Lsv #drag")
for T in np.linspace(300, 1400, 23):
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
print(T, Lss[0,0], Lsv[0,0], Lsv[0,0]/Lss[0,0])
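# The last column printed above is the drag ratio L^{SiV}/L^{SiSi}:
# positive values mean the vacancy flux drags Si atoms along with it,
# and its sign change marks the crossover temperature examined below.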
volume = 0.25*a0**3
conv = 1e3*0.1/volume # 10^3 for THz->ns^-1, 10^-1 for nm^-1 ->Ang^-1
# T: (L0vv, Lsv, Lss)
PRBdata = {960: (1.52e-1, 1.57e-1, 1.29e0),
1060: (4.69e-1, 0., 3.27e0),
1160: (1.18e0, -7.55e-1, 7.02e0)}
print("#T #Lvv #Lsv #Lss")
for T in (960, 1060, 1160):
c = conv/(kB*T)
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
vv, sv, ss = L0vv[0,0]*c, Lsv[0,0]*c, Lss[0,0]*c
vvref, svref, ssref = PRBdata[T]
print("{} {:.4g} ({:.4g}) {:.4g} ({:.4g}) {:.4g} ({:.4g})".format(T, vv, vvref/vv, sv, svref/sv, ss, ssref/ss))
# raw comparison data from 2013 paper
Tval = np.array([510, 530, 550, 570, 590, 610, 630, 650, 670, 690,
710, 730, 750, 770, 790, 810, 830, 850, 870, 890,
910, 930, 950, 970, 990, 1010, 1030, 1050, 1070, 1090,
1110, 1130, 1150, 1170, 1190, 1210, 1230, 1250, 1270, 1290,
1310, 1330, 1350, 1370, 1390, 1410, 1430, 1450, 1470, 1490])
fluxval = np.array([0.771344, 0.743072, 0.713923, 0.684066, 0.653661, 0.622858,
0.591787, 0.560983, 0.529615, 0.498822, 0.467298, 0.436502,
0.406013, 0.376193, 0.346530, 0.316744, 0.288483, 0.260656,
0.232809, 0.205861, 0.179139, 0.154038, 0.128150, 0.103273,
0.079025, 0.055587, 0.032558, 0.010136, -0.011727, -0.033069,
-0.053826, -0.074061, -0.093802, -0.113075, -0.132267, -0.149595,
-0.167389, -0.184604, -0.202465, -0.218904, -0.234157, -0.250360,
-0.265637, -0.280173, -0.294940, -0.308410, -0.322271, -0.335809,
-0.349106, -0.361605])
# Trange = np.linspace(300, 1500, 121)
Draglist = []
for T in Tval:
L0vv, Lss, Lsv, L1vv = NiSi.Lij(*NiSi.preene2betafree(kB*T, **preenedict))
Draglist.append(Lsv[0,0]/Lss[0,0])
Drag = np.array(Draglist)
fig, ax1 = plt.subplots()
ax1.plot(Tval, Drag, 'k', label='GF')
ax1.plot(Tval, fluxval, 'r', label='SCMF (PRB 2013)')
ax1.set_ylabel('drag ratio $L^{\\rm{SiV}}/L^{\\rm{SiSi}}$', fontsize='x-large')
ax1.set_xlabel('$T$ [K]', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.5,0.6,0.5,0.2), ncol=1,
shadow=True, frameon=True, fontsize='x-large')
plt.show()
# plt.savefig('NiSi-drag.pdf', transparent=True, format='pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an FCC Ni crystal.
Step2: Next, we construct our diffuser. For this problem, our thermodynamic range is out to the fourth neighbor; hence, we construct a two-shell thermodynamic range (that is, sums of two $\frac{a}{2}\langle 110\rangle$ vectors). That is, $N_\text{thermo}=2$ gives 4 stars
Step3: Below is an example of the above data translated into a dictionary corresponding to the data for Ni-Si; it is output into a JSON compliant file for reference. The strings are the corresponding tags in the diffuser. The first entry in each list is the prefactor (in THz) and the second is the corresponding energy (in eV). Note
Step4: Next, we convert our dictionary into the simpler form used by the diffuser.
Step5: We can now calculate the diffusion coefficients and drag ratio. Note
Step6: For direct comparison with the SCMF data in the 2013 Phys. Rev. B paper, we evaluate at 960K, 1060K (the predicted crossover temperature), and 1160K. The reported data is in units of mol/eV Å ns.
|
4,643
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import gensim
import numpy as np
import lda
import lda.datasets
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import matplotlib.pyplot as plt
import pylab
from test_helper import Test
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
# cat = "Economics"
cat = "Pseudoscience"
print cat
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
stopwords_en = stopwords.words('english')
corpus_clean = []
for n, art in enumerate(corpus_text):
print "\rProcessing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
token_list = word_tokenize(art)
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
filtered_tokens = [token.lower() for token in token_list if token.isalnum()]
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
clean_tokens = [token for token in filtered_tokens if token not in stopwords_en]
# scode: <FILL IN>
corpus_clean.append(clean_tokens)
print "\nLet's check the first tokens from document 0 after processing:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_clean):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: stemmed_tokens = <FILL IN>
stemmed_tokens = [stemmer.stem(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_stemmed.append(stemmed_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_clean):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
lemmat_tokens = [wnl.lemmatize(token) for token in token_list]
# Add art to the stemmed corpus
# scode: <FILL IN>
corpus_lemmat.append(lemmat_tokens)
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmat[0][0:30]
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
# Transform token lists into sparse vectors on the D-space
corpus_bow = [D.doc2bow(doc) for doc in corpus_clean]
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
print "{0} tokens".format(len(D))
print "{0} Wikipedia articles".format(len(corpus_bow))
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to count tokens.
# token_count[n] should store the number of ocurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
token_count[x[0]] += x[1]
# Sort by decreasing number of occurences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
print D[ids_sorted[0]]
print "{0} times in the whole corpus".format(tf_sorted[0])
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Token rank')
plt.ylabel('Total number of occurrences (log scale)')
plt.title('Token distribution')
plt.show()
# scode: <WRITE YOUR CODE HERE>
# Example data
cold_tokens = [D[i] for i in ids_sorted if token_count[i] == 1]
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
# scode: <WRITE YOUR CODE HERE>
# SORTED TOKEN FREQUENCIES (I):
# Count the number of occurrences of each token.
token_count2 = np.zeros(n_tokens)
for x in corpus_bow_flat:
token_count2[x[0]] += (x[1]>0)
# Sort by decreasing number of occurences
ids_sorted2 = np.argsort(- token_count2)
tf_sorted2 = token_count2[ids_sorted2]
# SORTED TOKEN FREQUENCIES (II):
# Example data
n_bins = 25
hot_tokens2 = [D[i] for i in ids_sorted2[n_bins-1::-1]]
y_pos2 = np.arange(len(hot_tokens2))
z2 = tf_sorted2[n_bins-1::-1]/n_art
plt.barh(y_pos2, z2, align='center', alpha=0.4)
plt.yticks(y_pos2, hot_tokens2)
plt.xlabel('Fraction of articles containing the token')
plt.title('Token distribution')
plt.show()
tfidf = gensim.models.TfidfModel(corpus_bow)
doc_bow = [(0, 1), (1, 1)]
tfidf[doc_bow]
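# gensim's default TfidfModel weights token t in document d as
#   tfidf(t, d) = tf(t, d) * log2(N_docs / df(t)),
# and then L2-normalizes each document vector.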
corpus_tfidf = tfidf[corpus_bow]
print corpus_tfidf[0][0:5]
# Initialize an LSI transformation
n_topics = 5
# scode: lsi = <FILL IN>
lsi = gensim.models.LsiModel(corpus_tfidf, id2word=D, num_topics=n_topics)
lsi.show_topics(num_topics=-1, num_words=10, log=False, formatted=True)
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 25
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = lsi.show_topic(i, topn=n_bins)
tokens = [t[0] for t in topic_i]
weights = [t[1] for t in topic_i]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
# On real corpora, target dimensionality of
# 200–500 is recommended as a “golden standard”
# Create a double wrapper over the original
# corpus bow tfidf fold-in-lsi
corpus_lsi = lsi[corpus_tfidf]
print corpus_lsi[0]
# Extract weights from corpus_lsi
# scode weight0 = <FILL IN>
weight0 = [doc[0][1] if doc != [] else -np.inf for doc in corpus_lsi]
# Locate the maximum positive weight
nmax = np.argmax(weight0)
print nmax
print weight0[nmax]
print corpus_lsi[nmax]
# Get topic 0
# scode: topic_0 = <FILL IN>
topic_0 = lsi.show_topic(0, topn=n_bins)
# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of
# occurences of the token in the article.
# scode: token_counts = <FILL IN>
token_counts = [(t[0], corpus_clean[nmax].count(t[0])) for t in topic_0]
print "Topic 0 is:"
print topic_0
print "Token counts:"
print token_counts
ldag = gensim.models.ldamodel.LdaModel(
corpus=corpus_tfidf, id2word=D, num_topics=10, update_every=1, passes=10)
ldag.print_topics()
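# update_every=1 enables online (mini-batch style) updates and passes=10
# runs ten full sweeps over the corpus; held-out fit can be checked with
# ldag.log_perplexity(corpus_tfidf).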
# For testing LDA, you can use the reuters dataset
# X = lda.datasets.load_reuters()
# vocab = lda.datasets.load_reuters_vocab()
# titles = lda.datasets.load_reuters_titles()
X = np.int32(np.zeros((n_art, n_tokens)))
for n, art in enumerate(corpus_bow):
for t in art:
X[n, t[0]] = t[1]
print X.shape
print X.sum()
vocab = D.values()
titles = corpus_titles
# Default parameters:
# model = lda.LDA(n_topics, n_iter=2000, alpha=0.1, eta=0.01, random_state=None, refresh=10)
model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
model.fit(X) # model.fit_transform(X) is also available
topic_word = model.topic_word_ # model.components_ also works
# Show topics...
n_top_words = 8
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
doc_topic = model.doc_topic_
for i in range(10):
print("{} (top topic: {})".format(titles[i], doc_topic[i].argmax()))
# This is to apply the model to a new doc(s)
# doc_topic_test = model.transform(X_test)
# for title, topics in zip(titles_test, doc_topic_test):
# print("{} (top topic: {})".format(title, topics.argmax()))
# Adapted from an example in sklearn site
# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html
# You can try also with the dataset provided by sklearn in
# from sklearn.datasets import fetch_20newsgroups
# dataset = fetch_20newsgroups(shuffle=True, random_state=1,
# remove=('headers', 'footers', 'quotes'))
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print("Loading dataset...")
# scode: data_samples = <FILL IN>
print "*".join(['Esto', 'es', 'un', 'ejemplo'])
data_samples = [" ".join(c) for c in corpus_clean]
print 'Document 0:'
print data_samples[0][0:200], '...'
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
n_features = 1000
n_samples = 2000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print tf[0]  # sparse term-count vector of document 0
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
# scode: lda = <FILL IN>
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=10,
learning_method='online', learning_offset=50., random_state=0)
# doc_topic_prior= 1.0/n_topics, topic_word_prior= 1.0/n_topics)
t0 = time()
corpus_lda = lda.fit_transform(tf)
print corpus_lda[10]/np.sum(corpus_lda[10])
print("done in %0.3fs." % (time() - t0))
print corpus_titles[10]
# print corpus_text[10]
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, 20)
topics = lda.components_
topic_probs = [t/np.sum(t) for t in topics]
# print topic_probs[0]
print -np.sort(-topic_probs[0])
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
n_bins = 50
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(n_topics):
### Plot top 25 tokens for topic i
# Read i-thtopic
# scode: <FILL IN>
topic_i = topic_probs[i]
rank = np.argsort(- topic_i)[0:n_bins]
tokens = [tf_feature_names[r] for r in rank]
weights = [topic_i[r] for r in rank]
# Plot
# scode: <FILL IN>
plt.subplot(1, n_topics, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Top {0} topic weights'.format(n_bins))
plt.title('Topic {0}'.format(i))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Corpus acquisition.
Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the number of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Step5: 2.2. Stemming vs Lemmatization
Step6: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
Step7: Task
Step8: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results.
Step9: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Step10: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.
Step11: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
Step12: and a bow representation of a corpus with
Step13: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
Step14: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step15: which appears
Step16: In the following we plot the most frequent terms in the corpus.
Step17: Exercise
Step18: Exercise
Step19: 3. Semantic Analysis
Step20: From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights)
Step21: Or to apply a transformation to a whole corpus
Step22: 3.1. Latent Semantic Indexing (LSI)
Step23: From LSI, we can check both the topic-tokens matrix and the document-topics matrix.
Step24: However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.
Step25: LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any input corpus entered as input to the lsi model.
Step26: Task
Step27: 3.2. Latent Dirichlet Allocation (LDA)
Step28: 3.2.2. LDA using python lda library
Step29: Document-topic distribution
Step30: It allows incremental updates
Step31: Now, we need a dataset to feed the CountVectorizer object, by joining all tokens in corpus_clean into a single string, using a space ' ' as separator.
Step32: Now we are ready to compute the token counts.
Step33: Now we can apply the LDA algorithm.
Step34: Task
Step35: Exercise
|
4,644
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
4,645
|
<ASSISTANT_TASK:>
Python Code:
import unicodedata
import string
import re
import random
import time
import datetime
import math
import socket
hostname = socket.gethostname()
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence#, masked_cross_entropy
from masked_cross_entropy import *
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
%matplotlib inline
USE_CUDA = True
PAD_token = 0
SOS_token = 1
EOS_token = 2
class Lang:
def __init__(self, name):
self.name = name
self.trimmed = False
self.word2index = {}
self.word2count = {}
self.index2word = {0: "PAD", 1: "SOS", 2: "EOS"}
self.n_words = 3 # Count default tokens
def index_words(self, sentence):
for word in sentence.split(' '):
self.index_word(word)
def index_word(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
# Remove words below a certain count threshold
def trim(self, min_count):
if self.trimmed: return
self.trimmed = True
keep_words = []
for k, v in self.word2count.items():
if v >= min_count:
keep_words.append(k)
print('keep_words %s / %s = %.4f' % (
len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
))
# Reinitialize dictionaries
self.word2index = {}
self.word2count = {}
self.index2word = {0: "PAD", 1: "SOS", 2: "EOS"}
self.n_words = 3 # Count default tokens
for word in keep_words:
self.index_word(word)
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalize_string(s):
s = unicode_to_ascii(s.lower().strip())
s = re.sub(r"([,.!?])", r" \1 ", s)
s = re.sub(r"[^a-zA-Z,.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
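# Quick illustrative check of the normalization above (the expected output is
# an assumption following the rules: accents stripped, punctuation spaced out):
# normalize_string("Elle est   très jolie!") -> "elle est tres jolie !"
print(normalize_string("Elle est   très jolie!"))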
def read_langs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
# filename = '../data/%s-%s.txt' % (lang1, lang2)
filename = '../%s-%s.txt' % (lang1, lang2)
lines = open(filename).read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalize_string(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
MIN_LENGTH = 3
MAX_LENGTH = 25
def filter_pairs(pairs):
filtered_pairs = []
for pair in pairs:
if len(pair[0]) >= MIN_LENGTH and len(pair[0]) <= MAX_LENGTH \
and len(pair[1]) >= MIN_LENGTH and len(pair[1]) <= MAX_LENGTH:
filtered_pairs.append(pair)
return filtered_pairs
def prepare_data(lang1_name, lang2_name, reverse=False):
input_lang, output_lang, pairs = read_langs(lang1_name, lang2_name, reverse)
print("Read %d sentence pairs" % len(pairs))
pairs = filter_pairs(pairs)
print("Filtered to %d pairs" % len(pairs))
print("Indexing words...")
for pair in pairs:
input_lang.index_words(pair[0])
output_lang.index_words(pair[1])
print('Indexed %d words in input language, %d words in output' % (input_lang.n_words, output_lang.n_words))
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepare_data('eng', 'fra', True)
MIN_COUNT = 5
input_lang.trim(MIN_COUNT)
output_lang.trim(MIN_COUNT)
keep_pairs = []
for pair in pairs:
input_sentence = pair[0]
output_sentence = pair[1]
keep_input = True
keep_output = True
for word in input_sentence.split(' '):
if word not in input_lang.word2index:
keep_input = False
break
for word in output_sentence.split(' '):
if word not in output_lang.word2index:
keep_output = False
break
# Remove if pair doesn't match input and output conditions
if keep_input and keep_output:
keep_pairs.append(pair)
print("Trimmed from %d pairs to %d, %.4f of total" % (len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs)))
pairs = keep_pairs
# Return a list of indexes, one for each word in the sentence, plus EOS
def indexes_from_sentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')] + [EOS_token]
# Pad a with the PAD symbol
def pad_seq(seq, max_length):
seq += [PAD_token for i in range(max_length - len(seq))]
return seq
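# Illustrative example of the padding helper (PAD_token is 0, defined above):
# pad_seq([4, 5, 2], 5) -> [4, 5, 2, 0, 0]
assert pad_seq([4, 5, 2], 5) == [4, 5, 2, 0, 0]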
def random_batch(batch_size):
input_seqs = []
target_seqs = []
# Choose random pairs
for i in range(batch_size):
pair = random.choice(pairs)
input_seqs.append(indexes_from_sentence(input_lang, pair[0]))
target_seqs.append(indexes_from_sentence(output_lang, pair[1]))
# Zip into pairs, sort by length (descending), unzip
seq_pairs = sorted(zip(input_seqs, target_seqs), key=lambda p: len(p[0]), reverse=True)
input_seqs, target_seqs = zip(*seq_pairs)
# For input and target sequences, get array of lengths and pad with 0s to max length
input_lengths = [len(s) for s in input_seqs]
input_padded = [pad_seq(s, max(input_lengths)) for s in input_seqs]
target_lengths = [len(s) for s in target_seqs]
target_padded = [pad_seq(s, max(target_lengths)) for s in target_seqs]
# Turn padded arrays into (batch_size x max_len) tensors, transpose into (max_len x batch_size)
input_var = Variable(torch.LongTensor(input_padded)).transpose(0, 1)
target_var = Variable(torch.LongTensor(target_padded)).transpose(0, 1)
if USE_CUDA:
input_var = input_var.cuda()
target_var = target_var.cuda()
return input_var, input_lengths, target_var, target_lengths
random_batch(2)
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1, dropout=0.1):
super(EncoderRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=self.dropout, bidirectional=True)
def forward(self, input_seqs, input_lengths, hidden=None):
# Note: we run this all at once (over multiple batches of multiple sequences)
embedded = self.embedding(input_seqs)
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
outputs, hidden = self.gru(packed, hidden)
outputs, output_lengths = torch.nn.utils.rnn.pad_packed_sequence(outputs) # unpack (back to padded)
outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:] # Sum bidirectional outputs
return outputs, hidden
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.v = nn.Parameter(torch.FloatTensor(1, hidden_size))
def forward(self, hidden, encoder_outputs):
max_len = encoder_outputs.size(0)
this_batch_size = encoder_outputs.size(1)
# Create variable to store attention energies
attn_energies = Variable(torch.zeros(this_batch_size, max_len)) # B x S
if USE_CUDA:
attn_energies = attn_energies.cuda()
# For each batch of encoder outputs
for b in range(this_batch_size):
# Calculate energy for each encoder output
for i in range(max_len):
attn_energies[b, i] = self.score(hidden[:, b], encoder_outputs[i, b].unsqueeze(0))
# Normalize energies to weights in range 0 to 1, resize to 1 x B x S
return F.softmax(attn_energies).unsqueeze(1)
def score(self, hidden, encoder_output):
if self.method == 'dot':
energy = hidden.dot(encoder_output)
return energy
elif self.method == 'general':
energy = self.attn(encoder_output)
energy = hidden.dot(energy)
return energy
elif self.method == 'concat':
energy = self.attn(torch.cat((hidden, encoder_output), 1))
energy = self.v.dot(energy)
return energy
class BahdanauAttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
super(BahdanauAttnDecoderRNN, self).__init__()
# Define parameters
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
self.max_length = max_length
# Define layers
self.embedding = nn.Embedding(output_size, hidden_size)
self.dropout = nn.Dropout(dropout_p)
self.attn = Attn('concat', hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout_p)
self.out = nn.Linear(hidden_size, output_size)
def forward(self, word_input, last_hidden, encoder_outputs):
# Note: we run this one step at a time
# TODO: FIX BATCHING
# Get the embedding of the current input word (last output word)
word_embedded = self.embedding(word_input).view(1, 1, -1) # S=1 x B x N
word_embedded = self.dropout(word_embedded)
# Calculate attention weights and apply to encoder outputs
attn_weights = self.attn(last_hidden[-1], encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # B x 1 x N
context = context.transpose(0, 1) # 1 x B x N
# Combine embedded input word and attended context, run through RNN
rnn_input = torch.cat((word_embedded, context), 2)
output, hidden = self.gru(rnn_input, last_hidden)
# Final output layer
output = output.squeeze(0) # B x N
output = F.log_softmax(self.out(torch.cat((output, context), 1)))
# Return final output, hidden state, and attention weights (for visualization)
return output, hidden, attn_weights
class LuongAttnDecoderRNN(nn.Module):
def __init__(self, attn_model, hidden_size, output_size, n_layers=1, dropout=0.1):
super(LuongAttnDecoderRNN, self).__init__()
# Keep for reference
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout = dropout
# Define layers
self.embedding = nn.Embedding(output_size, hidden_size)
self.embedding_dropout = nn.Dropout(dropout)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout)
self.concat = nn.Linear(hidden_size * 2, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
# Choose attention model
if attn_model != 'none':
self.attn = Attn(attn_model, hidden_size)
def forward(self, input_seq, last_hidden, encoder_outputs):
# Note: we run this one step at a time
# Get the embedding of the current input word (last output word)
batch_size = input_seq.size(0)
embedded = self.embedding(input_seq)
embedded = self.embedding_dropout(embedded)
embedded = embedded.view(1, batch_size, self.hidden_size) # S=1 x B x N
# Get current hidden state from input word and last hidden state
rnn_output, hidden = self.gru(embedded, last_hidden)
# Calculate attention from current RNN state and all encoder outputs;
# apply to encoder outputs to get weighted average
attn_weights = self.attn(rnn_output, encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # B x S=1 x N
# Attentional vector using the RNN hidden state and context vector
# concatenated together (Luong eq. 5)
rnn_output = rnn_output.squeeze(0) # S=1 x B x N -> B x N
context = context.squeeze(1) # B x S=1 x N -> B x N
concat_input = torch.cat((rnn_output, context), 1)
concat_output = F.tanh(self.concat(concat_input))
# Finally predict next token (Luong eq. 6, without softmax)
output = self.out(concat_output)
# Return final output, hidden state, and attention weights (for visualization)
return output, hidden, attn_weights
small_batch_size = 3
input_batches, input_lengths, target_batches, target_lengths = random_batch(small_batch_size)
print('input_batches', input_batches.size()) # (max_len x batch_size)
print('target_batches', target_batches.size()) # (max_len x batch_size)
small_hidden_size = 8
small_n_layers = 2
encoder_test = EncoderRNN(input_lang.n_words, small_hidden_size, small_n_layers)
decoder_test = LuongAttnDecoderRNN('general', small_hidden_size, output_lang.n_words, small_n_layers)
if USE_CUDA:
encoder_test.cuda()
decoder_test.cuda()
encoder_outputs, encoder_hidden = encoder_test(input_batches, input_lengths, None)
print('encoder_outputs', encoder_outputs.size()) # max_len x batch_size x hidden_size
print('encoder_hidden', encoder_hidden.size()) # n_layers * 2 x batch_size x hidden_size
max_target_length = max(target_lengths)
# Prepare decoder input and outputs
decoder_input = Variable(torch.LongTensor([SOS_token] * small_batch_size))
decoder_hidden = encoder_hidden[:decoder_test.n_layers] # Use last (forward) hidden state from encoder
all_decoder_outputs = Variable(torch.zeros(max_target_length, small_batch_size, decoder_test.output_size))
if USE_CUDA:
all_decoder_outputs = all_decoder_outputs.cuda()
decoder_input = decoder_input.cuda()
# Run through decoder one time step at a time
for t in range(max_target_length):
decoder_output, decoder_hidden, decoder_attn = decoder_test(
decoder_input, decoder_hidden, encoder_outputs
)
all_decoder_outputs[t] = decoder_output # Store this step's outputs
decoder_input = target_batches[t] # Next input is current target
# Test masked cross entropy loss
loss = masked_cross_entropy(
all_decoder_outputs.transpose(0, 1).contiguous(),
target_batches.transpose(0, 1).contiguous(),
target_lengths
)
print('loss', loss.data[0])
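# masked_cross_entropy above comes from the external masked_cross_entropy
# module. A minimal sketch of the idea, as an assumption about what it does
# (not that module's actual code): compute per-token log-likelihoods, zero out
# padded positions with a 0/1 mask built from the true lengths, and average
# over the real tokens only.
def masked_cross_entropy_sketch(logits, target, lengths):
    # logits: (batch, max_len, vocab), target: (batch, max_len), lengths: list of ints
    batch, max_len, vocab = logits.size()
    log_probs = F.log_softmax(logits.view(-1, vocab)).view(batch, max_len, vocab)
    token_nll = -log_probs.gather(2, target.unsqueeze(2)).squeeze(2)  # (batch, max_len)
    mask = torch.zeros(batch, max_len)
    for b, length in enumerate(lengths):
        mask[b, :length] = 1
    mask = Variable(mask)
    if USE_CUDA: mask = mask.cuda()
    return (token_nll * mask).sum() / mask.sum()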
def train(input_batches, input_lengths, target_batches, target_lengths, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
# Zero gradients of both optimizers
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
loss = 0 # Added onto for each word
# Run words through encoder
encoder_outputs, encoder_hidden = encoder(input_batches, input_lengths, None)
# Prepare input and output variables
decoder_input = Variable(torch.LongTensor([SOS_token] * batch_size))
decoder_hidden = encoder_hidden[:decoder.n_layers] # Use last (forward) hidden state from encoder
max_target_length = max(target_lengths)
all_decoder_outputs = Variable(torch.zeros(max_target_length, batch_size, decoder.output_size))
# Move new Variables to CUDA
if USE_CUDA:
decoder_input = decoder_input.cuda()
all_decoder_outputs = all_decoder_outputs.cuda()
# Run through decoder one time step at a time
for t in range(max_target_length):
decoder_output, decoder_hidden, decoder_attn = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
all_decoder_outputs[t] = decoder_output
decoder_input = target_batches[t] # Next input is current target
# Loss calculation and backpropagation
loss = masked_cross_entropy(
all_decoder_outputs.transpose(0, 1).contiguous(), # -> batch x seq
target_batches.transpose(0, 1).contiguous(), # -> batch x seq
target_lengths
)
loss.backward()
# Clip gradient norms
ec = torch.nn.utils.clip_grad_norm(encoder.parameters(), clip)
dc = torch.nn.utils.clip_grad_norm(decoder.parameters(), clip)
# Update parameters with optimizers
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0], ec, dc
# Configure models
attn_model = 'dot'
hidden_size = 500
n_layers = 2
dropout = 0.1
batch_size = 50
# Configure training/optimization
clip = 50.0
teacher_forcing_ratio = 0.5
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_epochs = 50000
epoch = 0
plot_every = 20
print_every = 100
evaluate_every = 1000
# Initialize models
encoder = EncoderRNN(input_lang.n_words, hidden_size, n_layers, dropout=dropout)
decoder = LuongAttnDecoderRNN(attn_model, hidden_size, output_lang.n_words, n_layers, dropout=dropout)
# Initialize optimizers and criterion
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
criterion = nn.CrossEntropyLoss()
# Move models to GPU
if USE_CUDA:
encoder.cuda()
decoder.cuda()
import sconce
job = sconce.Job('seq2seq-translate', {
'attn_model': attn_model,
'n_layers': n_layers,
'dropout': dropout,
'hidden_size': hidden_size,
'learning_rate': learning_rate,
'clip': clip,
'teacher_forcing_ratio': teacher_forcing_ratio,
'decoder_learning_ratio': decoder_learning_ratio,
})
job.plot_every = plot_every
job.log_every = print_every
# Keep track of time elapsed and running averages
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
def as_minutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def time_since(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (as_minutes(s), as_minutes(rs))
def evaluate(input_seq, max_length=MAX_LENGTH):
input_lengths = [len(input_seq)]
input_seqs = [indexes_from_sentence(input_lang, input_seq)]
input_batches = Variable(torch.LongTensor(input_seqs), volatile=True).transpose(0, 1)
if USE_CUDA:
input_batches = input_batches.cuda()
# Set to not-training mode to disable dropout
encoder.train(False)
decoder.train(False)
# Run through encoder
encoder_outputs, encoder_hidden = encoder(input_batches, input_lengths, None)
# Create starting vectors for decoder
decoder_input = Variable(torch.LongTensor([SOS_token]), volatile=True) # SOS
decoder_hidden = encoder_hidden[:decoder.n_layers] # Use last (forward) hidden state from encoder
if USE_CUDA:
decoder_input = decoder_input.cuda()
# Store output words and attention states
decoded_words = []
decoder_attentions = torch.zeros(max_length + 1, max_length + 1)
# Run through decoder
for di in range(max_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
decoder_attentions[di,:decoder_attention.size(2)] += decoder_attention.squeeze(0).squeeze(0).cpu().data
# Choose top word from output
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[ni])
# Next input is chosen word
decoder_input = Variable(torch.LongTensor([ni]))
if USE_CUDA: decoder_input = decoder_input.cuda()
# Set back to training mode
encoder.train(True)
decoder.train(True)
return decoded_words, decoder_attentions[:di+1, :len(encoder_outputs)]
def evaluate_randomly():
[input_sentence, target_sentence] = random.choice(pairs)
evaluate_and_show_attention(input_sentence, target_sentence)
import io
import torchvision
from PIL import Image
import visdom
vis = visdom.Visdom()
def show_plot_visdom():
buf = io.BytesIO()
plt.savefig(buf)
buf.seek(0)
attn_win = 'attention (%s)' % hostname
vis.image(torchvision.transforms.ToTensor()(Image.open(buf)), win=attn_win, opts={'title': attn_win})
def show_attention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
show_plot_visdom()
plt.show()
plt.close()
def evaluate_and_show_attention(input_sentence, target_sentence=None):
output_words, attentions = evaluate(input_sentence)
output_sentence = ' '.join(output_words)
print('>', input_sentence)
if target_sentence is not None:
print('=', target_sentence)
print('<', output_sentence)
show_attention(input_sentence, output_words, attentions)
# Show input, target, output text in visdom
    win = 'evaluated (%s)' % hostname
text = '<p>> %s</p><p>= %s</p><p>< %s</p>' % (input_sentence, target_sentence, output_sentence)
vis.text(text, win=win, opts={'title': win})
# Begin!
ecs = []
dcs = []
eca = 0
dca = 0
while epoch < n_epochs:
epoch += 1
# Get training data for this cycle
input_batches, input_lengths, target_batches, target_lengths = random_batch(batch_size)
# Run the train function
loss, ec, dc = train(
input_batches, input_lengths, target_batches, target_lengths,
encoder, decoder,
encoder_optimizer, decoder_optimizer, criterion
)
# Keep track of loss
print_loss_total += loss
plot_loss_total += loss
eca += ec
dca += dc
job.record(epoch, loss)
if epoch % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print_summary = '%s (%d %d%%) %.4f' % (time_since(start, epoch / n_epochs), epoch, epoch / n_epochs * 100, print_loss_avg)
print(print_summary)
if epoch % evaluate_every == 0:
evaluate_randomly()
if epoch % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
# TODO: Running average helper
ecs.append(eca / plot_every)
dcs.append(dca / plot_every)
ecs_win = 'encoder grad (%s)' % hostname
dcs_win = 'decoder grad (%s)' % hostname
vis.line(np.array(ecs), win=ecs_win, opts={'title': ecs_win})
vis.line(np.array(dcs), win=dcs_win, opts={'title': dcs_win})
eca = 0
dca = 0
def show_plot(points):
plt.figure()
fig, ax = plt.subplots()
loc = ticker.MultipleLocator(base=0.2) # put ticks at regular intervals
ax.yaxis.set_major_locator(loc)
plt.plot(points)
show_plot(plot_losses)
output_words, attentions = evaluate("je suis trop froid .")
plt.matshow(attentions.numpy())
show_plot_visdom()
evaluate_and_show_attention("elle a cinq ans de moins que moi .")
evaluate_and_show_attention("elle est trop petit .")
evaluate_and_show_attention("je ne crains pas de mourir .")
evaluate_and_show_attention("c est un jeune directeur plein de talent .")
evaluate_and_show_attention("est le chien vert aujourd hui ?")
evaluate_and_show_attention("le chat me parle .")
evaluate_and_show_attention("des centaines de personnes furent arretees ici .")
evaluate_and_show_attention("des centaines de chiens furent arretees ici .")
evaluate_and_show_attention("ce fromage est prepare a partir de lait de chevre .")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we will also define a constant to decide whether to use the GPU (with CUDA specifically) or the CPU. If you don't have a GPU, set this to False. Later when we create tensors, this variable will be used to decide whether we keep them on CPU or move them to GPU.
Step2: Loading data files
Step3: Reading and decoding files
Step4: To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the reverse flag to reverse the pairs.
Step5: The full process for preparing the data is
Step6: Filtering vocabularies
Step7: Filtering pairs
Step8: Turning training data into Tensors
Step9: We can make better use of the GPU by training on batches of many sequences at once, but doing so brings up the question of how to deal with sequences of varying lengths. The simple solution is to "pad" the shorter sentences with some padding symbol (in this case 0), and ignore these padded spots when calculating the loss.
Step10: To create a Variable for a full batch of inputs (and targets) we get a random sample of sequences and pad them all to the length of the longest sequence. We'll keep track of the lengths of each batch in order to un-pad later.
Step11: We can test this to see that it will return a (max_len x batch_size) tensor for input and target sentences, along with a corresponding list of batch lengths for each (which we will use for masking later).
Step12: Building the models
Step13: Attention Decoder
Step14: Implementing the Bahdanau et al. model
Step15: Now we can build a decoder that plugs this Attn module in after the RNN to calculate attention weights, and apply those weights to the encoder outputs to get a context vector.
Step16: Testing the models
Step17: Create models with a small size (a good idea for eyeball inspection)
Step18: To test the encoder, run the input batch through to get per-batch encoder outputs
Step19: Then starting with a SOS token, run word tokens through the decoder to get each next word token. Instead of doing this with the whole sequence, it is done one at a time, to support using its own predictions to make the next prediction. This will be one time step at a time, but batched per time step. In order to get this to work for short padded sequences, the batch size is going to get smaller each time.
Step20: Training
Step21: Running training
Step22: Plus helper functions to print time elapsed and estimated time remaining, given the current time and progress.
Step23: Evaluating the network
Step24: We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements
Step25: Visualizing attention
Step26: For a better viewing experience we will do the extra work of adding axes and labels
Step27: Putting it all together
Step28: Plotting training loss
|
4,646
|
<ASSISTANT_TASK:>
Python Code:
As_soon_as_I_become_a_good_programmer_I_will_be_rich=False
# Let us test it :
if As_soon_as_I_become_a_good_programmer_I_will_be_rich: print("Then let us start programming !")
else: print('Do Algorithmics !')
## Check if the French word "ressasser" is a palindrome
# We first declare some variables
word_to_test='ressasser'
# We need to define the indexes in the chain of characters
index_head=0
index_tail=len(word_to_test)-1
# Now we are going to print
print(word_to_test)
print("The lenght of the word is: {}".format(len(word_to_test)))
print("The first letter is: {}".format(word_to_test[index_head]))
print("The last letter: {}".format(word_to_test[index_tail]))
if (word_to_test[index_head]==word_to_test[index_tail]):
if index_head < index_tail:
print("Untill now it seems a palindroma")
index_head=index_head+1
index_tail=index_tail-1
print(index_head)
print(index_tail)
else: print("The word '{}' is a palindroma".format(word_to_test))
else: print("The word '{}' is NOT a palindroma".format(word_to_test))
# Give a number and find the first guess
print('Enter the number y, from whom you are asked to find the square root')
y=eval(input('y = '))
print('Give a first guess for x, called g')
g=eval(input('g = '))
print("End of step 1")
print("(The square product of {0} is {1})".format(g,g*g))
print("The square of the average of the guess and y divided by guess is:")
print((0.5*(g+y/g))**2)
print("How accurate is this result ?")
print("Well, the square of your guess is {}".format(g*g))
if g*g == y:
print("Whoah, it is a good guess")
print("The solution is:")
print((0.5*(g+y/g)))
else:
g=0.5*(g+y/g)
print("Sorry")
print("Try again, with, g={} as new guess".format(g))
print("The average of the new g and y divided by this new g is:")
print(0.5*(g+y/g))
#Check a number
nb = eval(input('Enter a number = '))
if nb < 0:
print('{} is a negative number'.format(nb))
elif nb > 0:
print('{} is a positive number'.format(nb))
else:
print('{} is a null'.format(nb))
# Leap years occur every 4 years, except century ("secular") years, which are
# leap only if they are multiples of 400
n = eval(input('Enter a year = '))
if n % 4 != 0:
    print('Year {} is NOT a leap year'.format(n))
elif n % 100 != 0:
    print('Year {} is a leap year'.format(n))
elif n % 400 != 0:
    print('Year {} is NOT a leap year'.format(n))
else:
    print('Year {} is a leap year'.format(n))
if ((n % 4 == 0) and (n % 100 != 0)) or (n % 400 == 0): print('Year {} is a leap year'.format(n))
else: print('Year {} is NOT a leap year'.format(n))
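# The same test packed into a reusable function (a convenience rewrite of the
# condition above, nothing new):
def is_leap(year):
    return (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0)

print(is_leap(2000), is_leap(1900), is_leap(2024))  # True False True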
a=6
print(~a)
# Python has no dedicated operator for 'implication', but it can be built
# from the basic boolean operators:
# p implies q means that
# p is a sufficient condition for q to be true
# p --> q is equivalent to (not(p) or q)
Je_fais_de_longues_etudes=True
Je_deviens_bon=False
# Je_fais_de_longues_etudes --> Je_deviens_bon
# This can be writen as the following proposition :
(not(Je_fais_de_longues_etudes) or (Je_deviens_bon))
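# Sanity check: enumerate all truth values of p and q and print the value of
# the implication (not(p) or q) -- it is False only for p=True, q=False.
for p in (False, True):
    for q in (False, True):
        print(p, q, '->', (not p) or q)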
a=10
b=3
c=a // b
d=a % b
print(a)
print(type(a))
print(b)
print(type(b))
print(c)
print(type(c))
print(d)
print(type(d))
'y'*3
a='Je ne dois pas consulter Facebook en cours' + '//'
a*5
a = 5
b = 20
(a < b)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1) Declarative Knowledge on Palindromes
Step2: 2.1) Declarative Knowledge on square roots
Step3: 3) Conditional Instructions
Step4: 4) Usual operators
Step5: 4.3) Arithmetic operators
Step6: 4.4) Operations on lists and characters
|
4,647
|
<ASSISTANT_TASK:>
Python Code:
from bqplot import (DateScale, ColorScale, HeatMap,
Figure, LinearScale, OrdinalScale, Axis)
from scipy.stats import percentileofscore
from scipy.interpolate import interp1d
import bqplot.pyplot as plt
from traitlets import List, Float, observe
from ipywidgets import IntRangeSlider, Layout, VBox, HBox, jslink
from pandas import DatetimeIndex
import numpy as np
import pandas as pd
def quantile_space(x, q1=0.1, q2=0.9):
'''
Returns a function that squashes quantiles between q1 and q2
'''
q1_x, q2_x = np.percentile(x, [q1, q2])
qs = np.percentile(x, np.linspace(0, 100, 100))
def get_quantile(t):
return np.interp(t, qs, np.linspace(0, 100, 100))
def f(y):
return np.interp(get_quantile(y), [0, q1, q2, 100], [-1, 0, 0, 1])
return f
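# Illustrative use of quantile_space (the names here are just for this demo):
# values inside the q1..q2 percentile band map to 0, the tails stretch to -1/+1.
squash = quantile_space(np.random.randn(1000), q1=10, q2=90)
print(squash(0.0), squash(3.0))  # ~0 near the median, close to +1 deep in the right tail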
class DNA(VBox):
colors = List()
q1 = Float()
q2 = Float()
def __init__(self, data, **kwargs):
self.data = data
date_x, date_y = False, False
transpose = kwargs.pop('transpose', False)
if transpose is True:
            # x comes from the columns and y from the index in the transposed
            # layout, so only flag the date scales here; the scales themselves
            # are built below from date_x / date_y
            if type(data.index) is DatetimeIndex:
                date_y = True
            if type(data.columns) is DatetimeIndex:
                date_x = True
x, y = list(data.columns.values), data.index.values
else:
if type(data.index) is DatetimeIndex:
date_x = True
if type(data.columns) is DatetimeIndex:
date_y = True
x, y = data.index.values, list(data.columns.values)
self.q1, self.q2 = kwargs.pop('quantiles', (1, 99))
self.quant_func = quantile_space(self.data.values.flatten(), q1=self.q1, q2=self.q2)
self.colors = kwargs.pop('colors', ['Red', 'Black', 'Green'])
self.x_scale = DateScale() if date_x is True else LinearScale()
self.y_scale = DateScale() if date_y is True else OrdinalScale(padding_y=0)
self.color_scale = ColorScale(colors=self.colors)
self.heat_map = HeatMap(color=self.quant_func(self.data.T), x=x, y=y, scales={'x': self.x_scale, 'y': self.y_scale,
'color': self.color_scale})
self.x_ax = Axis(scale=self.x_scale)
self.y_ax = Axis(scale=self.y_scale, orientation='vertical')
show_axes = kwargs.pop('show_axes', True)
self.axes = [self.x_ax, self.y_ax] if show_axes is True else []
self.height = kwargs.pop('height', '800px')
self.layout = kwargs.pop('layout', Layout(width='100%', height=self.height, flex='1'))
self.fig_margin = kwargs.pop('fig_margin', {'top': 60, 'bottom': 60, 'left': 150, 'right': 0})
kwargs.setdefault('padding_y', 0.0)
self.create_interaction(**kwargs)
self.figure = Figure(marks=[self.heat_map], axes=self.axes, fig_margin=self.fig_margin,
layout=self.layout, min_aspect_ratio=0.,**kwargs)
super(VBox, self).__init__(children=[self.range_slider, self.figure], layout=Layout(align_items='center',
width='100%',
height='100%'),
**kwargs)
def create_interaction(self, **kwargs):
self.range_slider = IntRangeSlider(description='Filter Range', value=(self.q1, self.q2), layout=Layout(width='100%'))
self.range_slider.observe(self.slid_changed, 'value')
self.observe(self.changed, ['q1', 'q2'])
def slid_changed(self, new):
self.q1 = self.range_slider.value[0]
self.q2 = self.range_slider.value[1]
def changed(self, new):
self.range_slider.value = (self.q1, self.q2)
self.quant_func = quantile_space(self.data.values.flatten(), q1=self.q1, q2=self.q2)
self.heat_map.color = self.quant_func(self.data.T)
def get_filtered_df(self, fill_type='median'):
q1_x, q2_x = np.percentile(self.data, [self.q1, self.q2])
if fill_type == 'median':
return self.data[(self.data >= q1_x) & (self.data <= q2_x)].apply(lambda x: x.fillna(x.median()))
elif fill_type == 'mean':
return self.data[(self.data >= q1_x) & (self.data <= q2_x)].apply(lambda x: x.fillna(x.mean()))
else:
raise ValueError("fill_type must be one of ('median', 'mean')")
size = 100
def num_to_col_letters(num):
letters = ''
while num:
mod = (num - 1) % 26
letters += chr(mod + 65)
num = (num - 1) // 26
return ''.join(reversed(letters))
letters = []
for i in range(1, size+1):
letters.append(num_to_col_letters(i))
data = pd.DataFrame(np.random.randn(size, size), columns=letters)
data_dna = DNA(data, title='DNA of our Data', height='1400px', colors=['Red', 'White', 'Green'])
data_dna
data_dna.q1, data_dna.q2 = 5, 95
data_clean = data_dna.get_filtered_df()
data_mean = data_dna.get_filtered_df(fill_type='mean')
DNA(data_clean, title='Cleaned Data', height='1200px', colors=['Red', 'White', 'Green'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We define the size of our matrix here. Larger matrices require a larger height.
Step2: Instead of setting the quantiles by the sliders, we can also set them programatically. Using a range of (5, 95) restricts the data considerably.
Step3: Now, we can use the convenience function to extract a clean DataFrame.
Step4: The DNA fills outliers with the median of the column by default. Alternatively, we can fill the outliers with the mean.
Step5: We can also visualize the new DataFrame the same way to test how our outliers look now.
|
4,648
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import geopandas as gpd
import os
import numpy as np
import pandas as pd
#filename = os.path.join("/media", "disk", "tl_2013_17031_edges", "tl_2013_17031_edges.shp")
filename = os.path.join("..", "..", "..", "..", "..", "Data", "tl_2013_17031_edges", "tl_2013_17031_edges.shp")
edges = gpd.read_file(filename)
# Slightly curious, if you look it up. But lon/lat
edges.crs
edges.ix[12]
want = {"geometry", "FULLNAME", "LFROMADD", "LTOADD", "RFROMADD", "RTOADD"}
edges = gpd.GeoDataFrame({key:edges[key] for key in want})
edges.crs={'init': 'epsg:4269'}
edges.head()
import open_cp.sources.chicago as chicago
frame = chicago.load_to_geoDataFrame().dropna()
frame = frame.to_crs({'init': 'epsg:3435', "preserve_units":True})
edges = edges.to_crs({'init': 'epsg:3435', "preserve_units":True})
xcs = frame.geometry.map(lambda pt : pt.coords[0][0])
ycs = frame.geometry.map(lambda pt : pt.coords[0][1])
import shapely.geometry
import numpy as np
frame.geometry = [shapely.geometry.Point(np.round(x), np.round(y)) for x,y in zip(xcs, ycs)]
one = frame[frame.address == frame.address.unique()[0]].copy()
one.head()
def find_match_via_distance(point):
dist = edges.geometry.distance(point)
    return edges.ix[dist.argmin()]
def via_distance(one):
return [ find_match_via_distance(point).name for point in one.geometry ]
one["edge_index"] = via_distance(one)
one.head()
def find_match_via_intersection(point):
possibles = edges[edges.geometry.intersects( point.buffer(0.001) )]
i = possibles.geometry.distance(point).argmin()
return edges.ix[i]
def via_intersection(one):
return [ find_match_via_intersection(point).name
for point in one.geometry ]
one["edge_index1"] = via_intersection(one)
one.head()
np.all(one.edge_index == one.edge_index1)
import rtree
gap = 0.001
def gen():
for i, row in edges.iterrows():
bds = list(row.geometry.bounds)
bds = [bds[0]-gap, bds[1]-gap, bds[2]+gap, bds[3]+gap]
yield i, bds, None
idx = rtree.index.Index(gen())
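# Illustrative query: the R-tree quickly returns the candidate edges whose
# padded bounding boxes contain a point; the exact nearest edge is then found
# by distance among this much smaller candidate set.
pt = frame.geometry.iloc[0]
print(len(edges.ix[list(idx.intersection(pt.coords[0]))]))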
def find_match_via_rtree(point):
possibles = edges.ix[list(idx.intersection(point.coords[0]))]
if len(possibles) == 0:
#raise ValueError("Found no candidates for {}".format(point))
from collections import namedtuple
Error = namedtuple("Error", ["name"])
return Error(name=-1)
i = possibles.geometry.distance(point).argmin()
return edges.ix[i]
def via_rtree(one):
return [ find_match_via_rtree(point).name
for point in one.geometry ]
one["edge_index2"] = via_rtree(one)
one.head()
all(one.edge_index1 == one.edge_index2)
%timeit(via_distance(one))
%timeit(via_intersection(one))
%timeit(via_rtree(one))
for i in frame.index[100:200]:
pt = frame.ix[i].geometry
a = find_match_via_distance(pt).name
b = find_match_via_intersection(pt).name
c = find_match_via_rtree(pt).name
assert a == b
assert b == c
frame["edge_index"] = via_rtree(frame)
frame[frame.edge_index == -1].location.unique()
frame = frame[frame.edge_index != -1].copy()
lookup = pd.DataFrame({"case": frame.case, "edge":frame.edge_index})
lookup.head()
lookup.to_csv("edge_lookup.csv")
lookup = pd.read_csv("edge_lookup.csv", names = ["case", "edge"], header=0)
# Make a lookup. This breaks things for "HOMICIDE" crime, but we don't care.
l = dict()
for _, row in lookup.iterrows():
l[row.case] = row.edge
lookup = l
block = frame.address.unique()[132]
block
edge_ids = set()
for case in frame[frame.address == block].case:
edge_ids.add(lookup[case])
edge_ids
edges.ix[edge_ids]
def get_xy(frame):
x = frame.geometry.map(lambda pt : pt.coords[0][0])
y = frame.geometry.map(lambda pt : pt.coords[0][1])
return x, y
fig, axes = plt.subplots(ncols=2, figsize=(16,8))
for ax in axes:
edges.ix[edge_ids].plot(ax=ax, color="black")
ax.scatter(*get_xy(frame[frame.address == block]), color="blue", marker="x")
axes[1].set_aspect(1)
None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We only care about the columns "geometry" and "FULLNAME" (giving the road name) and LFROMADD, LTOADD, RFROMADD, RTOADD
Step2: Optionally project and round?
Step3: Pick out one block
Step4: Distances
Step5: Intersections
Step6: Via rtree
Step7: Check we always get the same result
Step8: Process the entire dataset
Step9: Some common issues found
|
4,649
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
from cs231n.layers import softmax_loss
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
def compute_saliency_maps(X, y, model):
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
N, _, H, W = X.shape
scores, cache = model.forward(X, mode='test')
dscores = np.zeros_like(scores)
dscores[np.arange(N), y] = 1
dX, _ = model.backward(dscores, cache)
saliency = np.max(dX, axis=1)
return saliency
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
def make_fooling_image(X, target_y, model):
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
N, _, _, _ = X.shape
num_iter = 200
lr_rate = 1e2
X_fooling = np.copy(X)
y = np.zeros((N), dtype=np.int32)
y[:] = target_y
for i in np.arange(num_iter):
scores, cache = model.forward(X_fooling, mode='test')
print scores[:, target_y]
if np.argmax(scores, axis=1) == y:
print 'Fooled yeah in %d iterations' % i
break
_, dscores = softmax_loss(scores, y)
dX, _ = model.backward(dscores, cache)
X_fooling -= lr_rate * dX
return X_fooling
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[0]][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introducing TinyImageNet
Step2: TinyImageNet-100-A classes
Step3: Visualize Examples
Step4: Pretrained model
Step5: Pretrained model performance
Step7: Saliency Maps
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
|
4,650
|
<ASSISTANT_TASK:>
Python Code:
import pandas
nI1 = pandas.read_excel('lab-3-3.xlsx', 'tab-1', header=None)
nI1.head(5)
nI2 = pandas.DataFrame(nI1.values[[0, 5, 6, 7, 8], :])
nI2.head()
nI3 = pandas.DataFrame(nI1.values[[0, 9, 10, 11, 12], :])
nI3.head()
import matplotlib.pyplot
r1, r500, r3000 = nI1.values, nI2.values, nI3.values
matplotlib.pyplot.figure(figsize=(18, 9))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('Resonance curves at $R = 1,\,500,\,3000\,$ Ohm', fontweight='bold')
matplotlib.pyplot.xlabel('$f$, kHz')
matplotlib.pyplot.ylabel('$I_0$, mA')
matplotlib.pyplot.errorbar(r1[0, 1:], r1[4, 1:], xerr=[0.05] * 11, yerr=r1[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.errorbar(r500[0, 1:], r500[4, 1:], xerr=[0.05] * 11, yerr=r500[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.errorbar(r3000[0, 1:], r3000[4, 1:], xerr=[0.05] * 11, yerr=r3000[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.show()
nII = pandas.read_excel('lab-3-3.xlsx', 'tab-2', header=None)
nII.head()
import numpy
f = nII.values
x = f[0, 1:]
y = f[2, 1:]
l = numpy.mean(x * y) / numpy.mean(x ** 2)
dl = ((numpy.mean(x ** 2) * numpy.mean(y ** 2) - (numpy.mean(x * y) ** 2)) / (len(x) * (numpy.mean(x ** 2) ** 2))) ** 0.5
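# Least-squares fit of y = l*x constrained through the origin:
# l = <xy>/<x^2>, and dl is the corresponding standard error estimated from
# the scatter of the data around the fitted line.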
fff = numpy.linspace(0, 10, 100)
matplotlib.pyplot.figure(figsize=(18, 9))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('Resonance frequency as a function of capacitance', fontweight='bold')
matplotlib.pyplot.xlabel('$F$, ms$^2$')
matplotlib.pyplot.ylabel('$C$, nF')
matplotlib.pyplot.errorbar(f[0, 1:], f[2, 1:], xerr=f[0, 1:] * 0.05, yerr=f[3, 1:], fmt='o', c='black', lw=3)
matplotlib.pyplot.plot(fff, l * fff, '--', c='black', lw=2)
matplotlib.pyplot.show()
l * 1000, dl * 1000, 1 / (2 * numpy.pi * (3 * l * 10 ** (-9)) ** 0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Thus, the resonance frequency is approximately $f_p = 6.9$ kHz and does not depend on the resistance. This disagrees with the expected values; most likely, our coil's inductance is larger or smaller than 100 mH.
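The last code cell checks this against the standard LC-circuit relation (a reminder, not part of the original write-up): $f_p = \frac{1}{2\pi\sqrt{LC}}$, evaluated with $C = 3$ nF and the fitted slope as $L$.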
|
4,651
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import scipy.misc
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'images/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# resize to smaller
small_img = scipy.misc.imresize(gray_img, 0.3)
# rescale entries to lie in [0,1]
small_img = small_img.astype("float32")/255
# plot image
plt.imshow(small_img, cmap='gray')
plt.show()
import numpy as np
# TODO: Feel free to modify the numbers here, to try out another filter!
# Please don't change the size of the array ~ :D
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
### do not modify the code below this line ###
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = [filter_1, filter_2, filter_3, filter_4]
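# What these detect (a reading of the weights above, for orientation):
# filter_1 fires on vertical edges going dark -> bright from left to right,
# filter_2 on the opposite transition, and filters 3/4 on the two horizontal
# edge directions (they are the transposes of the first pair).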
# visualize all filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
import matplotlib.cm as cm
# plot image
plt.imshow(small_img, cmap='gray')
# define a neural network with a single convolutional layer with one filter
model = Sequential()
model.add(Convolution2D(1, (4, 4), activation='relu', input_shape=(small_img.shape[0], small_img.shape[1], 1)))
# apply convolutional filter and return output
def apply_filter(img, index, filter_list, ax):
    # set the weights of the filter in the convolutional layer to filter_list[index]
    model.layers[0].set_weights([np.reshape(filter_list[index], (4,4,1,1)), np.array([0])])
# plot the corresponding activation map
ax.imshow(np.squeeze(model.predict(np.reshape(img, (1, img.shape[0], img.shape[1], 1)))), cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# visualize all activation maps
fig = plt.figure(figsize=(20, 20))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
apply_filter(small_img, i, filters, ax)
ax.set_title('Activation Map for Filter %s' % str(i+1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Specify the Filters
Step2: 3. Visualize the Activation Maps for Each Filter
|
4,652
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
%pylab inline
import math
import numpy as np
import copy
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
import mdtraj as md
from mastermsm.trajectory import traj
tr = traj.TimeSeries(top='data/alaTB.gro', traj=['data/protein_only.xtc'])
tr.discretize(states=['A', 'E', 'L'])
tr.find_keys()
tr_rev = copy.deepcopy(tr)
tr_rev.distraj = [x for x in reversed(tr.distraj)]
from mastermsm.msm import msm
msm_alaTB = msm.SuperMSM([tr, tr_rev], sym=False)
lagts = [1, 2, 5, 7, 10, 20, 50, 100]
for lt in lagts:
msm_alaTB.do_msm(lt)
msm_alaTB.msms[lt].do_trans()
#msm_alaTB.msms[i].boots(plot=False)
msm_alaTB.msms[1].do_rate()
print (msm_alaTB.msms[1].rate)
rate_Taylor = msm_alaTB.msms[1].rate
tau_Taylor = msm_alaTB.msms[1].tauK
peq_Taylor = msm_alaTB.msms[1].peqK
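# Illustrative Taylor relation (a sketch, not MasterMSM's internal code): for
# a short lag time dt, T(dt) ~ I + K*dt, so an estimate of the rate matrix is
# K ~ (T(dt) - I) / dt. Assuming the transition matrix is exposed as `.trans`
# (an assumption about mastermsm's attribute naming), with dt = 1 ps:
# K_approx = (msm_alaTB.msms[1].trans - np.identity(3)) / 1.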
msm_alaTB.do_lbrate()
print (msm_alaTB.lbrate)
msm_alaTB.msms[1].do_rate(method='MLPB', init=msm_alaTB.lbrate, \
report=True)
print (msm_alaTB.msms[1].rate)
fig, ax = subplots(1,2, figsize=(10,5))
ax[0].plot(msm_alaTB.msms[1].peqT, 'x-', markersize=16, \
markeredgewidth=2, color='g', markeredgecolor='g', \
markerfacecolor='None', label='T')
ax[0].plot(peq_Taylor, 'o-', markersize=16, markeredgewidth=3, \
color='b', markeredgecolor='b', markerfacecolor='None',\
label='Taylor')
ax[0].plot(msm_alaTB.peqK, 's-', markersize=16,markeredgewidth=2, \
color='r', markeredgecolor='r', markerfacecolor='None', \
label='LB')
ax[0].plot(msm_alaTB.msms[1].peqK, '^-', markersize=16, \
markeredgewidth=2, color='c', markeredgecolor='c', \
markerfacecolor='None', label='MLPB')
ax[0].set_xlim(-0.2,2.2)
ax[0].set_xlabel('states', fontsize=20)
ax[0].set_ylim(5e-3,1)
ax[0].set_xticks([0,1,2])
ax[0].set_xticklabels(["A","E", "L"])
ax[0].set_ylabel('P$_\mathrm{eq}$', fontsize=20)
ax[0].set_yscale('log')
ax[1].plot(msm_alaTB.msms[1].tauT, 'x-', markersize=16, \
markeredgewidth=2, color='g', markeredgecolor='g', \
markerfacecolor='None', label='T')
ax[1].plot(tau_Taylor, 'o-', markersize=16, markeredgewidth=3, \
color='b', markeredgecolor='b', markerfacecolor='None', \
label='Taylor')
ax[1].plot(msm_alaTB.tauK, 's-', markersize=16,markeredgewidth=2, \
color='r', markeredgecolor='r', markerfacecolor='None', \
label='LB')
ax[1].plot(msm_alaTB.msms[1].tauK, '^-', markersize=16, \
markeredgewidth=2, color='c', markeredgecolor='c', \
markerfacecolor='None', label='MLPB')
ax[1].set_ylabel(r'$\tau$ [ps]', fontsize=20)
ax[1].set_xlabel('eigenvalues', fontsize=20)
ax[1].set_xlim(-0.2,1.2)
ax[1].set_ylim(0,100)
ax[1].set_xticks([0,1])
ax[0].legend(loc=3)
ax[1].legend(loc=1)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read the trajectory data, here corresponding to the Gromacs xtc files using the MDtraj library.
Step2: Then we discretize the data using Ramachandran states. Only alpha (A), extended (E) and left-handed (L) configurations are accepted.
Step3: In order to enforce detailed balance we revert the assigned time series and use both trajectories to build the Markov model.
Step4: Using the discretized data we build the MSM invoking the SuperMSM class.
Step5: We then calculate the transition matrix for a set of lag times.
Step6: The rate matrix can be calculated using a simple Taylor expansion. The method produces acceptable solutions for short lag times although the result rapidly diverges from the transition matrix relaxation time at long lag times. The rate matrix elements have the following values
Step7: Alternatively, we can use the lifetime based method, where we use branching probabilities and average lifetimes. The rate is almost identical to that from the Taylor expansion.
Step8: Clearly, both methods yield approximately the same result. Finally, we use Buchete and Hummer's maximum likelihood propagator-based method, which is also implemented in MasterMSM.
Step9: Finally, we compare the eigenvalues and equilibrium probabilities from the three methods with those from the transition probability matrix.
|
4,653
|
<ASSISTANT_TASK:>
Python Code:
%%html
<style>
.example-container { background: #999999; padding: 2px; min-height: 100px; }
.example-container.sm { min-height: 50px; }
.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}
.example-box.med { width: 65px; height: 65px; }
.example-box.lrg { width: 80px; height: 80px; }
</style>
import ipywidgets as widgets
from IPython.display import display
button = widgets.Button(
description='Hello World!',
width=100, # Integers are interpreted as pixel measurements.
height='2em', # em is valid HTML unit of measurement.
color='lime', # Colors can be set by name,
background_color='#0022FF', # and also by color code.
border_color='cyan')
display(button)
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of it's children.
container = widgets.Box()
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container)
int_range = widgets.IntSlider()
container.children=[int_range]
name1 = widgets.Text(description='Location:')
zip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page1 = widgets.Box(children=[name1, zip1])
name2 = widgets.Text(description='Location:')
zip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page2 = widgets.Box(children=[name2, zip2])
accord = widgets.Accordion(children=[page1, page2], width=400)
display(accord)
accord.set_title(0, 'From')
accord.set_title(1, 'To')
name = widgets.Text(description='Name:', padding=4)
color = widgets.Dropdown(description='Color:', padding=4, options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
page1 = widgets.Box(children=[name, color], padding=4)
age = widgets.IntSlider(description='Age:', padding=4, min=0, max=120, value=50)
gender = widgets.RadioButtons(description='Gender:', padding=4, options=['male', 'female'])
page2 = widgets.Box(children=[age, gender], padding=4)
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="aaaaaaaaaaaaaaaaaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text())
buttons = [widgets.Button(description=str(i)) for i in range(3)]
display(*buttons)
container = widgets.HBox(children=buttons)
display(container)
w1 = widgets.Latex(value="First line")
w2 = widgets.Latex(value="Second line")
w3 = widgets.Latex(value="Third line")
display(w1, w2, w3)
w2.visible=None
w2.visible=False
w2.visible=True
form = widgets.VBox()
first = widgets.Text(description="First:")
last = widgets.Text(description="Last:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
if value:
school_info.visible = True
else:
school_info.visible = False
student.on_trait_change(on_student_toggle, 'value')
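# Note (sketch): on_trait_change is deprecated in newer ipywidgets releases;
# an equivalent handler with the observe API would look like this:
# def on_student_change(change):
#     school_info.visible = change['new']
# student.observe(on_student_change, names='value')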
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic styling
Step2: Parent/child relationships
Step3: After the parent is displayed
Step4: Fancy boxes
Step5: TabWidget
Step6: Alignment
Step7: If a label is longer than the minimum width, the widget is shifted to the right
Step8: If a description is not set for the widget, the label is not displayed
Step9: Flex boxes
Step10: Using hbox
Step11: Visibility
Step12: Another example
|
4,654
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from ipywidgets import interact
from exact_solvers import advection
interact(advection.characteristics);
interact(advection.solution);
interact(advection.riemann_demo);
q_l = 1.
q_r = 0.
advection.plot_riemann_solution(q_l, q_r, a=1.);
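# The exact solution of the advection equation q_t + a q_x = 0 is
# q(x, t) = q_0(x - a*t); a minimal standalone check (the grid and time below
# are assumptions for illustration, reusing q_l and q_r from above):
import numpy as np
q0 = lambda xi: np.where(xi < 0, q_l, q_r)
x = np.linspace(-2, 2, 9)
print(q0(x - 1.0*0.5))  # a = 1, t = 0.5: the jump has moved to x = 0.5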
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Characteristics
Step2: We can think of the initial values $q_0(x)$ being transmitted along these lines; we sometimes say that information is transmitted along characteristics.
Step3: For more complicated hyperbolic problems, we may have multiple sets of characteristics, they may not be parallel, and the solution may not be constant along them. But it will still be the case that information is propagated along characteristics. The idea that information propagates at finite speed is an essential property of hyperbolic PDEs.
Step4: Notice how the initial discontinuity follows the characteristic coming from $x=0$ at $t=0$.
|
4,655
|
<ASSISTANT_TASK:>
Python Code:
# import the necessary package at the very beginning
import numpy as np
import pandas as pd
print(str(float(100*177/891)) + '%')
def foolOne(x): # note: assume x is a number
y = x * 2
y -= 25
return y
## Type Your Answer Below ##
foolOne_lambda = lambda x: x*2-25
# Generate a random 3*4 matrix for test
tlist = np.random.randn(3,4)
tlist
# Check if the lambda function yields the same results as the previous function
def test_foolOne(tlist, func1, func2):
    if (func1(tlist) == func2(tlist)).all():
        print("Same results!")
test_foolOne(tlist, foolOne, foolOne_lambda)
def foolTwo(x): # note: assume x here is a string
if x.startswith('g'):
return True
else:
return False
## Type Your Answer Below ##
foolTwo_lambda = lambda x: x.startswith('g')
# Generate a random 3*4 matrix of strings for test
# reference: https://pythontips.com/2013/07/28/generating-a-random-string/
# reference: http://www.programcreek.com/python/example/1246/string.ascii_lowercase
import random
import string
def random_string(size):
new_string = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])
return new_string
def test_foolTwo():
    test_string = random_string(6)
    # return the comparison so the check below receives an actual True/False
    return foolTwo_lambda(test_string) == foolTwo(test_string)
for i in range(10):
if test_foolTwo() is False:
print('Different results!')
## Type Your Answer Below ##
# reference: https://docs.python.org/3/tutorial/datastructures.html
# tuple is immutable. They cannot be changed once they are made.
# tuples are easier for the python interpreter to deal with and therefore might end up being easier
# tuples might indicate that each entry has a distinct meaning and their order has some meaning (e.g., year)
# Another pragmatic reason to use tuple is when you have data which you know should not be changed (e.g., constant)
# tuples can be used as keys in dictionaries
# tuples usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing (or even by attribute in the case of namedtuples).
tuple1 = (1, 2, 3, 'a', True)
print('tuple: ', tuple1)
print('1st item of tuple: ', tuple1[0])
tuple1[0] = 4 # item assignment won't work for tuple
# tuple with just one element
tuple2 = (1) # just a number, so has no elements
print(type(tuple2))
tuple2[0]
# tuple with just one element
tuple3 = (1, )
print(type(tuple3))
tuple3[0]
# Question for TA: is tuple comprehension supported?
tuple4 = (char for char in 'abcdabcdabcd' if char not in 'ac')
print(tuple4)
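# Answer: no -- the expression above builds a generator, not a tuple;
# materialize it explicitly if a tuple is needed:
print(tuple(char for char in 'abcdabcdabcd' if char not in 'ac'))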
# Question for TA: is the following two tuples the same?
tuple4= (1,2,'a'),(True, False)
tuple5 = ((1,2,'a'),(True, False))
print(tuple4)
print(tuple5)
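# Answer: yes -- the outer parentheses in tuple5 are redundant; both lines
# build the same nested tuple:
print(tuple4 == tuple5)  # True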
# lists' elements are usually homogeneous and are accessed by iterating over the list.
list1 = [1, 2, 3, 'a', True]
print('list1: ', list1)
print('1st item of list: ', list1[0])
list1[0] = 4 # item assignment works for list
# list comprehensions
list_int = [element for element in list1 if type(element)==int]
print("list_int", list2)
## Type Your Answer Below ##
# A set is an unordered collection with no duplicate elements.
# set() can be used to eliminate duplicate entries
list1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
set1 = set(list1)
print(set1)
# set can be used for membership testing
set2 = {1, 2, 'abc', True}
print('abc' in set2) # membership testing
set1[0] # set does not support indexing
# set comprehensions
set4 = {char for char in 'abcdabcdabcd' if char not in 'ac'}
print(set4)
# Calculate the time cost differences between set and list
import time
import random
def compute_search_speed_difference(scope):
list1 = []
    set1 = set()
for i in range(0,scope):
list1.append(i)
set1.add(i)
random_n = random.randint(0,100000) # look for this random integer in both list and set
list_search_starttime = time.time()
list_search = random_n in list1
list_search_endtime = time.time()
list_search_time = list_search_endtime - list_search_starttime # Calculate the look-up time in list
#print("The look up time for the list is:")
#print(list_search_time)
set_search_starttime = time.time()
set_search = random_n in set1
set_search_endtime = time.time()
set_search_time = set_search_endtime - set_search_starttime # Calculate the look-up time in set
#print("The look up time for the set is:")
#print(set_search_time)
speed_difference = list_search_time - set_search_time
return(speed_difference)
def test(testing_times, scope):
test_speed_difference = []
for i in range(0,testing_times):
test_speed_difference.append(compute_search_speed_difference(scope))
return(test_speed_difference)
#print(test(1000, 100000)) # test 10 times can print out the time cost differences
print("On average, the look up time for a list is more than a set in:")
print(np.mean(test(100, 1000)))
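# A more direct measurement (sketch) with timeit, averaging repeated lookups
# instead of timing a single membership test:
import timeit
setup = "items = list(range(100000)); s = set(items)"
print("list:", timeit.timeit("99999 in items", setup=setup, number=100))
print("set: ", timeit.timeit("99999 in s", setup=setup, number=100))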
## Type Your Answer Below ##
student = np.array([0, 'Alex', 3, 'M'])
print(student) # all the values' datatype is converted to str
## Type Your Answer Below ##
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/pcsanwald/kaggle-titanic/master/train.csv')
df.sample(3)
df.tail(3)
df.describe()
df.info()
## Type Your Answer Below ##
len(df[df.age.isnull()])/len(df)*100
## Type Your Answer Below ##
df.embarked.value_counts()
print('number of classes: ', len(df.embarked.value_counts().index))
print('names of classes: ', df.embarked.value_counts().index)
# Another method
embarked_set = set(df.embarked)
print(df.embarked.unique())
## Type Your Answer Below ##
male_survived = df[(df.survived==1) & (df.sex=='male')]
male_survived_n = len(male_survived)
female_survived = df[(df.survived==1) & (df.sex=='female')]
female_survived_n = len(female_survived)
df_survived = pd.DataFrame({'male':male_survived_n, 'female': female_survived_n}, index=['Survived_number'])
df_survived
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
df_survived.plot(kind='bar', title='survived female and male', legend='True')
sns.pointplot(x='embarked', y='survived', hue='sex', data=df, palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid = sns.FacetGrid(df, col='embarked')
grid.map(sns.pointplot, 'pclass', 'survived', 'sex', palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid.add_legend()
grid = sns.FacetGrid(df, col='pclass')
grid.map(sns.barplot, 'embarked', 'age', 'sex')
grid.add_legend()
## Type Your Answer Below ##
df_23=df.query('''age>23''')
df_23
# first split name into string lists by ' '
def format_name(df):
df['split_name'] = df.name.apply(lambda x: x.split(' '))
return df
df = format_name(df)  # create the split_name column before sampling it
print(df.sample(3).split_name, '\n')
# for each substring of each name, check if "jack" or "rose" appears in it
for i in df.split_name:
for l in i:
if (("jack" in l.lower()) | ("rose" in l.lower()) ):
print("found names that contain jack or rose: ", l)
## Type Your Answer Below ##
df4 = df.query('''pclass==1''')
def percent(x):
    return x.count() / len(df4)
df[['survived','pclass']].query('''pclass==1''').groupby([ 'survived']).agg({'pclass':percent})
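# Cross-check (sketch): the same fractions can be read off a normalized
# value_counts on the pclass==1 subset:
print(df.query('''pclass==1''').survived.value_counts(normalize=True))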
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Please rewrite following functions to lambda expressions
Step2: 2. What's the difference between tuple and list?
Step3: 3. Why set is faster than list in python?
Step4: 4. What's the major difference between array in numpy and series in pandas?
Step5: Question 5-11 are related to titanic data (train.csv) on kaggle website
Step6: 6. What's the percentage of null value in 'Age'?
Step7: 7. How many unique classes in 'Embarked' ?
Step8: 8. Compare survival chance between male and female passangers.
Step9: Observations from barplot above
Step10: 10. Is there a Jack or Rose in our dataset?
Step11: 11. What's the percentage of surviving when passangers' pclass is 1?
|
4,656
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from owslib.wms import WebMapService
#We just need a WMS url from one TDS dataset...
serverurl ='http://thredds.ucar.edu/thredds/wms/grib/NCEP/NAM/CONUS_12km/best'
wms = WebMapService( serverurl, version='1.1.1')
#This is general information, common to all datasets in a TDS server
operations =[ op.name for op in wms.operations ]
print 'Available operations: '
print operations
print 'General information (common to all datasets):'
print wms.identification.type
print wms.identification.abstract
print wms.identification.keywords
print wms.identification.version
print wms.identification.title
#Listing all available layers...
layers = list(wms.contents)
for l in layers:
print 'Layer title: '+wms[l].title +', name:'+wms[l].name
#Values common to all GetMap requests: formats and http methods:
print wms.getOperationByName('GetMap').formatOptions
print wms.getOperationByName('GetMap').methods
#Let's choose: 'wind @ Isobaric surface' (the value in the parameter must be name of the layer)
wind = wms['wind @ Isobaric surface']
#What is its bounding box?
print wind.boundingBox
#available CRS
print wind.crsOptions
# --> NOT ALL THE AVAILABLE CRS OPTIONS ARE LISTED
#Function that saves the layer as an image
def saveLayerAsImage(layer, inname):
out = open(inname, 'wb')
out.write(layer.read())
out.close()
#let's get the image...
img_wind = wms.getmap( layers=[wind.name], #only takes one layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(512, 512),
format='image/png'
)
#Save it..
saveLayerAsImage(img_wind, 'test_wind.png')
#Display the image we've just saved...
from IPython.core.display import Image
Image(filename='test_wind.png')
#Times are available in the timepositions property of the layer
times= [time.strip() for time in wind.timepositions]
print times
#We can choose any of the available times and make a request for it with the parameter time
#If no time is provided the default in TDS is the closest available time to the current time
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[len(times)-1]
)
saveLayerAsImage(img_wind, 'test_wind.png')
Image(filename='test_wind.png')
#We can also specify a time interval to get an animated gif
#Format must be image/gif
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/gif',
time= times[len(times)-4]+'/'+times[len(times)-1]
)
#Image(url='http://python.org/images/python-logo.gif')
#saveLayerAsImage(img_wind, 'test_anim_wind.gif')
Image(url=img_wind.url)
#Next version of OWSLib will support this...
#elevations = [el.strip() for el in wind.elevations]
#print elevations
#In the meantime...
def find_elevations_for_layer(wms, layer_name):
    """parses the wms capabilities document searching
    the elevation dimension for the layer"""
#Get all the layers
levels =None;
layers = wms._capabilities.findall(".//Layer")
layer_tag = None
for el in layers:
name = el.find("Name")
if name is not None and name.text.strip() == layer_name:
layer_tag = el
break
if layer_tag is not None:
elevation_tag = layer_tag.find("Extent[@name='elevation']")
if elevation_tag is not None:
levels = elevation_tag.text.strip().split(',')
return levels;
elevations = find_elevations_for_layer(wms, wind.name)
print elevations
#now we can change our vertical level with the parameter elevation
#If no elevation parameter is provided the default is the first vertical level in the dimension.
img_wind = wms.getmap( layers=['wind @ Isobaric surface'], #only takes one layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
elevation=elevations[len(elevations)-1 ]
)
saveLayerAsImage(img_wind, 'test_wind.png')
Image(filename='test_wind.png')
#available styles:
#print wind.styles
#Change the style of our layer
img_wind = wms.getmap( layers=[wind.name], #only takes one layer
styles=['barb/rainbow'], #one style per layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_wind, 'test_wind_barb.png')
Image(filename='test_wind_barb.png')
#Reproject the bounding box to a global mercator (EPSG:3875, projection used by Google Maps, OSM...) using pyproj
from mpl_toolkits.basemap import pyproj
epsg = '3857'
psproj = pyproj.Proj(init="epsg:%s" % epsg)
xmin, ymin = psproj(wind.boundingBox[0], wind.boundingBox[1])
xmax, ymax = psproj(wind.boundingBox[2], wind.boundingBox[3])
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:'+ epsg,
bbox=(xmin, ymin, xmax, ymax),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_wind, 'test_wind_3857.png')
Image(filename='test_wind_3857.png')
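#Sanity check (a sketch): pyproj Proj objects also perform the inverse
#transform, so the geographic corners can be recovered from the mercator box:
lon_min, lat_min = psproj(xmin, ymin, inverse=True)
print lon_min, lat_min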
temp =wms['Temperature_isobaric']
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(temp.boundingBox[0],temp.boundingBox[1], temp.boundingBox[2], temp.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
colorscalerange='250,320'
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
colorscalerange='290,310'
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
colorscalerange=colorscalerange,
abovemaxcolor='transparent',
belowmincolor='transparent'
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
params ={'request': 'GetLegendGraphic',
'colorbaronly':'False', #want the text in the legend
'layer':temp.name,
'colorscalerange':colorscalerange}
legendUrl=serverurl+'?REQUEST={request:s}&COLORBARONLY={colorbaronly:s}&LAYER={layer:s}&COLORSCALERANGE={colorscalerange:s}'.format(**params)
Image(url=legendUrl)
import os
import urllib2
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnnotationBbox, OffsetImage
from matplotlib._png import read_png
m = Basemap(llcrnrlon=temp.boundingBox[0], llcrnrlat=temp.boundingBox[1],
urcrnrlon=temp.boundingBox[2], urcrnrlat=temp.boundingBox[3]+5.0,
resolution='l',epsg=4326)
plt.figure(1, figsize=(16,12))
plt.title(temp.title +' '+times[0] )
m.wmsimage(serverurl,xpixels=600, ypixels=600, verbose=False,
layers=[temp.name],
styles=['boxfill/rainbow'],
time= times[0],
colorscalerange=colorscalerange,
abovemaxcolor='extend',
belowmincolor='transparent'
)
m.drawcoastlines(linewidth=0.25)
#Annotating the map with the legend
#Save the legend as image
cwd = os.getcwd()
legend = urllib2.urlopen(legendUrl)
saveLayerAsImage(legend, 'legend_temp.png')
#read the image as an array
arr = read_png('legend_temp.png')
imagebox = OffsetImage(arr, zoom=0.7)
xy =[ temp.boundingBox[2], temp.boundingBox[1] ]
#Gets the current axis
ax = plt.gca()
#Creates the annotation
ab = AnnotationBbox(imagebox, xy,
xybox=(-46.,100.),
xycoords='data',
boxcoords="offset points",
pad=0.)
#Adds the legend image as an AnnotationBbox to the map
ax.add_artist(ab)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The WebMapService object gets all the information available about the service through a GetCapabilities request
Step2: Bounding boxes, styles and dimensions are specific to each layer.
Step3: 3. Getting the basic information we need to perform a GetMap request
Step4: 4. More on GetMap requests
Step6: Getting the available vertical levels
Step7: Changing styles
Step8: Changing the spatial reference system (SRS)
Step9: Cool, we already know how to make get map requests. Let's change our layer...
Step10: ...well not that cool.
Step11: abovemaxcolor, belowmincolor params give us control on how we want the values out of range to be displayed.
Step12: The GetLegendGraphic request gives us a legend for the map, but the request is not supported by OWSLib.
Step13: 5. WMS and basemap
|
4,657
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
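# Example of a filled-in cell (hypothetical value, for illustration only):
# DOC.set_value(25)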
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
4,658
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
import numpy as np
import matplotlib.pyplot as plt
from mlxtend.evaluate import plot_decision_regions
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
X = np.array([
[-1, -1],
[-1, 1],
[1, -1],
[1, 1]
])
y_and = np.array([0, 0, 0, 1])  # AND truth table for inputs encoded as -1/1
y_xor = np.array([0, 1, 1, 0])
lr = LogisticRegression(C=100000)
for label, y in [('and', y_and), ('xor', y_xor)]:
lr.fit(X, y)
plot_decision_regions(X, y, lr)
plt.xlabel('x')
plt.ylabel('y')
plt.title('linear model fitting "{}"'.format(label))
plt.show()
tree = DecisionTreeClassifier(criterion='entropy', max_depth=2, random_state=0)
for label, y in [('and', y_and), ('xor', y_xor)]:
tree.fit(X, y)
plot_decision_regions(X, y, tree)
plt.xlabel('x')
plt.ylabel('y')
plt.title('decision tree model fitting "{}"'.format(label))
plt.show()
from sklearn.metrics import accuracy_score
X_linear_wins = np.array([
# place 2d samples here, each value between -1 and 1
])
y_linear_wins = np.array([
# place class label 0, 1 for each 2d point here
])
# uncommment code below to test out whether your dataset is more accurately predicted by a linear model
# than a tree of depth 3.
# for label, model in [('linear', lr), ('tree', tree)]:
# model.fit(X_linear_wins, y_linear_wins)
# plot_decision_regions(X_linear_wins, y_linear_wins, model)
# plt.xlabel('x')
# plt.ylabel('y')
# title = "{}: accuracy {:.2f}".format(label, accuracy_score(y_linear_wins, model.predict(X_linear_wins)))
# plt.title(title)
# plt.show()
X_needs_depth = np.array([
# place 2d samples here, each value between -1 and 1
])
y_needs_depth = np.array([
# place class label 0, 1 for each 2d point here
])
# uncomment the code below to compare
# tree_d2 = DecisionTreeClassifier(criterion='entropy', max_depth=2, random_state=0)
# tree_d4 = DecisionTreeClassifier(criterion='entropy', max_depth=4, random_state=0)
# tree_d2.fit(X_needs_depth, y_needs_depth)
# plot_decision_regions(X_needs_depth, y_needs_depth, tree_d2)
# plt.title("depth 2 fit: {:.2f}".format(accuracy_score(y_needs_depth, tree_d2.predict(X_needs_depth))))
# plt.show()
# tree_d4.fit(X_needs_depth, y_needs_depth)
# plot_decision_regions(X_needs_depth, y_needs_depth, tree_d4)
# plt.title("depth 4 fit: {:.2f}".format(accuracy_score(y_needs_depth, tree_d4.predict(X_needs_depth))))
# plt.show()
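# A hedged sketch of one possible construction (not the only answer): points
# labeled by the oblique boundary y > x, which logistic regression matches
# exactly while a depth-3 tree only has 8 axis-aligned cells:
# rng = np.random.RandomState(0)
# X_linear_wins = rng.uniform(-1, 1, size=(200, 2))
# y_linear_wins = (X_linear_wins[:, 1] > X_linear_wins[:, 0]).astype(int)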
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Trees can fit XOR
Step2: When a linear models beat a decision tree
Step3: Depth matters
|
4,659
|
<ASSISTANT_TASK:>
Python Code:
def pentagon_pyramidal(n):
    # nth pentagonal pyramidal number: n^2 * (n + 1) / 2
    return n * n * (n + 1) // 2
n = 4
print(pentagon_pyramidal(n))
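# Cross-check (sketch): the pentagonal pyramidal number is the running sum of
# pentagonal numbers P(k) = k*(3k - 1)/2, which telescopes to n*n*(n + 1)/2:
assert sum(k * (3 * k - 1) // 2 for k in range(1, n + 1)) == pentagon_pyramidal(n)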
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
4,660
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
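# Example of a filled-in cell (using one of the valid choices listed above):
# DOC.set_value("troposhere")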
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
4,661
|
<ASSISTANT_TASK:>
Python Code:
a = 2 + 3j
print(a, type(a))
class NumeroComplesso(object):
    def __init__(self, real, imag):
        """Constructor, called when a new object is initialized."""
        self.a = real
        self.b = imag
    def somma(self, c):
        """Add the complex number c to this number, in place."""
        self.a += c.a
        self.b += c.b
    def __str__(self):
        """Return a string representation of the number."""
        return str(self.a) + ' + ' + str(self.b) + 'i'
type(NumeroComplesso)
a = NumeroComplesso(2,3)
b = NumeroComplesso(1,-2)
print(a)
a.somma(b)
print(a)
a.a = 0
a.somma(b)
print(a)
class NCO(NumeroComplesso):
    # Inherits the methods and attributes of the NumeroComplesso class
    def __add__(self, c):
        """Example of OPERATOR OVERLOADING: addition."""
        return NCO(self.a + c.a, self.b + c.b)
    def __eq__(self, c):
        """Example of OPERATOR OVERLOADING: comparison."""
        return self.a == c.a and self.b == c.b
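# Quick check of the overloaded operator (our addition; it relies on the
# completed __add__ above):
e = NCO(1, 2) + NCO(3, 4)
print(e)  # expected: 4 + 6i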
c = NCO(1,2)
print(c)
type(c.somma)
type(c.a)
a == c
d = NCO(1,2)
print(c, id(c))
print(d, id(d))
print(d == c)
# Definizione di una classe che implementa un "adder"
class Adder(object):
def __init__(self, n=0):
self.n = n # Stato mutabile della classe (DANGER ZONE!!!)
def __call__(self, m):
return self.n + m
add5_istanza = Adder(5)
print(add5_istanza(1), add5_istanza(7))
# Definizione di una closure che implementa un "adder"
def make_adder(n=0):
def adder(m):
return n + m
return adder
add5_function = make_adder(5)
print(add5_function(1), add5_function(7))
add5_istanza.n = 1
print(add5_istanza(1), add5_istanza(7))
print(add5_function(1), add5_function(7))
# Definizione di una classe che implementa un "counter"
class Counter(object):
def __init__(self, n=0):
self.n = n
def __call__(self):
self.n += 1
return self.n
counter_istanza = Counter(5)
print(counter_istanza(), counter_istanza())
# Definizione di una closure che implementa un "counter"
def make_counter(n=0):
def state():
c = n
while True:
c += 1
yield c
def counter():
return next(f)
f = state()
return counter
counter_function = make_counter(5)
print(counter_function(), counter_function())
# make a seemingly "harmless" change
counter_istanza.n = 'ciao'
print(counter_function(), counter_function())
# ... after a while, we reuse counter_istanza
print(counter_istanza())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Defining a new ADT
Step5: Danger ZONE !!!
Step8: Inheritance and Operator Overloading
Step9: Classes vs. Closures
Step10: NOTE
|
4,662
|
<ASSISTANT_TASK:>
Python Code:
!pip install --upgrade tensorflow
import tensorflow as tf
print tf.__version__
import numpy as np
import tensorflow as tf
import seaborn as sns
import pandas as pd
SEQ_LEN = 10
def create_time_series():
freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6
ampl = np.random.random() + 0.5 # 0.5 to 1.5
x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl
return x
for i in xrange(0, 5):
sns.tsplot( create_time_series() ); # 5 series
def to_csv(filename, N):
with open(filename, 'w') as ofp:
for lineno in xrange(0, N):
seq = create_time_series()
line = ",".join(map(str, seq))
ofp.write(line + '\n')
to_csv('train.csv', 1000) # 1000 sequences
to_csv('valid.csv', 50)
!head -5 train.csv valid.csv
import tensorflow as tf
import shutil
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
import tensorflow.contrib.metrics as metrics
import tensorflow.contrib.rnn as rnn
DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)]
BATCH_SIZE = 20
TIMESERIES_COL = 'rawdata'
N_OUTPUTS = 2 # in each sequence, 1-8 are features, and 9-10 is label
N_INPUTS = SEQ_LEN - N_OUTPUTS
# read data and convert to needed format
def read_dataset(filename, mode=tf.contrib.learn.ModeKeys.TRAIN):
def _input_fn():
num_epochs = 100 if mode == tf.contrib.learn.ModeKeys.TRAIN else 1
# could be a path to one file or a file pattern.
input_file_names = tf.train.match_filenames_once(filename)
filename_queue = tf.train.string_input_producer(
input_file_names, num_epochs=num_epochs, shuffle=True)
reader = tf.TextLineReader()
_, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE)
value_column = tf.expand_dims(value, -1)
print 'readcsv={}'.format(value_column)
# all_data is a list of tensors
all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS)
inputs = all_data[:len(all_data)-N_OUTPUTS] # first few values
label = all_data[len(all_data)-N_OUTPUTS : ] # last few values
# from list of tensors to tensor with one more dimension
inputs = tf.concat(inputs, axis=1)
label = tf.concat(label, axis=1)
print 'inputs={}'.format(inputs)
return {TIMESERIES_COL: inputs}, label # dict of features, label
return _input_fn
LSTM_SIZE = 3 # number of hidden units in each of the LSTM cells
# create the inference model
def simple_rnn(features, targets, mode):
# 0. Reformat input shape to become a sequence
x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1)
#print 'x={}'.format(x)
# 1. configure the RNN
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
outputs, _ = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
# slice to keep only the last cell of the RNN
outputs = outputs[-1]
#print 'last outputs={}'.format(outputs)
# output is result of linear activation of last layer of RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias
# 2. loss function, training/eval ops
if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL:
loss = tf.losses.mean_squared_error(targets, predictions)
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=0.01,
optimizer="SGD")
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(targets, predictions)
}
else:
loss = None
train_op = None
eval_metric_ops = None
# 3. Create predictions
predictions_dict = {"predicted": predictions}
# 4. return ModelFnOps
return tflearn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
def get_train():
return read_dataset('train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN)
def get_valid():
return read_dataset('valid.csv', mode=tf.contrib.learn.ModeKeys.EVAL)
def serving_input_fn():
feature_placeholders = {
TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
features[TIMESERIES_COL] = tf.squeeze(features[TIMESERIES_COL], axis=[2])
print 'serving: features={}'.format(features[TIMESERIES_COL])
return tflearn.utils.input_fn_utils.InputFnOps(
features,
None,
feature_placeholders
)
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
def experiment_fn(output_dir):
# run experiment
return tflearn.Experiment(
tflearn.Estimator(model_fn=simple_rnn, model_dir=output_dir),
train_input_fn=get_train(),
eval_input_fn=get_valid(),
eval_metrics={
'rmse': tflearn.MetricSpec(
metric_fn=metrics.streaming_root_mean_squared_error
)
},
export_strategies=[saved_model_export_utils.make_export_strategy(
serving_input_fn,
default_output_alternative_key=None,
exports_to_keep=1
)]
)
shutil.rmtree('outputdir', ignore_errors=True) # start fresh each time
learn_runner.run(experiment_fn, 'outputdir')
%bash
# run module as-is
REPO=$(pwd)
echo $REPO
rm -rf outputdir
export PYTHONPATH=${PYTHONPATH}:${REPO}/simplernn
python -m trainer.task \
--train_data_paths="${REPO}/train.csv*" \
--eval_data_paths="${REPO}/valid.csv*" \
--output_dir=${REPO}/outputdir \
--job-dir=./tmp
%writefile test.json
{"rawdata": [0.0,0.0527,0.10498,0.1561,0.2056,0.253,0.2978,0.3395]}
%bash
MODEL_DIR=$(ls ./outputdir/export/Servo/)
gcloud ml-engine local predict --model-dir=./outputdir/export/Servo/$MODEL_DIR --json-instances=test.json
%bash
# run module on Cloud ML Engine
REPO=$(pwd)
BUCKET=cloud-training-demos-ml # CHANGE AS NEEDED
OUTDIR=gs://${BUCKET}/simplernn/model_trained
JOBNAME=simplernn_$(date -u +%y%m%d_%H%M%S)
REGION=us-central1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${REPO}/simplernn/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=1.2 \
-- \
--train_data_paths="gs://${BUCKET}/train.csv*" \
--eval_data_paths="gs://${BUCKET}/valid.csv*" \
--output_dir=$OUTDIR \
--num_epochs=100
import tensorflow as tf
import numpy as np
def breakup(sess, x, lookback_len):
N = sess.run(tf.size(x))
windows = [tf.slice(x, [b], [lookback_len]) for b in xrange(0, N-lookback_len)]
windows = tf.stack(windows)
return windows
x = tf.constant(np.arange(1,11, dtype=np.float32))
with tf.Session() as sess:
print 'input=', x.eval()
seqx = breakup(sess, x, 5)
print 'output=', seqx.eval()
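# Illustrative follow-up (our addition, not part of the original notebook):
# split each window into inputs (first 3 values) and labels (last 2),
# mirroring the N_INPUTS/N_OUTPUTS convention used by the estimator above.
with tf.Session() as sess:
    seqx = breakup(sess, x, 5)
    inputs = seqx[:, :3]
    labels = seqx[:, 3:]
    print 'inputs=', inputs.eval()
    print 'labels=', labels.eval()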
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> RNN </h2>
Step2: <h3> Input Fn to read CSV </h3>
Step3: Reading data using the Estimator API in tf.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels.
Step4: <h3> Define RNN </h3>
Step5: <h3> Experiment </h3>
Step6: <h3> Standalone Python module </h3>
Step7: Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine
Step8: <h3> Cloud ML Engine </h3>
Step9: <h2> Variant
|
4,663
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets.samples_generator import make_blobs
X_raw, y_raw = make_blobs(n_samples=100, centers=2,
cluster_std=5.2, random_state=42)
import numpy as np
X = X_raw.astype(np.float32)
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(sparse=False, dtype=np.float32)
y = enc.fit_transform(y_raw.reshape(-1, 1))
import cv2
mlp = cv2.ml.ANN_MLP_create()
n_input = 2
n_hidden = 8
n_output = 2
mlp.setLayerSizes(np.array([n_input, n_hidden, n_output]))
mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0)
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
alpha = 2.5
beta = 1.0
x_sig = np.linspace(-1.0, 1.0, 100)
y_sig = beta * (1.0 - np.exp(-alpha * x_sig))
y_sig /= (1 + np.exp(-alpha * x_sig))
plt.figure(figsize=(10, 6))
plt.plot(x_sig, y_sig, linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
term_mode = cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS
term_max_iter = 300
term_eps = 0.01
mlp.setTermCriteria((term_mode, term_max_iter, term_eps))
mlp.train(X, cv2.ml.ROW_SAMPLE, y)
_, y_hat = mlp.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y_hat.round(), y)
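# Optional sanity check (our addition, not in the original text): score on a
# held-out split instead of the training data. A fresh classifier is trained
# so the model above is left untouched for the decision-boundary plot below.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
mlp2 = cv2.ml.ANN_MLP_create()
mlp2.setLayerSizes(np.array([n_input, n_hidden, n_output]))
mlp2.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 2.5, 1.0)
mlp2.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
mlp2.setTermCriteria((term_mode, term_max_iter, term_eps))
mlp2.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
_, y_hat_test = mlp2.predict(X_test)
accuracy_score(y_hat_test.round(), y_test)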
def plot_decision_boundary(classifier, X_test, y_test):
# create a mesh to plot in
h = 0.02 # step size in mesh
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
X_hypo = np.c_[xx.ravel().astype(np.float32),
yy.ravel().astype(np.float32)]
_, zz = classifier.predict(X_hypo)
zz = np.argmax(zz, axis=1)
zz = zz.reshape(xx.shape)
plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
plt.figure(figsize=(10, 6))
plot_decision_boundary(mlp, X, y_raw)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocessing the data
Step2: Furthermore, we need to think back to Chapter 4, Representing Data and Engineering Features, and remember how to represent categorical variables. We need to find a way to
Step3: Creating an MLP classifier in OpenCV
Step4: However, now we need to specify how many layers we want in the network and how many
Step5: Customizing the MLP classifier
Step6: If you are curious what this activation function looks like, we can take a short excursion
Step7: As mentioned in the preceding part, a training method can be set via
Step8: Lastly, we can specify the criteria that must be met for training to end via
Step9: Training and testing the MLP classifier
Step10: The same goes for predicting target labels
Step11: The easiest way to measure accuracy is by using scikit-learn's helper function
Step12: It looks like we were able to increase our performance from 81% with a single perceptron to
|
4,664
|
<ASSISTANT_TASK:>
Python Code:
#untar and compile sample_stats
!tar zxf ms.tar.gz; cd msdir; gcc -o sample_stats sample_stats.c tajd.c -lm
#now move the program into the current working dir
!mv msdir/sample_stats .
#download discoal and compile it
!wget https://github.com/kern-lab/discoal/archive/master.zip; unzip master.zip; cd discoal-master; make
#or, for our mac OS X users and any others who use curl instead of wget
!curl -O https://github.com/kern-lab/discoal/archive/master.zip; unzip master.zip; cd discoal-master; make
#now move discoal into the current working dir
!mv discoal-master/discoal .
!conda install scikit-learn --yes
!pip install -U scikit-learn
#simulate under the equilibrium model -- could also do this with ms
!./discoal 20 2000 1000 -t 100 -r 100 | ./sample_stats > no_sweep.msOut.stats
#simulate under the soft sweep model with a selection coefficient 2Ns drawn
#uniformly from (100, 500) and an initial selected frequency randomly drawn from (0, 0.2]
!./discoal 20 2000 1000 -t 100 -r 100 -ws 0 -Pa 100 500 -i 4 -Pf 0 0.2 | ./sample_stats > soft_sweep.msOut.stats
#simulate under the hard sweep model with a selection coefficient 2Ns drawn uniformly from (100, 500)
!./discoal 20 2000 1000 -t 100 -r 100 -ws 0 -Pa 100 500 -i 4 | ./sample_stats > hard_sweep.msOut.stats
#now lets suck up the data columns we want for each of these files, and create one big training set; we will use numpy for this
# note that we are only using two columns of the data- these correspond to segSites and Fay & Wu's H
import numpy as np
X1 = np.loadtxt("no_sweep.msOut.stats",usecols=(5,9))
X2 = np.loadtxt("soft_sweep.msOut.stats",usecols=(5,9))
X3 = np.loadtxt("hard_sweep.msOut.stats",usecols=(5,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
#the last step in this process will be to shuffle the data, and then split it into a training set and a testing set
#the testing set will NOT be used during training, and will allow us to check how well the classifier is doing
#scikit-learn has a very convenient function for doing this shuffle and split operation
#
# will will keep out 25% of the data for testing
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.25)
#from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
#clf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = svm.SVC(kernel='rbf', gamma=0.1, C=1)
clf = clf.fit(X_train, Y_train)
#These two functions (taken from scikit-learn.org) plot the decision boundaries for a classifier.
def plot_contours(ax, clf, xx, yy, **params):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def make_meshgrid(x, y, h=.05):
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
#Let's do the plotting
import matplotlib.pyplot as plt
fig,ax= plt.subplots(1,1)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1, h=0.2)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X_test[:, 0], X_test[:, 1], c=Y_test, cmap=plt.cm.coolwarm, edgecolors='k')
ax.set_xlabel(r"Tajima's $D$", fontsize=14)
ax.set_ylabel(r"Fay and Wu's $H$", fontsize=14)
ax.set_xticks(())
ax.set_yticks(())
ax.set_title("Classifier decision surface", fontsize=14)
plt.show()
from sklearn.preprocessing import normalize
#here's the confusion matrix function
def makeConfusionMatrixHeatmap(data, title, trueClassOrderLs, predictedClassOrderLs, ax):
data = np.array(data)
data = normalize(data, axis=1, norm='l1')
heatmap = ax.pcolor(data, cmap=plt.cm.Blues, vmin=0.0, vmax=1.0)
for i in range(len(predictedClassOrderLs)):
for j in reversed(range(len(trueClassOrderLs))):
val = 100*data[j, i]
if val > 50:
c = '0.9'
else:
c = 'black'
ax.text(i + 0.5, j + 0.5, '%.2f%%' % val, horizontalalignment='center', verticalalignment='center', color=c, fontsize=9)
cbar = plt.colorbar(heatmap, cmap=plt.cm.Blues, ax=ax)
cbar.set_label("Fraction of simulations assigned to class", rotation=270, labelpad=20, fontsize=11)
# put the major ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)
ax.axis('tight')
ax.set_title(title)
#labels
ax.set_xticklabels(predictedClassOrderLs, minor=False, fontsize=9, rotation=45)
ax.set_yticklabels(reversed(trueClassOrderLs), minor=False, fontsize=9)
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
#now the actual work
#first get the predictions
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
classOrderLs=['equil','soft','hard']
#now do the plotting
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
X1 = np.loadtxt("no_sweep.msOut.stats",usecols=(1,3,5,7,9))
X2 = np.loadtxt("soft_sweep.msOut.stats",usecols=(1,3,5,7,9))
X3 = np.loadtxt("hard_sweep.msOut.stats",usecols=(1,3,5,7,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
clf = svm.SVC(kernel='rbf', gamma=0.1, C=1)
clf = clf.fit(X_train, Y_train)
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
from sklearn.model_selection import GridSearchCV
## insert the grid search code here
param_grid = [
{'C': [0.125, 0.25, 0.5, 1, 2, 4, 8],
'gamma': [0.0125, 0.025, 0.05, 0.1, 0.2, 0.4, 0.8],
'kernel': ['rbf']},
]
clf = svm.SVC()
clf = GridSearchCV(clf, param_grid)
clf.fit(X_train,Y_train)
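# Our addition (not in the original notebook): report the hyperparameter
# combination the grid search selected.
print(clf.best_params_)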
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install and compile discoal
Step2: Install scikit-learn
Step3: or if you don't use conda, you can use pip to install scikit-learn with
Step4: Step 1
Step5: Step 2
Step6: That's it! The classifier is trained. This SVM uses a radial basis kernel function which allows for non-linear classification. The gamma parameter is a hyperparameter of this kernel function, and C is the SVM's regularization parameter, which governs the "softness" of the separating margin. (An explanation of these and other concepts integral to understanding the guts of an SVM is beyond the scope of this example, though scikit-learn provides a nice fairly accessible tutorial with more example code here
Step7: Above we can see which regions of our feature space are assigned to each class
Step8: Meh. Let's again see if we can do better by using all of statistics calculated by Hudson's sample_stats
Step9: Hmm, that didn't help all that much. But there is still room for improvement.
Step 4
|
4,665
|
<ASSISTANT_TASK:>
Python Code:
print("This is a "small" program")
print("This is a \"small\" program")
print('This is a text')
print("It's all good!")
print('It\'s all good!')
print('"Python" is a programming language.')
print("\"Python\" is a programming language.")
print("This is a backslash: \")
print("This is a backslash: \\")
print("First line
Second line")
print("First line\nSecond line")
print("""First line.
Second line.
All kinds of "special" characters here! Even backslashes! \ \\ \\\ """)
print("This is a one-line string but it's a little too long so I want to break
it into two lines")
print("This is a one-line string but it's a little too long so I want to break\
it into two lines")
2 + \
3
print(\
"some string")
1 + \
1 + \
1 + \
1
a = 3
b = 5
c = a + b
print(c)
r = 5
C = 2 * r * 3.14
A = r * r * 3.14
print("Circuference is:")
print(C)
print("Area is:")
print(A)
# The radius of the circle
radius = 5
# The value of the number pi, used throughout the program.
# If we want to use another value for pi, we only need to change it here
pi = 3.14159
# The circumference of the circle
circumference = 2 * radius * pi
# The area of the circle
area = radius * radius * pi
# Print the result
print("The circumference is:")
print(circumference)
print("\n")
print("The area is:")
print(area)
# Declare two variables
a = 2
b = 3
# Switch the values
a = b
b = a
# Print new values
print("a = %d" % a)
print("b = %d" % b)
# Declare two variables
a = 2
b = 3
# Switch the values
tmp = a
a = b
b = tmp
# Print new values
print("a = %d" % a)
print("b = %d" % b)
a = 1
print(a)
a = 2
print(a)
a = 10
print(a)
a = 3 + 2
print(a)
a = 10
# Print the "old" value of a
print("Old a = %d" % a)
# New a is equal old a (which is 10) plus 1
a = a + 1
# Print the new a
print("New a = %d" % a)
# The radius of the circle
radius = 5
# The value of the number pi, used throughout the program.
# If we want to use another value for pi, we only need to change it here
pi = 3.14159
# The circumference of the circle
circumference = 2 * radius * pi
# The area of the circle
area = radius * radius * pi
# Print the input and the result
print("The circle radius is %d and pi is %f." % (radius, pi))
print("The circumference is %f and the area is %f." % (circumference, area) )
name = "John Smith"
print("Hi! My name is %s. How are you?" % name)
print("This is an integer: %f" % 3)
print("This is a float: %d" % 3.35)
# A string
my_variable = "This is a string."
print(my_variable)
print(type(my_variable))
print("\n")
# An integer
my_variable = 3
print(my_variable)
the_type = type(my_variable)
print(the_type)
print("\n")
# A float
my_variable = 3.5
print(my_variable)
print(type(my_variable))
print("\n")
# A boolean
my_variable = 3 > 5
print(my_variable)
print(type(my_variable))
# Adding two integers
sum_of_integers = 3 + 5
print(sum_of_integers)
print(type(sum_of_integers))
# Adding a float to an integer
sum_of_int_and_float = 3 + 5.3
print(sum_of_int_and_float)
print(type(sum_of_int_and_float))
print(32.1 % 10)
print("This is a number: %f" % 3.14)
# Add numbers
number1 = 10
number2 = 11
print(number1 + number2)
print("\n")
# Concatenate strings
str1 = "My name is "
str2 = "John Smith"
print(str1 + str2)
name = input("What is your name?\n")
print("Hello, %s!" % name)
# The radius of the circle
radius = float(input("Please enter the circle radius: "))
# The value of the number pi, used throughout the program.
# If we want to use another value for pi, we only need to change it here
pi = 3.14159
# The circumference of the circle
circumference = 2 * radius * pi
# The area of the circle
area = radius * radius * pi
# Print the input and the result
print("The circle radius is %f and pi is %f." % (radius, pi))
print("The circumference is %f and the area is %f." % (circumference, area) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice how the code is colored. You will see that the string is cut off before the word "small", and a new string starts after the word "small". The word "small" itself sits outside the string, and Python tries to interpret it, which leads to a syntax error.
Step2: Delimiting text with single quotes
Step3: This gives us more options for displaying quotation marks and apostrophes.
Step4: An apostrophe in text delimited by single quotes
Step5: Quotation marks in text delimited by single quotes
Step6: Quotation marks in text delimited by quotation marks
Step7: Other uses of the "escape" character
Step8: Whenever we display "special" characters (such as quotation marks and slashes), we use the "escape" character. In Python, that is the backslash.
Step9: Try passing text that contains a newline to the <i>print</i> function.
Step10: This causes an error, because Python expects a string opened with quotation marks to end in the same line, also with quotation marks.
Step12: Triple quotes
Step13: The newline is also a "special character", so it too can be "escaped" with the <i>escape</i> character.
Step14: Python expects a string opened with quotation marks to be closed in the same line, and if we write a statement where the string starts in one line (opening quotes) and ends in the next (closing quotes), we cause a syntax error, because Python does not expect the "special" newline character in the middle of a string (as in the statement above). If we still want to "break" a string across several lines, each newline, which Python treats as a special character, can be "escaped" with the escape character. An example follows.
Step15: The backslash as a line-continuation character
Step16: Variables
Step17: Try writing a program that computes the circumference and area of a circle from a given radius. Set the radius value at the beginning of the program, and use the approximation 3.14 for the value of pi. The program should display the result on the screen.
Step18: We will change the existing program so that it uses 3.14159 as the value of pi. If we want to change the approximation of pi, we have to do it in two places - where the circumference is computed and where the area is computed. In large programs, a value may need to be changed in dozens of places. This is <b>bad programming practice</b>, because it is tedious to keep changing numbers in several places, and it also makes mistakes more likely. We might change the numbers in some places but not all, and end up displaying a wrong result.
Step19: The assignment operator
Step20: This approach is bad, because by assigning variable b's value (3) to variable a, we irrevocably lost the old value of a (2). When we then assigned a's value to b, variable a already held the new value (3), so both variables ended up equal to 3.
Step21: To better illustrate how the assignment operator works, and what happens when we assign different values to the same variable in different parts of a program, we will use the following example.
Step22: It is important to notice that a variable holds whatever was last assigned to it. Whatever it held before that disappears.
Step23: One of the operands in such expressions can be the very variable whose value is being computed, provided it was assigned a value earlier. In that case, the "old" value is used to evaluate the expression, then that "old" value disappears, and the variable holds the new value - the result of the operation. The following example shows this.
Step24: The string formatting operator
Step25: To insert integers into a string, we use the %d specifier. To insert floating-point numbers, we use %f. There are other specifiers as well, such as %s, which is used to insert strings into a string. For example
Step26: Data types in Python
Step27: Some of the data types in Python
Step28: Behavior of symbols and operators with different data types
Step29: The sum of an integer and a float is a float. In operations involving integers and floats, Python implicitly converts the integer to a float (e.g. 3 to 3.0) and then adds them. The conversion is necessary because addition is defined only over operands of the same type. If Python instead converted the float to an integer, rounding would occur, which would lead to a loss of precision (e.g. 5.3 to 5), so 3 + 5.3 would equal 8.
Step30: For numbers, the "%" sign denotes the remainder of division. For strings, it denotes the string formatting operator.
Step31: For numbers, the "+" sign denotes addition; for strings, concatenation (appending the second string to the first).
Step32: Interacting with the user (user input)
|
4,666
|
<ASSISTANT_TASK:>
Python Code:
import hail as hl
hl.init()
from bokeh.io import show, output_notebook
from bokeh.layouts import gridplot
output_notebook()
hl.utils.get_1kg('data/')
mt = hl.read_matrix_table('data/1kg.mt')
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
mt = mt.annotate_cols(**table[mt.s])
mt = hl.sample_qc(mt)
mt.describe()
dp_hist = mt.aggregate_entries(hl.expr.aggregators.hist(mt.DP, 0, 30, 30))
p = hl.plot.histogram(dp_hist, legend='DP', title='DP Histogram')
show(p)
p = hl.plot.histogram(mt.DP, range=(0, 30), bins=30)
show(p)
p = hl.plot.cumulative_histogram(mt.DP, range=(0,30), bins=30)
show(p)
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt).cache()
common_mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
gwas = hl.linear_regression_rows(y=common_mt.CaffeineConsumption, x=common_mt.GT.n_alt_alleles(), covariates=[1.0])
pca_eigenvalues, pca_scores, _ = hl.hwe_normalized_pca(common_mt.GT)
p = hl.plot.scatter(pca_scores.scores[0], pca_scores.scores[1],
label=common_mt.cols()[pca_scores.s].SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2', collect_all=True)
show(p)
p2 = hl.plot.scatter(pca_scores.scores[0], pca_scores.scores[1],
label=common_mt.cols()[pca_scores.s].SuperPopulation,
title='PCA (downsampled)', xlabel='PC1', ylabel='PC2', collect_all=False, n_divisions=50)
show(gridplot([p, p2], ncols=2, plot_width=400, plot_height=400))
p = hl.plot.histogram2d(pca_scores.scores[0], pca_scores.scores[1])
show(p)
p = hl.plot.qq(gwas.p_value, collect_all=True)
p2 = hl.plot.qq(gwas.p_value, n_divisions=75)
show(gridplot([p, p2], ncols=2, plot_width=400, plot_height=400))
p = hl.plot.manhattan(gwas.p_value)
show(p)
hover_fields = dict([('alleles', gwas.alleles)])
p = hl.plot.manhattan(gwas.p_value, hover_fields=hover_fields, collect_all=True)
show(p)
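# Optional (our addition): Bokeh figures can also be written to a standalone
# HTML file instead of being shown inline.
from bokeh.io import output_file, save
output_file('manhattan.html')
save(p)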
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Histogram
Step2: This method, like all Hail plotting methods, also allows us to pass in fields of our data set directly. Choosing not to specify the range and bins arguments would result in a range being computed based on the largest and smallest values in the dataset and a default bins value of 50.
Step3: Cumulative Histogram
Step4: Scatter
Step5: We can also pass in a Hail field as a label argument, which determines how to color the data points.
Step6: Hail's downsample aggregator is incorporated into the scatter(), qq(), and manhattan() functions. The collect_all parameter tells the plot function whether to collect all values or downsample. Choosing not to set this parameter results in downsampling.
Step7: 2-D histogram
Step8: Q-Q (Quantile-Quantile)
Step9: Manhattan
Step10: We can also pass in a dictionary of fields that we would like to show up as we hover over a data point, and choose not to downsample if the dataset is relatively small.
|
4,667
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
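# For example (illustrative only -- actual values depend on the model):
# DOC.set_value("ice velocity")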
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
4,668
|
<ASSISTANT_TASK:>
Python Code:
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
Weight_kg = 55
print (Weight_kg)
print('Weight in pounds:', Weight_kg * 2.2)
Weight_kg = 57.5
print ('New weight: ', Weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',')
print (data)
print (type(data))
%whos
# Finding out the data type
print (data.dtype)
# Find out the shape
print (data.shape)
# This is 60 rows * 40 columns
# Getting a single number out of the array
print ("First value in data: ", data [0, 0])
print ('A middle value: ', data[30, 20])
# Lets get the first 10 columns for the first 4 rows
print (data[0:4, 0:10])
# Start at index 0 and go up to BUT NOT INCLUDING index 4
# We don't need to start slicing at 0
print (data [5:10, 7:15])
# We don't even need to include the UPPER and LOWER bounds
smallchunk = data [:3, 36:]
print (smallchunk)
# Arithmetic on arrays
doublesmallchunk = smallchunk * 2.0
print (doublesmallchunk)
triplesmallchunk = smallchunk + doublesmallchunk
print (triplesmallchunk)
print (numpy.mean(data))
print (numpy.transpose(data))
print (numpy.max(data))
print (numpy.min(data))
# Get a set of data for the first station
station_0 = data [0, :]
print (numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each column, so the mean temperature for each recording period
print (numpy.mean(data, axis = 0))
# axis = 1 gets the mean ACROSS each row, so the mean temperature for each recording period
print (numpy.mean(data, axis = 1))
# do some simple visualisations
import matplotlib.pyplot
%matplotlib inline
image = matplotlib.pyplot.imshow(data)
# Let's look at the average temprature over time
avg_temperature = numpy.mean(data, axis = 0)
avg_plot = matplotlib.pyplot.plot(avg_temperature)
max_temprature = numpy.max(data, axis = 0)
min_temprature = numpy.min(data, axis = 0)
max_plot = matplotlib.pyplot.plot(max_temprature)
min_plot = matplotlib.pyplot.plot(min_temprature)
min_p = numpy.min(data, axis = 0)
min_plot = matplotlib.pyplot.plot(min_p)
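# Our addition (not in the original lesson): overlay all three series with a
# legend so the lines can be told apart.
matplotlib.pyplot.plot(max_temprature, label='max')
matplotlib.pyplot.plot(min_temprature, label='min')
matplotlib.pyplot.plot(avg_temperature, label='mean')
matplotlib.pyplot.legend()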
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variables
Step2: Tasks
|
4,669
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
patientunitstayid = 287822
query = query_schema + """
select *
from allergy
where patientunitstayid = {}
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
# Look at a subset of columns
cols = ['allergyid','patientunitstayid','allergyoffset','allergyenteredoffset',
'allergynotetype', 'usertype', 'writtenineicu',
'drugname','allergytype','allergyname']
df[cols].head()
drug = 'Tetracycline'
query = query_schema + """
select
allergyid, patientunitstayid
, allergyoffset, allergyenteredoffset
, allergytype, allergyname
, drugname, drughiclseqno
from allergy
where allergyname = '{}'
""".format(drug)
df = pd.read_sql_query(query, con)
df.set_index('allergyid',inplace=True)
print('{} unit stays with allergy to {}.'.format(df['patientunitstayid'].nunique(), drug))
df.head()
drug = 'Tetracycline'
query = query_schema + """
select
allergyid, patientunitstayid
, allergyoffset, allergyenteredoffset
, allergytype, allergyname
, drugname, drughiclseqno
from allergy
where lower(allergyname) like '%{}%'
""".format(drug.lower())
df = pd.read_sql_query(query, con)
df.set_index('allergyid',inplace=True)
print('{} unit stays with allergy to {}.'.format(df['patientunitstayid'].nunique(), drug))
df['allergyname'].value_counts()
drug = 'Tetracycline'
query = query_schema + """
select
allergyid, patientunitstayid
, allergyoffset, allergyenteredoffset
, allergytype, allergyname
, drugname, drughiclseqno
from allergy
where lower(drugname) like '%{}%'
""".format(drug.lower())
df = pd.read_sql_query(query, con)
df.set_index('allergyid',inplace=True)
print('{} unit stays with allergy to {} specified in drugname.'.format(df['patientunitstayid'].nunique(), drug))
print(df['drugname'].value_counts())
df.head()
hicl = 5236
query = query_schema + """
select
allergyid, patientunitstayid
, allergyoffset, allergyenteredoffset
, allergytype, allergyname
, drugname, drughiclseqno
from allergy
where drughiclseqno = {}
""".format(hicl)
df = pd.read_sql_query(query, con)
df.set_index('allergyid',inplace=True)
print('{} unit stays with allergy to HICL {} specified in drugname.'.format(df['patientunitstayid'].nunique(), hicl))
print(df['drugname'].value_counts())
df.head()
hicl = 5236
drugname = 'Tetracycline'
allergyname = 'Tetracycline'
query = query_schema + """
select
patientunitstayid
, max(case when drughiclseqno = {} then 1 else 0 end) as hicl_match
, max(case when lower(drugname) like '%{}%' then 1 else 0 end) as drug_match
, max(case when lower(allergyname) like '%{}%' then 1 else 0 end) as allergy_match
from allergy
group by patientunitstayid
.format(hicl, drugname.lower(), allergyname.lower())
df = pd.read_sql_query(query, con)
# drop non-matching rows
idx = (df['hicl_match'] == 1) | (df['drug_match'] == 1) | (df['allergy_match'] == 1)
df = df.loc[idx, :]
df.groupby(['hicl_match', 'drug_match', 'allergy_match']).count()
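# Illustrative addition (not part of the original analysis): total number of
# unit stays matched by at least one of the three methods.
print('{} unit stays matched by any method.'.format(df['patientunitstayid'].nunique()))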
drug = 'Sumycin'
query = query_schema + """
select
    allergyid, patientunitstayid
    , allergyoffset, allergyenteredoffset
    , allergytype, allergyname
    , drugname, drughiclseqno
from allergy
where lower(allergyname) like '%{}%'
""".format(drug.lower())
df = pd.read_sql_query(query, con)
df.set_index('allergyid',inplace=True)
print('{} unit stays with allergy to {}.'.format(df['patientunitstayid'].nunique(), drug))
df['allergyname'].value_counts()
query = query_schema + """
select
    pt.hospitalid
    , count(distinct pt.patientunitstayid) as number_of_patients
    , count(distinct a.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join allergy a
    on pt.patientunitstayid = a.patientunitstayid
group by pt.hospitalid
"""
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
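# Illustrative alternative in plain matplotlib, in case the pdvega extension
# used above is unavailable:
ax = df['data completion'].plot.hist(bins=10)
ax.set_xlabel('Percent of patients with data')
ax.set_ylabel('Number of hospitals')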
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Examine a single patient
Step4: Here we can see that this patient had an admission note highlighting they were allergic to nickel, tetracycline, ampicillin, and clindamycin.
Step6: However, it's also possible that they used mixed case, pluralization, or specified more than that specific string. We can use a string comparison to look for allergies like tetracycline.
Step8: It's also possible that they specified the allergy under the drugname column.
Step10: Since we may not capture all spellings or brands for tetracycline, we can try to use the HICL code to identify other observations. Above, we can see the HICL for tetracycline is 5236.
Step12: Let's combine all these methods and summarize how many patients are identified using each method.
Step14: As we can see, using the allergyname column was the most effective, and always identified patients allergic to tetracycline. Unfortunately we know this is an incomplete search, as providers will likely document brand names rather than generic names from time to time. For example, tetracycline is marketed under the name Sumycin among others. We can search for Sumycin in the data
Step16: Happily, in this case, only 1 patient is excluded by not searching for Sumycin, but in general it may be more.
|
4,670
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp -U
import hanlp
hanlp.pretrained.pos.ALL # the language is indicated by the last field of the model name or its corpus
pos = hanlp.load(hanlp.pretrained.pos.CTB9_POS_ELECTRA_SMALL)
pos(["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"])
print(pos.dict_tags)
pos.dict_tags = {'HanLP': 'state-of-the-art-tool'}
pos(["HanLP", "为", "生产", "环境", "带来", "次", "世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"])
pos.dict_tags = {('的', '希望'): ('补语成分', '名词'), '希望': '动词'}
pos(["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model
Step2: Call hanlp.load to load it; the model is downloaded automatically to a local cache:
Step3: Part-of-speech tagging
Step4: Note that the two occurrences of "希望" above receive different part-of-speech tags: one is a noun, the other a verb.
Step5: Customize the tag of a single word:
Step6: Customize tags based on context:
|
4,671
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
def np_fact(n):
    """Compute n! = n*(n-1)*...*1 using Numpy."""
    if n == 0:
        return 1
    values = np.arange(1, n + 1)
    return np.cumprod(values)[-1]
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
def loop_fact(n):
    """Compute n! using a Python for loop."""
    fact = 1
    for x in range(1, n + 1):
        fact *= x
    return fact
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
%timeit -n1 -r1 loop_fact(100000)
%timeit -n1 -r1 np_fact(100000)
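# For reference (an illustrative addition, not part of the exercise): the
# standard library's math.factorial is implemented in C and computes exact
# integer factorials, whereas the fixed-width integers in the Numpy version
# silently overflow for large n.
import math
assert math.factorial(10) == 3628800
%timeit -n1 -r1 math.factorial(100000)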
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Factorial
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is
|
4,672
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x8
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x8
conv2 = tf.layers.conv2d(maxpool1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x16
conv3 = tf.layers.conv2d(maxpool2, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x32
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x32
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x32
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x16
conv6 = tf.layers.conv2d(upsample3, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
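# Illustrative sanity check of the noise model (not in the original code):
# Gaussian noise with standard deviation noise_factor, clipped back to [0, 1].
example = mnist.train.images[0].reshape((28, 28))
plt.imshow(np.clip(example + noise_factor * np.random.randn(28, 28), 0., 1.), cmap='Greys_r')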
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
4,673
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import itertools
import logging
from functools import partial
import gensim
import matplotlib.pyplot as plt
import numpy as np
import pandas as pnd
from sklearn.cluster import *
from sklearn.decomposition import PCA, RandomizedPCA
from sklearn.manifold import TSNE
from knub.thesis.util import *
d = np.array([
[1.0, 2.0, 3.1],
[0.5, 1.2, 4.0],
[-1.0, 2.1, 1.0]
])
pca(d, 2)
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from IPython.core.display import HTML
HTML("""
<style>
div.text_cell_render p, div.text_cell_render ul, table.dataframe {
    font-size:1.3em;
    line-height:1.1em;
}
</style>
""")
# Prepare data in long form
df_topics = pnd.read_csv("../models/topic-models/topic.full.fixed-vocabulary.alpha-1-100.256-400.model.ssv",
sep=" ")
df_topics = df_topics.iloc[:, -10:]  # .ix is deprecated; use position-based .iloc
df_topics.columns = list(range(10))
df_topics["topic"] = df_topics.index
df_topics["topic_name"] = df_topics[0]
df = pnd.melt(df_topics, id_vars=["topic", "topic_name"], var_name="position", value_name="word")
df = df[["word", "topic", "topic_name", "position"]]
df = df.sort_values(by=["topic", "position"]).reset_index(drop=True)
df[df.topic == 0]
WORD2VEC_VECTOR_FILE = "/home/knub/Repositories/master-thesis/models/word-embeddings/GoogleNews-vectors-negative300.bin"
GLOVE_VECTOR_FILE = "/home/knub/Repositories/master-thesis/models/word-embeddings/glove.6B.50d.txt"
CBOW_VECTOR_FILE = "/home/knub/Repositories/master-thesis/models/word-embeddings/embedding.model.cbow"
SKIP_GRAM_VECTOR_FILE = "/home/knub/Repositories/master-thesis/models/word-embeddings/embedding.model.skip-gram"
#vectors_glove = gensim.models.Word2Vec.load_word2vec_format(GLOVE_VECTOR_FILE, binary=False)
#vectors_skip = gensim.models.Word2Vec.load_word2vec_format(SKIP_GRAM_VECTOR_FILE, binary=True)
#vectors_cbow = gensim.models.Word2Vec.load_word2vec_format(CBOW_VECTOR_FILE, binary=True)
vectors_word2vec = gensim.models.Word2Vec.load_word2vec_format(WORD2VEC_VECTOR_FILE, binary=True)
vectors_default = vectors_word2vec
def get_data_frame_from_word_vectors(df_param, vectors):
df_param = df_param[df_param["word"].apply(lambda word: word in vectors)]
df_param["embeddings"] = df_param["word"].apply(lambda word: vectors[word])
return df_param
df = get_data_frame_from_word_vectors(df.copy(), vectors_default)
df[df.topic == 0]
# financial, muslim, teams in sport, atom physics, math
nice_topics = [5, 117, 158, 164, 171]
nice_topics = [0, 7, 236]
df_part = df[df.topic.apply(lambda topic: topic in nice_topics)].copy()
# Show topics of interest
df_tmp = pnd.DataFrame(df_part.groupby("topic")["word"].apply(lambda l: l.tolist()).tolist())
df_tmp.index = nice_topics
df_tmp
def plot_topics_in_embedding_space(reduction_method, df_param):
embeddings = np.array(df_param["embeddings"].tolist())
X = reduction_method(embeddings)
df_tmp = df_param.copy()
df_tmp["x"] = X[:,0]
df_tmp["y"] = X[:,1]
df_tmp = df_tmp[df_tmp.topic.apply(lambda topic: topic in nice_topics)]
colors = {0: "red", 7: "blue", 236: "green", 164: "yellow", 171: "black"}
plt.figure(figsize=(12, 8))
plt.scatter(df_tmp.x, df_tmp.y, c=df_tmp.topic.apply(lambda topic: colors[topic]), s=80)
ylim = plt.gca().get_ylim()
step = (ylim[1] - ylim[0]) / 100
for index, row in df_tmp.iterrows():
plt.text(row.x, row.y - step, row.word, horizontalalignment='center', verticalalignment='top')
#plot_topics_in_embedding_space(pca, df)
plot_topics_in_embedding_space(pca, df_part) # third dimensions
#plot_topics_in_embedding_space(tsne, df)
plot_topics_in_embedding_space(tsne_with_init_pca, df)
def average_pairwise_similarity(words, vectors):
word_pairs = itertools.permutations(words, 2)
similarities = [vectors.similarity(word1, word2) for word1, word2 in word_pairs if word1 < word2]
return np.mean(similarities)
def average_top_similarity(words, vectors):
word_pairs = itertools.permutations(words, 2)
similarities = [(word1, vectors.similarity(word1, word2)) for word1, word2 in word_pairs]
max_similarities = [max([s for w, s in l]) for _, l in itertools.groupby(similarities, lambda s: s[0])]
return np.mean(max_similarities)
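# Illustrative example (not in the original notebook): compare both measures
# on a small hand-picked word list; the words are assumed to be present in
# the word2vec vocabulary.
example_words = ["bank", "money", "finance"]
print(average_pairwise_similarity(example_words, vectors_default))
print(average_top_similarity(example_words, vectors_default))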
topic_lengths = list(range(2, 11))
def calculate_similarities_for_topic(df_topic, sim_function, vectors):
words_in_topic = df_topic["word"].tolist()
average_similarities = [sim_function(words_in_topic[:topic_length], vectors)
for topic_length in topic_lengths]
return pnd.Series(average_similarities)
def calculate_similarity_matrix(sim_function, vectors):
def partial_function(df_topic):
return calculate_similarities_for_topic(df_topic, sim_function, vectors)
df_similarities = df.groupby("topic").apply(partial_function)
df_similarities.columns = ["%s-words" % i for i in topic_lengths]
return df_similarities
df_similarities = calculate_similarity_matrix(average_pairwise_similarity, vectors_default)
df_similarities.mean()
means = df_similarities.mean().tolist()
plt.figure(figsize=(12, 8))
plt.scatter(topic_lengths, means, s=80)
plt.title("Avg. word similarity (cosine similarity in WE space) of topics up to the nth word")
plt.xlim(0, 11)
plt.xticks(list(range(1, 12)))
#plt.ylim((0, 0.35))
plt.xlabel("topic length")
plt.ylabel("average similarity")
def show_highest_similar_topics(topic_length, nr_topics=3):
column = "%s-words" % topic_length
df_top = df_similarities.sort_values(by=column, ascending=False)[:nr_topics]
return df_top.join(df_topics)[[column] + list(range(topic_length))]
show_highest_similar_topics(3)
show_highest_similar_topics(6)
show_highest_similar_topics(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notes
Step2: Preprocessing
Step3: Word embeddings
Step4: Topic model in word embedding space
Step5: PCA
Step6: t-SNE
Step7: t-SNE with PCA initialization
Step8: Findings
Step9: Highest-similar topics
|
4,674
|
<ASSISTANT_TASK:>
Python Code:
print('hello world')
# This is an inline comment; Python 3 print:
print('hello world')
# Python 2 print statement (a SyntaxError in Python 3):
print 'hello world'
1 * 1.0
a = 3.0
type(a)
b = 3 > 5
type(b)
a = int(a)
type(a)
# Division differs between Python 2 and Python 3
3 / 2
L = ['red', 'blue', 'green', 'black', 'white']
L[3], L[-2], L[3:], L[3:4]
L[1] = 'yellow'
G = L
L[1] = 'blue'
L, G
G = L[:]
L[1] = 'yellow'
L, G
L.append('pink')
print(L)
L.pop()
print(L)
T = 'white', 'black', 'yellow'
T
T[1] = 'brown'  # raises a TypeError: tuples are immutable
print(T)
del T
print(T)  # raises a NameError: T no longer exists
tel = {'Yike': 4546456, 'Philipp': 773456454}
tel
tel[1]  # raises a KeyError: dictionaries are indexed by key, not by position
# What is Yike's telephone number?
tel['Yike']
# How do we add Adam?
tel['Adam'] = 7745464
tel
# What keys are defined in our dictionary so far?
tel.keys()
# Yike has a new telephone number.
tel['Yike'] = 77378797
a, b = 1, 5
if a == 1:
print(1)
elif a == 2:
print(2)
else:
if b == 5:
print('A lot')
# Note the indentation.
for key_ in tel.keys():
print(key_, tel[key_])
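# Illustrative addition: iterating over key/value pairs with .items() is
# often more idiomatic than indexing by key.
for name_, number_ in tel.items():
    print(name_, number_)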
def return_phone_number(book, name = 'Philipp'):
''' This function returns the telephone number for the requested name.
'''
# Check inputs.
assert (isinstance(name, str)), 'The requested name needs to be a string object.'
assert (name in ['Yike', 'Philipp', 'Adam'])
return book[name]
return_phone_number(tel)
return_phone_number(tel, 'Philipp')
return_phone_number(tel, 'Yike')
return_phone_number(tel, 'Peter')  # raises an AssertionError: 'Peter' is not an allowed name
import urllib; from IPython.core.display import HTML  # Python 2; in Python 3 use urllib.request.urlopen
HTML(urllib.urlopen('http://bit.ly/1Ki3iXw').read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As some of you were asking about differences between Python2 and Python3, here is an example
Step2: More broadly speaking, some function interfaces (here the print function) change. We will encounter other examples as we move along. More information regarding this issue is available at https
Step3: What about integer division?
Step4: Let us now turn to containers
Step5: Lists are mutable objects, i.e. they can be changed.
Step6: There is an important distinction between independent copies and references to objects.
Step7: How to work with lists, or objects more generally?
Step8: Now we turn to tuples, which are immutable objects.
Step9: As you asked, here is one way to delete objects.
Step10: Now, let us turn to dictionaries, which are tables that map keys to values.
Step11: Control Flow
Step12: Functions
Step13: Next Steps
|
4,675
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-whitegrid')
observations = pd.read_csv("data/observations.csv", index_col="occurrenceID")
observations.head()
observations.info()
observations["eventDate"] = pd.to_datetime(observations[["year", "month", "day"]])
observations
observations["datasetName"] = "Ecological Archives E090-118-D1."
observations["verbatimSex"].unique()
sex_dict = {"M": "male",
"F": "female",
"R": "male",
"P": "female",
"Z": np.nan}
observations['sex'] = observations['verbatimSex'].replace(sex_dict)
observations["sex"].unique()
observations['species_ID'].isna().sum()
observations.duplicated().sum()
duplicate_observations = observations[observations.duplicated(keep=False)]
duplicate_observations.sort_values(["eventDate", "verbatimLocality"]).head(9)
observations_unique = observations.drop_duplicates()
len(observations_unique)
len(observations_unique.dropna())
len(observations_unique.dropna(subset=['species_ID']))
observations_with_ID = observations_unique.dropna(subset=['species_ID'])
observations_with_ID.head()
mask = observations['species_ID'].isna() & observations['sex'].notna()
not_identified = observations[mask]
not_identified.head()
# Recap from previous exercises - remove duplicates and observations without species information
observations_unique_ = observations.drop_duplicates()
observations_data = observations_unique_.dropna(subset=['species_ID'])
species_names = pd.read_csv("data/species_names.csv")
species_names.head()
species_names.shape
survey_data = pd.merge(observations_data, species_names, how="left",
left_on="species_ID", right_on="ID")
survey_data
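# Illustrative sanity check (not part of the original exercise): a left merge
# keeps every observation, so any species_ID missing from species_names
# would show up here as a missing name.
survey_data['name'].isna().sum()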
survey_data['taxa'].value_counts()
#survey_data.groupby('taxa').size()
non_rodent_species = survey_data[survey_data['taxa'].isin(['Rabbit', 'Bird', 'Reptile'])]
len(non_rodent_species)
r_species = survey_data[survey_data['name'].str.lower().str.startswith('r')]
len(r_species)
r_species["name"].value_counts()
non_bird_species = survey_data[survey_data['taxa'] != 'Bird']
len(non_bird_species)
birds_85_89 = survey_data[(survey_data["eventDate"] >= "1985-01-01")
& (survey_data["eventDate"] <= "1989-12-31 23:59")
& (survey_data['taxa'] == 'Bird')]
birds_85_89.head()
# alternative solution
birds_85_89 = survey_data[(survey_data["eventDate"].dt.year >= 1985)
& (survey_data["eventDate"].dt.year <= 1989)
& (survey_data['taxa'] == 'Bird')]
birds_85_89.head()
# Multiple lines
obs_with_weight = survey_data.dropna(subset=["weight"])
median_weight = obs_with_weight.groupby(['name'])["weight"].median()
median_weight.sort_values(ascending=False)
# Single line statement
survey_data.dropna(subset=["weight"]).groupby(['name'])["weight"].median().sort_values(ascending=False)
survey_data.groupby("name").size().nlargest(8)
survey_data['name'].value_counts()[:8]
n_species_per_plot = survey_data.groupby(["verbatimLocality"])["name"].nunique()
fig, ax = plt.subplots(figsize=(6, 6))
n_species_per_plot.plot(kind="barh", ax=ax)
ax.set_ylabel("Plot number");
# Alternative option to calculate the species per plot:
# inspired on the pivot table we already had:
# species_per_plot = survey_data.reset_index().pivot_table(
# index="name", columns="verbatimLocality", values="ID", aggfunc='count')
# n_species_per_plot = species_per_plot.count()
n_plots_per_species = survey_data.groupby(["name"])["verbatimLocality"].nunique().sort_values()
fig, ax = plt.subplots(figsize=(10, 8))
n_plots_per_species.plot(kind="barh", ax=ax)
ax.set_xlabel("Number of plots");
ax.set_ylabel("");
n_plot_sex = survey_data.groupby(["sex", "verbatimLocality"]).size().rename("count").reset_index()
n_plot_sex.head()
pivoted = n_plot_sex.pivot_table(columns="sex", index="verbatimLocality", values="count")
pivoted.head()
pivoted.plot(kind='bar', figsize=(12, 6), rot=0)
sns.catplot(data=survey_data, x="verbatimLocality",
hue="sex", kind="count", height=3, aspect=3)
heatmap_prep = survey_data.pivot_table(index='year', columns='month',
values="ID", aggfunc='count')
fig, ax = plt.subplots(figsize=(10, 8))
ax = sns.heatmap(heatmap_prep, cmap='Reds')
species_per_plot = survey_data.reset_index().pivot_table(index="name",
columns="verbatimLocality",
values="ID",
aggfunc='count')
species_per_plot.head()
fig, ax = plt.subplots(figsize=(8,8))
sns.heatmap(species_per_plot, ax=ax, cmap='Greens')
survey_data.resample('A', on='eventDate').size().plot()
merriami = survey_data[survey_data["name"] == "Dipodomys merriami"]
fig, ax = plt.subplots()
merriami.groupby(merriami['eventDate'].dt.month).size().plot(kind="barh", ax=ax)
ax.set_xlabel("number of occurrences")
ax.set_ylabel("Month of the year")
subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii',
'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]
month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size()
species_evolution = month_evolution.unstack(level=0)
axs = species_evolution.plot(subplots=True, figsize=(14, 8), sharey=True)
# Given as solution..
subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii',
'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]
month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size().rename("counts")
month_evolution = month_evolution.reset_index()
sns.relplot(data=month_evolution, x='eventDate', y="counts",
row="name", kind="line", hue="name", height=2, aspect=5)
year_evolution = survey_data.groupby("taxa").resample('A', on='eventDate').size()
year_evolution.name = "counts"
year_evolution = year_evolution.reset_index()
year_evolution.head()
sns.relplot(data=year_evolution, x='eventDate', y="counts",
col="taxa", col_wrap=2, kind="line", height=2, aspect=5,
facet_kws={"sharey": False})
fig, ax = plt.subplots()
survey_data.groupby(survey_data["eventDate"].dt.weekday).size().plot(kind='barh', color='#66b266', ax=ax)
import calendar
ticks = ax.set_yticklabels(calendar.day_name)  # assignment suppresses the notebook output
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Exercise
Step3: Exercise
Step4: Cleaning the verbatimSex column
Step5: For the further analysis (and the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex and convert the current values to the corresponding sex, taking into account the following mapping
Step6: Tackle missing values (NaN) and duplicate values
Step7: Exercise
Step8: Exercise
Step9: Exercise
Step10: Exercise
Step11: Exercise
Step12: Adding the names of the observed species
Step13: In the data set observations, the column species_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv
Step14: The species_names table contains the scientific name of a species for each identifier in the ID column. The species_names data set contains in total 38 different scientific names
Step15: For further analysis, let's combine both in a single DataFrame in the following exercise.
Step16: Select subsets according to taxa of species
Step17: Exercise
Step18: Exercise
Step19: Exercise
Step20: Exercise
Step21: Exercise
Step22: Species abundance
Step23: Exercise
Step24: Exercise
Step25: Exercise
Step26: As such, we can use the variable pivoted to plot the result
Step27: Exercise
Step28: Exercise
Step29: Remark that we started from a tidy data format (also called long format) and converted to a wide format, with the years in the row index, the months in the columns, and the counts for each year/month combination as values.
Step30: Exercise
Step31: (OPTIONAL SECTION) Evolution of species during monitoring period
Step32: Exercise
Step33: Exercise
Step34: Exercise
Step35: Exercise
|
4,676
|
<ASSISTANT_TASK:>
Python Code:
import os
import zipfile
import requests
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
OCCUPANCY = ('http://bit.ly/ddl-occupancy-dataset', 'occupancy.zip')
CREDIT = ('http://bit.ly/ddl-credit-dataset', 'credit.xls')
CONCRETE = ('http://bit.ly/ddl-concrete-data', 'concrete.xls')
def download_data(url, name, path='data'):
if not os.path.exists(path):
os.mkdir(path)
response = requests.get(url)
with open(os.path.join(path, name), 'wb') as f:
f.write(response.content)
def download_all(path='data'):
for href, name in (OCCUPANCY, CREDIT, CONCRETE):
download_data(href, name, path)
# Extract the occupancy zip data
z = zipfile.ZipFile(os.path.join(path, 'occupancy.zip'))
z.extractall(os.path.join(path, 'occupancy'))
path='data'
download_all(path)
# Load the room occupancy dataset into a dataframe
occupancy = os.path.join('data','occupancy','datatraining.txt')
occupancy = pd.read_csv(occupancy, sep=',')
occupancy.columns = [
'date', 'temp', 'humid', 'light', 'co2', 'hratio', 'occupied'
]
features = occupancy[['temp', 'humid', 'light', 'co2', 'hratio']]
labels = occupancy['occupied']
list(features)
model = Lasso()
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
model = Ridge()
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
model = ElasticNet(l1_ratio=0.10)
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
model = Lasso()
sfm = SelectFromModel(model)
sfm.fit(features, labels)
print(list(features[sfm.get_support(indices=True)]))
model = Ridge()
sfm = SelectFromModel(model)
sfm.fit(features, labels)
print(list(features[sfm.get_support(indices=True)]))
model = ElasticNet()
sfm = SelectFromModel(model)
sfm.fit(features, labels)
print(list(features[sfm.get_support(indices=True)]))
pca = PCA(n_components=2)
new_features = pca.fit(features).transform(features)
print(new_features)
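# Illustrative addition: inspect how much of the total variance the two
# principal components capture (scaling the features beforehand would
# usually be advisable, since they live on very different scales).
print(pca.explained_variance_ratio_)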
lda = LDA(n_components=2)
new_features = lda.fit(features, labels).transform(features)
print(new_features)
# Load the credit card default dataset into a dataframe
credit = os.path.join('data','credit.xls')
credit = pd.read_excel(credit, header=1)
credit.columns = [
'id', 'limit', 'sex', 'edu', 'married', 'age', 'apr_delay', 'may_delay',
'jun_delay', 'jul_delay', 'aug_delay', 'sep_delay', 'apr_bill', 'may_bill',
'jun_bill', 'jul_bill', 'aug_bill', 'sep_bill', 'apr_pay', 'may_pay', 'jun_pay',
'jul_pay', 'aug_pay', 'sep_pay', 'default'
]
# Separate dataframe into features and targets
cred_features = credit[[
'limit', 'sex', 'edu', 'married', 'age', 'apr_delay', 'may_delay',
'jun_delay', 'jul_delay', 'aug_delay', 'sep_delay', 'apr_bill', 'may_bill',
'jun_bill', 'jul_bill', 'aug_bill', 'sep_bill', 'apr_pay', 'may_pay',
'jun_pay', 'jul_pay', 'aug_pay', 'sep_pay'
]]
cred_labels = credit['default']
# Load the concrete compression dataset into a dataframe
concrete = pd.read_excel(os.path.join('data','concrete.xls'))
concrete.columns = [
'cement', 'slag', 'ash', 'water', 'splast',
'coarse', 'fine', 'age', 'strength'
]
# Separate dataframe into features and targets
conc_features = concrete[[
'cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age'
]]
conc_labels = concrete['strength']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the data
Step2: Load the first dataset into a dataframe
Step3: Separate dataframe into features and targets
Step4: Regularization techniques
Step5: Ridge Regression (L2 Regularization)
Step6: ElasticNet
Step7: Transformer methods
Step8: Dimensionality reduction
Step9: Linear discriminant analysis (LDA)
Step10: To learn more about feature selection tools within Scikit-Learn, check out http
|
4,677
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd  # used by load() below but missing from the original imports
import concise.layers as cl
import keras.layers as kl
import concise.initializers as ci
import concise.regularizers as cr
from keras.callbacks import EarlyStopping
from concise.preprocessing import encodeDNA
from keras.models import Model, load_model
# get the data
def load(split="train", st=None):
dt = pd.read_csv("../data/RBP/PUM2_{0}.csv".format(split))
# DNA/RNA sequence
xseq = encodeDNA(dt.seq) # list of sequences -> np.ndarray
# response variable
    y = dt.binding_site.values.reshape((-1, 1)).astype("float")  # .as_matrix() is deprecated
return {"seq": xseq}, y
train, valid, test = load("train"), load("valid"), load("test")
# extract sequence length
seq_length = train[0]["seq"].shape[1]
# get the PWM list for initialization
from concise.data import attract
dfa = attract.get_metadata() # table with PWM meta-info
dfa_pum2 = dfa[dfa.Gene_name.str.match("PUM2") & \
dfa.Organism.str.match("Homo_sapiens") & \
(dfa.Experiment_description == "genome-wide in vivo immunoprecipitation")]
pwm_list = attract.get_pwm_list(dfa_pum2.PWM_id.unique()) # retrieve the PWM by id
print(pwm_list)
# specify the model
in_dna = cl.InputDNA(seq_length=seq_length, name="seq") # Convenience wrapper around keras.layers.Input()
x = cl.ConvDNA(filters=4, # Convenience wrapper around keras.layers.Conv1D()
kernel_size=8,
kernel_initializer=ci.PSSMKernelInitializer(pwm_list), # intialize the filters on the PWM values
activation="relu",
name="conv1")(in_dna)
x = kl.AveragePooling1D(pool_size=4)(x)
x = kl.Flatten()(x)
x = kl.Dense(units=1)(x)
m = Model(in_dna, x)
m.compile("adam", loss="binary_crossentropy", metrics=["acc"])
# train the model
m.fit(train[0], train[1], epochs=5);
# save the model
m.save("/tmp/model.h5")
# load the model
m2 = load_model("/tmp/model.h5")
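# Illustrative check (assuming the HDF5 round trip preserves the weights):
# the reloaded model should reproduce the original model's predictions.
import numpy as np
print(np.allclose(m.predict(train[0]), m2.predict(train[0])))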
# Convenience layers extend the base class (here keras.layers.Conv1D) with .plot_weights for filter visualization
m.get_layer("conv1").plot_weights(plot_type="motif_pwm_info", figsize=(4, 6));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Concise is fully compatible with Keras; we can save and load the Keras models (note
|
4,678
|
<ASSISTANT_TASK:>
Python Code:
def addFunction(inputNumber):
result = inputNumber + 2
return result
print(addFunction(2))
var = 2
print(addFunction(var))
def addFunction(inputNumber):
if inputNumber < 0:
return 'Number must be positive!'
result = inputNumber + 2
return result
print(addFunction(-2))
print(addFunction(2))
def addTwoNumbers(inputNumber1, inputNumber2):
result = inputNumber1 + inputNumber2
return result
print(addTwoNumbers(2, 3))
def twoNumbers(inputNumber1, inputNumber2):
addition = inputNumber1 + inputNumber2
multiplication = inputNumber1 * inputNumber2
return addition, multiplication
result1, result2 = twoNumbers(2, 3)
print('addition: ', result1)
print('multiplication: ', result2)
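# As an illustrative aside (not part of the original lesson): functions are
# objects themselves, so they can be passed as inputs to other functions.
def applyFunction(func, inputNumber1, inputNumber2):
    return func(inputNumber1, inputNumber2)

print(applyFunction(addTwoNumbers, 2, 3))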
class CounterClass:
count = 0
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
myCounter = CounterClass()
myCounter.addToCounter(2)
print(myCounter.getCount())
myCounter.count
class CounterClass:
def __init__(self, inputValue):
self.count = inputValue
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
myNewCounter = CounterClass(10)
myNewCounter.addToCounter(2)
#this should now return 12
print(myNewCounter.getCount())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On its own, this code will only define what the function does, but will not actually run any code. To execute the code inside the function you have to call it somewhere within the script and pass it the proper input(s)
Step2: A function's definition begins with the keyword 'def'. After this is the function's name, which follows the same naming conventions as variables. Inside the parenthesis after the function name you can place any number of input variables separated by commas. These variables will be passed to the function when it is called, and are available within the body of the function. When you call a function, you can either directly pass values or pass variables that have values stored inside of them. For example, this code will call the function in the same way
Step3: Here the value of the 'var' variable, which in this case is 2, is being passed to the 'addFunction' function, and is then available within that function through the 'inputNumber' variable. Notice that the names of the two variables 'var' and 'inputNumber' don't have to match. When a value gets passed to a function it forms a direct connection between the two sets of parenthesis which carries the data. In this case 'var' is a global variable that stores the value '2' in the main script, while 'inputNumber' is a local variable which stores that value only for the duration of that function. In this way functions 'wrap up' specific tasks and all the data that is necessary to execute that task to limit the number of global variables necessary in the main function.
Step4: You can see that in this case, if the input is less than zero the conditional will be met, which causes the first return statement to run, skipping the rest of the code in the function.
Step5: You can also return multiple values by separating them by commas in the 'return' statement and specifying the same number of variables before the '=' in the function call. Let's expand our function to return both the addition and multiplication of two numbers
Step6: Functions are extremely useful for creating efficient and readable code. By wrapping up certain functionalities into custom modules, they allow you (and possibly others) to reuse code in a very efficient way, and also force you to be explicit about the various sets of operations happening in your code. You can see that the basic definition of functions is quite simple, however you can quickly start to define more advanced logics, where functions call each other and pass around inputs and returns in highly complex ways (you can even pass a function as an input into another function!). This kind of programming, which uses functions to encapsulate discrete logics within a program is called functional programming.
Step7: Notice we are again using the '+=' shorthand to increment the value of the object's count variable by the input value. To use this class, we first need to create an instance of it, which we will store in a variable just like any other piece of data
Step8: Once we create an instance of a class (this is called 'instantiation'), we can run that instance's methods, and query it's variables. Note that the general class definition is only a construct. All variables within the class only apply to a particular instance, and the methods can only be run as they relate to that instance. For example
Step9: Right away, you will notice a few differences between how we define functions and classes. First of all, no variables are passed on the first line of the definition since the 'class' keyword only defines the overall structure of the class. After the first line you will find a list of variables that are the local variables of that class, and keep track of data for individual instances. After this you will have a collection of local methods (remember 'methods' are simply functions that belong to a particular class) that define the class functionality. These methods are defined the same way as before, except you see that the first input is always the keyword 'self'. This represents the object instance, and is always passed as the first input into each method in a class. This allows you to query the local variables of the instance, as you can see us doing with the 'count' variable.
Step10: However, this is discouraged because it reveals the true name of the local variables to the end user. In a production environment this would pose severe security risks, but it is considered bad practice even in private uses. Instead, you are encouraged to create special 'accessor' methods to pull variable values from the instance, as we have done with the 'getCount()' method in our example. Another advantage of this practice (which is called encapsulation) is that the code is easier to maintain. You are free to make any changes within the class definition, including changing the names of the local variables and what they do. As long as you maintain the accessor functions and they return the expected result, you do not have to update anything in the main code.
Step11: Now we can create a new instance of the counter, but this time pass in a starting value for the count.
|
4,679
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import tensorflow.contrib.keras as keras
%matplotlib inline
# Dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# input image dimensions and class counts
img_rows, img_cols = 28, 28
num_classes = 10
x_train[0].shape
y_train[0]
plt.imshow(x_train[0], cmap=cm.binary)
# images are expected as 3D tensors with the third dimension containing different image channels; reshape x to a
# 3D tensore with single color channel, the grayscale channel
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train[0].shape
# convert X to [0,1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train[:5]
# convert to a one hot encoding of the class labels
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
y_train[:5]
batch_size = 128
epochs = 7 # increasing this would probably make sense but takes longer to compute
# houses a linear stack of layers
model = keras.models.Sequential()
# add layers to the sequential model
model.add(keras.layers.Conv2D(32, # 32 filters/kernels
kernel_size=(3, 3), # filter size of 3x3 pixels
activation='relu',
input_shape=input_shape))
model.add(keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
keras.utils.plot_model(model, to_file='chapter_9_cnn.png', show_shapes=True)
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
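# Illustrative addition: classify a single test image and compare the
# prediction with its one-hot encoded label.
prediction = model.predict(x_test[:1])
print('Predicted digit:', prediction.argmax(), '- true digit:', y_test[0].argmax())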
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset pre-processing
Step2: Specifying the CNN model
Step3: The model can be visualized as follows
Step4: A convolutional layer 'Conv2D' looks like this
Step5: Testing the model
|
4,680
|
<ASSISTANT_TASK:>
Python Code:
import scipy as sp
import numpy as np
# we will need to plot stuff later
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 8)
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2
import scipy.constants as const
const.epsilon_0
# convert temperatures:
const.convert_temperature(100, old_scale='C', new_scale='K')
# more constants (including units and errors)!
for k, v in const.physical_constants.items():
print(k, ':', v)
val, unit, uncertainty = const.physical_constants['muon mass energy equivalent in MeV']
val, unit, uncertainty
a = -1
b = 5
x = np.linspace(0, 5, 100)
y = np.exp(a * x) + b + np.random.normal(0, 0.1, 100)
plt.plot(x, y, '.', label='data')
from scipy.optimize import curve_fit
def f(x, a, b):
return np.exp(a * x) + b
params, covariance_matrix = curve_fit(f, x, y)
uncertainties = np.sqrt(np.diag(covariance_matrix))
print('a = {:5.2f} ± {:.2f}'.format(params[0], uncertainties[0]))
print('b = {:5.2f} ± {:.2f}'.format(params[1], uncertainties[1]))
x_plot = np.linspace(-0.1, 5.1, 1000)
plt.plot(x, y, '.', label='data')
plt.plot(x_plot, f(x_plot, *params), label='fit result')
plt.legend();
x = np.linspace(0, 1, 100)
y = np.sin(5 * np.pi * x + np.pi / 2)
yerr = np.full_like(y, 0.2)
noise = np.random.normal(0, yerr, 100)
y += noise
def f(x, a, b):
return np.sin(a * x + b)
#params, covariance_matrix = curve_fit(f, x, y)
# params, covariance_matrix = curve_fit(
# f, x, y,
# p0=[15, 2],
#)
params, covariance_matrix = curve_fit(
f, x, y,
p0=[15, 1.5],
sigma=yerr,
absolute_sigma=True,
)
# plot the stuff
x_plot = np.linspace(-0.1, 1.1, 1000)
plt.plot(x, y, '.', label='data')
plt.plot(x_plot, f(x_plot, *params), label='fit result')
plt.legend();
def cov2cor(cov):
'''Convert the covariance matrix to the correlation matrix'''
D = np.diag(1 / np.sqrt(np.diag(cov)))
return D @ cov @ D
covariance_matrix
correlation_matrix = cov2cor(covariance_matrix)
plt.matshow(correlation_matrix, vmin=-1, vmax=1, cmap='RdBu_r')
plt.colorbar(shrink=0.8);
correlation_matrix
lambda_ = 15
k = np.random.poisson(lambda_, 100)
# make sure to use bins of integer width, centered around the integer
bin_edges = np.arange(0, 31) - 0.5
plt.hist(k, bins=bin_edges);
from scipy.optimize import minimize
def negative_log_likelihood(lambda_, k):
return np.sum(lambda_ - k * np.log(lambda_))
result = minimize(
negative_log_likelihood,
x0=(10, ), # initial guess
args=(k, ), # additional arguments for the function to minimize
)
result
print('True λ = {}'.format(lambda_))
print('Fit: λ = {:.2f} ± {:.2f}'.format(result.x[0], np.sqrt(result.hess_inv[0, 0])))
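# Illustrative cross-check (not in the original notebook): for a Poisson
# likelihood the analytic maximum-likelihood estimate is the sample mean,
# so it should agree with the numerical fit above.
print('Sample mean: {:.2f}'.format(np.mean(k)))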
from scipy.stats import norm, expon
x = np.append(
norm.rvs(loc=15, scale=2, size=500),
expon.rvs(scale=10, size=4500),
)
def pdf(x, mu, sigma, tau, p):
return p*norm.pdf(x, mu, sigma) + (1 - p)*expon.pdf(x, scale=tau)
def negative_log_likelihood(params, x):
mu, sigma, tau, p = params
neg_l = -np.sum(np.log(pdf(x, mu, sigma, tau, p)))
return neg_l
result = minimize(
negative_log_likelihood,
x0=(12, 1.5, 8, 0.2), # initial guess
args=(x, ), # additional arguments for the function to minimize
bounds=[
(None, None), # no bounds for mu
(1e-32, None), # sigma > 0
(1e-32, None), # tau > 0
(0, 1), # 0 <= p <= 1
],
method='L-BFGS-B', # method that supports bounds
)
x_plot = np.linspace(0, 100, 1000)
plt.hist(x, bins=100, normed=True)
plt.plot(x_plot, pdf(x_plot, *result.x))
print(result.hess_inv)
# get the covariance matrix as normal numpy array
covariance_matrix = result.hess_inv.todense()
correlation_matrix = cov2cor(covariance_matrix)
plt.matshow(correlation_matrix, vmin=-1, vmax=1, cmap='RdBu_r')
plt.colorbar(shrink=0.8);
plt.xticks(np.arange(4), ['μ', 'σ', 'τ', 'p'])
plt.yticks(np.arange(4), ['μ', 'σ', 'τ', 'p'])
print(correlation_matrix)
import numpy as np
# generate some data
real_values = np.array([1.5, -3])
x = np.linspace(0, 1, 10)
y = real_values[0]*x + real_values[1]
xerr = np.full_like(x, 0.1)
yerr = np.full_like(y, 0.05)
# add noise to the data
x += np.random.normal(0, xerr, 10)
y += np.random.normal(0, yerr, 10)
# plot the data
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o');
import scipy.odr as odr
# function we want to fit (in this case, a line)
def f(B, x):
return B[0]*x + B[1]
# do the fit!
guess = [5, 0]
linear = odr.Model(f)
data = odr.RealData(x, y, sx=xerr, sy=yerr)
odr_fit = odr.ODR(data, linear, beta0=guess)
odr_output = odr_fit.run()
odr_output.pprint() # pprint = 'pretty print' function
# plot data and ODR fit
z = np.linspace(-0.1, 1.1, 100)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o')
plt.plot(z, f(odr_output.beta, z), 'k--');
from scipy.optimize import curve_fit
def g(x, m, b):
return m*x + b
params, covariance_matrix = curve_fit(g, x, y, sigma=yerr, p0=guess)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o')
plt.plot(z, f(odr_output.beta, z), 'k--', label='ODR Fit')
plt.plot(z, g(z, *params), 'g-.', label='curve_fit')
plt.legend(loc='best')
print('ODR Fit Results: ', odr_output.beta)
print('curve_fit Results:', params)
print('Real Values:', real_values)
freq1 = 5
freq2 = 50
t = np.linspace(0, 1, 1024*10)
y = np.sin(2*np.pi*freq1*t) + np.sin(2*np.pi*freq2*t)
# add some white noise
y += np.random.normal(y, 5)
plt.scatter(t, y, s=10, alpha=0.25, lw=0)
plt.xlabel(r'$t \ /\ \mathrm{s}$');
from scipy import fftpack
z = fftpack.rfft(y)
f = fftpack.rfftfreq(len(t), t[1] - t[0])
plt.axvline(freq1, color='lightgray', lw=5)
plt.axvline(freq2, color='lightgray', lw=5)
plt.plot(f, np.abs(z)**2)
plt.xlabel('f / Hz')
plt.xscale('log')
# plt.yscale('log');
from scipy.special import ndtr
def s_curve(x, a, b):
return ndtr(-a*(x - b))
# generate mildly noisy data using Gaussian CDF (see end of this notebook)
real_params = [2.5, 3]
x = np.linspace(0, 5, 20)
y = s_curve(x, *real_params)
y += np.random.normal(0, 0.025, len(y))
# add 4 bad data points
outlier_xcoords = [2, 6, 10, 15]
y[outlier_xcoords] = np.random.uniform(0.2, 2, size=4)
plt.plot(x, y, 'bo')
# attempt to fit
params, __ = curve_fit(s_curve, x, y)
z = np.linspace(0, 5, 100)
plt.plot(z, s_curve(z, *params), 'k--')
print('Real value:', real_params[1])
print('Fit value:', params[1])
from scipy.signal import medfilt
filtered_y = medfilt(y)
params, __ = curve_fit(s_curve, x, filtered_y)
print('Real value:', real_params[1])
print('Fit value:', params[1])
z = np.linspace(0, 5, 100)
plt.plot(x, y, 'k*', label='Before Filtering')
plt.plot(x, filtered_y, 'bo', label='After Filtering')
plt.plot(z, s_curve(z, *params), 'g--')
plt.legend();
from scipy.signal import butter, lfilter
def lowpass_filter(data, cutoff, fs, order=5):
    """Digital Butterworth low-pass filter.

    data : 1D array of data to be filtered
    cutoff : cutoff frequency in Hz
    fs : sampling frequency (samples/second)
    """
nyquist_frequency = fs/2
normal_cutoff = cutoff/nyquist_frequency
b, a = butter(order, normal_cutoff, btype='low')
y = lfilter(b, a, data)
return y
data = np.genfromtxt('../resources/scipy_filter_data.dat')
t = data[:, 0]
y = data[:, 1]
sample_freq = (len(t) - 1)/(t[-1])
plt.plot(t, y); # these are your data
from scipy.stats import norm
def gaussian(x, mu, sigma, A):
return A * norm.pdf(x, mu, sigma)
# %load -r 3-52 solutions/07_01_scipy.py
from scipy.integrate import quad
def f(x):
return 3*x**2 + 6*x - 9
quad(f, 0, 5)
def sinc(x):
return np.sin(x) / x
x = np.linspace(-10, 10, 1000)
y = sinc(x)
plt.plot(x, y)
plt.title('Sinc Function')
print(quad(sinc, -10, 10)) # fails
# numpys sinc handles the singularity correctly
print(quad(np.sinc, -10, 10))
from scipy.integrate import quadrature
# quadrature may complain, but it will work in the end
print(quadrature(sinc, -10, 10)[0])
from scipy.integrate import trapz
# 50 grid points
x = np.linspace(-10, 10)
y = sinc(x)
print(' 50 points:', trapz(y, x)) # note the order of the arguments: y, x
# 1000 grid points
x = np.linspace(-10, 10, 1000)
y = sinc(x)
print('1000 points:', trapz(y, x))
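# Illustrative addition: Simpson's rule usually beats the trapezoidal rule on
# smooth integrands for the same grid.
from scipy.integrate import simps
print('Simpson, 1000 points:', simps(y, x))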
from scipy.interpolate import interp1d
x = (1, 2)
y = (5, 7)
print('Points:', list(zip(x, y)))
f = interp1d(x, y)
z = [1.25, 1.5, 1.75]
print('Interpolation:', list(zip(z, f(z))))
# f(2.5) # uncomment to run me
z = [0.5, 1, 1.5, 2, 2.5]
f = interp1d(x, y, bounds_error=False, fill_value=0)
print("Option 1:", list(zip(z, f(z))))
f = interp1d(x, y, bounds_error=False, fill_value=y) # fill with endpoint values
print("Option 2:", list(zip(z, f(z))))
f = interp1d(x, y, fill_value='extrapolate') # bounds_error set to False automatically
print("Option 3:", list(zip(z, f(z))))
from scipy.interpolate import CubicSpline
npoints = 5
x = np.arange(npoints)
y = np.random.random(npoints)
plt.plot(x, y, label='linear')
f = interp1d(x, y, kind='cubic')
z = np.linspace(np.min(x), np.max(x), 100)
plt.plot(z, f(z), label='interp1d cubic')
f = CubicSpline(x, y)
z = np.linspace(np.min(x), np.max(x), 100)
plt.plot(z, f(z), label='CubicSpline')
plt.plot(x, y, 'ko')
plt.legend(loc='best');
from scipy.special import jn
x = np.linspace(0, 10, 100)
for n in range(6):
plt.plot(x, jn(n, x), label=r'$\mathtt{J}_{%i}(x)$' % n)
plt.grid()
plt.legend();
import mpl_toolkits.mplot3d.axes3d as plt3d
from matplotlib.colors import LogNorm
def airy_disk(x):
mask = x != 0
result = np.empty_like(x)
result[~mask] = 1.0
result[mask] = (2 * jn(1, x[mask]) / x[mask])**2
return result
# 2D plot
r = np.linspace(-10, 10, 500)
plt.plot(r, airy_disk(r))
# 3D plot
x = np.arange(-10, 10.1, 0.1)
y = np.arange(-10, 10.1, 0.1)
X, Y = np.meshgrid(x, y)
Z = airy_disk(np.sqrt(X**2 + Y**2))
fig = plt.figure()
ax = plt3d.Axes3D(fig)
ax.plot_surface(X, Y, Z, cmap='gray', norm=LogNorm(), lw=0)
None  # suppress the matplotlib object from being echoed in the notebook
from scipy.special import erf, ndtr
def gaussian(z):
return np.exp(-z**2)
x = np.linspace(-3, 3, 100)
plt.plot(x, gaussian(x), label='Gaussian')
plt.plot(x, erf(x), label='Error Function')
plt.plot(x, ndtr(x), label='Gaussian CDF')
plt.ylim(-1.1, 1.1)
plt.legend(loc='best');
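# Illustrative check of how the two are related: the Gaussian CDF satisfies
# ndtr(x) = (1 + erf(x / sqrt(2))) / 2.
print(np.allclose(ndtr(x), 0.5 * (1 + erf(x / np.sqrt(2)))))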
from scipy.special import eval_legendre, eval_laguerre, eval_hermite, eval_chebyt
ortho_poly_dict = {'Legendre': eval_legendre,
'Laguerre': eval_laguerre,
'Hermite': eval_hermite,
'Chebyshev T': eval_chebyt}
def plot_ortho_poly(name):
plt.figure()
f = ortho_poly_dict[name]
x = np.linspace(-1, 1, 100)
for n in range(5):
plt.plot(x, f(n, x), label='n = %i' % n)
    if name == 'Legendre' or 'Chebyshev' in name:
plt.ylim(-1.1, 1.1)
plt.legend(loc='best', fontsize=16)
plt.title(name + ' Polynomials')
plot_ortho_poly('Legendre')
# plot_ortho_poly('Laguerre')
# plot_ortho_poly('Hermite')
# plot_ortho_poly('Chebyshev T')
# %load -r 57-80 solutions/07_01_scipy.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Physical constants
Step2: Fitting
Step3: Uncertainties and initial guesses
Step4: Plotting the correlation matrix
Step5: Minimization
Step6: Poisson pdf
Step7: minimize has lots of options for different minimization algorithms
Step8: Orthogonal distance regression (ODR)
Step9: Cases like these can be handled using orthogonal distance regression (ODR). When the independent variable is error-free (or the errors are negligibly small), the residuals are computed as the vertical distance between the data points and the fit. This is the ordinary least-squares method.
Step10: Finally, we do a comparison to a fit with ordinary least-squares (curve_fit).
Step11: If the x-uncertainties are relatively small, in general curve_fit will produce a better result. However, if the uncertainties on the independent variable are large and/or there is a rapidly changing region of your curve where the x-errors are important, ODR fitting may produce a better result.
Step12: Filtering
Step13: You can see clearly in the data that the mid-point of the S-curve is at about x=3 (which is the real value), but the outliers destroy the fit. We can remove them easily with a median filter. A median filter is particularly suited to edge detection cases, since it tends to preserve edges well.
Step15: Exercise
Step16: You are the unfortunate recipient of the following noisy data, which contains noise at two different (unknown) frequencies
Step17: Somewhere in this mess is a Gaussian
Step18: Use a FFT to identify the two offending noise frequencies. Then convert the lowpass_filter above into a bandstop filter (hint
Step19: Integration
Step20: The first parameter quad returns is the answer; the second is an estimate of the absolute error in the result.
Step21: quad struggles with $\mathrm{sinc}$, but it can be easily handled with Gaussian quadrature
Step22: This result agrees with Mathematica to 13 decimal places (even though only 11 are shown). Note that the problem is the singularity at $x=0$; if we change the boundaries to, say, [-10.1, 10], then it works fine. Also, writing our sinc function more cleverly would eliminate the problem.
Step23: Interpolation
Step24: Right now, if you try to use an x-coordinate outside of the interval $[x_0, x_1]$, a ValueError will be raised
Step25: This is because we haven't told interp1d how we want to handle the boundaries. This is done using the fill_value keyword argument. There are a few options
Step26: Spline interpolation
Step27: Special functions
Step28: The error function and Gaussian CDF
Step29: Orthogonal polynomials
Step30: Exercise
|
4,681
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from ripser import ripser
from persim import plot_diagrams
import time
# Create 100 points on the unit circle
N = 100
t = np.linspace(0, 2*np.pi, N+1)[0:N]
X = np.zeros((N, 2))
X[:, 0] = np.cos(t)
X[:, 1] = np.sin(t)
# Compute the persistence diagram of this point cloud
dgms = ripser(X)['dgms']
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal')
plt.title("Point Cloud")
plt.subplot(122)
plot_diagrams(dgms)
plt.show()
# Step 1: Setup the point cloud
N = 200 #Number of points
X = np.random.randn(N, 3) #Draw from 3D Gaussian
X = X/np.sqrt(np.sum(X**2, 1))[:, None] #Normalize each point to unit length
# Step 2: Compute all pairwise arc lengths between sampled points
dotProds = X.dot(X.T) #Entry ij in this matrix holds the dot product between point i and j, or cos(theta)
#The dot products should be in [-1, 1], but may leave this range due to numerical roundoff
dotProds[dotProds < -1] = -1
dotProds[dotProds > 1] = 1
DSphere = np.arccos(dotProds) #The arc length is the inverse cosine of the dot products of unit vectors
np.fill_diagonal(DSphere, 0) #Be sure distance of points to themselves is zero
tic = time.time()
dgms = ripser(DSphere, distance_matrix=True, maxdim=2)['dgms']
print("Elapsed Time: ", time.time()-tic)
fig = plt.figure(figsize=(8, 4))
ax = plt.subplot(121, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
plt.title("Sphere Samples")
plt.subplot(122)
plot_diagrams(dgms)
plt.title("Persistence Diagrams")
DRP2 = np.arccos(dotProds) #TODO: This line currently computes sphere distances;
#update it so that it computes projective plane distances
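# Hedged hint for the TODO above: on RP^2 antipodal points are identified, so the
# distance is min(theta, pi - theta), which for unit vectors is arccos(|<p, q>|):
# DRP2 = np.arccos(np.abs(dotProds))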
np.fill_diagonal(DRP2, 0) #Be sure distance of points to themselves is zero
dgmsz2 = ripser(DRP2, distance_matrix=True, maxdim=2)['dgms']
dgmsz3 = ripser(DRP2, distance_matrix=True, maxdim=2, coeff=3)['dgms']
plt.figure(figsize=(8, 4))
plt.subplot(121)
plot_diagrams(dgmsz2)
plt.title("$\mathbb{Z}/2$")
plt.subplot(122)
plot_diagrams(dgmsz3)
plt.title("$\mathbb{Z}/3$")
N = 10000 #Number of initial points in (theta, phi) space
n_perm = 300 #Number of points to evenly subsample in 3D
R = 4
r = 2
theta = np.random.rand(N)*2*np.pi
phi = np.random.rand(N)*2*np.pi
X = np.zeros((N, 3))
X[:, 0] = (R + r*np.cos(theta))*np.cos(phi)
X[:, 1] = (R + r*np.cos(theta))*np.sin(phi)
X[:, 2] = r*np.sin(theta)
xr = [np.min(X.flatten()), np.max(X.flatten())]
#Now compute persistence diagrams up to H2 on a furthest points subsample
res = ripser(X, maxdim=2, n_perm=n_perm)
dgms = res['dgms']
X = X[res['idx_perm'], :] # Take out subsampled points
plt.figure(figsize=(8, 4))
ax = plt.subplot(121, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
ax.set_xlim(xr)
ax.set_ylim(xr)
ax.set_zlim(xr)
plt.title("Torus Samples, R = %g, r = %g"%(R, r))
plt.subplot(122)
plot_diagrams(dgms)
plt.show()
def get_flat_torus_dists(x1, y1, x2, y2):
"""
Compute all pairwise distances between all points (x1, y1) and points (x2, y2)
on the flat torus [0, 1] x [0, 1]
Parameters:
x1 : ndarray (M)
An M-length list of x coordinates of each point in the first point set
y1 : ndarray (M)
An M-length list of y coordinates of each point in the first point set
x2 : ndarray (N)
An N-length list of x coordinates of each point in the second point set
y2 : ndarray (N)
An N-length list of y coordinates of each point in the second point set
Returns:
D : ndarray (M, N)
A distance matrix whose ijth entry is the distance along the flat torus between (x1[i], y1[i]) and (x2[j], y2[j])
"""
dx = np.abs(x1[:, None] - x2[None, :])
dy = np.abs(y1[:, None] - y2[None, :])
##TODO: FINISH THIS AND MAKE IDENTIFICATIONS
##
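# Hedged sketch for the TODO: on the flat torus opposite edges are glued, so each
# coordinate difference wraps around the unit square, e.g.
# dx = np.minimum(dx, 1 - dx)
# dy = np.minimum(dy, 1 - dy)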
return np.sqrt(dx**2 + dy**2)
res = 15 #Number of points along each dimension
#Sample slightly differently so that the two persistence dots won't be the same
gridx = np.linspace(0, 1, res+1)[0:res]
gridy = gridx*0.99
x, y = np.meshgrid(gridx, gridy)
x = x.flatten()
y = y.flatten()
# Compute the distance matrix, which is the distance between
# all points and themselves
D = get_flat_torus_dists(x, y, x, y)
np.fill_diagonal(D, 0)
plt.figure(figsize=(6, 6))
dgms = ripser(D, distance_matrix=True, maxdim=2)['dgms']
I1 = dgms[1]
I1 = I1[np.argsort(I1[:, 0]-I1[:, 1])[0:2], :]
print(I1)
plot_diagrams(dgms)
plt.show()
N = 10000 #Number of initial points in (theta, phi) space
n_perm = 300 #Number of points to evenly subsample in 3D
R = 4
r = 2
theta = np.random.rand(N)*2*np.pi
phi = np.random.rand(N)*2*np.pi
X = np.zeros((N, 4))
X[:, 0] = (R + r*np.cos(theta))*np.cos(phi)
X[:, 1] = (R + r*np.cos(theta))*np.sin(phi)
X[:, 2] = r*np.sin(theta)*np.cos(phi/2)
X[:, 3] = r*np.sin(theta)*np.sin(phi/2)
#Now compute persistence diagrams up to H2
dgms2 = ripser(X, maxdim=2, coeff=2, n_perm=n_perm)['dgms']
dgms3 = ripser(X, maxdim=2, coeff=3, n_perm=n_perm)['dgms']
plt.figure(figsize=(8, 4))
plt.subplot(121)
plot_diagrams(dgms2)
plt.title("$\mathbb{Z} / 2 \mathbb{Z}$")
plt.subplot(122)
plot_diagrams(dgms3)
plt.title("$\mathbb{Z} / 3 \mathbb{Z}$")
plt.show()
def get_flat_klein_dists(x1, y1, x2, y2):
"""
Compute all pairwise distances between all points (x1, y1) and points (x2, y2)
on the flat Klein bottle [0, 1] x [0, 1]
Parameters:
x1 : ndarray (M)
An M-length list of x coordinates of each point in the first point set
y1 : ndarray (M)
An M-length list of y coordinates of each point in the first point set
x2 : ndarray (N)
An N-length list of x coordinates of each point in the second point set
y2 : ndarray (N)
An N-length list of y coordinates of each point in the second point set
Returns:
D : ndarray (M, N)
A distance matrix whose ijth entry is the distance along the flat Klein bottle
between (x1[i], y1[i]) and (x2[j], y2[j]), where the flat Klein bottle is
defined to be a quotient map over the torus $[0, 1] \times [0, 1]$
"""
D = get_flat_torus_dists(x1, y1, x2, y2)
# TODO: Finish this; apply quotients on the torus to turn
# it into a Klein bottle
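# Hedged sketch: also account for the identification (x, y) ~ (x + 0.5, 1 - y) by
# taking the pointwise minimum with the torus distance, e.g.
# D2 = get_flat_torus_dists(np.mod(x1 + 0.5, 1), 1 - y1, x2, y2)
# D = np.minimum(D, D2)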
return D
res = 15 #Number of points along each dimension
#Sample slightly differently so that the two persistence dots won't be the same
gridx = np.linspace(0, 1, res+1)[0:res]
gridy = gridx*0.99 #Only need to sample half in the y direction since there's a double cover
x, y = np.meshgrid(gridx, gridy)
x = x.flatten()
y = y.flatten()
# Compute the distance matrix, which is the distance between
# all points and themselves
D = get_flat_klein_dists(x, y, x, y)
np.fill_diagonal(D, 0)
plt.figure(figsize=(8, 4))
dgmsz2 = ripser(D, distance_matrix=True, maxdim=2)['dgms']
dgmsz3 = ripser(D, distance_matrix=True, maxdim=2, coeff=3)['dgms']
plt.subplot(121)
plot_diagrams(dgmsz2)
plt.title("$\mathbb{Z} / 2 \mathbb{Z}$")
plt.subplot(122)
plot_diagrams(dgmsz3)
plt.title("$\mathbb{Z} / 3 \mathbb{Z}$")
plt.show()
NPoints = 400
R = 4
r = 2
theta = np.linspace(0, 2*np.pi, NPoints+1)[0:NPoints]
phi = theta*2
X = np.zeros((NPoints, 3))
X[:, 0] = (R + r*np.cos(theta))*np.cos(phi)
X[:, 1] = (R + r*np.cos(theta))*np.sin(phi)
X[:, 2] = r*np.sin(theta)
xr = [np.min(X.flatten()), np.max(X.flatten())]
#Now compute persistence diagrams with Z/2 coefficients
dgms2 = ripser(X, maxdim=1, coeff=2)['dgms']
dgms3 = ripser(X, maxdim=1, coeff=3)['dgms']
plt.figure(figsize=(9, 3))
ax = plt.subplot(131, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
ax.set_xlim(xr)
ax.set_ylim(xr)
ax.set_zlim(xr)
plt.title("Torus Samples, R = %g, r = %g"%(R, r))
plt.subplot(132)
plot_diagrams(dgms2)
plt.title("$\mathbb{Z} / 2 \mathbb{Z}$")
plt.subplot(133)
plot_diagrams(dgms3)
plt.title("$\mathbb{Z} / 3 \mathbb{Z}$")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1
Step2: Example 2
Step3: Exercises
Step4: Example 3
Step6: Now we will sample points from a "flat torus." The domain is $[0, 1] \times [0, 1]$, with $(x, 0)$ identified with $(x, 1)$ and $(0, y)$ identified with $(1, y)$. The metric is simply the flat planar Euclidean metric
Step7: Now use your function to compute a distance matrix, run ripser, and verify that you get the correct signature for a torus
Step8: Example 4
Step10: Another way to obtain the Klein bottle is by a quotient map of the torus. For example, for the torus $[0, 1] \times [0, 1]$, make the identification $(x, y) \sim (x + 0.5, 1-y)$.
Step11: Example 5
|
4,682
|
<ASSISTANT_TASK:>
Python Code:
%run ../bst/bst.py
%load ../bst/bst.py
def create_level_lists(root):
# TODO: Implement me
pass
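# A possible sketch under a different name, so the exercise stub above stays intact.
# Assumption: the Node class from ../bst/bst.py exposes .left and .right attributes.
def create_level_lists_sketch(root):
    levels = []
    current = [root] if root is not None else []
    while current:
        levels.append(current)
        # gather all non-None children of the current level (breadth-first)
        current = [child for node in current
                   for child in (node.left, node.right) if child is not None]
    return levels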
%run ../utils/results.py
# %load test_tree_level_lists.py
from nose.tools import assert_equal
class TestTreeLevelLists(object):
def test_tree_level_lists(self):
node = Node(5)
insert(node, 3)
insert(node, 8)
insert(node, 2)
insert(node, 4)
insert(node, 1)
insert(node, 7)
insert(node, 6)
insert(node, 9)
insert(node, 10)
insert(node, 11)
levels = create_level_lists(node)
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
assert_equal(str(results_list[0]), '[5]')
assert_equal(str(results_list[1]), '[3, 8]')
assert_equal(str(results_list[2]), '[2, 4, 7, 9]')
assert_equal(str(results_list[3]), '[1, 6, 10]')
assert_equal(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
4,683
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import stats
import statsmodels.stats.proportion as smp
import pandas as pd
import matplotlib.pyplot as plt
def print_stats(data, hist_bins=10, hist_size=(8,4)):
print('--- Statistics ----')
display(data.describe())
print('\n')
print('--- Counting Unique Values ----')
display(data.value_counts())
print('\n')
print('--- Basic Histogram ----')
data.hist(bins=hist_bins, figsize=hist_size)
plt.show()
def calculate_sus(data):
for i in range(len(data.columns)):
if i % 2:
data.iloc[:,i] = 7 - data.iloc[:,i]
else:
data.iloc[:,i] = data.iloc[:,i] - 1
data['Score'] = data.iloc[:,0:10].sum(axis=1)
data['Score 100'] = data['Score'] * (100/60)
return data
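# Quick sanity check of the recoding (hypothetical 7-point responses): answering 7
# on every odd-numbered item and 1 on every even-numbered item should reach the
# maximum raw score of 60, i.e. 100 after rescaling.
sus_demo = pd.DataFrame([[7, 1] * 5], columns=['Q%d' % i for i in range(1, 11)])
print(calculate_sus(sus_demo)[['Score', 'Score 100']])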
def confidence_interval_t(data, confidence_level=0.95):
return stats.t.interval(confidence_level, data.count()-1, data.mean(), data.sem())
data = pd.read_excel('data/userstudies/juxtboard-questionnaire-responses.xlsx', sheet_name=None)
data_basic = data['Basic']
# Translating Genre from Portuguese to English
data_basic['Genre'].replace('Masculino','Male',inplace=True)
data_basic['Genre'].replace('Feminino','Female',inplace=True)
data_basic['Genre'].replace('Prefiro não responder','Prefer not to say',inplace=True)
# Translating Education from Portuguese to English
data_basic['Education'].replace('Educação Básica','Basic Education',inplace=True)
data_basic['Education'].replace('Educação Secundária','Secondary Education',inplace=True)
data_basic['Education'].replace('Educação Pós-Secundária','Post-Secondary Education',inplace=True)
data_basic['Education'].replace('Licenciatura','Bachelor\'s Degree',inplace=True)
data_basic['Education'].replace('Mestrado','Master\'s Degree',inplace=True)
data_basic['Education'].replace('Doutoramento','Doctoral Degree',inplace=True)
data_basic['Education'].replace('Prefiro não responder','Prefer not to say',inplace=True)
# Translating Type of Devices from Portuguese to English
data_basic['Type of Devices'].replace({'Computador de secretária':'Desktop'}, regex=True ,inplace=True)
data_basic['Type of Devices'].replace({'Computador portátil':'Laptop'}, regex=True, inplace=True)
# Translating Test mode from Portuguese to English
data_basic['Test Mode'].replace('PRIMEIRO em modo "Multi-Dispositivo" e DEPOIS em modo "Dispositivo Único".','Multi-Device > Single Device',inplace=True)
data_basic['Test Mode'].replace('PRIMEIRO em modo "Dispositivo Único" e DEPOIS em modo "Multi-Disposito".','Single Device > Multi-Device',inplace=True)
#display(data_basic)
data_basic_age = data_basic['Age']
print_stats(data_basic_age, hist_bins=20)
print('Population Standard Deviation:')
print(data_basic_age.std(ddof=0))
data_basic_genre = data_basic['Genre']
print_stats(data_basic_genre)
data_basic_education = data_basic['Education']
print_stats(data_basic_education)
data_basic_type_of_devices = data_basic['Type of Devices'].map(lambda x: [i.strip() for i in x.split(",")])
print('--- Statistics ----')
display(data_basic_type_of_devices.describe())
#print('--- Basic Histogram ----')
#data_basic_type_of_devices.hist()
data_number_of_devices = data_basic_type_of_devices.apply(len)
print_stats(data_number_of_devices)
#print('Population Standard Deviation:')
#print(data_number_of_devices.std(ddof=0))
data_basic_usage_multiple_devices = data_basic['Usage of Multiple-Devices']
print_stats(data_basic_usage_multiple_devices)
#print('Population Standard Deviation:')
#print(data_basic_usage_multiple_devices.std(ddof=0))
(data_basic_usage_multiple_devices[data_basic_usage_multiple_devices >= 5].count()/data_basic_usage_multiple_devices.count())*100
data_basic_test_mode = data_basic['Test Mode']
print_stats(data_basic_test_mode)
data_sus_single_device = calculate_sus(data['SUS Single Device'].copy().dropna())
#data_sus_single_device
print(data_sus_single_device['Score 100'].mean())
data_sus_single_device.describe()
#print_stats(data_sus_single_device)
confidence_interval_t(data_sus_single_device['Score 100'])
##Uncomment if you need these stats
#for column in data_sus_single_device:
#print_stats(data_sus_single_device[column])
data_sus_multi_device = calculate_sus(data['SUS Multi-Device'].copy().dropna())
#data_sus_multi_device
print(data_sus_multi_device['Score 100'].mean())
data_sus_multi_device.describe()
#print_stats(data_sus_multi_device)
confidence_interval_t(data_sus_multi_device['Score 100'])
##Uncomment if you need these stats
#for column in data_sus_multi_device:
#print_stats(data_sus_multi_device[column])
stats.ttest_rel(data_sus_multi_device['Score 100'], data_sus_single_device['Score 100'])
data_sd_md = data['SD vs. MD'].copy().dropna()
data_sd_md.iloc[:,0:5] = data_sd_md.iloc[:,0:5] - 4
#data_sd_md
print_stats(data_sd_md, hist_size=(10,8))
for question in data_sd_md:
print(question,
'Median:', data_sd_md[question].median(),
'Mean:', data_sd_md[question].mean(),
'Standard Deviation:', data_sd_md[question].std(),
'Confidence Interval:', confidence_interval_t(data_sd_md[question]))
data_sd_md_freq = data['SD vs. MD'].copy().dropna()
data_sd_md_freq.iloc[:,0:5] = data_sd_md_freq.iloc[:,0:5] - 4
data_sd_md_freq_res = (data_sd_md_freq.apply(pd.value_counts).fillna(0)/data_sd_md_freq.count()*100).transpose()
data_sd_md_freq_res
data_sd_md_freq_res.to_csv('out/juxtboard-data_sd_md_freq_res.csv')
data_sd_md_freq.describe()
data_sd_md_freq_res.iloc[::-1].plot.barh(stacked=True).legend(loc='center left',bbox_to_anchor=(1.0, 0.5))
plt.show()
# Multi-Device Preference vs. Neutrality + Single Device Preference
data_sd_md_md_pref = (data_sd_md > 0).sum()
data_sd_md_neutral_or_sd_pre = (data_sd_md <= 0).sum()
data_sd_md_md_pref_relative = data_sd_md_md_pref / (data_sd_md_md_pref + data_sd_md_neutral_or_sd_pre)
# Single Device Preference vs. Neutrality + Multi-Device Preference
data_sd_md_sd_pref = (data_sd_md < 0).sum()
data_sd_md_neutral_or_md_pre = (data_sd_md >= 0).sum()
data_sd_md_sd_pref_relative = data_sd_md_sd_pref / (data_sd_md_sd_pref + data_sd_md_neutral_or_md_pre)
print('Multi-Device Preference vs. Neutrality + Single Device Preference')
for question in data_sd_md:
print(question,
'Relative Frequency:',
data_sd_md_md_pref_relative[question],
'Confidence Interval:',
smp.proportion_confint(data_sd_md_md_pref[question], data_sd_md_md_pref[question]+data_sd_md_neutral_or_sd_pre[question], alpha=0.05, method='wilson'))
print('\n')
print('Single Device Preference vs. Neutrality + Multi-Device Preference')
for question in data_sd_md:
print(question,
'Relative Frequency:',
data_sd_md_sd_pref_relative[question],
'Confidence Interval:',
smp.proportion_confint(data_sd_md_sd_pref[question], data_sd_md_sd_pref[question]+data_sd_md_neutral_or_md_pre[question], alpha=0.05, method='wilson'))
data_general_use = data['General Use'].copy().dropna()
data_general_use['Score'] = data_general_use.iloc[:,0:5].sum(axis=1)
data_general_use['Score 100'] = data_general_use['Score'] * (100/35)
#data_general_use
print_stats(data_general_use, hist_size=(10,8))
for column in data_general_use:
print(column,
'Median', data_general_use[column].median(),
'Mean', data_general_use[column].mean(),
'Standard Deviation', data_general_use[column].std(),
'Confidence Interval', confidence_interval_t(data_general_use[column]))
data_general_use_freq = pd.concat([data_basic['Usage of Multiple-Devices'], data['General Use']], axis=1)
data_general_use_freq = data_general_use_freq.rename(columns={"Usage of Multiple-Devices": "GU0"})
data_general_use_freq_res = (data_general_use_freq.apply(pd.value_counts).fillna(0)/data_general_use_freq.count()*100).transpose()
data_general_use_freq_res
data_general_use_freq.to_csv('out/juxtboard-data_general_use_freq.csv')
data_general_use_freq.describe()
data_general_use_freq_res.iloc[::-1].plot.barh(stacked=True).legend(loc='center left',bbox_to_anchor=(1.0, 0.5))
plt.show()
data_yxc = pd.read_excel('data/userstudies/yanux-calculator-questionnaire-responses.xlsx', sheet_name=None)
data_yxc_basic = data['Basic']
#display(data_yxc_basic)
data_yxc_basic_age = data_yxc_basic['Age']
print_stats(data_yxc_basic_age, hist_bins=20)
print('Population Standard Deviation:')
print(data_yxc_basic_age.std(ddof=0))
data_yxc_basic_genre = data_yxc_basic['Genre']
print_stats(data_yxc_basic_genre)
data_yxc_basic_education = data_yxc_basic['Education']
print_stats(data_yxc_basic_education)
data_yxc_basic_type_of_devices = data_yxc_basic['Type of Devices'].map(lambda x: [i.strip() for i in x.split(",")])
print('--- Statistics ----')
display(data_yxc_basic_type_of_devices.describe())
#print('--- Basic Histogram ----')
#data_yxc_basic_type_of_devices.hist()
data_yxc_number_of_devices = data_yxc_basic_type_of_devices.apply(len)
print_stats(data_yxc_number_of_devices)
data_yxc_basic_multiple_devices = data_yxc_basic['Number of Devices']
print_stats(data_yxc_basic_multiple_devices)
print('Population Standard Deviation:')
print(data_yxc_basic_multiple_devices.std(ddof=0))
data_yxc_sus = calculate_sus(data_yxc['SUS'].copy().dropna())
data_yxc_sus
print(data_yxc_sus['Score 100'].mean())
data_yxc_sus.describe()
#print_stats(data_yxc_sus)
confidence_interval_t(data_yxc_sus['Score 100'])
##Uncomment if you need these stats
#for column in data_yxc_sus:
#print_stats(data_yxc_sus[column])
data_yxc_domain_specific = data_yxc['Domain Specific']
#data_yxc_domain_specific
print_stats(data_yxc_domain_specific)
data_yxc_domain_specific_freq_res = (data_yxc_domain_specific.apply(pd.value_counts).fillna(0)/data_yxc_domain_specific.count()*100).transpose()
data_yxc_domain_specific_freq_res
data_yxc_domain_specific_freq_res.iloc[::-1].plot.barh(stacked=True).legend(loc='center left',bbox_to_anchor=(1.0, 0.5))
plt.show()
for question in data_yxc_domain_specific:
print(question,
'Median:', data_yxc_domain_specific[question].median(),
'Mean:', data_yxc_domain_specific[question].mean(),
'Standard Deviation:', data_yxc_domain_specific[question].std(),
'Confidence Interval:', confidence_interval_t(data_yxc_domain_specific[question]))
data_yxc_basic_usage_multiple_devices = data_yxc_domain_specific['DS1']
(data_yxc_basic_usage_multiple_devices[data_yxc_basic_usage_multiple_devices >= 5].count()/data_yxc_basic_usage_multiple_devices.count())*100
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper Functions
Step2: Load Excel
Step3: Basic Data
Step4: Age
Step5: Genre
Step6: Education
Step7: Type of Devices
Step8: Percentage of Users that User Multiple Devices Sometimes or More Often
Step9: Test Mode
Step10: SUS - Single Device Mode
Step11: Mean Score 100
Step12: General Stats
Step13: Confidence Interval
Step14: Per-question Stats
Step15: SUS - Multi-Device Mode
Step16: Mean Score 100
Step17: General Stats
Step18: Confidence Interval
Step19: Per-question Stats
Step20: SUS Single Device vs. SUS Multi-Device
Step21: Single Device vs. SUS Multi-Device
Step22: Confidence Intervals
Step23: Response Frequency
Step24: General Use
Step25: Confidence Intervals
Step26: Response Frequency
Step27: Yanux Calculator
Step28: Age
Step29: Genre
Step30: Education
Step31: Type of Devices
Step32: SUS
Step33: Mean Score 100
Step34: General Stats
Step35: Confidence Interval
Step36: Per-question Stats
Step37: Domain Specific Questions
Step38: Response Frequency
Step39: Percentage of Users that User Multiple Devices Sometimes or More Often
|
4,684
|
<ASSISTANT_TASK:>
Python Code:
import sys
# This line must be adjusted to your local pyMG path!
sys.path.append("/home/moser/MG_2016/pyMG/")
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pymg
from project.helmholtz1d import Helmholtz1D
from project.helmholtz1d_periodic import Helmholtz1D_Periodic
from project.gauss_seidel import GaussSeidel
from project.weighted_jacobi import WeightedJacobi
from project.pfasst.plot_tools import eigvalue_plot_list, matrix_plot, matrix_row_plot
from project.pfasst.transfer_tools import to_dense
from project.pfasst.matrix_method_tools import matrix_power
def plot_3_eigvalueplots(A_p,A_z,A_m):
eig_p, eig_z, eig_m = [], [], []  # local buffers for the computed eigenvalues
eig_p.append(sp.linalg.eigvals(to_dense(A_p)))
eig_z.append(sp.linalg.eigvals(to_dense(A_z)))
eig_m.append(sp.linalg.eigvals(to_dense(A_m)))
real_part_p = np.real(eig_p[-1])
img_part_p = np.imag(eig_p[-1])
real_part_z = np.real(eig_z[-1])
img_part_z = np.imag(eig_z[-1])
real_part_m = np.real(eig_m[-1])
img_part_m = np.imag(eig_m[-1])
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,3))
ax1.plot(real_part_p,img_part_p,'ro')
ax1.set_xlabel("real part")
ax1.set_ylabel("img part")
ax1.set_title('eigenvalues')
ax2.plot(real_part_z,img_part_z,'bo')
ax2.set_xlabel("real part")
ax2.set_ylabel("img part")
ax2.set_title('eigenvalues')
ax3.plot(real_part_m,img_part_m,'go')
ax3.set_xlabel("real part")
ax3.set_ylabel("img part")
ax3.set_title('eigenvalues')
fig1.tight_layout()
plt.show()
def plot_2_eigvalueplots(A_p,A_z):
eig_p, eig_z = [], []  # local buffers for the computed eigenvalues
eig_p.append(sp.linalg.eigvals(to_dense(A_p)))
eig_z.append(sp.linalg.eigvals(to_dense(A_z)))
real_part_p = np.real(eig_p[-1])
img_part_p = np.imag(eig_p[-1])
real_part_z = np.real(eig_z[-1])
img_part_z = np.imag(eig_z[-1])
fig1, (ax1, ax2) = plt.subplots(ncols=2,figsize=(15,3))
ax1.plot(real_part_p,img_part_p,'ro')
ax1.set_xlabel("real part")
ax1.set_ylabel("img part")
ax1.set_title('eigenvalues')
ax2.plot(real_part_z,img_part_z,'bo')
ax2.set_xlabel("real part")
ax2.set_ylabel("img part")
ax2.set_title('eigenvalues')
fig1.tight_layout()
plt.show()
def system_matrix_hh1d(n,sig):
hh1d = Helmholtz1D(n, sig)
return hh1d.A
def system_matrix_hh1d_periodic(n,sig):
hh1d = Helmholtz1D_Periodic(n, sig)
return hh1d.A
def spec_rad(A):
return np.max(np.abs(sp.linalg.eigvals(to_dense(A))))
matrix_plot(to_dense(system_matrix_hh1d(10,0)))
matrix_plot(to_dense(system_matrix_hh1d_periodic(10,0)))
n=30
for sigma in [1000,0,-1000]:
plot_2_eigvalueplots(system_matrix_hh1d(n,sigma),system_matrix_hh1d_periodic(n,sigma))
def iteration_matrix_wjac(n, sigma, periodic=True):
if periodic:
A = system_matrix_hh1d_periodic(n,sigma)
else:
A = system_matrix_hh1d(n,sigma)
wjac = WeightedJacobi(A, 2.0/3.0)
P_inv = wjac.Pinv
return np.eye(n) - P_inv.dot(A)
matrix_plot(iteration_matrix_wjac(10,-100))
n = 10
sigma_range = np.linspace(-100,100,100)
sr_wjac_periodic = map(lambda sig : spec_rad(iteration_matrix_wjac(n, sig,periodic=True)), sigma_range)
sr_wjac = map(lambda sig : spec_rad(iteration_matrix_wjac(n, sig,periodic=False)), sigma_range)
# keep a handle on the axes
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,4))
ax1.plot(sigma_range, sr_wjac_periodic,'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_wjac,'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_wjac) - np.asarray(sr_wjac_periodic)),'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
def iteration_matrix_gs(n, sigma, periodic=True):
if periodic:
A = system_matrix_hh1d_periodic(n,sigma)
else:
A = system_matrix_hh1d(n,sigma)
gs = GaussSeidel(A)
P_inv = gs.Pinv
return np.eye(n) - P_inv.dot(A)
matrix_plot(iteration_matrix_gs(10,0,True))
sr_gs_periodic = map(lambda sig : spec_rad(iteration_matrix_gs(n, sig,periodic=True)), sigma_range)
sr_gs = map(lambda sig : spec_rad(iteration_matrix_gs(n, sig,periodic=False)), sigma_range)
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,4))
ax1.plot(sigma_range, sr_gs_periodic,'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_gs,'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_gs) - np.asarray(sr_gs_periodic)),'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
def transformation_matrix_fourier_basis(N):
psi = np.zeros((N,N),dtype=np.complex128)
for i in range(N):
for j in range(N):
psi[i,j] = np.exp(2*np.pi*1.0j*j*i/N)
return psi/np.sqrt(N)
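# Sanity check (sketch): the scaled Fourier basis should be unitary,
# i.e. Psi^H Psi = I up to round-off.
PSI_check = transformation_matrix_fourier_basis(8)
print np.allclose(np.dot(PSI_check.conj().T, PSI_check), np.eye(8))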
def plot_fourier_transformed(A):
A = to_dense(A)
n = A.shape[0]
PSI_trafo = transformation_matrix_fourier_basis(n)
PSI_trafo_inv = sp.linalg.inv(PSI_trafo)
A_traf = np.dot(PSI_trafo_inv, np.dot(A,PSI_trafo))
matrix_row_plot([A,np.abs(A_traf)])
plot_fourier_transformed(iteration_matrix_wjac(16,0))
plot_fourier_transformed(iteration_matrix_gs(16,0))
def get_theta_eigvals(A, plot=False,which='all'):
A = to_dense(A)
n = A.shape[0]
PSI_trafo = transformation_matrix_fourier_basis(n)
PSI_trafo_inv = sp.linalg.inv(PSI_trafo)
A_traf = np.dot(PSI_trafo_inv, np.dot(A,PSI_trafo))
if plot:
matrix_plot(np.abs(A_traf))
eigvals = np.asarray(map(lambda k : A_traf[k,k],range(n)))
if which == 'high':
return eigvals[int(np.ceil(n/4.0)):int(np.floor(3.0*n/4))]
elif which == 'low':
return np.hstack([eigvals[:int(np.floor(n/4.0))],eigvals[int(np.ceil(3.0*n/4)):]])
else:
return eigvals
It_gs = iteration_matrix_gs(16,0)
eigvals = sp.linalg.eigvals(It_gs)
diagonals = get_theta_eigvals(It_gs)
sum_eig = np.sum(np.abs(eigvals))
sum_diag = np.sum(np.abs(diagonals))
print sum_eig
print sum_diag
from project.linear_transfer import LinearTransfer
from project.linear_transfer_periodic import LinearTransferPeriodic
def coarse_grid_correction(n,nc, sigma):
A_fine = to_dense(system_matrix_hh1d(n,sigma))
A_coarse = to_dense(system_matrix_hh1d(nc,sigma))
A_coarse_inv = sp.linalg.inv(A_coarse)
lin_trans = LinearTransfer(n, nc)
prolong = to_dense(lin_trans.I_2htoh)
restrict = to_dense(lin_trans.I_hto2h)
return np.eye(n)- np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)
plot_fourier_transformed(coarse_grid_correction(31,15,0))
def coarse_grid_correction_periodic(n,nc, sigma):
A_fine = to_dense(system_matrix_hh1d_periodic(n,sigma))
A_coarse = to_dense(system_matrix_hh1d_periodic(nc,sigma))
A_coarse_inv = sp.linalg.inv(A_coarse)
lin_trans = LinearTransferPeriodic(n, nc)
prolong = to_dense(lin_trans.I_2htoh)
restrict = to_dense(lin_trans.I_hto2h)
return np.eye(n)- np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)
def two_grid_it_matrix(n,nc, sigma, nu1=3,nu2=3,typ='wjac'):
cg = coarse_grid_correction(n,nc,sigma)
if typ == 'wjac':
smoother = iteration_matrix_wjac(n,sigma, periodic=False)
if typ == 'gs':
smoother = iteration_matrix_gs(n,sigma, periodic=False)
pre_sm = matrix_power(smoother, nu1)
post_sm = matrix_power(smoother, nu2)
return pre_sm.dot(cg.dot(post_sm))
plot_fourier_transformed(two_grid_it_matrix(15,7,0,typ='wjac'))
sr_2grid_var_sigma = map(lambda sig : spec_rad(two_grid_it_matrix(15,7,sig)), sigma_range)
plt.semilogy(sigma_range, sr_2grid_var_sigma,'k-')
plt.title('$n_f = 15, n_c = 7$')
plt.xlabel('$\sigma$')
plt.ylabel("spectral radius")
nf_range = map(lambda k: 2**k-1,range(3,10))
nc_range = map(lambda k: 2**k-1,range(2,9))
sr_2grid_m1000 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,-1000)), nf_range, nc_range)
sr_2grid_0 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,0)), nf_range, nc_range)
sr_2grid_p1000 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,1000)), nf_range, nc_range)
plt.semilogy(nf_range, sr_2grid_m1000,'k-',nf_range, sr_2grid_0,'k--',nf_range, sr_2grid_p1000,'k:')
plt.xlabel('$n_f$')
plt.ylabel("spectral radius")
plt.legend(("$\sigma = -1000$","$\sigma = 0$","$\sigma = 1000$"),'upper right',shadow = True)
def two_grid_it_matrix_periodic(n,nc, sigma, nu1=3,nu2=3,typ='wjac'):
cg = coarse_grid_correction_periodic(n,nc,sigma)
if typ == 'wjac':
smoother = iteration_matrix_wjac(n,sigma, periodic=True)
if typ == 'gs':
smoother = iteration_matrix_gs(n,sigma, periodic=True)
pre_sm = matrix_power(smoother, nu1)
post_sm = matrix_power(smoother, nu2)
return pre_sm.dot(cg.dot(post_sm))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,-100,typ='wjac'))
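# NOTE (assumption): hs_norm is used below but never defined in this notebook.
# A minimal sketch, assuming the normalized Hilbert-Schmidt (Frobenius) norm:
def hs_norm(A):
    A = np.asarray(A)
    return np.linalg.norm(A, 'fro') / np.sqrt(A.shape[0])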
n_range = np.arange(10,100)
hs_sysmat_m1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,-1000))-to_dense(system_matrix_hh1d_periodic(n,-1000))),n_range)
hs_sysmat_0 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,0.001))-to_dense(system_matrix_hh1d_periodic(n,0.001))),n_range)
hs_sysmat_p1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,1000))-to_dense(system_matrix_hh1d_periodic(n,1000))),n_range)
plt.plot(hs_sysmat_m1000)
plt.plot(hs_sysmat_0)
plt.plot(hs_sysmat_p1000)
n_range = 2**np.arange(1,11)
hs_wjac_m1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,-1000))-to_dense(iteration_matrix_wjac(n,-1000,False))),n_range)
hs_wjac_0 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,0))-to_dense(iteration_matrix_wjac(n,0,False))),n_range)
hs_wjac_p1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,1000))-to_dense(iteration_matrix_wjac(n,1000,False))),n_range)
plt.plot(n_range, hs_wjac_m1000)
plt.plot(n_range, hs_wjac_0)
plt.plot(n_range, hs_wjac_p1000)
n_range = 2**np.arange(1,11)
hs_gs_m1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,-1000))-to_dense(iteration_matrix_gs(n,-1000,False))),n_range)
hs_gs_0 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,0))-to_dense(iteration_matrix_gs(n,0,False))),n_range)
hs_gs_p1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,1000))-to_dense(iteration_matrix_gs(n,1000,False))),n_range)
plt.plot(n_range, hs_gs_m1000)
plt.plot(n_range, hs_gs_0)
plt.plot(n_range, hs_gs_p1000)
def einmal_einpacken(A):
return np.r_[[np.zeros(A.shape[0]+1)],np.c_[np.zeros(A.shape[0]),A]]
n_f_range = 2**np.arange(3,10)
n_c_range = 2**np.arange(2,9)
hs_cgc_m1000 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,-1000))-coarse_grid_correction_periodic(nf,nc,-1000)),n_f_range ,n_c_range)
hs_cgc_0 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,0))-coarse_grid_correction_periodic(nf,nc,0.001)),n_f_range ,n_c_range)
hs_cgc_p1000 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,1000))-coarse_grid_correction_periodic(nf,nc,1000)),n_f_range ,n_c_range)
plt.semilogy(n_f_range, hs_cgc_m1000)
plt.semilogy(n_f_range, hs_cgc_0)
plt.semilogy(n_f_range, hs_cgc_p1000)
# plt.semilogy(n_f_range, 1/np.sqrt(n_f_range))
n_f_range = 2**np.arange(3,12)
n_c_range = 2**np.arange(2,11)
hs_2grid_m1000 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,-1000))-two_grid_it_matrix_periodic(nf,nc,-1000))
,n_f_range ,n_c_range)
hs_2grid_0 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,0.001))-two_grid_it_matrix_periodic(nf,nc,0.001))
,n_f_range ,n_c_range)
hs_2grid_p1000 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,1000))-two_grid_it_matrix_periodic(nf,nc,1000))
,n_f_range ,n_c_range)
plt.semilogy(n_f_range, hs_2grid_m1000)
plt.semilogy(n_f_range, hs_2grid_0)
plt.semilogy(n_f_range, hs_2grid_p1000)
plt.semilogy(n_f_range, 1/np.sqrt(n_f_range)*30)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: System matrices
Step2: Use matrix_plot to plot the system matrices for $\sigma = 0$ and $n=10$.
Step3: Exercise
Step4: Iteration matrices of the smoother
Step5: Exercise
Step6: Let's approach this more systematically!
Step7: Question
Step8: Question
Step9: The whole thing again, more systematically!
Step10: Question
Step11: Question
Step12: Question
Step13: A simple method would be
Step14: Exercise
Step15: In the following, we will use the pymg framework to build the two-grid iteration matrix for a simple multigrid
Step16: Pretty-pictures exercise
Step17: And now the periodic case.
Step18: Exercise
Step19: Pretty-pictures exercise
Step20: Now in more detail!
Step21: Question
Step22: Pretty-pictures exercise
Step23: Question
Step24: Smoothing
Step25: Gauss-Seidel
Step26: Coarse-grid correction
Step27: Question
Step28: Two-grid
|
4,685
|
<ASSISTANT_TASK:>
Python Code:
import json
import math
import os
from pprint import pprint
import numpy as np
import tensorflow as tf
print(tf.version.VERSION)
N_POINTS = 10
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
# TODO 1
def create_dataset(X, Y, epochs, batch_size):
dataset = # TODO -- Your code here.
dataset = # TODO -- Your code here.
return dataset
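# A possible solution sketch (assumption: the lab intends from_tensor_slices with
# repeat/batch; drop_remainder keeps every batch full for the asserts below):
def create_dataset(X, Y, epochs, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices((X, Y))
    dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)
    return dataset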
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
# TODO 2
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = # TODO -- Your code here.
for step, (X_batch, Y_batch) in # TODO -- Your code here.
dw0, dw1 = # TODO -- Your code here.
# TODO -- Your code here.
if step % 100 == 0:
loss = # TODO -- Your code here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
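# Hedged sketch of the TODO lines above (one possibility, not the official solution):
# dataset = create_dataset(X, Y, epochs=EPOCHS, batch_size=BATCH_SIZE)
# for step, (X_batch, Y_batch) in enumerate(dataset):
#     dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
#     w0.assign_sub(dw0 * LEARNING_RATE)
#     w1.assign_sub(dw1 * LEARNING_RATE)
#     if step % 100 == 0:
#         loss = loss_mse(X_batch, Y_batch, w0, w1)
#         print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))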
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
!ls -l ../toy_data/taxi*.csv
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
# TODO 3
def create_dataset(pattern):
# TODO -- Your code here.
return dataset
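# A possible solution sketch (assumption: built on tf.data.experimental.make_csv_dataset,
# as the later cells of this lab do):
def create_dataset(pattern):
    dataset = tf.data.experimental.make_csv_dataset(
        pattern, batch_size=1, column_names=CSV_COLUMNS, column_defaults=DEFAULTS)
    return dataset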
tempds = create_dataset('../toy_data/taxi-train*')
print(tempds)
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
UNWANTED_COLS = ['pickup_datetime', 'key']
# TODO 4a
def features_and_labels(row_data):
label = # TODO -- Your code here.
features = # TODO -- Your code here.
# TODO -- Your code here.
return features, label
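# A possible solution sketch: pop the label out of the row dict and drop the
# unwanted columns from the remaining features.
def features_and_labels(row_data):
    label = row_data.pop(LABEL_COLUMN)
    features = row_data
    for col in UNWANTED_COLS:
        features.pop(col)
    return features, label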
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
# TODO 4b
def create_dataset(pattern, batch_size):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO -- Your code here.
return dataset
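# A possible solution sketch: map the feature/label split over the batched dataset.
def create_dataset(pattern, batch_size):
    dataset = tf.data.experimental.make_csv_dataset(
        pattern, batch_size, CSV_COLUMNS, DEFAULTS)
    dataset = dataset.map(features_and_labels)
    return dataset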
BATCH_SIZE = 2
tempds = create_dataset('../toy_data/taxi-train*', batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
# TODO 4c
def create_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO -- Your code here.
if mode == 'train':
dataset = # TODO -- Your code here.
# take advantage of multi-threading; 1=AUTOTUNE
dataset = # TODO -- Your code here.
return dataset
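# A possible solution sketch (assumption: shuffle/repeat only while training, then
# prefetch one batch; the cache() call is an extra assumption, and 1 could also be
# tf.data.AUTOTUNE):
def create_dataset(pattern, batch_size=1, mode='eval'):
    dataset = tf.data.experimental.make_csv_dataset(
        pattern, batch_size, CSV_COLUMNS, DEFAULTS)
    dataset = dataset.map(features_and_labels).cache()
    if mode == 'train':
        dataset = dataset.shuffle(buffer_size=1000).repeat()
    dataset = dataset.prefetch(1)
    return dataset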
tempds = create_dataset('../toy_data/taxi-train*', 2, 'train')
print(list(tempds.take(1)))
tempds = create_dataset('../toy_data/taxi-valid*', 2, 'eval')
print(list(tempds.take(1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data from memory
Step2: We begin with implementing a function that takes as input
Step3: Let's test our function by iterating twice over our dataset in batches of 3 datapoints
Step4: Loss function and gradients
Step5: Training loop
Step6: Loading data from disk
Step7: Use tf.data to read the CSV files
Step8: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located
Step9: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Step10: Transforming the features
Step11: Let's iterate over 2 examples from our tempds dataset and apply our feature_and_labels
Step12: Batching
Step13: Let's test that our batches are of the right size
Step14: Shuffling
Step15: Let's check that our function works well in both modes
|
4,686
|
<ASSISTANT_TASK:>
Python Code:
from GongSu21_Statistics_Averages import *
prices_pd.head()
california_pd['HighQ_dev'] = (california_pd['HighQ'] - ca_mean) ** 2
california_pd.head()
ca_HighQ_variance = california_pd.HighQ_dev.sum() / (ca_count - 1)
ca_HighQ_variance
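# Cross-check (sketch): pandas' var() uses ddof=1 (sample variance) by default,
# so it should agree with ca_HighQ_variance computed above.
california_pd.HighQ.var()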
# Standard deviation of the wholesale price of high-quality (HighQ) cannabis traded in California
ca_HighQ_SD = np.sqrt(ca_HighQ_variance)
ca_HighQ_SD
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Main contents
Step2: Population and sample
Step3: Now we can compute a point estimate of the variance of the full population of high-quality (HighQ) cannabis prices traded in the state of California.
Step4: Note
|
4,687
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') #don't display warnings
# %load ../neon_aop_hyperspectral.py
"""
Created on Wed Jun 20 10:34:49 2018

@author: bhass
"""
import matplotlib.pyplot as plt
import numpy as np
import h5py, os, copy
def aop_h5refl2array(refl_filename):
aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns
1. reflectance array (with the no data value and reflectance scale factor applied)
2. dictionary of metadata including spatial information, and wavelengths of the bands
--------
Parameters
refl_filename -- full or relative path and name of reflectance hdf5 file
--------
Returns
--------
reflArray:
array of reflectance values
metadata:
dictionary containing the following metadata:
bad_band_window1 (tuple)
bad_band_window2 (tuple)
bands: # of bands (float)
data ignore value: value corresponding to no data (float)
epsg: coordinate system code (float)
map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)
reflectance scale factor: factor by which reflectance is scaled (float)
wavelength: wavelength values (float)
wavelength unit: 'm' (string)
--------
NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should be used for
data acquired 2016 and after. Data in earlier NEON hdf5 format (collected prior to 2016) is
expected to be re-processed after the 2018 flight season.
--------
Example Execution:
--------
sercRefl, sercRefl_metadata = h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
"""
import h5py
#Read in reflectance hdf5 file
hdf5_file = h5py.File(refl_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
#Extract the reflectance & wavelength datasets
refl = hdf5_file[sitename]['Reflectance']
reflData = refl['Reflectance_Data']
reflRaw = refl['Reflectance_Data'].value
#Create dictionary containing relevant metadata information
metadata = {}
metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value
metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value
#Extract no data value & scale factor
metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value'])
metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor'])
#metadata['interleave'] = reflData.attrs['Interleave']
#Apply no data value
reflClean = reflRaw.astype(float)
arr_size = reflClean.shape
if metadata['data ignore value'] in reflRaw:
print('% No Data: ',np.round(np.count_nonzero(reflClean==metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]),1))
nodata_ind = np.where(reflClean==metadata['data ignore value'])
reflClean[nodata_ind]=np.nan
#Apply scale factor
reflArray = reflClean/metadata['reflectance scale factor']
#Extract spatial extent from attributes
metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters']
#Extract bad band windows
metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers'])
#Extract projection information
#metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['epsg'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
#Extract map information: spatial extent & resolution (pixel size)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
hdf5_file.close()
return reflArray, metadata
def plot_aop_refl(band_array,refl_extent,colorlimit=(0,1),ax=plt.gca(),title='',cbar ='on',cmap_title='',colormap='Greys'):
'''plot_refl_data reads in and plots a single band or 3 stacked bands of a reflectance array
--------
Parameters
--------
band_array: array of reflectance values, created from aop_h5refl2array
refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax)
use metadata['spatial extent'] from aop_h5refl2array function
colorlimit: optional, range of values to plot (min,max).
- helpful to look at the histogram of reflectance values before plotting to determine colorlimit.
ax: optional, default = current axis
title: optional; plot title (string)
cmap_title: optional; colorbar title
colormap: optional (string, see https://matplotlib.org/examples/color/colormaps_reference.html) for list of colormaps
--------
Returns
--------
plots flightline array of single band of reflectance data
--------
Examples:
--------
plot_aop_refl(sercb56,
sercMetadata['spatial extent'],
colorlimit=(0,0.3),
title='SERC Band 56 Reflectance',
cmap_title='Reflectance',
colormap='Greys_r') '''
import matplotlib.pyplot as plt
plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
if cbar == 'on':
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation for ticklabels
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
def stack_rgb(reflArray,bands):
red = reflArray[:,:,bands[0]-1]
green = reflArray[:,:,bands[1]-1]
blue = reflArray[:,:,bands[2]-1]
stackedRGB = np.stack((red,green,blue),axis=2)
return stackedRGB
def plot_aop_rgb(rgbArray,ext,ls_pct=5,plot_title=''):
from skimage import exposure
pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct,100-ls_pct))
img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=ext)
plt.title(plot_title + '\n Linear ' + str(ls_pct) + '% Contrast Stretch');
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
sercRefl, sercRefl_md = aop_h5refl2array('../data/Day1_Hyperspectral_Intro/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
print('band 58 center wavelength (nm): ',sercRefl_md['wavelength'][57])
print('band 90 center wavelength (nm) : ', sercRefl_md['wavelength'][89])
vis = sercRefl[:,:,57]
nir = sercRefl[:,:,89]
ndvi = np.divide((nir-vis),(nir+vis))
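# NDVI ranges from -1 to +1: values near +1 indicate dense, healthy vegetation,
# while values near zero or below indicate water, bare soil, or built surfaces.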
plot_aop_refl(ndvi,sercRefl_md['spatial extent'],
colorlimit = (np.min(ndvi),np.max(ndvi)),
title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='seismic')
import copy
ndvi_gtpt6 = copy.copy(ndvi)
#set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6
ndvi_gtpt6[ndvi<0.6] = np.nan
print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2))
plot_aop_refl(ndvi_gtpt6,
sercRefl_md['spatial extent'],
colorlimit=(0.6,1),
title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='RdYlGn')
import numpy.ma as ma
def calculate_mean_masked_spectra(reflArray,ndvi,ndvi_threshold,ineq='>'):
mean_masked_refl = np.zeros(reflArray.shape[2])
for i in np.arange(reflArray.shape[2]):
refl_band = reflArray[:,:,i]
if ineq == '>':
ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
elif ineq == '<':
ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
else:
print('ERROR: Invalid inequality. Enter < or >')
masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask)
mean_masked_refl[i] = ma.mean(masked_refl)
return mean_masked_refl
sercSpectra_ndvi_gtpt6 = calculate_mean_masked_spectra(sercRefl,ndvi,0.6)
sercSpectra_ndvi_ltpt3 = calculate_mean_masked_spectra(sercRefl,ndvi,0.3,ineq='<')
import pandas
#Remove water vapor bad band windows & last 10 bands
w = copy.copy(sercRefl_md['wavelength'])
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan
w[-10:]=np.nan;
nan_ind = np.argwhere(np.isnan(w))
sercSpectra_ndvi_gtpt6[nan_ind] = np.nan
sercSpectra_ndvi_ltpt3[nan_ind] = np.nan
#Create dataframe with masked NDVI mean spectra
sercSpectra_ndvi_df = pandas.DataFrame()
sercSpectra_ndvi_df['wavelength'] = w
sercSpectra_ndvi_df['mean_refl_ndvi_gtpt6'] = sercSpectra_ndvi_gtpt6
sercSpectra_ndvi_df['mean_refl_ndvi_ltpt3'] = sercSpectra_ndvi_ltpt3
ax = plt.gca();
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green',
edgecolor='none',kind='scatter',label='NDVI > 0.6',legend=True);
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red',
edgecolor='none',kind='scatter',label='NDVI < 0.3',legend=True);
ax.set_title('Mean Spectra of Reflectance Masked by NDVI')
ax.set_xlim([np.nanmin(w),np.nanmax(w)]); ax.set_ylim(0,0.45)
ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance")
ax.grid('on');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: syncID
Step3: Read in SERC Reflectance Tile
Step4: Extract NIR and VIS bands
Step5: Calculate & Plot NDVI
Step6: We can use the function plot_aop_refl to plot this, and choose the seismic color pallette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1.
Step7: Extract Spectra Using Masks
Step8: Calculate the mean spectra, thresholded by NDVI
Step9: We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3).
Step10: Finally, we can use pandas to plot the mean spectra. First set up the pandas dataframe.
Step11: Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3
|
4,688
|
<ASSISTANT_TASK:>
Python Code:
# Load library
from nltk.stem.porter import PorterStemmer
# Create word tokens
tokenized_words = ['i', 'am', 'humbled', 'by', 'this', 'traditional', 'meeting']
# Create stemmer
porter = PorterStemmer()
# Apply stemmer
[porter.stem(word) for word in tokenized_words]
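# Expected output (approximately; Porter stemming strips suffixes rather than
# producing dictionary words):
# ['i', 'am', 'humbl', 'by', 'thi', 'tradit', 'meet']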
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Text Data
Step2: Stem Words
|
4,689
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats.stats import pearsonr
np.random.seed(101)
normal = np.random.normal(loc=0.0, scale= 1.0, size=1000)
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(normal), np.median(normal), np.var(normal))
outlying = normal.copy()
outlying[0] = 50.0
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(outlying), np.median(outlying), np.var(outlying))
print "Pearson's correlation coefficient: %0.3f p-value: %0.3f" % pearsonr(normal,outlying)
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X,y = diabetes.data, diabetes.target
import pandas as pd
pd.options.display.float_format = '{:.2f}'.format
df = pd.DataFrame(X)
print df.describe()
%matplotlib inline
import matplotlib.pyplot as plt
import pylab as pl
box_plots = df.boxplot(return_type='dict')
from sklearn.preprocessing import StandardScaler
Xs = StandardScaler().fit_transform(X)
o_idx = np.where(np.abs(Xs)>3)
# .any(1) method will avoid duplicating
print df[(np.abs(Xs)>3).any(1)]
from scipy.stats.mstats import winsorize
Xs_w = winsorize(Xs, limits=(0.05, 0.05))  # trim 5% from each tail
Xs_c = Xs.copy()
Xs_c[o_idx] = np.sign(Xs_c[o_idx]) * 3
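# Sanity check: after capping, no standardized value should exceed 3 in magnitude
print 'max |z| after capping: %0.3f' % np.abs(Xs_c).max()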
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from pandas.tools.plotting import scatter_matrix
dim_reduction = PCA()
Xc = dim_reduction.fit_transform(scale(X))
print 'variance explained by the first 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:2]*100))
print 'variance explained by the last 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[-2:]*100))
df = pd.DataFrame(Xc, columns=['comp_'+str(j) for j in range(10)])
first_two = df.plot(kind='scatter', x='comp_0', y='comp_1', c='DarkGray', s=50)
last_two = df.plot(kind='scatter', x='comp_8', y='comp_9', c='DarkGray', s=50)
print 'variance explained by the first 3 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:3]*100))
scatter_first = scatter_matrix(pd.DataFrame(Xc[:,:3], columns=['comp1','comp2','comp3']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
scatter_last = scatter_matrix(pd.DataFrame(Xc[:,-2:], columns=['comp9','comp10']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
outlying = (Xc[:,-1] < -0.3) | (Xc[:,-2] < -1.0)
print df[outlying]
from sklearn.cluster import DBSCAN
DB = DBSCAN(eps=2.5, min_samples=25)
DB.fit(Xc)
from collections import Counter
print Counter(DB.labels_)
print df[DB.labels_==-1]
from sklearn import svm
outliers_fraction = 0.01
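# heuristic mapping from the expected outlier share to OneClassSVM's nu parameter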
nu_estimate = 0.95 * outliers_fraction + 0.05
auto_detection = svm.OneClassSVM(kernel="rbf", gamma=0.01, degree=3, nu=nu_estimate)
auto_detection.fit(Xc)
evaluation = auto_detection.predict(Xc)
print df[evaluation==-1]
inliers = Xc[evaluation==+1,:]
outliers = Xc[evaluation==-1,:]
from matplotlib import pyplot as plt
import pylab as pl
inlying = plt.plot(inliers[:,0],inliers[:,1], 'o', markersize=2, color='g', alpha=1.0, label='inliers')
outlying = plt.plot(outliers[:,0],outliers[:,1], 'o', markersize=5, color='k', alpha=1.0, label='outliers')
plt.scatter(outliers[:,0],
outliers[:,1],
s=100, edgecolors="k", facecolors="none")
plt.xlabel('Component 1 ('+str(round(dim_reduction.explained_variance_ratio_[0],3))+')')
plt.ylabel('Component 2'+'('+str(round(dim_reduction.explained_variance_ratio_[1],3))+')')
plt.xlim([-7,7])
plt.ylim([-6,6])
plt.legend((inlying[0],outlying[0]),('inliers','outliers'),numpoints=1,loc='best')
plt.title("")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Finding more things that can go wrong with your data
Step2: Samples total 442<BR>
Step3: Leveraging on the Gaussian distribution
Step4: Making assumptions and checking out
Step5: Developing A Multivariate Approach
Step6: Using cluster analysis
Step7: Automating outliers detection with SVM
|
4,690
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
import numpy as np
import seaborn as sns
raw_input = pd.read_pickle('input.pkl')
gp_mapper = {
1: 'A1', 2: 'A1', 3: 'A1',
4: 'A2', 5: 'A2', 6: 'A2',
7: 'B1', 8: 'B1', 9: 'B1',
10: 'B2', 11: 'B2', 12: 'B2',
13: 'C1', 14: 'C1', 15: 'C1',
16: 'C2'
}
raw_input = raw_input.assign(group=raw_input.level.map(gp_mapper))
raw_input.info()
raw_input.head()
from sklearn.model_selection import train_test_split
# Split the index of `raw_input` DataFrame into train and test and the use the to split the DataFrame.
train_idx, test_idx = train_test_split(
raw_input.index,
test_size=0.2,
stratify=raw_input.level,
shuffle=True,
random_state=0)
train_df, test_df = raw_input.loc[train_idx], raw_input.loc[test_idx]
train_df.to_pickle('train_full.pkl')
test_df.to_pickle('test.pkl')
# Small sample Dataset from train set using 1000 elements per level
train_df_small = train_df.groupby('level').apply(lambda g: g.sample(n=1000, replace=False, random_state=1234))
train_df_small.index = train_df_small.index.droplevel(0)
train_df_small.to_pickle('train_small.pkl')
raw_input = pd.read_pickle('train_small.pkl')
level_counts = raw_input.level.value_counts().sort_index()
group_counts = raw_input.group.value_counts().sort_index()
_, ax = plt.subplots(1, 2, figsize=(10, 5))
_ = level_counts.plot(kind='bar', title='Counts per Level', ax=ax[0], rot=0)
_ = group_counts.plot(kind='bar', title='Counts per Group', ax=ax[1], rot=0)
plt.tight_layout()
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
en_stopwords = set(stopwords.words('english'))
print(en_stopwords)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, StratifiedKFold
def classify_v1(input_df, target_label='level'):
"""
Build a classifier for the `target_label` column in the DataFrame `input_df` using the `text` column.
Return the (labels, predicted_labels) tuple.
Use a 10-fold Stratified K-fold cross-validator to generate the out-of-sample predictions.
"""
assert target_label in input_df.columns
counter = TfidfVectorizer(
ngram_range=(1, 2),
stop_words=en_stopwords,
max_df=0.4,
min_df=25,
max_features=3000,
sublinear_tf=True
)
scaler = StandardScaler(with_mean=False)
model = LogisticRegression(penalty='l2', max_iter=200, random_state=4321)
pipeline = make_pipeline(counter, scaler, model)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)
X = input_df.text
y = input_df.loc[:, target_label]
y_pred = cross_val_predict(pipeline, X=X.values, y=y.values, cv=cv, n_jobs=16, verbose=2)
y_pred = pd.Series(index=input_df.index.copy(), data=y_pred)
return y.copy(), y_pred
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
def display_results(y, y_pred):
"""
Given some predictions y_pred for a target label y,
display the precision/recall/f1 score and the confusion matrix.
"""
report = classification_report(y, y_pred)
print(report)
level_values = y.unique()
level_values.sort()
cm = confusion_matrix(y_true=y, y_pred=y_pred.values, labels=level_values)
cm = pd.DataFrame(index=level_values, columns=level_values, data=cm)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
ax = sns.heatmap(cm, annot=True, ax=ax, fmt='d')
%%time
levels, levels_predicted = classify_v1(raw_input, target_label='level')
display_results(levels, levels_predicted)
# assign the predicated level as a column to the input data
input_with_preds = raw_input.assign(level_predicted=levels_predicted)
input_with_preds.head()
misclassifications = input_with_preds[input_with_preds.level != input_with_preds.level_predicted]
m_counts = misclassifications.groupby(by=['level', 'level_predicted'])['text'].count()
m_counts.sort_values(ascending=False).head(8)
cond = (misclassifications.level.isin([7, 8])) & (misclassifications.level_predicted.isin([7, 8]))
mis_sample = misclassifications.loc[cond, ['topic_text', 'topic_id', 'text', 'level', 'level_predicted']]
mis_sample.groupby(['topic_id', 'topic_text', 'level', 'level_predicted'])['text'].count().sort_values(ascending=False)
from sklearn.feature_extraction.text import CountVectorizer
def calc_bow_matrix_for_topic_id(df, topic_id, limit=5):
"""Return a dense DataFrame of word counts with words as index, article IDs as columns."""
all_texts = df[df.topic_id == topic_id].text.head(limit)
cv = CountVectorizer(stop_words=en_stopwords)
t = cv.fit_transform(all_texts.values)
words = cv.get_feature_names()
bow_matrix = pd.DataFrame(index=all_texts.index.copy(), columns=words, data=t.todense()).T
return bow_matrix
tid_50, tid_59 = map(lambda x: calc_bow_matrix_for_topic_id(mis_sample, x), [50, 59])
tid_50.head(20)
tid_59.head(20)
uncommon_words = tid_50.index.symmetric_difference(tid_59.index).tolist()
print(uncommon_words)
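# Optional check (not in the original analysis, added for illustration):
# quantify how sparse each word-count matrix is, supporting the claim that
# many words appear in only one of the two sets of articles.
print('zero fraction, topic 50: {:.2%}'.format((tid_50 == 0).values.mean()))
print('zero fraction, topic 59: {:.2%}'.format((tid_59 == 0).values.mean()))
print('{} words appear in only one of the two topics'.format(len(uncommon_words)))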
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, StratifiedKFold
def classify_v2(input_df, target_label='level'):
"""
Build a classifier for the `target_label` column in the DataFrame `input_df` using the `text` column.
Return the (labels, predicted_labels) tuple.
Use a 10-fold Stratified K-fold cross-validator to generate the out-of-sample predictions.
"""
assert target_label in input_df.columns
counter = CountVectorizer(
lowercase=True,
stop_words=en_stopwords,
ngram_range=(1, 1),
min_df=5,
max_df=0.4,
binary=True)
model = LogisticRegression(
penalty='l2',
max_iter=200,
multi_class='multinomial',
solver='lbfgs',
verbose=True,
random_state=4321)
pipeline = make_pipeline(counter, model)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)
X = input_df.text
y = input_df.loc[:, target_label]
y_pred = cross_val_predict(pipeline, X=X.values, y=y.values, cv=cv, n_jobs=10, verbose=2)
y_pred = pd.Series(index=input_df.index.copy(), data=y_pred)
return y.copy(), y_pred
%%time
levels, levels_predicted = classify_v2(raw_input, target_label='level')
display_results(levels, levels_predicted)
%%time
groups, groups_predicted = classify_v2(raw_input, target_label='group')
display_results(groups, groups_predicted)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Map Level to Group
Step2: Train-test Split
Step3: For the rest of this notebook, we use the small sample dataset as input.
Step4: Check for Class Imbalance
Step7: Level Classification Based on Text
Step8: Misclassifications
Step9: We can identify the misclassified examples by the condition
Step10: As an example, we investigate the misclassifications between levels 7 and 8.
Step12: So, most of the misclassifications for true level 7 occur for the topic "Planning for the future", whereas for level 8, it is
Step14: So the word count matrix is extremely sparse and a fair amount of words only appear in one set of articles and not the other. Based on that, I concluded that the presence / absence of rare words could be a better indicator of level instead of tf-idf.
Step15: Using binary features the composite f1-score has improved to 0.87 from 0.85.
|
4,691
|
<ASSISTANT_TASK:>
Python Code:
import csv
!cat cars.csv || type cars.csv
with open('cars.csv') as handle:
reader = csv.DictReader(handle, delimiter=',')
kpl = [] # kilometer per litre
displacement = [] # engine displacement
for row in reader:
x = float(row['displacement']) * 0.0163871
y = float(row['mpg']) * 1.60934 / 3.78541
print(f'{row["name"]:35s}: displacement = {x:5.2f} litres, kpl = {y:5.2f} km per litres')
displacement.append(x)
kpl.append(y)
kpl[:5]
displacement[:5]
m = len(displacement)
m
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
X = np.array(displacement)
Y = np.array([100 / y for y in kpl])
plt.figure(figsize=(12, 10))
sns.set(style='whitegrid')
plt.scatter(X, Y, c='b', s=4) # 'b' is short for blue
plt.xlabel('engine displacement in litres')
plt.ylabel('litre per 100 km')
plt.title('Fuel Consumption vs. Engine Displacement')
xMean = np.mean(X)
xMean
yMean = np.mean(Y)
yMean
ϑ1 = np.sum((X - xMean) * (Y - yMean)) / np.sum((X - xMean) ** 2)
ϑ1
ϑ0 = yMean - ϑ1 * xMean
ϑ0
xMax = max(X) + 0.2
plt.figure(figsize=(12, 10))
sns.set(style='whitegrid')
plt.scatter(X, Y, c='b')
plt.plot([0, xMax], [ϑ0, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('engine displacement in litres')
plt.ylabel('fuel consumption in litres per 100 km')
plt.title('Fuel Consumption versus Engine Displacement')
TSS = np.sum((Y - yMean) ** 2)
TSS
RSS = np.sum((ϑ1 * X + ϑ0 - Y) ** 2)
RSS
R2 = 1 - RSS/TSS
R2
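# Cross-check (illustrative, not part of the original notebook): np.polyfit
# solves the same least-squares problem, so it should reproduce ϑ1 and ϑ0.
theta1_np, theta0_np = np.polyfit(X, Y, deg=1)
print(theta0_np, theta1_np)  # expected to match ϑ0 and ϑ1 up to rounding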
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data we want to read is contained in the <tt>csv</tt> file cars.csv, which is located in the subdirectory Python. In this file, the first column has the miles per gallon, while the engine displacement is given in the third column. On macOS and Linux systems we can peek at this file via the next cell.
Step2: In order to read the file we use the method DictReader from the module csv.
Step3: Now kpl is a list of floating point numbers specifying the
Step4: The number of data pairs of the form $\langle x, y \rangle$ that we have read is stored in the variable m.
Step5: In order to be able to plot the fuel efficiency versus the engine displacement we turn the
Step6: Since
Step7: We compute the average engine displacement according to the formula
Step8: We compute the average fuel consumption according to the formula
Step9: The coefficient $\vartheta_1$ is computed according to the formula
Step10: The coefficient $\vartheta_0$ is computed according to the formula
Step11: Let us plot the line $y(x) = ϑ0 + ϑ1 \cdot x$ together with our data
Step12: We see there is quite a bit of variation and apparently the engine displacement explains only a part of the fuel consumption. In order to compute the coefficient of determination, i.e. the statistics $R^2$, we first compute the total sum of squares TSS according to the following formula
Step13: Next, we compute the residual sum of squares RSS as follows
Step14: Now $R^2$ is calculated via the formula
|
4,692
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage, misc
conv1 = nn.Conv2d(in_channels=1, out_channels=3,kernel_size=3)
Gx=torch.tensor([[1.0,0,-1.0],[2.0,0,-2.0],[1.0,0.0,-1.0]])
Gy=torch.tensor([[1.0,2.0,1.0],[0.0,0.0,0.0],[-1.0,-2.0,-1.0]])
conv1.state_dict()['weight'][0][0]=Gx
conv1.state_dict()['weight'][1][0]=Gy
conv1.state_dict()['weight'][2][0]=torch.ones(3,3)
conv1.state_dict()['bias'][:]=torch.tensor([0.0,0.0,0.0])
conv1.state_dict()['bias']
for x in conv1.state_dict()['weight']:
print(x)
image=torch.zeros(1,1,5,5)
image[0,0,:,2]=1
image
plt.imshow(image[0,0,:,:].numpy(), interpolation='nearest', cmap=plt.cm.gray)
plt.colorbar()
plt.show()
out=conv1(image)
out.shape
for channel,image in enumerate(out[0]):
plt.imshow(image.detach().numpy(), interpolation='nearest', cmap=plt.cm.gray)
print(image)
plt.title("channel {}".format(channel))
plt.colorbar()
plt.show()
image1=torch.zeros(1,1,5,5)
image1[0,0,2,:]=1
print(image1)
plt.imshow(image1[0,0,:,:].detach().numpy(), interpolation='nearest', cmap=plt.cm.gray)
plt.show()
out1=conv1(image1)
for channel,image in enumerate(out1[0]):
plt.imshow(image.detach().numpy(), interpolation='nearest', cmap=plt.cm.gray)
print(image)
plt.title("channel {}".format(channel))
plt.colorbar()
plt.show()
image2=torch.zeros(1,2,5,5)
image2[0,0,2,:]=-2
image2[0,1,2,:]=1
image2
for channel,image in enumerate(image2[0]):
plt.imshow(image.detach().numpy(), interpolation='nearest', cmap=plt.cm.gray)
print(image)
plt.title("channel {}".format(channel))
plt.colorbar()
plt.show()
conv3 = nn.Conv2d(in_channels=2, out_channels=1,kernel_size=3)
Gx1=torch.tensor([[0.0,0.0,0.0],[0,1.0,0],[0.0,0.0,0.0]])
conv3.state_dict()['weight'][0][0]=1*Gx1
conv3.state_dict()['weight'][0][1]=-2*Gx1
conv3.state_dict()['bias'][:]=torch.tensor([0.0])
conv3.state_dict()['weight']
conv3(image2)
conv4 = nn.Conv2d(in_channels=2, out_channels=3,kernel_size=3)
conv4.state_dict()['weight'][0][0]=torch.tensor([[0.0,0.0,0.0],[0,0.5,0],[0.0,0.0,0.0]])
conv4.state_dict()['weight'][0][1]=torch.tensor([[0.0,0.0,0.0],[0,0.5,0],[0.0,0.0,0.0]])
conv4.state_dict()['weight'][1][0]=torch.tensor([[0.0,0.0,0.0],[0,1,0],[0.0,0.0,0.0]])
conv4.state_dict()['weight'][1][1]=torch.tensor([[0.0,0.0,0.0],[0,-1,0],[0.0,0.0,0.0]])
conv4.state_dict()['weight'][2][0]=torch.tensor([[1.0,0,-1.0],[2.0,0,-2.0],[1.0,0.0,-1.0]])
conv4.state_dict()['weight'][2][1]=torch.tensor([[1.0,2.0,1.0],[0.0,0.0,0.0],[-1.0,-2.0,-1.0]])
conv4.state_dict()['bias'][:]=torch.tensor([0.0,0.0,0.0])
image4=torch.zeros(1,2,5,5)
image4[0][0]=torch.ones(5,5)
image4[0][1][2][2]=1
for channel,image in enumerate(image4[0]):
plt.imshow(image.detach().numpy(), interpolation='nearest', cmap=plt.cm.gray)
print(image)
plt.title("channel {}".format(channel))
plt.colorbar()
plt.show()
z=conv4(image4)
z
imageA=torch.zeros(1,1,5,5)
imageB=torch.zeros(1,1,5,5)
imageA[0,0,2,:]=-2
imageB[0,0,2,:]=1
conv5 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
conv6 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
Gx1=torch.tensor([[0.0,0.0,0.0],[0,1.0,0],[0.0,0.0,0.0]])
conv5.state_dict()['weight'][0][0]=1*Gx1
conv6.state_dict()['weight'][0][0]=-2*Gx1
conv5.state_dict()['bias'][:]=torch.tensor([0.0])
conv6.state_dict()['bias'][:]=torch.tensor([0.0])
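# Illustrative check (assumption: this mirrors the demonstration the text
# builds towards): a two-channel convolution equals the sum of the per-channel
# convolutions, so conv5(imageA) + conv6(imageB) should match conv3(image2).
print(conv5(imageA) + conv6(imageB))
print(conv3(image2))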
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="ref0"></a>
Step2: Pytorch randomly assigns values to each kernel. However, use kernels that have been developed to detect edges
Step3: Each kernel has its own bias, so set them all to zero
Step4: Print out each kernel
Step5: Create an input <code>image</code> to represent the input X
Step6: Plot it as an image
Step7: Perform convolution using each channel
Step8: The result is a 1x3x3x3 tensor. This represents one sample with three channels, and each channel contains a 3x3 image. The same rules that govern the shape of each image were discussed in the last section.
Step9: Print out each channel as a tensor or an image
Step10: Different kernels can be used to detect various features in an image. You can see that the first channel fluctuates, and the second two channels produce a constant value. The following figure summarizes the process
Step11: In this case, the second channel fluctuates, and the first and the third channels produce a constant value.
Step12: The following figure summarizes the process
Step13: Plot out each image
Step14: Create a <code>Conv2d</code> object with two inputs
Step15: Assign kernel values to make the math a little easier
Step16: Perform the convolution
Step17: The following images summarize the process. The object performs Convolution.
Step18: For each output, there is a bias, so set them all to zero
Step19: Create a two-channel image and plot the results
Step20: Perform the convolution
Step21: The output of the first channel is given by
|
4,693
|
<ASSISTANT_TASK:>
Python Code:
import re
from bs4 import BeautifulSoup
def review_to_wordlist(review):
'''
Meant for converting each of the IMDB reviews into a list of words.
'''
# First remove the HTML.
review_text = BeautifulSoup(review, "html.parser").get_text()
# Use regular expressions to only include words.
review_text = re.sub("[^a-zA-Z]"," ", review_text)
# Convert words to lower case and split them into separate words.
words = review_text.lower().split()
# Return a list of words
return(words)
import pandas as pd
train = pd.read_csv('labeledTrainData.tsv', header=0,
delimiter="\t", quoting=3)
test = pd.read_csv('testData.tsv', header=0, delimiter="\t",
quoting=3 )
# Import both the training and test data.
y_train = train['sentiment']
traindata = []
for i in range(len(train['review'])):
traindata.append(" ".join(review_to_wordlist(train['review'][i])))
testdata = []
for i in range(len(test['review'])):
testdata.append(" ".join(review_to_wordlist(test['review'][i])))
from sklearn.feature_extraction.text import TfidfVectorizer as TFIV
tfv = TFIV(min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1,
stop_words = 'english')
X_all = traindata + testdata # Combine both to fit the TFIDF vectorization.
lentrain = len(traindata)
tfv.fit(X_all) # This is the slow part!
X_all = tfv.transform(X_all)
X = X_all[:lentrain] # Separate back into training and test sets.
X_test = X_all[lentrain:]
X.shape
from sklearn.linear_model import LogisticRegression as LR
from sklearn.model_selection import GridSearchCV
grid_values = {'C':[30]} # Decide which settings you want for the grid search.
model_LR = GridSearchCV(LR(penalty = 'l2', dual = True, solver = 'liblinear', random_state = 0),
grid_values, scoring = 'roc_auc', cv = 20)
# Try to set the scoring on what the contest is asking for.
# The contest says scoring is for area under the ROC curve, so use this.
model_LR.fit(X,y_train) # Fit the model.
model_LR.cv_results_
model_LR.best_estimator_
from sklearn.naive_bayes import MultinomialNB as MNB
model_NB = MNB()
model_NB.fit(X, y_train)
from sklearn.model_selection import cross_val_score
import numpy as np
print "20 Fold CV Score for Multinomial Naive Bayes: ", np.mean(cross_val_score
(model_NB, X, y_train, cv=20, scoring='roc_auc'))
# This will give us a 20-fold cross validation score that looks at ROC_AUC so we can compare with Logistic Regression.
from sklearn.linear_model import SGDClassifier as SGD
sgd_params = {'alpha': [0.00006, 0.00007, 0.00008, 0.0001, 0.0005]} # Regularization parameter
model_SGD = GridSearchCV(SGD(random_state = 0, shuffle = True, loss = 'modified_huber'),
sgd_params, scoring = 'roc_auc', cv = 20) # Find out which regularization parameter works the best.
model_SGD.fit(X, y_train) # Fit the model.
model_SGD.cv_results_
LR_result = model_LR.predict_proba(X_test)[:,1] # We only need the probabilities that the movie review was a 7 or greater.
LR_output = pd.DataFrame(data={"id":test["id"], "sentiment":LR_result}) # Create our dataframe that will be written.
LR_output.to_csv('Logistic_Reg_Proj2.csv', index=False, quoting=3) # Get the .csv file we will submit to Kaggle.
# Repeat this for Multinomial Naive Bayes
MNB_result = model_NB.predict_proba(X_test)[:,1]
MNB_output = pd.DataFrame(data={"id":test["id"], "sentiment":MNB_result})
MNB_output.to_csv('MNB_Proj2.csv', index = False, quoting = 3)
# Last, do the Stochastic Gradient Descent model with modified Huber loss.
SGD_result = model_SGD.predict_proba(X_test)[:,1]
SGD_output = pd.DataFrame(data={"id":test["id"], "sentiment":SGD_result})
SGD_output.to_csv('SGD_Proj2.csv', index = False, quoting = 3)
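# Optional extension (not part of the original workflow): averaging the
# predicted probabilities of the three models is a simple ensemble that often
# scores slightly higher on AUC than any single model.
ensemble_result = (LR_result + MNB_result + SGD_result) / 3.0
ensemble_output = pd.DataFrame(data={"id": test["id"], "sentiment": ensemble_result})
ensemble_output.to_csv('Ensemble_Proj2.csv', index=False, quoting=3)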
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now set up our function. This will clean all of the reviews for us.
Step2: Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the notebook yourself, make sure you obtain the data from Kaggle. You will need a Kaggle account in order to access it.
Step3: Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative.
Step4: Now we need to clean both the train and test data to get it ready for the next part of our program.
Step5: TF-IDF Vectorization
Step6: Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer!
Step7: Making Our Classifiers
Step8: That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can't work with sparse matrices anyway yet in scikit-learn). That means we need something lightweight and fast that scales to many dimensions well. Some possible candidates are
Step9: You can investigate which parameters did the best and what scores they received by looking at the model_LR object.
Step10: Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC_AUC score. Otherwise, let's move on to the next classifier, Naive Bayes.
Step11: Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption but with the ngrams we included that probably isn't always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples.
Step12: Well, it wasn't quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do!
Step13: Again, similar to the Logistic Regression model, we can see which parameter did the best.
Step14: Looks like this beat our previous Logistic Regression model by a very small amount. Now that we have our three models, we can work on submitting our final scores in the proper format. It was found that submitting predicted probabilities of each score instead of the final predicted score worked better for evaluation from the contest participants, so we want to output this instead.
Step15: Repeat this with the other two.
|
4,694
|
<ASSISTANT_TASK:>
Python Code:
!pip install tokenizers
BIG_FILE_URL = 'https://raw.githubusercontent.com/dscape/spell/master/test/resources/big.txt'
# Let's download the file and save it somewhere
from requests import get
with open('big.txt', 'wb') as big_f:
response = get(BIG_FILE_URL)
if response.status_code == 200:
big_f.write(response.content)
else:
print("Unable to get the file: {}".format(response.reason))
# For the user's convenience `tokenizers` provides some very high-level classes encapsulating
# the overall pipeline for various well-known tokenization algorithm.
# Everything described below can be replaced by the ByteLevelBPETokenizer class.
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase, NFKC, Sequence
from tokenizers.pre_tokenizers import ByteLevel
# First we create an empty Byte-Pair Encoding model (i.e. not trained model)
tokenizer = Tokenizer(BPE())
# Then we enable lower-casing and unicode-normalization
# The Sequence normalizer allows us to combine multiple Normalizer that will be
# executed in order.
tokenizer.normalizer = Sequence([
NFKC(),
Lowercase()
])
# Our tokenizer also needs a pre-tokenizer responsible for converting the input to a ByteLevel representation.
tokenizer.pre_tokenizer = ByteLevel()
# And finally, let's plug a decoder so we can recover from a tokenized input to the original one
tokenizer.decoder = ByteLevelDecoder()
from tokenizers.trainers import BpeTrainer
# We initialize our trainer, giving him the details about the vocabulary we want to generate
trainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet())
tokenizer.train(files=["big.txt"], trainer=trainer)
print("Trained vocab size: {}".format(tokenizer.get_vocab_size()))
# You will see the generated files in the output.
tokenizer.model.save('.')
# Let's tokenize a simple input
tokenizer.model = BPE('vocab.json', 'merges.txt')
encoding = tokenizer.encode("This is a simple input to be tokenized")
print("Encoded string: {}".format(encoding.tokens))
decoded = tokenizer.decode(encoding.ids)
print("Decoded string: {}".format(decoded))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have our training data we need to create the overall pipeline for the tokenizer
Step2: The overall pipeline is now ready to be trained on the corpus we downloaded earlier in this notebook.
Step3: Et voilà ! You trained your very first tokenizer from scratch using tokenizers. Of course, this
Step4: Now, let load the trained model and start using out newly trained tokenizer
|
4,695
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
df = pd.read_csv('../../data/processed/facilities-3-29-scrape.csv')
df.count()[0]
df[(df['offline'].isnull())].count()[0]
df[(df['offline'].notnull())].count()[0]
df[(df['offline']>df['online']) & (df['online'].notnull())].count()[0]
df[(df['online'].isnull()) & (df['offline'].notnull())].count()[0]
df[(df['online'].notnull()) & (df['offline'].isnull())].count()[0]
df[(df['online'].notnull()) | df['offline'].notnull()].count()[0]
df[(df['offline'].isnull())].count()[0]/df.count()[0]*100
df[df['offline'].notnull()].sum()['fac_capacity']
df[df['online'].isnull()].count()[0]
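# Convenience summary (illustrative, reusing the definitions above: a facility
# counts as accurate online when it has no offline-only complaints).
pd.Series({
    'total facilities': df.shape[0],
    'accurate online': df['offline'].isnull().sum(),
    'inaccurate online': df['offline'].notnull().sum(),
})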
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3>How many facilities are there?</h3>
Step2: <h3>How many facilities have accurate records online?</h3>
Step3: <h3>How many facilities have inaccurate records online?<h/3>
Step4: <h3>How many facilities had more than double the number of complaints shown online?</h3>
Step5: <h3>How many facilities show zero complaints online but have complaints offline?</h3>
Step6: <h3>How many facilities have complaints and are accurate online?</h3>
Step7: <h3>How many facilities have complaints?</h3>
Step8: <h3>What percent of facilities have accurate records online?</h3>
Step9: <h3>What is the total capacity of all facilities with inaccurate records?</h3>
Step10: <h3>How many facilities appear to have no complaints, whether or not they do?</h3>
|
4,696
|
<ASSISTANT_TASK:>
Python Code:
from paralleldomain.decoding.dgp.decoder import DGPDatasetDecoder
from paralleldomain.model.dataset import Dataset # optional import, just for type reference in this tutorial
dataset_path = "s3://pd-sdk-c6b4d2ea-0301-46c9-8b63-ef20c0d014e9/testset_dgp"
dgp_decoder = DGPDatasetDecoder(dataset_path=dataset_path)
dataset: Dataset = dgp_decoder.get_dataset()
from paralleldomain.decoding.helper import decode_dataset
from paralleldomain.decoding.common import DecoderSettings
# To deactivate caching of certain data types use the DecoderSettings
settings = DecoderSettings(cache_images=False)
# decode dgp dataset
dgp_dataset: Dataset = decode_dataset(dataset_path=dataset_path, dataset_format="dgp", settings=settings)
nu_images_dataset_path = "some/path/to/a/nuimages/root/folder"
nu_images_dataset: Dataset = decode_dataset(dataset_path=nu_images_dataset_path, dataset_format="nuimages")
cityscapes_dataset_path = "some/path/to/a/cityscapes/root/folder"
cityscapes_dataset: Dataset = decode_dataset(dataset_path=cityscapes_dataset_path, dataset_format="cityscapes")
print("Dataset Metadata:")
print("Name:", dataset.metadata.name)
print("Available Annotation Types:", *[f"\t{a}" for a in dataset.available_annotation_types], sep="\n")
print("Custom Attributes:", *[f"\t{k}: {v}" for k,v in dataset.metadata.custom_attributes.items()], sep="\n")
for sn in dataset.scene_names:
print(f"Found scene {sn}")
for usn in dataset.unordered_scene_names:
print(f"Found unordered scene {usn}")
from paralleldomain.model.scene import Scene # optional import, just for type reference in this tutorial
from pprint import PrettyPrinter
selected_scene = dataset.scene_names[0]  # pick the first scene for the rest of this tutorial
scene: Scene = dataset.get_scene(scene_name=selected_scene)
# Use prettyprint for nested dictionaries
pp = PrettyPrinter(indent=2)
pp.pprint(scene.metadata)
pp.pprint(scene.available_annotation_types)
print(f"{scene.name} has {len(scene.frame_ids)} frames available.")
print(scene.frame_ids)
frame_0_id = "0"
frame_0 = scene.get_frame(frame_id=frame_0_id)
print(frame_0.date_time)
print("Cameras:", *scene.camera_names, sep='\n')
print("\n")
print("LiDARs:", *scene.lidar_names, sep='\n')
camera_0_name = scene.camera_names[0]
camera_0 = scene.get_camera_sensor(camera_name=camera_0_name)
camera_frame_via_frame = frame_0.get_camera(camera_name=camera_0_name)
camera_frame_via_camera = camera_0.get_frame(frame_id=frame_0_id)
assert camera_frame_via_frame is camera_frame_via_camera
print(f"Both objects are equal: {id(camera_frame_via_frame)} == {id(camera_frame_via_camera)}")
# Get `CameraSensorFrame` for the first camera on the first frame within the scene.
lidar_0_name = scene.lidar_names[0]
camera_0_frame_0 = frame_0.get_camera(camera_name=camera_0_name)
lidar_0_frame_0 = frame_0.get_lidar(lidar_name=lidar_0_name)
print(f"{camera_0_frame_0.sensor_name} recorded at {camera_0_frame_0.date_time}")
print(f"{lidar_0_frame_0.sensor_name} recorded at {lidar_0_frame_0.date_time}")
print(camera_0_frame_0.pose, " -> ", lidar_0_frame_0.pose)
camera_to_lidar_pose = camera_0_frame_0.pose.inverse @ lidar_0_frame_0.pose
print(camera_0_frame_0.extrinsic, " -> ", lidar_0_frame_0.extrinsic)
camera_to_lidar_extrinsic = camera_0_frame_0.extrinsic.inverse @ lidar_0_frame_0.extrinsic
import numpy as np
print("Diff Pose:", camera_to_lidar_pose)
print("Diff Extrinsic:", camera_to_lidar_extrinsic)
assert np.all(np.isclose(camera_to_lidar_pose.transformation_matrix, camera_to_lidar_extrinsic.transformation_matrix, atol=1e-05))
print("If you see this, the difference are close to equal.")
camera_1_name = scene.camera_names[1]
camera_1_frame_0 = frame_0.get_camera(camera_name=camera_1_name)
print(camera_0_frame_0.extrinsic, " -> ", camera_1_frame_0.extrinsic)
print("Diff Extrinsic: ", camera_0_frame_0.extrinsic.inverse @ camera_1_frame_0.extrinsic)
from paralleldomain.utilities.coordinate_system import CoordinateSystem
extrinsic_diff = (camera_0_frame_0.extrinsic.inverse @ camera_1_frame_0.extrinsic)
RDF_to_FLU = (CoordinateSystem("RDF") > CoordinateSystem("FLU"))
print( RDF_to_FLU @ extrinsic_diff)
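# For reference (axis conventions assumed, matching the text above):
# RDF = Right (x), Down (y), Front (z) - the typical camera/image convention;
# FLU = Front (x), Left (y), Up (z)   - the ego-vehicle reference frame.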
pp.pprint(camera_0_frame_0.available_annotation_types)
pp.pprint(lidar_0_frame_0.available_annotation_types)
from paralleldomain.model.annotation import AnnotationTypes
from paralleldomain.model.annotation import BoundingBoxes2D # optional import, just for type reference in this tutorial
# Quick check if `BoundingBoxes2D` is an available annotation type. If not, and we do not check for it, we will receive a `ValueError` exception.
if AnnotationTypes.BoundingBoxes2D in camera_0_frame_0.available_annotation_types:
boxes2d: BoundingBoxes2D = camera_0_frame_0.get_annotations(annotation_type=AnnotationTypes.BoundingBoxes2D)
for b in boxes2d.boxes[:10]:
print(b)
from paralleldomain.model.annotation import SemanticSegmentation3D # optional import, just for type reference in this tutorial
annotation_type = AnnotationTypes.BoundingBoxes2D
try:
boxes2d: BoundingBoxes2D = lidar_0_frame_0.get_annotations(annotation_type=annotation_type)
except ValueError as e:
print(f"LiDAR Frame doesn't have {annotation_type} as annotation type available. Original exception below:")
print(str(e))
# Move on to the actual task:
annotation_type = AnnotationTypes.SemanticSegmentation3D
count_by_class_id = {}
try:
semseg3d: SemanticSegmentation3D = lidar_0_frame_0.get_annotations(annotation_type=annotation_type)
u_class_ids, u_counts = np.unique(semseg3d.class_ids, return_counts=True)
count_by_class_id = {u_class_ids[idx]: u_counts[idx] for idx in range(len(u_class_ids))}
pp.pprint(count_by_class_id)
except ValueError as e:
print(f"LiDAR Frame doesn't have {annotation_type} as annotation type available.")
print(str(e))
from paralleldomain.model.class_mapping import ClassMap # optional import, just for type reference in this tutorial
semseg3d_classmap: ClassMap = scene.get_class_map(annotation_type=AnnotationTypes.SemanticSegmentation3D)
count_by_class_label = {k: f"{semseg3d_classmap[k].name} [{v}]" for k,v in count_by_class_id.items()}
pp.pprint(count_by_class_label)
from matplotlib import pyplot as plt
from paralleldomain.model.sensor import Image # optional import, just for type reference in this tutorial
image_data: Image = camera_0_frame_0.image
print(f"Below is an image with {image_data.channels} channels and resolution {image_data.width}x{image_data.height} sqpx")
plt.imshow(image_data.rgba) # `.rgba` returns image including alpha-channel, otherwise `.rgb` can be used for convenience.
plt.title(camera_0_frame_0.sensor_name)
plt.show()
pp.pprint(vars(camera_0_frame_0.intrinsic))
from paralleldomain.model.sensor import PointCloud # optional import, just for type reference in this tutorial
pc_data: PointCloud = lidar_0_frame_0.point_cloud
pc_xyz_one: np.ndarray = pc_data.xyz_one # Returns the xyz coordinates with an additional column full of "1" to allow for direct transformation
pc_intensity: np.ndarray = pc_data.intensity
pc_ego = (lidar_0_frame_0.extrinsic @ pc_xyz_one.T).T
pc_ego = pc_ego[:,:3] # throw away "1" - we are done transforming
subset_slice = slice(None, None, 5) # we want a slice of every 5th point to reduce rendering time
pc_ego_subset = pc_ego[subset_slice]
pc_intensity_subset = pc_intensity[subset_slice]
plt.scatter(x=pc_ego_subset[:,0], y=pc_ego_subset[:,1], s=pc_intensity_subset, c=pc_ego_subset[:,2])
plt.grid(True)
plt.title(lidar_0_frame_0.sensor_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternatively you can also use the decode_dataset helper method.
Step2: If you want to load a dataset which is stored in Cityscapes or NuImages format simply change the dataset_format to "cityscapes" or "nuimages"
Step3: Dataset Information
Step4: As you can see, the property .available_annotation_types includes classes from paralleldomain.model.annotation. In tutorials around reading annotations from a dataset, these exact classes will be re-used, which allows for a consistent type-check across objects.
Step5: Load Scene
Step6: Scene metadata usually contains any variables that changes with each scene and are not necessarily consistent across a whole dataset.
Step7: Normally, in a scene, we expect to have more than one frame available, especially when we work with sequential data.
Step8: Load Frame + Sensor
Step9: Date/Times are presented as Python's std library datetime objects. When decoding data, the PD SDK also adds timezone information to these objects.
Step10: Similar to how we used this information to get a scene from a dataset, we can use this information to get a sensor from a scene.
Step11: Knowing which frames and sensors are available allows us to now query for the actual sensor data.
Step12: Load Sensor Frames
Step13: Every SensorFrame always has information about the sensor pose (where is it in the world coordinate system?) and sensor extrinsic (how is the sensor positioned relative to the ego-vehicle reference coordinate system?).
Step14: We can use the associated homogenous transformation matrix to compare both results.
Step15: In the same manner, it is easily possible to calculate the relative location between two sensors. Let's calculate the difference between two camera sensors.
Step16: It is important to remember that a sensor extrinsic is provided in the ego-vehicles reference coordinate system. For DGP dataset, that is FLU (Front (x), Left (y), Up (z)).
Step17: Accessing Annotations
Step18: To actually load the annotations into memory and use them for further analysis, we can leverage the AnnotationTypes class.
Step19: For the LiDAR sensor, we are going to retrieve the 3D Semantic Segmentation of the point cloud and count objects by class ID. Instead of checking explicitly if the annotation type is available, we are going to use try/except on a ValueError. To see if it works, we will try to receive 2D Bounding Boxes from the LiDAR sensor.
Step20: Instead of showing just class IDs, we can show the actual class labels quite easily. On the Scene object we can retrieve the ClassMap for each annotation style.
Step21: Access Camera Data
Step22: Access LiDAR Data
|
4,697
|
<ASSISTANT_TASK:>
Python Code:
from pygsf.spatial.rasters.geotransform import *
gt1 = GeoTransform(1500, 3000, 10, 10)
gt1
ijPixToxyGeogr(gt1, 0, 0)
xyGeogrToijPix(gt1, 1500, 3000)
ijPixToxyGeogr(gt1, 1, 1)
xyGeogrToijPix(gt1, 1510, 2990)
ijPixToxyGeogr(gt1, 10, 10)
xyGeogrToijPix(gt1, 1600, 3100)
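# Round-trip check (illustrative; the argument and return conventions are
# assumed from the calls above): forward-mapping a pixel and back-mapping the
# result should return the original pixel indices.
x, y = ijPixToxyGeogr(gt1, 5, 3)
xyGeogrToijPix(gt1, x, y)  # expected: the original (5, 3)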
X, Y = gtToxyCellCenters(
gt=gt1,
num_rows=10,
num_cols=5)
print(X)
print(Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Forward and backward transformation examples
Step2: calculating the X, Y geographic coordinate arrays
|
4,698
|
<ASSISTANT_TASK:>
Python Code:
# glass identification dataset
import pandas as pd
import numpy as np
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data'
col_names = ['id','ri','na','mg','al','si','k','ca','ba','fe','glass_type']
glass = pd.read_csv(url, names=col_names, index_col='id')
glass.sort_values('al', inplace=True)
glass.head()
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('bmh')
# scatter plot using Pandas
glass.plot(kind='scatter', x='al', y='ri')
# equivalent scatter plot using Matplotlib
plt.scatter(glass.al, glass.ri)
plt.xlabel('al')
plt.ylabel('ri')
# fit a linear regression model
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['al']
X = glass[feature_cols]
y = glass.ri
linreg.fit(X, y)
# make predictions for all values of X
glass['ri_pred'] = linreg.predict(X)
glass.head()
# put the plots together
plt.scatter(glass.al, glass.ri)
plt.plot(glass.al, glass.ri_pred, color='red')
plt.xlabel('al')
plt.ylabel('ri')
# compute prediction for al=2 using the equation
linreg.intercept_ + linreg.coef_ * 2
# compute prediction for al=2 using the predict method
test = np.array(2)
test = test.reshape(-1,1)
linreg.predict(test)
# examine coefficient for al
print(feature_cols, linreg.coef_)
# increasing al by 1 (so that al=3) decreases ri by 0.0025
1.51699012 - 0.0024776063874696243
# compute prediction for al=3 using the predict method
test = np.array(3)
test = test.reshape(-1,1)
linreg.predict(test)
# examine glass_type
glass.glass_type.value_counts().sort_index()
# types 1, 2, 3 are window glass
# types 5, 6, 7 are household glass
glass['household'] = glass.glass_type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
glass.head()
plt.scatter(glass.al, glass.household)
plt.xlabel('al')
plt.ylabel('household')
# fit a linear regression model and store the predictions
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
linreg.fit(X, y)
glass['household_pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred, color='red')
plt.xlabel('al')
plt.ylabel('household')
# understanding np.where
import numpy as np
nums = np.array([5, 15, 8])
# np.where returns the first value if the condition is True, and the second value if the condition is False
np.where(nums > 10, 'big', 'small')
# transform household_pred to 1 or 0
glass['household_pred_class'] = np.where(glass.household_pred >= 0.5, 1, 0)
glass.head()
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# fit a logistic regression model and store the class predictions
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(solver='liblinear',C=1e9)
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
logreg.fit(X, y)
glass['household_pred_class'] = logreg.predict(X)
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# store the predicted probabilites of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the predicted probabilities
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# examine some example predictions
print(logreg.predict_proba([[1]]))
print(logreg.predict_proba([[2]]))
print(logreg.predict_proba([[3]]))
# create a table of probability versus odds
table = pd.DataFrame({'probability':[0.1, 0.2, 0.25, 0.5, 0.6, 0.8, 0.9]})
table['odds'] = table.probability/(1 - table.probability)
table
# exponential function: e^1
np.exp(1)
# time needed to grow 1 unit to 2.718 units
np.log(2.718)
np.log(np.exp(5))
# add log-odds to the table
table['logodds'] = np.log(table.odds)
table
# plot the predicted probabilities again
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# compute predicted log-odds for al=2 using the equation
logodds = logreg.intercept_ + logreg.coef_[0] * 2
logodds
# convert log-odds to odds
odds = np.exp(logodds)
odds
# convert odds to probability
prob = odds/(1 + odds)
prob
# compute predicted probability for al=2 using the predict_proba method
logreg.predict_proba([[2]])[:, 1]
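# Equivalent sigmoid form (illustrative): logistic regression models
# p = 1 / (1 + exp(-(b0 + b1*x))), so this should match predict_proba above.
1 / (1 + np.exp(-(logreg.intercept_ + logreg.coef_[0] * 2)))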
# examine the coefficient for al
feature_cols, logreg.coef_[0]
# increasing al by 1 (so that al=3) increases the log-odds by 4.18
logodds = 0.64722323 + 4.1804038614510901
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
# compute predicted probability for al=3 using the predict_proba method
logreg.predict_proba([[3]])[:, 1]
# examine the intercept
logreg.intercept_
# convert log-odds to probability
logodds = logreg.intercept_
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
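# Cross-check (illustrative): at al=0 the log-odds reduce to the intercept,
# so predict_proba should reproduce the probability computed above.
logreg.predict_proba([[0]])[:, 1]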
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Refresher
Step3: Interpretation
Step4: Predicting a Categorical Response
Step5: Let's change our task, so that we're predicting household using al. Let's visualize the relationship to figure out how to do this
Step6: Let's draw a regression line, like we did before
Step7: If al=3, what class do we predict for household? 1
Step8: $h_\beta(x)$ can be lower 0 or higher than 1, which is countra intuitive
Step9: What if we wanted the predicted probabilities instead of just the class predictions, to understand how confident we are in a given prediction?
Step10: The first column indicates the predicted probability of class 0, and the second column indicates the predicted probability of class 1.
Step11: What is e? It is the base rate of growth shared by all continually growing processes
Step12: What is a (natural) log? It gives you the time needed to reach a certain level of growth
Step13: It is also the inverse of the exponential function
Step14: What is Logistic Regression?
Step15: Interpretation
Step16: Bottom line
Step17: Interpretation
|
4,699
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
pylab.style.use('ggplot')
import pandas as pd
import numpy as np
import seaborn as sns
train_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
train_df = pd.read_csv(train_url, header=None)
train_df.head()
train_df.columns = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'sex',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_band']
train_df.head()
train_df.dtypes
# Remove excess whitespace from strings
nn_cols = train_df.dtypes[train_df.dtypes == object].index
for c in nn_cols:
train_df.loc[:, c] = train_df.loc[:, c].str.strip()
# Missing values per column
train_df.loc[:, nn_cols].isin(['?']).sum(axis=0).plot(kind='barh')
# Drop rows with missing data
n_missing_per_row = train_df.loc[:, nn_cols].isin(['?']).sum(axis=1)
train_df_full = train_df.loc[n_missing_per_row.isin([0]), :]
train_df_full.isin(['?']).sum(axis=0)
train_df_full.income_band.value_counts().plot(kind='barh')
train_df_full.dtypes
num_cols = train_df_full.dtypes[train_df_full.dtypes == np.int64].index
for n_col in num_cols:
g_col = sns.FacetGrid(col='income_band', data=train_df_full)
g_col = g_col.map(pylab.hist, n_col)
obj_cols = train_df_full.dtypes[train_df_full.dtypes == object].index
_, axes = pylab.subplots(len(obj_cols)-1, 1, figsize=(10, 20))
for i, colname in enumerate(obj_cols.drop('income_band')):
sns.countplot(x=train_df_full[colname],
hue=train_df_full.income_band,
ax=axes[i]
)
pylab.tight_layout()
num_cols = train_df_full.dtypes[train_df_full.dtypes == np.int64].index
num_features = train_df_full.loc[:, num_cols]
f_corrs = num_features.corr()
sns.heatmap(f_corrs, annot=True)
n_samples = train_df_full.shape[0]
label_counts = train_df_full.income_band.value_counts()
label_counts
accuracy = label_counts['<=50K'] / label_counts.sum()
accuracy
precision = label_counts['<=50K'] / label_counts.sum() # precision = TP / (TP + FP)
recall = 1.0 # recall = TP/(TP + FN), here FN=0
f1_score = (2.0 * precision * recall) / (precision + recall)
f1_score
from sklearn.metrics import classification_report
scores = pd.Series(index=train_df_full.index, data='<=50K')
print(classification_report(train_df_full.income_band, scores))
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report, accuracy_score, f1_score
from IPython.display import display
def cross_validation_score(features, labels, C=1000, scoring='f1_macro'):
"""
Build an SVM with the Radial Basis Kernel and return cross-validation scores.
Use a `BaggingClassifier` to construct an ensemble of SVMs on a subset of samples for all features. This speeds up the
training phase considerably, and also acts as a regularizer.
"""
base = SVC(C=C, kernel='rbf')
model = BaggingClassifier(base_estimator=base, n_estimators=20, max_samples=0.05)
prep = StandardScaler()
estimator = make_pipeline(prep, model)
scores = cross_val_score(estimator=estimator,
X=features,
y=labels,
cv=10,
verbose=10,
scoring=scoring)
scores = pd.Series(scores)
return scores
num_only_scores = cross_validation_score(
features=num_features,
labels=train_df_full.income_band)
num_only_scores.plot(kind='bar',
title='10-Fold CV scores for numeric only features.')
num_only_scores.mean()
from sklearn.preprocessing import MultiLabelBinarizer
def encode_one_hot(source, target, feature_name):
"""One-hot encode categorical feature `feature_name` in the `source` DataFrame, and append it to `target`."""
labels = sorted(pd.unique(source.loc[:, feature_name]))
encoder = MultiLabelBinarizer(classes=labels)
raw = np.atleast_2d(source.loc[:, feature_name].values).T
encoded_df = pd.DataFrame(index=source.index, data=encoder.fit_transform(raw))
encoded_df.columns = [feature_name + '_' + str(c) for c in encoded_df.columns]
return pd.concat([target, encoded_df], axis=1)
numeric_plus_occupation = encode_one_hot(train_df_full, num_features, 'occupation')
numeric_plus_occupation_scores = cross_validation_score(
features=numeric_plus_occupation,
labels=train_df_full.income_band)
numeric_plus_occupation_scores.plot(kind='bar',
title='10-Fold CV scores for numeric only features + occupation')
numeric_plus_occupation_scores.mean()
numeric_plus_occupation_plus_sex = numeric_plus_occupation.assign(
sex=train_df_full.sex.map(lambda s: 0 if s=='Male' else 1))
numeric_plus_occupation_plus_sex_scores = cross_validation_score(
features=numeric_plus_occupation_plus_sex,
labels=train_df_full.income_band)
numeric_plus_occupation_plus_sex_scores.plot(kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex')
numeric_plus_occupation_plus_sex_scores.mean()
numeric_plus_3 = numeric_plus_occupation_plus_sex.assign(
race=train_df_full.race.map(lambda s: 0 if s=='White' else 1))
numeric_plus_3_scores = cross_validation_score(
features=numeric_plus_3,
labels=train_df_full.income_band)
numeric_plus_3_scores.plot(kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex + race')
numeric_plus_3_scores.mean()
numeric_plus_3 = encode_one_hot(train_df_full, numeric_plus_occupation_plus_sex, 'relationship')
numeric_plus_3_scores = cross_validation_score(
features=numeric_plus_3,
labels=train_df_full.income_band)
numeric_plus_3_scores.plot(kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex + relationship')
numeric_plus_3_scores.mean()
numeric_plus_4 = encode_one_hot(train_df_full, numeric_plus_3, 'workclass')
numeric_plus_4_scores = cross_validation_score(
features=numeric_plus_4,
labels=train_df_full.income_band)
numeric_plus_4_scores.plot(
kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex + relationship + workclass')
numeric_plus_4_scores.mean()
numeric_plus_4 = encode_one_hot(train_df_full, numeric_plus_3, 'marital_status')
numeric_plus_4_scores = cross_validation_score(
features=numeric_plus_4,
labels=train_df_full.income_band)
numeric_plus_4_scores.plot(
kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex + relationship + marital_status')
numeric_plus_4_scores.mean()
numeric_plus_4 = numeric_plus_3.assign(
native_country=train_df_full.native_country.map(lambda v: 0.0 if v=='United-States' else 1.0))
numeric_plus_4_scores = cross_validation_score(
features=numeric_plus_4,
labels=train_df_full.income_band)
numeric_plus_4_scores.plot(
kind='bar',
title='10-Fold CV scores for numeric only features + occupation + sex + relationship + native_country')
numeric_plus_4_scores.mean()
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
base = SVC()
model = BaggingClassifier(base_estimator=base, n_estimators=20, max_samples=0.05)
prep = StandardScaler()
estimator = Pipeline([
('prep', prep),
('model', model),
])
params = [
{'model__base_estimator__C': [100, 1000],
'model__base_estimator__gamma': [0.1, 0.001]
},
]
grid_search = GridSearchCV(estimator=estimator, param_grid=params, scoring='accuracy', verbose=10, cv=10)
grid_search = grid_search.fit(numeric_plus_3, train_df_full.income_band)
grid_search.best_score_
grid_search.best_params_
params = [
{'model__base_estimator__C': [1000, 5000],
'model__base_estimator__gamma': [0.001, 0.005]
},
]
grid_search = GridSearchCV(estimator=estimator, param_grid=params, scoring='accuracy', verbose=10, cv=10)
grid_search = grid_search.fit(numeric_plus_3, train_df_full.income_band)
grid_search.best_score_
grid_search.best_params_
params = [
{'model__base_estimator__C': [6000, 10000],
'model__base_estimator__gamma': [0.006, 0.0001]
},
]
grid_search = GridSearchCV(estimator=estimator, param_grid=params, scoring='accuracy', verbose=10, cv=10)
grid_search = grid_search.fit(numeric_plus_3, train_df_full.income_band)
grid_search.best_score_
grid_search.best_params_
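# Note (added for completeness): with the default refit=True, GridSearchCV has
# already re-fitted the best pipeline on the full training data, so the tuned
# model is directly available for predictions on held-out data.
best_model = grid_search.best_estimator_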
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the Data
Step2: Attribute Information
Step3: Check for Missing Data
Step4: Check for Class Imbalance
Step5: Bivariate Analysis
Step6: Bivariate Features
Step7: Feature Correlations
Step8: Modeling
Step9: So assuming our naive baseline model predicts the income band for all samples as '<=50K'
Step11: SVM with Only the Numerical Features
Step13: SVM with One-Hot Encoded Features
Step14: Binarize sex
Step15: Binarize Race
Step16: Include Relationship
Step17: Include Workclass
Step18: Include Marital Status
Step19: Include Native Country
Step20: Grid Search for the Best Parameters
|