| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, lengths 67–124k) | cleaned_prompt (string, lengths 168–30.3k) |
|---|---|---|
12,600
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
n = 5 # Total number of objects
ust_ag = 30 # Highest possible weight
x_degerleri = np.random.rand(n)
y_degerleri = np.random.rand(n)
agirliklar = ust_ag*np.random.rand(n) # random weights, at most ust_ag
print(x_degerleri)
print(y_degerleri)
print(agirliklar)
x = 0
y = 0
toplam_agirlik = sum(agirliklar)
for i in range(0, n):
x += (agirliklar[i]*x_degerleri[i])/toplam_agirlik
y += (agirliklar[i]*y_degerleri[i])/toplam_agirlik
print(x, y)
plt.plot(x_degerleri, y_degerleri, 'ro') # Objects as red circles
plt.plot(x, y, 'b*', ms=15.0) # Center of mass as a large blue star
plt.xticks(np.arange(0, 1.1, 0.1))
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.grid(True)
for i in range(0, n):
oran = agirliklar[i] / ust_ag
plt.plot(x_degerleri[i], y_degerleri[i], 'ro', ms=50*oran)
plt.plot(x, y, 'b*', ms=15.0)
plt.xticks(np.arange(0, 1.1, 0.1))
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our first task is to randomly generate the weights and coordinates of the objects.
Step2: We have randomly generated the coordinates of every point. Now we can find the center of mass.
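The center of mass computed in Step 2 is just the weighted average of the coordinates (this matches the loop in the code above):
$$\bar{x} = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad \bar{y} = \frac{\sum_i w_i y_i}{\sum_i w_i}$$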
Step3: We are ready to draw the plot.
Step4: We can also draw each object at a different size depending on its weight. All we need to do is scale the weights relative to the highest weight (ust_ag), i.e. determine their radii.
|
12,601
|
<ASSISTANT_TASK:>
Python Code:
import imapclient
import email
conn = imapclient.IMAPClient('imap.gmail.com', ssl=True)
# Real values were used in testing, and removed for Github
# Due to the nature of Gmail's security, you may have to allow access from 'less secure apps' (like this script)
# The setting can be changed here: https://www.google.com/settings/u/2/security/lesssecureapps
conn.login('youremail@gmail.com','yourpassword')
conn.list_folders()
conn.select_folder('INBOX', readonly=True)
UIDs = conn.search('SINCE 26-May-2016')
print(UIDs)
conn.delete_messages([88130, 88131, 88132])
# The following function is also available in this module to search Gmail for a line of text.
# I found this faster than iterating over the set to find the Python test email from Lesson 46
conn.gmail_search('Subject: Python Test Email')
rawMessage = conn.fetch([88177], ['BODY[]','FLAGS'])
print(rawMessage)
type(rawMessage)
message = email.message_from_bytes(rawMessage[88177][b'BODY[]'])
print(message)
print(type(message))
message.get('Subject')
message.get('from')
message.get_payload()
conn.logout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step in setting up email is creating a connection object again to interact with an email, but this time for the IMAP server.
Step2: We can now pass in our login parameters via the .login() method.
Step3: We get a byte string with a response if we log in correctly.
Step4: We get a series of byte strings if we have connected correctly.
Step5: We can use the .delete_messages() methods to delete a list of UIDs. We won't be running it here.
Step6: We now have to translate these UIDs into the actual emails, and we can do that using the .fetch() method.
Step7: Because this value is returned as a collections.defaultdict, we must use a series of keys to parse it and explore its values. This process is explained more thoroughly on Mr. Sweigart's blog.
Step8: Once this value has been stored, we can use the .get() method to pull out various variables from the email.
Step9: We can now access different elements in this message object.
Step10: The value for an email's body is defined as its 'payload', and accessible via the method get_payload()
Step11: We can end our session using the .logout() method on the connection object.
|
12,602
|
<ASSISTANT_TASK:>
Python Code:
breakfast = ["sausage", "eggs", "bacon", "spam"]
for item in breakfast:
print(item)
squares = []
for i in range(1, 10, 2):
squares.append(i**2)
print(squares)
fruits = {'banana' : 5, 'strawberry' : 7, 'pineapple' : 3}
for fruit in fruits:
print(fruit)
sum = 0
for price in fruits.values():
sum += price
print(sum)
f = [0, 1]
while True:
new = f[-1] + f[-2]
if new > 100:
break
f.append(new)
print(f)
number = 7
if number < 0:
print("Negative")
elif number == 0:
print("Zero")
elif number in [3, 5, 7, 11, 17]:
print("Prime")
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = []
for x, y in xys:
tmp.append([y,x])
tmp.sort()
for i, (y,x) in enumerate(tmp):
xys[i] = [x,y]
print(xys)
ys = []
for x, y in xys:
ys.append(y)
print(ys)
sums = []
for x, y in xys:
if x > 0 and y > 0:
sums.append(x + y)
print(sums)
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = [[y, x] for x, y in xys]
tmp.sort()
xys = [[x, y] for y, x in tmp]
# One liner is possible but not very readable anymore:
xys = [[x, y] for y, x in sorted([[ytmp, xtmp] for xtmp, ytmp in xys])]
# Summing positives with one liner is ok:
sums = [x+y for x,y in xys if x > 0 and y > 0]
for number in range(1, 101):
if number % 3 == 0 and number % 5 == 0:
print("FizzBuzz")
elif number % 3 == 0:
print("Fizz")
elif number % 5 == 0:
print("Buzz")
else:
print(number)
import random
while True:
value = random.random()
if value < 0.1:
break
print("done")
temperatures_celsius = [0, -15, 20.15, 13.3, -5.2]
temperatures_kelvin = [c+273.15 for c in temperatures_celsius]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then write a for loop which determines the squares of the odd numbers
Step2: Looping through a dictionary
Step3: Next, write a loop that sums up the prices.
Step4: While loop
Step5: If - else
Step6: Advanced exercises
Step7: Next, create a new list containing only the sorted y values.
Step8: Finally, create a new list consisting of sums the (x,y) pairs where both x and y are positive.
Step9: List comprehension is often convenient in this kind of situation
Step10: FizzBuzz
Step11: Food for thought
Step12: List comprehension
|
12,603
|
<ASSISTANT_TASK:>
Python Code:
from arcgis.gis import GIS
from getpass import getpass
from IPython.display import display
# Get username and password
username = input('Username: ')
password = getpass(prompt='Password: ')
# Connect to portal
gis = GIS("https://arcgis.com/", username, password)
user = gis.users.get(username)
user
title = input("Feature class to search for: ")
items = gis.content.search(query="title:'" + title + "' AND owner:" + username, item_type="Feature Service")
print(type(items), len(items))
print(type(items[0]))
item = items[0]
item
item.tags
# First set up some variables for input to the *update* method.
thumbnail_path = "c:/temp/Hospitals.JPG"
tags = list(item.tags)
tags.append("health")
item_properties = {"snippet": "Location of Cambridge hospitals.",
"title": "Cambridge Hospitals",
"tags": ','.join(tags),
"accessinformation": "City of Cambridge GIS",
"licenseInfo": "License Info"
}
# Then perform the update
item.update(item_properties, thumbnail=thumbnail_path)
item
# Get the updated *item*
items[0].tags
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the GIS object and point it to AGOL
Step2: Test the connection
Step3: Get the item that you want to update
Step4: Update the metadata
|
12,604
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from cycler import cycler
# import all shogun classes
from shogun import *
slope = 3
X_train = rand(30)*10
y_train = slope*(X_train)+random.randn(30)*2+2
y_true = slope*(X_train)+2
X_test = concatenate((linspace(0,10, 50),X_train))
#Convert data to shogun format features
feats_train = RealFeatures(X_train.reshape(1,len(X_train)))
feats_test = RealFeatures(X_test.reshape(1,len(X_test)))
labels_train = RegressionLabels(y_train)
ls = LeastSquaresRegression(feats_train, labels_train)
ls.train()
w = ls.get_w()
print('Weights:')
print(w)
out = ls.apply(feats_test).get_labels()
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test,out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
#plot residues
pl2 = subplot(132)
title("Squared error and output")
_ = plot(X_test,out, linewidth=2)
gray()
_ = scatter(X_train,labels_train,c=ones(30) ,cmap=gray(), s=40)
for i in range(50,80):
plot([X_test[i],X_test[i]],[out[i],y_train[i-50]] , linewidth=2, color='red')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
pl2.legend((p1, p2), ["Error/residuals to be squared", "Predicted output"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
tau = 0.8
rr = LinearRidgeRegression(tau, feats_train, labels_train)
rr.train()
w = rr.get_w()
print(w)
out = rr.apply(feats_test).get_labels()
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Ridge Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test, out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
#Generate Data
def generate_data(N, D):
w = randn(D,1)
X = zeros((N,D))
y = zeros((N,1))
for i in range(N):
x = randn(1,D)
for j in range(D):
X[i][j] = x[0][j]
y = dot(X,w) + randn(N,1);
y.reshape(N,)
return X, y.T
def generate_weights(taus, feats_train, labels_train):
preproc = PruneVarSubMean(True)
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
weights = []
rr = LinearRidgeRegression(tau, feats_train, labels_train)
#vary regularization
for t in taus:
rr.set_tau(t)
rr.train()
weights.append(rr.get_w())
return weights, rr
def plot_regularization(taus, weights):
ax = gca()
ax.set_prop_cycle(cycler('color', ['b', 'r', 'g', 'c', 'k', 'y', 'm']))
ax.plot(taus, weights, linewidth=2)
xlabel('Tau', fontsize=12)
ylabel('Weights', fontsize=12)
ax.set_xscale('log')
def xval_results(taus):
errors = []
for t in taus:
rr.set_tau(t)
splitting_strategy = CrossValidationSplitting(labels_train, 5)
# evaluation method
evaluation_criterium = MeanSquaredError()
# cross-validation instance
cross_validation = CrossValidation(rr, feats_train, labels_train, splitting_strategy, evaluation_criterium, False)
cross_validation.set_num_runs(100)
result = cross_validation.evaluate()
result = CrossValidationResult.obtain_from_generic(result)
errors.append(result.mean)
return errors
n = 500
taus = logspace(-6, 4, n)
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 200 samples', fontsize=12)
matrix, y = generate_data(200,10)
feats_train = RealFeatures(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1=subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 10 samples', fontsize=12)
matrix, y = generate_data(10,10)
feats_train = RealFeatures(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1 = subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
#sample some data
X=rand(10)*1.5
for i in range(9):
x=random.standard_normal(10)*0.5
X=vstack((X, x))
y=ones(10)
feats_train=RealFeatures(X)
labels_train=RegressionLabels(y)
#Preprocess data
preproc=PruneVarSubMean()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
preprocessor=NormOne()
preprocessor.init(feats_train)
feats_train.add_preprocessor(preprocessor)
feats_train.apply_preprocessor()
print("(No. of attributes, No. of samples) of data:")
print(feats_train.get_feature_matrix().shape)
#Train and generate weights
la=LeastAngleRegression()
la.set_labels(labels_train)
la.train(feats_train)
size=la.get_path_size()
print ("Size of path is %s" %size)
#calculate weights
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
print ('Max. norm is %s' %s[-1])
figure(figsize=(30,7))
#plot 1
ax=subplot(131)
title('Lasso path')
ax.plot(s, weights, linewidth=2)
ymin, ymax = ylim()
ax.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
#Restrict norm to half for early termination
la.set_max_l1_norm(s[-1]*0.5)
la.train(feats_train)
size=la.get_path_size()
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
#plot 2
ax2=subplot(132)
title('Lasso path with restricted norm')
ax2.plot(s, weights, linewidth=2)
ax2.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
print ('Restricted norm is %s' %(s[-1]))
feats = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels = RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
mat = feats.get_feature_matrix()
crime_rate = mat[0]
feats_train = RealFeatures(crime_rate.reshape(1, len(mat[0])))
preproc = RescaleFeatures()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor(True)
# Store preprocessed feature matrix.
preproc_data = feats_train.get_feature_matrix()
size=500
x1=linspace(0, 1, size)
width=0.5
tau=0.5
kernel=GaussianKernel(feats_train, feats_train, width)
krr=KernelRidgeRegression(tau, kernel, train_labels)
krr.train(feats_train)
feats_test=RealFeatures(x1.reshape(1,len(x1)))
kernel.init(feats_train, feats_test)
out = krr.apply().get_labels()
#Visualization of regression
fig=figure(figsize=(6,6))
#first plot with only one attribute
title("Regression with 1st attribute")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
# Use different kernels
gaussian_kernel=GaussianKernel(feats_train, feats_train, 0.1)
#Polynomial kernel of degree 2
poly_kernel=PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=LinearKernel(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
svr_param=1
svr_C=10
svr=LibSVR(svr_C, svr_param, gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
#Visualization of regression
x1=linspace(0, 1, size)
feats_test_=RealFeatures(x1.reshape(1,len(x1)))
def svr_regress(kernels):
fig=figure(figsize=(8,8))
for i, kernel in enumerate(kernels):
svr.set_kernel(kernel)
svr.train()
out=svr.apply(feats_test_).get_labels()
#subplot(1,len(kernels), i)
#first plot with only one attribute
title("Support Vector Regression")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
ylim([0, 40])
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
_=legend((p1, p2, p3), ["Gaussian Kernel", "Linear Kernel", "Polynomial Kernel"], loc=1)
svr_regress(kernels)
import time
gaussian_kernel=GaussianKernel(feats, feats, 13)
nus=[0.2, 0.4, 0.6, 0.8]
epsilons=[0.16, 0.09, 0.046, 0.0188]
svr_C=10
def compare_svr(nus, epsilons):
time_eps=[]
time_nus=[]
for i in range(len(epsilons)):
svr_param=1
svr=LibSVR(svr_C, epsilons[i], gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
t_start=time.perf_counter()
svr.train()
time_test=(time.perf_counter() - t_start)
time_eps.append(time_test)
for i in range(len(nus)):
svr_param=1
svr=LibSVR(svr_C, nus[i], gaussian_kernel, train_labels, LIBSVR_NU_SVR)
t_start=time.perf_counter()
svr.train()
time_test=(time.perf_counter() - t_start)
time_nus.append(time_test)
print("-"*72)
print("|", "%15s" % 'Nu', "|", "%15s" % 'Epsilon', "|", "%15s" % 'Time (Nu)', "|", "%15s" % 'Time(Epsilon)', "|")
for i in range(len(nus)):
print("-"*72)
print("|", "%15s" % nus[i], "|", "%15s" % epsilons[i], "|", "%15s" % time_nus[i], "|", "%15s" % time_eps[i], "|")
print("-"*72)
title_='SVR Performance on Boston Housing dataset'
print("%50s" % title_)
compare_svr(nus, epsilons)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training and generating weights
Step2: This value of $\text w$ is pretty close to 3, which indicates a pretty good fit to the training data. Now let's apply this trained machine to our test data to get the output values.
Step3: As an aid to visualisation, a plot of the output and also of the residuals is shown. The sum of the squares of these residuals is minimised.
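Concretely, ordinary least squares picks the weight vector minimizing the residual sum of squares (the standard formulation, stated here for reference):
$$\min_{\mathbf{w}} \sum_{i=1}^{n} \left(y_i - \mathbf{w}^{\top}\mathbf{x}_i\right)^2$$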
Step4: Ridge Regression
Step5: Relationship between weights and regularization
Step6: The mean squared error (MSE) of an estimator measures the average of the squares of the errors. The CMeanSquaredError class is used to compute the MSE as $\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2$, where $y_i$ are the true labels and $\hat{y}_i$ the predicted labels.
Step7: Data with dimension
Step8: As seen from the plot of errors, regularisation doesn't seem to affect the errors significantly. One interpretation could be that this is because there is less overfitting when we have a large number of samples. For a sample size that is small compared to the dimensionality, the test set performance may even be poor. The reason for this is that the regression function will fit the noise too much, while the interesting part of the signal is too small. We now generate 10 samples of 10 dimensions to test this.
Step9: The first plot is the famous ridge trace that is the signature of this technique. The plot is straightforward to read: it presents the standardized regression coefficients (weights) on the vertical axis and various values of tau (the regularisation constant) along the horizontal axis. Since the values of tau ($\tau$) span several orders of magnitude, we adopt a logarithmic scale along this axis. As tau is increased, the values of the regression estimates change, often wildly at first. At some point, the coefficients seem to settle down and then gradually drift towards zero. Often the value of tau for which these coefficients are at their stable values is the best one; this should be supported by a low error value for that tau.
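For reference, the quantity traced here is the minimizer of the standard ridge objective (with $\tau$ the regularisation constant):
$$\min_{\mathbf{w}} \; \lVert \mathbf{y} - X\mathbf{w} \rVert_2^2 + \tau \lVert \mathbf{w} \rVert_2^2$$
As $\tau$ grows, the penalty term dominates and the weights shrink towards zero, which is exactly the drift seen in the ridge trace.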
Step10: CLeastAngleRegression requires the features to be normalized with a zero mean and unit norm. Hence we use two preprocessors
Step11: Next we train on the data. Keeping in mind that we had 10 attributes/dimensions in our data, let us have a look at the size of the LASSO path, which is obtained readily using get_path_size().
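For context, the LASSO problem that LARS traces can be written in its standard constrained form:
$$\min_{\boldsymbol{\beta}} \; \tfrac{1}{2}\lVert \mathbf{y} - X\boldsymbol{\beta} \rVert_2^2 \quad \text{subject to} \quad \sum_i \lvert \beta_i \rvert \le t$$
Each step in the path corresponds to a variable entering (or leaving) the active set as the bound $t$ is relaxed.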
Step12: The weights generated ($\beta_i$) and their norm ($\sum_i|\beta_i|$) change with each step. This happens whenever a new variable is added to the path. To get the weights at each of these steps, the get_w_for_var() method is used. The argument is the index of the variable, which should be in the range [0, path_size).
Step13: Each color in the plot represents a coefficient and the vertical lines denote steps. It is clear that the weights are piecewise linear functions of the norm.
Step14: As seen from the example KRR (using the kernel trick) can apply techniques for linear regression in the feature space to perform nonlinear regression in the input space.
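For reference, the standard KRR predictor takes the form
$$\hat{y}(\mathbf{x}) = \sum_{i=1}^{n} \alpha_i \, k(\mathbf{x}, \mathbf{x}_i), \qquad \boldsymbol{\alpha} = (K + \tau I)^{-1}\mathbf{y},$$
where $K$ is the kernel matrix over the training points; with a Gaussian kernel this produces the smooth nonlinear fit shown above.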
Step15: Let us compare the time taken by the two different models, similar to the comparison done in section 6 of [1]. The Boston Housing Dataset is used.
|
12,605
|
<ASSISTANT_TASK:>
Python Code:
import requests
from bs4 import BeautifulSoup
from IPython.display import Pretty
import pprint
pp = pprint.PrettyPrinter(indent=4)
url = 'http://seclists.org/fulldisclosure/2017/Jan'
r = requests.get(url)
raw = r.text
Pretty(raw)
raw = raw.replace('<a name="begin">', '<a name="begin"></a>')
soup = BeautifulSoup(raw, 'html5lib')
begin = soup.find(attrs={'name':'begin'}) #beginning of msg links
items = begin.find_next('ul').find_all('li', recursive=False)
pp.pprint(items)
import re
def read_messages(items, messages, idroot, parent):
for li in items:
msg = li.find('a')
if msg == None:
#some messages just read "Possible follow-ups" with no link--skip
continue
id = idroot + msg['href']
title = msg.text
whowhen = li.find('em').text
rx = re.compile('(.+) \((.+)\)')
m = rx.search(whowhen)
who = m.group(1)
when = m.group(2)
messages.append({
'index': msg['href'],
'id': id,
'title': title,
'parent': parent,
'author': who,
'date': when
})
replies = li.find('ul')
if replies != None:
read_messages(replies.find_all('li', recursive=False), messages, idroot, id)
return messages
messages = []
idroot = '2017_Jan_'
read_messages(items, messages, idroot, None)
pp.pprint(messages)
import pendulum
message = messages[4]
reply_url = url + '/' + message['index']
r = requests.get(reply_url)
reply = r.text
start = reply.index('<!--X-Head-of-Message-->') + 24
end = reply.index('<!--X-Head-of-Message-End-->')
head = reply[start:end]
soup = BeautifulSoup(head, 'html5lib')
ems = soup.find_all('em')
for em in ems:
if em.text == 'From':
author = em.next_sibling
#list obfuscates email by replacing @ with ' () ' and removing periods from domain name
if author.startswith(': '):
author = author[2:]
author = author.replace(' () ', '@')
at = author.find('@')
author = author[:at] + author[at:].replace(' ', '.')
message['author'] = author
elif em.text == 'Date':
date = em.next_sibling
if date.startswith(': '):
date = date[2:]
message['date'] = str(pendulum.parse(date).in_timezone('UTC'))
print(message)
import csv
import sys
output = csv.writer(sys.stdout)
output.writerow(['id', 'title', 'date', 'author', 'parent'])
for x in messages:
output.writerow([x['id'],
x['title'],
x['date'],
x['author'],
x['parent']])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The generated HTML from seclists.org contains an unterminated anchor tag, so to make things easier for BeautifulSoup's parser, we'll just replace it manually. This particular tag is a good locator for the messages section, so it's worth making sure it's valid HTML.
Step2: We end up with an array of <li> tags, but note that child tags are encoded as embedded <ul> portions.
Step3: The index file summarizes each reply's details, but we would like the full author with email and a complete timestamp. To do this, we'll need to delve into the actual message HTML. For this notebook, we'll download the file again, but in actual usage, we would just open the existing file.
Step4: Finally, let's put this data into CSV format.
|
12,606
|
<ASSISTANT_TASK:>
Python Code:
import openpnm as op
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(10)
from ipywidgets import interact, IntSlider
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
phys.models['throat.entry_pressure']
#NBVAL_IGNORE_OUTPUT
alg = op.algorithms.OrdinaryPercolation(network=net)
alg.setup(phase=water, pore_volume='pore.volume', throat_volume='throat.volume')
alg.set_inlets(pores=net.pores('left'))
alg.set_outlets(pores=net.pores('right'))
alg.run(points=1000)
alg.plot_intrusion_curve()
plt.show()
data = alg.get_intrusion_data()
mask = np.logical_and(np.asarray(data.Snwp) > 0.0 , np.asarray(data.Snwp) < 1.0)
mask = np.argwhere(mask).flatten()
pressures = np.asarray(data.Pcap)[mask]
def plot_saturation(step):
arg = mask[step]
Pc = np.ceil(data.Pcap[arg])
sat = np.around(data.Snwp[arg], 3)
is_perc = alg.is_percolating(Pc)
pmask = alg['pore.invasion_pressure'] <= Pc
im = pmask.reshape([N, N])
fig, ax = plt.subplots(figsize=[5, 5])
ax.imshow(im, cmap='Blues');
title = ('Capillary Pressure: '+str(Pc)+' Saturation: '+str(sat)+
' Percolating : '+str(is_perc))
plt.title(title)
plt.show()
#NBVAL_IGNORE_OUTPUT
perc_thresh = alg.get_percolation_threshold()
thresh_step = np.argwhere(np.asarray(pressures) == perc_thresh)
interact(plot_saturation, step=IntSlider(min=0, max=len(mask)-1, step=1, value=thresh_step));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a 2D Cubic network with a standard PSD, define the phase as Water, and use Standard physics, which implements the Washburn capillary pressure relation for throat entry pressure.
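For reference, the Washburn relation gives the entry pressure of a cylindrical throat of radius $r$ as
$$P_c = \frac{-2\sigma \cos\theta}{r},$$
where $\sigma$ is the surface tension and $\theta$ the contact angle of the fluid pair.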
Step2: We can check the model by looking at the model dict on the phys object
Step3: Now set up and run the algorithm, choosing the left and right sides of the network as inlets and outlets respectively. Because we did not set up the network with zero-volume boundary pores, a small warning is given: the starting saturation for the algorithm is not zero. This is fine, however, and because the network is quite large the starting saturation is actually quite close to zero.
Step4: The algorithm completes very quickly and the invading phase saturation can be plotted versus the applied boundary pressure.
Step5: As the network is 2D and cubic we can easily plot the invading phase configuration at the different invasion steps
|
12,607
|
<ASSISTANT_TASK:>
Python Code:
import sys
import logging
# Import the GEM-PRO class
from ssbio.pipeline.gempro import GEMPRO
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROJECT = 'ssbio_protein_properties'
LIST_OF_GENES = ['b1276', 'b0118']
# Create the GEM-PRO project
my_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, genes_list=LIST_OF_GENES, pdb_file_type='pdb')
# UniProt mapping
my_gempro.uniprot_mapping_and_metadata(model_gene_source='ENSEMBLGENOME_ID')
print('Missing UniProt mapping: ', my_gempro.missing_uniprot_mapping)
my_gempro.df_uniprot_metadata.head()
# Set representative sequences
my_gempro.set_representative_sequence()
print('Missing a representative sequence: ', my_gempro.missing_representative_sequence)
my_gempro.df_representative_sequences.head()
# Mapping using the PDBe best_structures service
my_gempro.map_uniprot_to_pdb(seq_ident_cutoff=.3)
my_gempro.df_pdb_ranking.head()
# Mapping using BLAST
my_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.7, evalue=0.00001)
my_gempro.df_pdb_blast.head(2)
import pandas as pd
import os.path as op
# Creating manual mapping dictionary for ECOLI I-TASSER models
homology_models = '/home/nathan/projects_archive/homology_models/ECOLI/zhang/'
homology_models_df = pd.read_csv('/home/nathan/projects_archive/homology_models/ECOLI/zhang_data/160804-ZHANG_INFO.csv')
tmp = homology_models_df[['zhang_id','model_file','m_gene']].drop_duplicates()
tmp = tmp[pd.notnull(tmp.m_gene)]
homology_model_dict = {}
for i,r in tmp.iterrows():
homology_model_dict[r['m_gene']] = {r['zhang_id']: {'model_file':op.join(homology_models, r['model_file']),
'file_type':'pdb'}}
my_gempro.get_manual_homology_models(homology_model_dict)
# Creating manual mapping dictionary for ECOLI SUNPRO models
homology_models = '/home/nathan/projects_archive/homology_models/ECOLI/sunpro/'
homology_models_df = pd.read_csv('/home/nathan/projects_archive/homology_models/ECOLI/sunpro_data/160609-SUNPRO_INFO.csv')
tmp = homology_models_df[['sunpro_id','model_file','m_gene']].drop_duplicates()
tmp = tmp[pd.notnull(tmp.m_gene)]
homology_model_dict = {}
for i,r in tmp.iterrows():
homology_model_dict[r['m_gene']] = {r['sunpro_id']: {'model_file':op.join(homology_models, r['model_file']),
'file_type':'pdb'}}
my_gempro.get_manual_homology_models(homology_model_dict)
# Download all mapped PDBs and gather the metadata
my_gempro.pdb_downloader_and_metadata()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
# Requires EMBOSS "pepstats" program
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
# Install using:
# sudo apt-get install emboss
my_gempro.get_sequence_properties()
# Requires SCRATCH installation, replace path_to_scratch with own path to script
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_scratch_predictions(path_to_scratch='scratch',
results_dir=my_gempro.data_dir,
num_cores=4)
my_gempro.find_disulfide_bridges(representatives_only=False)
# Requires DSSP installation
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_dssp_annotations()
# Requires MSMS installation
# See the ssbio wiki for more information: https://github.com/SBRG/ssbio/wiki/Software-Installations
my_gempro.get_msms_annotations()
# for g in my_gempro.genes_with_a_representative_sequence:
# g.protein.representative_sequence.feature_path = '/path/to/new/feature/file.gff'
# Kyte-Doolittle scale for hydrophobicity
kd = { 'A': 1.8,'R':-4.5,'N':-3.5,'D':-3.5,'C': 2.5,
'Q':-3.5,'E':-3.5,'G':-0.4,'H':-3.2,'I': 4.5,
'L': 3.8,'K':-3.9,'M': 1.9,'F': 2.8,'P':-1.6,
'S':-0.8,'T':-0.7,'W':-0.9,'Y':-1.3,'V': 4.2 }
# Use Biopython to calculated hydrophobicity using a set sliding window length
from Bio.SeqUtils.ProtParam import ProteinAnalysis
window = 7
for g in my_gempro.genes_with_a_representative_sequence:
# Create a ProteinAnalysis object -- see http://biopython.org/wiki/ProtParam
my_seq = g.protein.representative_sequence.seq_str
analysed_seq = ProteinAnalysis(my_seq)
# Calculate scale
hydrophobicity = analysed_seq.protein_scale(param_dict=kd, window=window)
# Correct list length by prepending and appending "inf" (result needs to be same length as sequence)
for i in range(window//2):
hydrophobicity.insert(0, float("Inf"))
hydrophobicity.append(float("Inf"))
# Add new annotation to the representative sequence's "letter_annotations" dictionary
g.protein.representative_sequence.letter_annotations['hydrophobicity-kd'] = hydrophobicity
# Printing all global protein properties
from pprint import pprint
# Only looking at 2 genes for now, remove [:2] to gather properties for all
for g in my_gempro.genes_with_a_representative_sequence[:2]:
repseq = g.protein.representative_sequence
repstruct = g.protein.representative_structure
repchain = g.protein.representative_chain
print('Gene: {}'.format(g.id))
print('Number of structures: {}'.format(g.protein.num_structures))
print('Representative sequence: {}'.format(repseq.id))
print('Representative structure: {}'.format(repstruct.id))
print('----------------------------------------------------------------')
print('Global properties of the representative sequence:')
pprint(repseq.annotations)
print('----------------------------------------------------------------')
print('Global properties of the representative structure:')
pprint(repstruct.chains.get_by_id(repchain).seq_record.annotations)
print('****************************************************************')
print('****************************************************************')
print('****************************************************************')
# Looking at all features
for g in my_gempro.genes_with_a_representative_sequence[:2]:
g.id
# UniProt features
[x for x in g.protein.representative_sequence.features]
# Catalytic site atlas features
for s in g.protein.structures:
if s.structure_file:
for c in s.mapped_chains:
if s.chains.get_by_id(c).seq_record:
if s.chains.get_by_id(c).seq_record.features:
[x for x in s.chains.get_by_id(c).seq_record.features]
metal_info = []
for g in my_gempro.genes:
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
res_info = g.protein.get_residue_annotations(f.location.end, use_representatives=True)
res_info['gene_id'] = g.id
res_info['seq_id'] = g.protein.representative_sequence.id
res_info['struct_id'] = g.protein.representative_structure.id
res_info['chain_id'] = g.protein.representative_chain
metal_info.append(res_info)
cols = ['gene_id', 'seq_id', 'struct_id', 'chain_id',
'seq_residue', 'seq_resnum', 'struct_residue','struct_resnum',
'seq_SS-sspro','seq_SS-sspro8','seq_RSA-accpro','seq_RSA-accpro20',
'struct_SS-dssp','struct_RSA-dssp', 'struct_ASA-dssp',
'struct_PHI-dssp', 'struct_PSI-dssp', 'struct_CA_DEPTH-msms', 'struct_RES_DEPTH-msms']
pd.DataFrame.from_records(metal_info, columns=cols).set_index(['gene_id', 'seq_id', 'struct_id', 'chain_id', 'seq_resnum'])
for g in my_gempro.genes:
# Gather residue numbers
metal_binding_structure_residues = []
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
res_info = g.protein.get_residue_annotations(f.location.end, use_representatives=True)
metal_binding_structure_residues.append(res_info['struct_resnum'])
print(metal_binding_structure_residues)
# Display structure
view = g.protein.representative_structure.view_structure()
g.protein.representative_structure.add_residues_highlight_to_nglview(view=view, structure_resnums=metal_binding_structure_residues)
view
# Run all sequence to structure alignments
for g in my_gempro.genes:
for s in g.protein.structures:
g.protein.align_seqprop_to_structprop(seqprop=g.protein.representative_sequence, structprop=s)
metal_info_compared = []
for g in my_gempro.genes:
for f in g.protein.representative_sequence.features:
if 'metal' in f.type.lower():
for s in g.protein.structures:
for c in s.mapped_chains:
res_info = g.protein.get_residue_annotations(seq_resnum=f.location.end,
seqprop=g.protein.representative_sequence,
structprop=s, chain_id=c,
use_representatives=False)
res_info['gene_id'] = g.id
res_info['seq_id'] = g.protein.representative_sequence.id
res_info['struct_id'] = s.id
res_info['chain_id'] = c
metal_info_compared.append(res_info)
cols = ['gene_id', 'seq_id', 'struct_id', 'chain_id',
'seq_residue', 'seq_resnum', 'struct_residue','struct_resnum',
'seq_SS-sspro','seq_SS-sspro8','seq_RSA-accpro','seq_RSA-accpro20',
'struct_SS-dssp','struct_RSA-dssp', 'struct_ASA-dssp',
'struct_PHI-dssp', 'struct_PSI-dssp', 'struct_CA_DEPTH-msms', 'struct_RES_DEPTH-msms']
pd.DataFrame.from_records(metal_info_compared, columns=cols).sort_values(by=['seq_resnum','struct_id','chain_id']).set_index(['gene_id','seq_id','seq_resnum','seq_residue','struct_id'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logging
Step2: Initialization
Step3: Mapping gene ID --> sequence
Step4: Mapping representative sequence --> structure
Step5: Homology models
Step6: Downloading and ranking structures
Step7: Computing and storing protein properties
Step8: Additional annotations
Step9: Adding more properties
Step10: Global protein properties
Step11: Local protein properties
Step12: Column definitions
Step13: Comparing features in different structures of the same protein
|
12,608
|
<ASSISTANT_TASK:>
Python Code:
data['Outcomes'] = 'plural'
data['Outcomes'][1] = 'singular'
data
W = ndl.rw(data,M=10)
A = activation(W)
A
pd.DataFrame([data['Outcomes'], A.idxmax(1), A.idxmax(1) == data['Outcomes']], index = ['Truth', 'Prediction', 'Accurate?']).T
np.mean(A.idxmax(1) == data['Outcomes'])
float(sum(data['Frequency'] * (A.idxmax(1) == data['Outcomes']))) / float(sum(data['Frequency']))
def accuracy(data, M):
W = ndl.rw(data, M=M)
A = activation(W)
return np.mean(A.idxmax(1) == data['Outcomes'])
accuracy(data, 10)
np.mean([accuracy(data, M=10) == 1 for i in range(100)])
def population_accuracy(M=10, pop=100):
return np.mean([accuracy(data, M=M) == 1 for i in range(pop)])
MAX_TRIALS = 500
P = {}
P['sg / pl'] = [population_accuracy(M=i) for i in range(1,MAX_TRIALS)]
import matplotlib.pyplot as plt
plt.plot(range(1,len(P['sg / pl'])+1), P['sg / pl'], '-', linewidth=2)
plt.title('Singular / plural distinction')
plt.xlabel('Trial Number')
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
data['Outcomes'] = 'notdual'
data['Outcomes'][2] = 'dual'
data
P['du / non-du'] = [population_accuracy(M=i) for i in range(1,MAX_TRIALS)]
plt.plot(range(1,len(P['du / non-du'])+1), P['du / non-du'], '-', linewidth=2)
plt.title('Dual / non-dual distinction')
plt.xlabel('Trial Number')
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
data['Outcomes'] = 'plural'
data['Outcomes'][1] = 'singular'
data['Outcomes'][2] = 'dual'
P['sg / du / pl'] = [population_accuracy(M=i) for i in range(1,MAX_TRIALS)]
plt.plot(range(1,len(P['sg / du / pl'])+1), P['sg / du / pl'], '-', linewidth=2)
plt.title('Singular / dual / plural distinction')
plt.xlabel('Trial Number')
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
data['Outcomes'] = 'plural'
data['Outcomes'][1] = 'singular'
data['Outcomes'][2] = 'dual'
data['Outcomes'][3] = 'trial'
P['sg / du / tr / pl'] = [population_accuracy(M=i) for i in range(1,MAX_TRIALS)]
plt.plot(range(1,len(P['sg / du / tr / pl'])+1), P['sg / du / tr / pl'], '-', linewidth=2)
plt.title('Singular / dual / trial / plural distinction')
plt.xlabel('Trial Number')
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
data['Outcomes'] = 'plural'
data['Outcomes'][1] = 'singular'
data['Outcomes'][2] = 'dual'
data['Outcomes'][3] = 'trial'
data['Outcomes'][4] = '4ial'
P['sg / du / tr / qu / pl'] = [population_accuracy(M=i) for i in range(1,MAX_TRIALS)]
plt.plot(range(1,len(P['sg / du / tr / qu / pl'])+1), P['sg / du / tr / qu / pl'], '-', linewidth=2)
plt.title('Singular / dual / trial / quadral / plural distinction')
plt.xlabel('Trial Number')
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
for n in ('sg / pl', 'sg / du / pl', 'sg / du / tr / pl', 'du / non-du', 'sg / du / tr / qu / pl'):
plt.plot(range(1,len(P[n])+1), P[n], '-', linewidth=1.5, label=n)
plt.suptitle('Proportion of 100 learners who label all 15 items correctly')
plt.xlabel('Trials')
plt.legend(loc=(-0.55,0.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With these associations, how many of the 15 items will the learner correctly label?
Step2: How often are they correct (using relative item frequencies)?
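The frequency-weighted accuracy computed below is
$$\text{accuracy} = \frac{\sum_i f_i \,\mathbb{1}[\hat{o}_i = o_i]}{\sum_i f_i},$$
where $f_i$ is the frequency of item $i$, $o_i$ its true outcome, and $\hat{o}_i$ the learner's highest-activation outcome.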
Step3: For a population of 100 learners trying to acquire the number system, what proportion are able to successfully label all 15 items given M trials?
Step4: Dual / non-dual distinction
Step5: Singular, dual, plural
Step6: Singular, dual, trial, plural
Step7: Singular, dual, trial, quadral, plural
Step8: A prediction is that the typology of number systems should roughly correspond to how learnable each type of number system is
|
12,609
|
<ASSISTANT_TASK:>
Python Code:
from scipy.signal import convolve2d
img = color.rgb2gray(io.imread('../images/snakes.png'))
# Reduce all lines to one pixel thickness
snakes = morphology.skeletonize(img < 1)
# Find pixels with only one neighbor
corners = convolve2d(snakes, [[1, 1, 1],
[1, 0, 1],
[1, 1, 1]], mode='same') == 1
corners = corners & snakes
# Those are the start and end positions of the segments
y, x = np.where(corners)
plt.figure(figsize=(10, 5))
plt.imshow(img, cmap=plt.cm.gray, interpolation='nearest')
plt.scatter(x, y)
plt.axis('off')
plt.show()
image = io.imread("../images/round_pill.jpg")
image_equalized = exposure.equalize_adapthist(image)
edges = filters.canny(color.rgb2gray(image_equalized))
f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(15, 8))
ax0.imshow(image)
ax1.imshow(image_equalized)
ax2.imshow(edges, cmap='gray');
from skimage import measure
coords = np.column_stack(np.nonzero(edges))
model, inliers = measure.ransac(coords, measure.CircleModel,
min_samples=3, residual_threshold=1,
max_trials=500)
print('Circle parameters:', model.params)
row, col, radius = model.params
f, ax = plt.subplots()
ax.imshow(image, cmap='gray');
circle = plt.Circle((col, row), radius=radius, edgecolor='green', linewidth=2, fill=False)
ax.add_artist(circle);
from skimage import restoration, color, io, filter as filters, morphology
image = color.rgb2gray(io.imread('../images/fingers.png'))
denoised = restoration.denoise_tv_bregman(image, 1)
edges = filters.canny(denoised, low_threshold=0.01, high_threshold=0.21)
fig, axes = plt.subplots(1, 2, figsize=(15, 10))
axes[0].imshow(denoised, cmap='gray')
axes[1].imshow(edges, cmap='gray')
for ax in axes:
ax.set_axis_off()
from skimage import data
plt.imshow(data.coins(), cmap='gray');
from scipy import ndimage
from skimage import segmentation
image = data.coins()
equalized = exposure.equalize_adapthist(image)
edges = equalized > filters.threshold_otsu(equalized)
edges = segmentation.clear_border(edges)
edges = morphology.closing(edges, morphology.square(3))
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(image, cmap='gray')
ax1.imshow(edges, cmap='gray');
labels = measure.label(edges)
for region in measure.regionprops(labels):
if region.area < 200:
rows, cols = region.coords.T
labels[rows, cols] = 0
print("Number of coins:", len(np.unique(labels)) - 1)
out = color.label2rgb(labels, image, bg_label=0)
plt.imshow(out);
from skimage import img_as_float
image = img_as_float(io.imread('../images/color-wheel.jpg'))
blue_lab = color.rgb2lab([[[0, 0, 1.]]])
light_blue_lab = color.rgb2lab([[[0, 1, 1.]]])
red_lab = color.rgb2lab([[[1, 0, 0.]]])
image_lab = color.rgb2lab(image)
distance_blue = color.deltaE_cmc(blue_lab, image_lab, kL=0.5, kC=0.5)
distance_light_blue = color.deltaE_cmc(light_blue_lab, image_lab, kL=0.5, kC=0.5)
distance_red = color.deltaE_cmc(red_lab, image_lab, kL=0.5, kC=0.5)
distance = distance_blue + distance_light_blue - distance_red
distance = exposure.rescale_intensity(distance)
image_blue = image.copy()
image_blue[distance > 0.3] = 0
f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(10, 5))
ax0.imshow(image)
ax1.imshow(distance, cmap='gray')
ax2.imshow(image_blue)
plt.show()
image = io.imread("../images/hand-coin.jpg")
label_image = segmentation.slic(image, n_segments=2)
label_image = measure.label(label_image)
regions = measure.regionprops(label_image)
areas = [r.area for r in regions]
ix = np.argsort(areas)
hand = regions[ix[-1]]
coin = regions[ix[-2]]
selected_labels = np.zeros_like(image[..., 0], dtype=np.uint8)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(8, 8))
for n, region in enumerate([hand, coin]):
selected_labels[region.coords[:, 0], region.coords[:, 1]] = n + 2
y0, x0 = region.centroid
orientation = region.orientation
x1 = x0 + np.cos(orientation) * 0.5 * region.major_axis_length
y1 = y0 - np.sin(orientation) * 0.5 * region.major_axis_length
x2 = x0 - np.sin(orientation) * 0.5 * region.minor_axis_length
y2 = y0 - np.cos(orientation) * 0.5 * region.minor_axis_length
ax.plot((x0, x1), (y0, y1), '-r', linewidth=2.5)
ax.plot((x0, x2), (y0, y2), '-r', linewidth=2.5)
ax.plot(x0, y0, '.g', markersize=15)
image_label_overlay = color.label2rgb(selected_labels, image=image, bg_label=0)
ax.imshow(image_label_overlay, cmap='gray')
ax.axis('image')
plt.show()
%reload_ext load_style
%load_style ../themes/tutorial.css
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameters of a pill
Step2: Viscous fingers
Step3: Counting coins
Step4: Color wheel
Step5: Hand-coin
Step6:
|
12,610
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1, 2, 3, 4])
c = np.array([[1, 2, 3, 4],[4, 5, 6, 7], [7, 8, 9, 10]])
c
c.shape
d = a.reshape((2,2))
d
a
a[1] = 100
d
a.dtype
b=np.array([[1, 2, 3, 4],[4, 5, 6, 7], [7, 8, 9, 10]], dtype=np.float)
b
np.arange(0, 1, 0.1)
np.linspace(0, 1, 12)
np.logspace(0, 2, 20)
a = np.arange(10)
a[5]
a[:5] # omitting the start index means starting from a[0]
b = a[3:7]
b
b[2] = -10 # set the element at index 2 of b to -10
a # the element at index 5 of a has also been changed to -10
a= np.arange(0, 60, 10).reshape(-1, 1)+ np.arange(0, 6)
a
a[(0,1,2,3,4),(1,2,3,4,5)]
x = np.linspace(0, 2*np.pi, 10)
x
y = np.sin(x) # compute the sine of each element of x, returning a new array of the same size
y
import time
import math
import numpy as np
x = [i * 0.001 for i in range(1000000)]
start = time.perf_counter()
for i, t in enumerate(x):
x[i] = math.sin(t)
print("math.sin:", time.perf_counter() - start)
x = [i * 0.001 for i in range(1000000)]
x = np.array(x)
start = time.perf_counter()
np.sin(x,x)
print("numpy.sin:", time.perf_counter() - start)
# 输出
# math.sin: 1.15426932753
# numpy.sin: 0.0882399858083
a = np.arange(0,12,0.5).reshape(4,-1)
np.savetxt("a.txt", a) # 缺省按照'%.18e'格式保存数据,以空格分隔
np.loadtxt("a.txt")
np.savetxt("a.txt", a, fmt="%d", delimiter=",") #改为保存为整数,以逗号分隔
np.loadtxt("a.txt",delimiter=",") # 读入的时候也需要指定逗号分隔
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The size of an array can be obtained through its shape attribute:
Step2: Using the array's reshape method, we can create a new array with different dimensions; the shape of the original array stays unchanged:
Step3: Arrays a and d actually share the same underlying data memory, so modifying an element of either array also changes the contents of the other:
Step4: The element type of an array can be obtained through its dtype attribute. In the example above, the elements of the parameter sequence are all integers, so the created array's element type is also integer, a 32-bit long integer. The element type can be specified at creation time with the dtype parameter
Step5: The examples above all first create a Python sequence and then convert it to an array with the array function, which is clearly inefficient. NumPy therefore provides many functions dedicated to creating arrays. Each of the functions below takes some keyword arguments; see the function documentation for specific usage.
Step6: The linspace function creates a one-dimensional array from a start value, an end value and a number of elements; the endpoint keyword controls whether the end value is included, and the default is to include it
Step7: The logspace function is similar to linspace, except it creates a geometric progression; the example below produces a 20-element geometric progression from 1 (10^0) to 100 (10^2)
Step8: 1.2 Accessing elements
Step9: Unlike Python's list sequences, a new array obtained through a range of indices is a view of the original array. It shares the same block of data space with the original array:
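A minimal sketch (reusing the array a from above) of how to obtain an independent copy rather than a view:
b = a[3:7].copy() # copy() allocates new storage
b[2] = -10 # modifying the copy leaves a unchanged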
Step10: 1.3 Multidimensional arrays
Step11: The subscript used to access the array is still a tuple with two elements; each element of the tuple is an integer sequence corresponding to axis 0 and axis 1 of the array. Two integers taken from corresponding positions of the two sequences form a subscript: a[0,1], a[1,2], ..., a[4,5].
Step12: 2 ufunc operations
Step13: I used the small program below to compare the computation speed of numpy's sin and the Python standard library's math.sin:
Step14: The example above shows how to use the math functions in the math and numpy libraries most efficiently. Since each has its strengths and weaknesses, it is not recommended to import everything with *; instead, load with import numpy as np, so that we can choose the appropriate function call as needed.
|
12,611
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# print type(full_data)
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
# display(outcomes.head())
display(data.head())
def accuracy_score(truth, pred):
"""Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
# predictions = pd.Series(np.ones(len(outcomes), dtype = int))
# print accuracy_score(outcomes, predictions)
# print predictions
def predictions_0(data):
"""Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
# print data.iterrows()
# print predictions
print(accuracy_score(outcomes, predictions))
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
"""Model with one feature:
- Predict a passenger survived if they are female.
"""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append((passenger['Sex'] == 'female'))
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print(accuracy_score(outcomes, predictions))
# survival_stats(data, outcomes, 'Age', ["Sex == 'female'"])
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
"""Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
"""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if((passenger['Age'] < 10)):
predictions.append(1)
else:
predictions.append((passenger['Sex'] == 'female'))
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print(accuracy_score(outcomes, predictions))
# survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
survival_stats(data, outcomes, 'Pclass', ["Sex == 'female'", "Age > 38","Age < 60"])
survival_stats(data, outcomes, 'Embarked', ["Sex == 'male'","Pclass == 3","Parch == 0","SibSp == 0"])
def predictions_3(data):
"""Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if(passenger['Sex'] == 'female'):
if((passenger['Pclass'] == 3) and (passenger['Age'] >= 38) and (passenger['Age'] <= 60) ):
predictions.append(0)
else:
predictions.append(1)
else:
if((passenger['Age'] < 10)):
predictions.append(1)
else :
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print(accuracy_score(outcomes, predictions))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
12,612
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
class GaussianCopulaTriL(tfd.TransformedDistribution):
"""Takes a location, and lower triangular matrix for the Cholesky factor."""
def __init__(self, loc, scale_tril):
super(GaussianCopulaTriL, self).__init__(
distribution=tfd.MultivariateNormalTriL(
loc=loc,
scale_tril=scale_tril),
bijector=tfb.NormalCDF(),
validate_args=False,
name="GaussianCopulaTriLUniform")
# Plot an example of this.
unit_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(unit_interval, unit_interval)
coordinates = np.concatenate(
[x_grid[..., np.newaxis],
y_grid[..., np.newaxis]], axis=-1)
pdf = GaussianCopulaTriL(
loc=[0., 0.],
scale_tril=[[1., 0.8], [0., 0.6]],
).prob(coordinates)
# Plot its density.
plt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);
a = 2.0
b = 2.0
gloc = 0.
gscale = 1.
x = tfd.Kumaraswamy(a, b)
y = tfd.Gumbel(loc=gloc, scale=gscale)
# Plot the distributions, assuming independence
x_axis_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
y_axis_interval = np.linspace(-2., 3., num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(x_axis_interval, y_axis_interval)
pdf = x.prob(x_grid) * y.prob(y_grid)
# Plot its density
plt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);
class WarpedGaussianCopula(tfd.TransformedDistribution):
"""Application of a Gaussian Copula on a list of target marginals.
This implements an application of a Gaussian Copula. Given [x_0, ... x_n]
which are distributed marginally (with CDF) [F_0, ... F_n],
`GaussianCopula` represents an application of the Copula, such that the
resulting multivariate distribution has the above specified marginals.
The marginals are specified by `marginal_bijectors`: these are
bijectors whose `inverse` encodes the CDF and `forward` the inverse CDF.
`block_sizes` is a 1-D Tensor to determine splits for `marginal_bijectors`;
its length should be the same as the length of `marginal_bijectors`.
See tfb.Blockwise for details.
"""
def __init__(self, loc, scale_tril, marginal_bijectors, block_sizes=None):
super(WarpedGaussianCopula, self).__init__(
distribution=GaussianCopulaTriL(loc=loc, scale_tril=scale_tril),
bijector=tfb.Blockwise(bijectors=marginal_bijectors,
block_sizes=block_sizes),
validate_args=False,
name="GaussianCopula")
# Create our coordinates:
coordinates = np.concatenate(
[x_grid[..., np.newaxis], y_grid[..., np.newaxis]], -1)
def create_gaussian_copula(correlation):
# Use Gaussian Copula to add dependence.
return WarpedGaussianCopula(
loc=[0., 0.],
scale_tril=[[1., 0.], [correlation, tf.sqrt(1. - correlation ** 2)]],
# These encode the marginals we want. In this case we want X_0 has
# Kumaraswamy marginal, and X_1 has Gumbel marginal.
marginal_bijectors=[
tfb.Invert(tfb.KumaraswamyCDF(a, b)),
tfb.Invert(tfb.GumbelCDF(loc=0., scale=1.))])
# Note that the zero case will correspond to independent marginals!
correlations = [0., -0.8, 0.8]
copulas = []
probs = []
for correlation in correlations:
copula = create_gaussian_copula(correlation)
copulas.append(copula)
probs.append(copula.prob(coordinates))
# Plot its density
for correlation, copula_prob in zip(correlations, probs):
plt.figure()
plt.contour(x_grid, y_grid, copula_prob, 100, cmap=plt.cm.jet)
plt.title('Correlation {}'.format(correlation))
def kumaraswamy_pdf(x):
return tfd.Kumaraswamy(a, b).prob(np.float32(x))
def gumbel_pdf(x):
return tfd.Gumbel(gloc, gscale).prob(np.float32(x))
copula_samples = []
for copula in copulas:
copula_samples.append(copula.sample(10000))
plot_rows = len(correlations)
plot_cols = 2 # for 2 densities [kumarswamy, gumbel]
fig, axes = plt.subplots(plot_rows, plot_cols, sharex='col', figsize=(18,12))
# Let's marginalize out on each, and plot the samples.
for i, (correlation, copula_sample) in enumerate(zip(correlations, copula_samples)):
k = copula_sample[..., 0].numpy()
g = copula_sample[..., 1].numpy()
_, bins, _ = axes[i, 0].hist(k, bins=100, density=True)
axes[i, 0].plot(bins, kumaraswamy_pdf(bins), 'r--')
axes[i, 0].set_title('Kumaraswamy from Copula with correlation {}'.format(correlation))
_, bins, _ = axes[i, 1].hist(g, bins=100, density=True)
axes[i, 1].plot(bins, gumbel_pdf(bins), 'r--')
axes[i, 1].set_title('Gumbel from Copula with correlation {}'.format(correlation))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An introduction to copulas
Step3: [copula](https
Step4: The real power of such models, however, comes from using the probability integral transform to apply a copula to arbitrary R.V.s: we can specify arbitrary marginal distributions and join them together with the copula.
Step6: A joint distribution with different marginals
Step7: Finally, let's put this Gaussian copula to use. We take the Cholesky factor $\begin{bmatrix}1 & 0\\ \rho & \sqrt{1-\rho^2}\end{bmatrix}$, corresponding to variance 1 and correlation $\rho$ for the multivariate normal.
Step8: Lastly, let's verify that we actually recover the marginals we asked for.
|
12,613
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
# Set up code checking
import os
if not os.path.exists("../input/candy.csv"):
os.symlink("../input/data-for-datavis/candy.csv", "../input/candy.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex4 import *
print("Setup Complete")
# Path of the file to read
candy_filepath = "../input/candy.csv"
# Fill in the line below to read the file into a variable candy_data
candy_data = ____
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
#%%RM_IF(PROD)%%
candy_data = pd.read_csv(candy_filepath, index_col="id")
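# index_col="id" makes the id column the row index rather than a data column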
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
# Print the first five rows of the data
____ # Your code here
# Fill in the line below: Which candy was more popular with survey respondents:
# '3 Musketeers' or 'Almond Joy'? (Please enclose your answer in single quotes.)
more_popular = ____
# Fill in the line below: Which candy has higher sugar content: 'Air Heads'
# or 'Baby Ruth'? (Please enclose your answer in single quotes.)
more_sugar = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
more_popular = '3 Musketeers'
more_sugar = 'Air Heads'
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
# Scatter plot showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_3.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.a.hint()
#_COMMENT_IF(PROD)_
step_3.a.solution_plot()
#_COMMENT_IF(PROD)_
step_3.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
# Scatter plot w/ regression line showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_4.a.check()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution_plot()
#_COMMENT_IF(PROD)_
step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
# Scatter plot showing the relationship between 'pricepercent', 'winpercent', and 'chocolate'
____ # Your code here
# Check your answer
step_5.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'], hue=candy_data['chocolate'])
step_5.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
#step_5.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_5.hint()
#_COMMENT_IF(PROD)_
step_5.solution_plot()
# Color-coded scatter plot w/ regression lines
____ # Your code here
# Check your answer
step_6.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
step_6.a.assert_check_failed()
#%%RM_IF(PROD)%%
sns.lmplot(x="pricepercent", y="winpercent", hue="chocolate", data=candy_data)
step_6.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_6.a.hint()
#_COMMENT_IF(PROD)_
step_6.a.solution_plot()
#_COMMENT_IF(PROD)_
step_6.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_6.b.solution()
# Scatter plot showing the relationship between 'chocolate' and 'winpercent'
____ # Your code here
# Check your answer
step_7.a.check()
#%%RM_IF(PROD)%%
sns.swarmplot(x=candy_data['chocolate'], y=candy_data['winpercent'])
step_7.a.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['chocolate'], y=candy_data['sugarpercent'])
#step_7.a.assert_check_failed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['fruity'], y=candy_data['winpercent'])
#step_7.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_7.a.hint()
#_COMMENT_IF(PROD)_
step_7.a.solution_plot()
#_COMMENT_IF(PROD)_
step_7.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_7.b.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
Step2: Step 1
Step3: Step 2
Step4: The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns
Step5: Step 3
Step6: Part B
Step7: Step 4
Step8: Part B
Step9: Step 5
Step10: Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step!
Step 6
Step11: Part B
Step12: Step 7
Step13: Part B
|
12,614
|
<ASSISTANT_TASK:>
Python Code:
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
cs=3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]
len(idx)//3, len(c1_dat), len(c2_dat), len(c3_dat), len(c4_dat)
[indices_char[x] for xs in (c1_dat[-2:], c2_dat[-2:], c3_dat[-2:]) for x in xs]
idx[-16:], c1_dat[-2:], c2_dat[-2:], c3_dat[-2:], c4_dat[-2:]
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
y = np.stack(c4_dat[:-2])
x1[:4], x2[:4], x3[:4]
y[:4]
x1.shape, y.shape
n_fac = 42
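# embedding_input returns the symbolic input plus its flattened embedding:
# one character index becomes a dense n_fac-dimensional vector.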
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
c1_hidden = dense_in(c1)
dense_hidden = Dense(n_hidden, activation='tanh')
c2_dense = dense_in(c2)
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
dense_out = Dense(vocab_size, activation='softmax')
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr.set_value(0.000001)
model.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.01)
model.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.000001)
model.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.01)
model.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
model_path = "data/rnn/models/"
%mkdir -p $model_path
model.save_weights(model_path+'model1.h5')
model.load_weights(model_path+'model1.h5')
cs=8
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in xrange(cs)]
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
[xs[n][:cs] for n in range(cs)]
y[:cs]
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
hidden = dense_in(c_ins[0][1])
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
c_out = dense_out(hidden)
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12)
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
model.save_weights(model_path+'model2.h5')
model.load_weights(model_path+'model2.h5')
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
model.save_weights(model_path+'model3.h5')
model.load_weights(model_path+'model3.h5')
#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
# for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
len(ys), ys[0].shape
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
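# Collecting an output at every step trains the model to predict the next
# character after each prefix, not only after the full 8-character sequence.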
model = Model([inp1]+[c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12)
ys[0].shape
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
n_hidden, n_fac, cs, vocab_size
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity', return_sequences=True),
TimeDistributed(Dense(vocab_size, activation='softmax'))
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape, ys[0].shape
x_rnn=np.stack(xs, axis=1)
y_rnn=np.atleast_3d(np.stack(ys, axis=1)) # only need to expand dims on ys if fit was not called, above
x_rnn.shape, y_rnn.shape
model.fit(x_rnn, y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
model.save_weights(model_path+'model5.h5')
model.load_weights(model_path+'model5.h5')
model=Sequential([
SimpleRNN(n_hidden, activation='relu', inner_init='identity',
input_shape=(cs, vocab_size), return_sequences=True),
TimeDistributed(Dense(vocab_size, activation='softmax'))
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
oh_ys = [to_categorical(y, vocab_size) for y in ys]
oh_y_rnn=np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(x, vocab_size) for x in xs]
oh_x_rnn=np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
model.save_weights(model_path+'model6.h5')
model.load_weights(model_path+'model6.h5')
bs=64
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,cs)),
BatchNormalization(),
LSTM(n_hidden, activation='relu', return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax'))
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.save_weights(model_path+'model7.h5')
model.load_weights(model_path+'model7.h5')
n_input = vocab_size
n_output = vocab_size
def init_wgts(rows, cols):
scale = math.sqrt(2./rows)  # force float division under Python 2
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
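# theano.scan will call `step` once per character, threading the hidden
# state h from one call into the next.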
[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-lr*g for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 2000 == 1999:
print ("Error:{:.3f}".format(err/2000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
def sigmoid(x): return 1/(1+np.exp(-x))
def sigmoid_d(x):
output = sigmoid(x)
return output * (1-output)
def relu(x): return np.maximum(0., x)
def relu_d(x): return (x > 0.)*1.
relu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))
def dist(a,b): return pow(a-b,2)
def dist_d(a,b): return 2*(a-b)
import pdb
eps = 1e-7
def x_entropy(pred, actual):
return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))
def x_entropy_d(pred, actual): return -actual/pred
def softmax(x): return np.exp(x)/np.exp(x).sum()
def softmax_d(x):
sm = softmax(x)
res = np.expand_dims(-sm,-1)*sm
res[np.diag_indices_from(res)] = sm*(1-sm)
return res
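# softmax_d above returns the full Jacobian: -s_i*s_j off the diagonal
# and s_i*(1-s_i) on it.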
test_preds = np.array([0.2,0.7,0.1])
test_actuals = np.array([0.,1.,0.])
nnet.categorical_crossentropy(test_preds, test_actuals).eval()
x_entropy(test_preds, test_actuals)
test_inp = T.dvector()
test_out = nnet.categorical_crossentropy(test_inp, test_actuals)
test_grad = theano.function([test_inp], T.grad(test_out, test_inp))
test_grad(test_preds)
x_entropy_d(test_preds, test_actuals)
pre_pred = random(oh_x_rnn[0][0].shape)
preds = softmax(pre_pred)
actual = oh_x_rnn[0][0]
np.allclose(softmax_d(pre_pred).dot(x_entropy_d(preds,actual)), preds-actual)
softmax(test_preds)
nnet.softmax(test_preds).eval()
test_out = T.flatten(nnet.softmax(test_inp))
test_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))
test_grad(test_preds)
softmax_d(test_preds)
act=relu
act_d=relu_d
loss=x_entropy
loss_d=x_entropy_d
def scan(fn, start, seq):
res = []
prev = start
for s in seq:
app = fn(prev, s)
res.append(app)
prev = app
return res
scan(lambda prev,curr: prev+curr, 0, range(5))
inp = oh_x_rnn
outp = oh_y_rnn
n_input = vocab_size
n_output = vocab_size
inp.shape, outp.shape
def one_char(prev, item):
# Previous state
tot_loss, pre_hidden, pre_pred, hidden, ypred = prev
# Current inputs and output
x, y = item
pre_hidden = np.dot(x, w_x) + np.dot(hidden, w_h)
hidden = act(pre_hidden)
pre_pred = np.dot(hidden, w_y)
ypred = softmax(pre_pred)
return (
# Keep track of loss so we can report it
tot_loss + loss(ypred, y),
# Used in backprop
pre_hidden, pre_pred,
# Used in next iteration
hidden,
# To provide predictions
ypred)
def get_chars(n): return zip(inp[n], outp[n])
def one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))
# "Columnify" a vector
def col(x): return x[:, np.newaxis]
def one_bkwd(args, n):
global w_x,w_y,w_h
i=inp[n] # 8x86
o=outp[n] # 8x86
d_pre_hidden = np.zeros(n_hidden) # 256
for p in reversed(range(len(i))):
totloss, pre_hidden, pre_pred, hidden, ypred = args[p]
x=i[p] # 86
y=o[p] # 86
d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred, y)) # 86
d_pre_hidden = act_d(pre_hidden) * (np.dot(d_pre_pred, w_y.T) + np.dot(d_pre_hidden, w_h.T)) # 256
# d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)
w_y -= col(hidden) * d_pre_pred * alpha
# d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)
if (p>0): w_h -= col(args[p-1][3]) * d_pre_hidden * alpha
w_x -= col(x) * d_pre_hidden * alpha
return d_pre_hidden
scale=math.sqrt(2./n_input)
w_x = normal(scale=scale, size=(n_input, n_hidden))
w_y = normal(scale=scale, size=(n_hidden, n_output))
w_h = np.eye(n_hidden, dtype=np.float32)
overallError=0
alpha=0.0001
for n in range(10000):
res = one_fwd(n)
overallError+=res[-1][0]
deriv = one_bkwd(res, n)
if(n % 1000 == 999):
print ("Error:{:.4f}; Gradient:{:.5f}".format(
overallError/1000, np.linalg.norm(deriv)))
overallError=0
model=Sequential([
GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
get_nexts_oh(' this is')
W_h = id_and_bias(n_hidden)
W_x = init_wgts(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
rW_h = init_wgts(n_hidden, n_hidden)
rW_x = wgts_and_bias(n_input, n_hidden)
uW_h = init_wgts(n_hidden, n_hidden)
uW_x = wgts_and_bias(n_input, n_hidden)
w_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))
w_all.extend([W_x, uW_h, rW_h])
def gate(x, h, W_h, W_x, b_x):
return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))
def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):
reset = gate(x, h, rW_h, rW_x, rb_x)
update = gate(x, h, uW_h, uW_x, ub_x)
h_new = gate(x, h * reset, W_h, W_x, b_h)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
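# In `step` above, `update` interpolates between the old state and the
# candidate h_new; `reset` controls how much history feeds the candidate.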
[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.1
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 3000 == 2999:
l_rate *= 0.95
print ("Error:{:.2f}".format(err/3000))
err=0.0
W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])
.astype(np.float32)), init_bias(n_hidden))
rW = wgts_and_bias(n_input+n_hidden, n_hidden)
uW = wgts_and_bias(n_input+n_hidden, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W, W_y, uW, rW]))
def gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)
def step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
m = T.concatenate([h, x])
reset = gate(m, rW, rb)
update = gate(m, uW, ub)
m = T.concatenate([h*reset, x])
h_new = gate(m, W, b)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-lr*g for (w,g) in zip(wgts,grads)})
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 3000 == 2999:
print ("Error:{:.2f}".format(err/3000))
err=0.0
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step2: Map from chars to indices and back again
Step3: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
Step4: 3 char model
Step5: Our inputs
Step6: Our output
Step7: The first 4 inputs and outputs
Step8: The number of latent factors to create (i.e. the size of the embedding matrix)
Step9: Create inputs and embedding outputs for each of our 3 character inputs
Step10: Create and train model
Step11: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step12: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
Step13: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step14: Our second and third hidden activations add the previous hidden state (after applying dense_hidden) to the new input state.
Step15: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
Step16: The third hidden state is the input to our output layer.
Step17: Test model
Step18: Our first RNN!
Step19: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step20: Then create a list of the next character in each of these series. This will be the labels for our model.
Step21: So each column below is one series of 8 characters from the text.
Step22: ...and this is the next character after each sequence.
Step23: Create and train model
Step24: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step25: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step26: Putting the final hidden state through dense_out() gives us our output.
Step27: So now we can create our model.
Step28: Test model
Step29: Our first RNN with keras!
Step30: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
Step31: Returning sequences
Step32: Reading down each column shows one set of inputs and outputs.
Step33: Create and train model
Step34: We're going to pass a vector of all zeros as our starting point - here are our input layers for that
Step35: Test model
Step36: Sequence model with keras
Step37: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
Step38: One-hot sequence model with keras
Step39: Stateful model with keras
Step40: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
Step41: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
Step42: Theano RNN
Step43: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using Glorot initialization).
Step44: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
Step45: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation
Step46: Now we're ready to create our initial weight matrices.
Step47: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character
Step48: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step49: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
Step50: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
Step51: We're finally ready to compile the function!
Step52: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
Step53: Pure python RNN!
Step54: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement
Step55: ...for instance, scan on + is the cumulative sum.
Step56: Set up training
Step57: Here's the function to do a single forward pass of an RNN, for a single character.
Step58: We use scan to apply the above to a whole sequence of characters.
Step59: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives apply the chain rule at each step, accumulating the gradients across the sequence.
Step60: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
Step61: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
Step62: Keras GRU
Step63: Theano GRU
Step64: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
Step65: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
Step66: Everything from here on is identical to our simple RNN in theano.
Step67: Combined weights
|
12,615
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import pandas as pd
# enabling inline plots in the notebook
%matplotlib inline
# plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
# configure Matplotlib for manual (non-interactive) mode
plt.interactive(False)
# DataFrame containing 5 Series with distinct Normal distributions
df = pd.DataFrame(
columns=["S1", "S2", "S3", "S4", "S5"],
data=(
np.random.randn(100, 5) * np.array([10, 15, 50, 100, 200]) + np.array([0, 5, 30, 30, 50])
)
)
# Histogram without normalization
plt.figure(figsize=(12,8))
plt.hist(df.S1, bins=10)
plt.show()
# Histogram with normalization
plt.figure(figsize=(12,8))
plt.hist(df.S1, bins=10, normed=True)
plt.show()
df.S1.describe()
# Histogram of two Series
plt.figure(figsize=(12,8))
plt.hist(df[["S1", "S2"]], bins=10, normed=True)
plt.show()
df[["S1", "S2"]].describe()
# Histogram of more than two Series
plt.figure(figsize=(12,8))
plt.hist(df, bins=10, normed=True)
plt.show()
df.describe()
plt.figure(figsize=(15,10))
plt.hist(df.S1, bins=10, normed=True, color="blue", alpha=0.5, label="S1")
plt.hist(df.S2, bins=10, normed=True, color="red", alpha=0.5, label="S2")
plt.legend()
plt.show()
# One Series
df.S1.hist(bins=10, normed=True, figsize=(12,8))
plt.show()
# Histogram of two Series
df[["S1", "S2"]].hist(bins=10, normed=True, figsize=(12,8))
plt.show()
df[["S1", "S2"]].describe()
# Histogram of more than two Series
df.hist(bins=10, figsize=(12,8))
plt.show()
df.describe()
# One Series
plt.figure( figsize=(12,8))
sns.distplot(df.S1)
plt.show()
# Histogram of two Series (1)
plt.figure( figsize=(12,8))
f, axes = plt.subplots(2, 1, figsize=(15, 8), sharex=True)
sns.distplot(df.S1, kde=False, color="blue", ax=axes[0])
sns.distplot(df.S2, kde=True, color="red", ax=axes[1])
plt.show()
df[["S1", "S2"]].describe()
# Histogram of two Series (2)
f, axes = plt.subplots(1, 2, figsize=(15, 8), sharex=True)
sns.distplot(df.S1, kde=False, color="blue", ax=axes[0])
sns.distplot(df.S2, kde=True, color="red", ax=axes[1])
plt.show()
df[["S1", "S2"]].describe()
# Histogram of more than two Series
plt.figure( figsize=(12,8))
f, axes = plt.subplots(3, 2, figsize=(15, 8), sharex=True)
sns.distplot(df.S1, kde=False, color="blue", ax=axes[0, 0])
sns.distplot(df.S2, kde=True, color="red", ax=axes[0, 1])
sns.distplot(df.S3, kde=True, color="orange", ax=axes[1, 0])
sns.distplot(df.S4, kde=True, rug=True, color="gray", ax=axes[1, 1])
sns.distplot(df.S5, hist=False, kde_kws={"shade": True}, color="purple", ax=axes[2, 1])
plt.show()
df[["S1", "S2"]].describe()
# default: vertical
plt.figure(figsize=(15,10))
plt.boxplot(df.S1)
plt.show()
# horizontal, for a change
plt.figure(figsize=(15,10))
plt.boxplot(df.S1, vert=False)
plt.show()
plt.figure(figsize=(15,10))
plt.boxplot(df.T)
plt.show()
tmp1 = df[["S1", "S2"]]
tmp2 = df[["S3", "S3"]]
tmp2.columns = tmp1.columns # append with identical columns doesn't create NaNs
tmp = tmp1.append(tmp2)
plt.figure(figsize=(15,10))
plt.boxplot(tmp.T)
plt.show()
tmp.describe(percentiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
df.plot(kind="box", figsize=(15,10))
plt.show()
tmp1 = df.copy()
tmp2 = df[["S5", "S5", "S5", "S5", "S5"]]
tmp2.columns = tmp1.columns # append with identical columns doesn't create NaNs
tmp = tmp1.append(tmp2)
plt.figure(figsize=(15,10))
tmp.plot(kind="box", figsize=(15,10))
plt.show()
tmp.describe(percentiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
plt.figure(figsize=(15,10))
sns.boxplot(data=df)
plt.show()
serie_original = pd.Series(np.random.randn(900)) * 32 + 230
outliers = pd.Series(np.random.randn(100)) * 320 + 230
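# serie_original ~ N(230, 32); outliers ~ N(230, 320), i.e. ten times the spread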
# Write your solution here
# Write your solution here
# Write your solution here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Module 3
Step5: Dataset
Step10: Histogram Plot
Step11: Note
Step15: Using Pandas
Step17: Using Seaborn
Step21: Note
Step24: Box Plot
Step25: More Series
Step26: Outliers!!!
Step27: Pandas for the Go!
Step28: Outliers!!!
Step29: Seaborn
Step30: Challenge
Step32: [ A ] Exploration
Step34: Part 2
Step36: [ B ]
|
12,616
|
<ASSISTANT_TASK:>
Python Code:
import dx
import datetime as dt
import pandas as pd
from pylab import plt
plt.style.use('seaborn')
r = dx.constant_short_rate('r', 0.01)
me_1 = dx.market_environment('me', dt.datetime(2016, 1, 1))
me_1.add_constant('initial_value', 100.)
# starting value of simulated processes
me_1.add_constant('volatility', 0.2)
# volatiltiy factor
me_1.add_constant('final_date', dt.datetime(2017, 6, 30))
# horizon for simulation
me_1.add_constant('currency', 'EUR')
# currency of instrument
me_1.add_constant('frequency', 'W')
# frequency for discretization
me_1.add_constant('paths', 10000)
# number of paths
me_1.add_curve('discount_curve', r)
# discount curve
gbm_1 = dx.geometric_brownian_motion('gbm_1', me_1)
pdf = pd.DataFrame(gbm_1.get_instrument_values(), index=gbm_1.time_grid)
%matplotlib inline
pdf.iloc[:, :10].plot(legend=False, figsize=(10, 6));
me_2 = dx.market_environment('me_2', me_1.pricing_date)
me_2.add_environment(me_1) # add complete environment
me_2.add_constant('volatility', 0.5) # overwrite value
gbm_2 = dx.geometric_brownian_motion('gbm_2', me_2)
pdf = pd.DataFrame(gbm_2.get_instrument_values(), index=gbm_2.time_grid)
pdf.iloc[:, :10].plot(legend=False, figsize=(10, 6));
me_opt = dx.market_environment('me_opt', me_1.pricing_date)
me_opt.add_environment(me_1)
me_opt.add_constant('maturity', dt.datetime(2017, 6, 30))
me_opt.add_constant('strike', 110.)
am_put = dx.valuation_mcs_american_single(
name='am_put',
underlying=gbm_1,
mar_env=me_opt,
payoff_func='np.maximum(strike - instrument_values, 0)')
am_put.present_value()
am_put.delta()
am_put.gamma()
am_put.vega()
am_put.theta()
am_put.rho()
eur_call = dx.valuation_mcs_european_single(
name='eur_call',
underlying=gbm_2,
mar_env=me_opt,
payoff_func='np.maximum(maturity_value - strike, 0)')
eur_call.present_value()
eur_call.delta()
eur_call.gamma()
eur_call.vega()
eur_call.theta()
eur_call.rho()
me_3 = dx.market_environment('me_3', me_1.pricing_date)
me_3.add_environment(me_1) # add complete environment
# interest-rate-like parameters
me_3.add_constant('initial_value', 0.05)
# initial value
me_3.add_constant('alpha', 0.1)
# initial variance
me_3.add_constant('beta', 0.5)
# exponent
me_3.add_constant('rho', 0.1)
# correlation factor
me_3.add_constant('vol_vol', 0.5)
# volatility of volatility/variance
sabr = dx.sabr_stochastic_volatility('sabr', me_3)
me_opt.add_constant('strike', me_3.get_constant('initial_value'))
sabr_call = dx.valuation_mcs_european_single(
name='sabr_call',
underlying=sabr,
mar_env=me_opt,
payoff_func='np.maximum(maturity_value - strike, 0)')
sabr_call.present_value(fixed_seed=True)
sabr_call.delta()
sabr_call.rho()
# resetting the option strike
me_opt.add_constant('strike', 110.)
me_1.add_constant('model', 'gbm')
me_2.add_constant('model', 'gbm')
put = dx.derivatives_position(
name='put',
quantity=2,
underlyings=['gbm_1'],
mar_env=me_opt,
otype='American single',
payoff_func='np.maximum(strike - instrument_values, 0)')
call = dx.derivatives_position(
name='call',
quantity=3,
underlyings=['gbm_2'],
mar_env=me_opt,
otype='European single',
payoff_func='np.maximum(maturity_value - strike, 0)')
risk_factors = {'gbm_1': me_1, 'gbm_2' : me_2}
correlations = [['gbm_1', 'gbm_2', -0.4]]
positions = {'put' : put, 'call' : call}
val_env = dx.market_environment('general', dt.datetime(2016, 1, 1))
val_env.add_constant('frequency', 'W')
val_env.add_constant('paths', 10000)
val_env.add_constant('starting_date', val_env.pricing_date)
val_env.add_constant('final_date', val_env.pricing_date)
val_env.add_curve('discount_curve', r)
port = dx.derivatives_portfolio(
name='portfolio', # name
positions=positions, # derivatives positions
val_env=val_env, # valuation environment
risk_factors=risk_factors, # relevant risk factors
correlations=correlations, # correlation between risk factors
parallel=False) # parallel valuation
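# parallel=False values the positions sequentially in this process,
# which is simpler to debug than the parallel valuation path.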
port.get_values()
port.get_statistics()
deltas, benchvalue = port.get_port_risk(Greek='Delta')
deltas
deltas.loc(axis=0)[:, 'value'] - benchvalue
vegas, benchvalue = port.get_port_risk(Greek='Vega', step=0.05)
vegas
vegas.loc(axis=0)[:, 'value'] - benchvalue
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Risk Factor Models
Step2: We then define a market environment containing the major parameter specifications needed,
Step3: Next, the model object for the first risk factor, based on the geometric Brownian motion (Black-Scholes-Merton (1973) model).
Step4: Some paths visualized.
Step5: Second risk factor with higher volatility. We overwrite the respective value in the market environment.
Step6: Valuation Models
Step7: The first derivative is an American put option on the first risk factor gbm_1.
Step8: Let us calculate a Monte Carlo present value estimate and estimates for the option Greeks.
Step9: The second derivative is a European call option on the second risk factor gbm_2.
Step10: Valuation and Greek estimation for this option.
Step11: Excursion
Step12: The model object instantiation.
Step13: The valuation object instantiation.
Step14: Some statistics --- same syntax/API even if the model is more complex.
Step15: Options Portfolio
Step16: To compose a portfolio consisting of our just defined options, we need to define derivatives positions. Note that this step is independent from the risk factor model and option model definitions. We only use the market environment data and some additional information needed (e.g. payoff functions).
Step17: Let us define the relevant market by 2 Python dictionaries, the correlation between the two risk factors and a valuation environment.
Step18: These are used to define the derivatives portfolio.
Step19: Simulation and Valuation
Step20: Via the get_statistics methods delta and vega values are provided as well.
Step21: Much more complex scenarios are possible with DX Analytics
|
12,617
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%qtconsole --colors=linux
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import optimize
import pymc3 as pm
import theano as thno
import theano.tensor as T
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
np.random.seed(0)
#### cut & pasted directly from the fetch_hogg2010test() function
## identical to the original dataset as hardcoded in the Hogg 2010 paper
dfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],
[2, 244, 401, 25, 4, 0.31],
[3, 47, 583, 38, 11, 0.64],
[4, 287, 402, 15, 7, -0.27],
[5, 203, 495, 21, 5, -0.33],
[6, 58, 173, 15, 9, 0.67],
[7, 210, 479, 27, 4, -0.02],
[8, 202, 504, 14, 4, -0.05],
[9, 198, 510, 30, 11, -0.84],
[10, 158, 416, 16, 7, -0.69],
[11, 165, 393, 14, 5, 0.30],
[12, 201, 442, 25, 5, -0.46],
[13, 157, 317, 52, 5, -0.03],
[14, 131, 311, 16, 6, 0.50],
[15, 166, 400, 34, 6, 0.73],
[16, 160, 337, 31, 5, -0.52],
[17, 186, 423, 42, 9, 0.90],
[18, 125, 334, 26, 8, 0.40],
[19, 218, 533, 16, 6, -0.78],
[20, 146, 344, 22, 5, -0.56]]),
columns=['id','x','y','sigma_y','sigma_x','rho_xy'])
## for convenience zero-base the 'id' and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)
## standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)
## create xlims ylims for plotting
xlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5
,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)
ylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5
,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)
## scatterplot the standardized data
g = sns.FacetGrid(dfhoggs, size=8)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
plt.subplots_adjust(top=0.92)
_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)
with pm.Model() as mdl_ols:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## Define Normal likelihood
likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])
with mdl_ols:
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## take samples
traces_ols = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)
_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})
with pm.Model() as mdl_studentt:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## define prior for Student T degrees of freedom
nu = pm.DiscreteUniform('nu', lower=1, upper=100)
## Define Student T likelihood
likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu
,observed=dfhoggs['y'])
with mdl_studentt:
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## two-step sampling to allow Metropolis for nu (which is discrete)
step1 = pm.NUTS([b0, b1])
step2 = pm.Metropolis([nu])
## take samples
traces_studentt = pm.sample(2000, start=start_MAP, step=[step1, step2], progressbar=True)
_ = pm.traceplot(traces_studentt[-1000:]
,figsize=(12,len(traces_studentt.varnames)*1.5)
,lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})
def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):
'''
Define custom loglikelihood for inliers vs outliers.
NOTE: in this particular case we don't need to use theano's @as_op
decorator because (as stated by Twiecki in conversation) that's only
required if the likelihood cannot be expressed as a theano expression.
We also now get the gradient computation for free.
'''
# likelihood for inliers
pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2))
pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)
logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))
# likelihood for outliers
pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2)))
pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))
logL_out = T.sum(T.log(pdfs_out) * is_outlier)
return logL_in + logL_out
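# Mixture trick: each datapoint contributes either the inlier or the
# outlier log-density, selected by its Bernoulli is_outlier flag.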
with pm.Model() as mdl_signoise:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest_in = b0 + b1 * dfhoggs['x']
## Define weakly informative priors for the mean and variance of outliers
yest_out = pm.Normal('yest_out', mu=0, sd=100)
sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100)
## Define Bernoulli inlier / outlier flags according to a hyperprior
## fraction of outliers, itself constrained to [0,.5] for symmetry
frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)
is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0])
## Extract observed y and sigma_y from dataset, encode as theano objects
yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')
sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y']
, dtype=thno.config.floatX), name='sigma_y_in')
## Use custom likelihood using DensityDist
likelihood = pm.DensityDist('likelihood', logp_signoise,
observed={'yobs':yobs, 'is_outlier':is_outlier,
'yest_in':yest_in, 'sigma_y_in':sigma_y_in,
'yest_out':yest_out, 'sigma_y_out':sigma_y_out})
with mdl_signoise:
## two-step sampling to create Bernoulli inlier/outlier flags
step1 = pm.NUTS([frac_outliers, yest_out, sigma_y_out, b0, b1])
step2 = pm.BinaryMetropolis([is_outlier], tune_interval=100)
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## take samples
traces_signoise = pm.sample(2000, start=start_MAP, step=[step1,step2], progressbar=True)
_ = pm.traceplot(traces_signoise[-1000:], figsize=(12,len(traces_signoise.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})
outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],
columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),
var_name='datapoint_id', value_name='is_outlier')
ax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,
kind='point', join=False, ci=None, size=4, aspect=2)
_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')
_ = ax0.set_xlim((-0.1,1.1))
_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))
_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])
_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)
_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')
_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')
cutoff = 5
dfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)
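# A point is flagged only if is_outlier == 1 in at least ~95% of posterior
# samples, i.e. the 5th percentile of its 0/1 trace is 1.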
dfhoggs['outlier'].value_counts()
g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],
palette='Set1', legend_out=False)
lm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x
pm.glm.plot_posterior_predictive(traces_ols[-1000:],
eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)
pm.glm.plot_posterior_predictive(traces_studentt[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)
pm.glm.plot_posterior_predictive(traces_signoise[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='').add_legend()
_ = g.axes[0][0].annotate('OLS Fit: Green\nStudent-T Fit: Orange\nSignal Vs Noise Fit: Blue',
size='x-large', xy=(1,0), xycoords='axes fraction',
xytext=(-160,10), textcoords='offset points')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and Prepare Data
Step2: Observe
Step3: Sample
Step4: View Traces
Step5: NOTE
Step6: Sample
Step7: View Traces
Step8: Observe
Step9: Sample
Step10: View Traces
Step11: NOTE
Step12: Observe
Step13: Posterior Prediction Plots for OLS vs StudentT vs SignalNoise
|
12,618
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from time import time
import datetime
import lightgbm as lgb
import gc, warnings
gc.collect()
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import precision_score, recall_score, confusion_matrix, accuracy_score
from sklearn.metrics import roc_auc_score, f1_score, roc_curve, auc,precision_recall_curve
from scipy import interp
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
traintr = pd.read_csv('input/train_transaction.csv.zip')
trainid = pd.read_csv('input/train_identity.csv.zip')
testtr = pd.read_csv('input/test_transaction.csv.zip')
testid = pd.read_csv('input/test_identity.csv.zip')
START_DATE = '2017-12-01'
startdate = datetime.datetime.strptime(START_DATE, '%Y-%m-%d')
traintr['tdt'] = traintr['TransactionDT'].apply(lambda x: (startdate + datetime.timedelta(seconds = x)))
traintr['thour'] = traintr.tdt.dt.hour
traintr['tdate'] = traintr.tdt.dt.date
testtr['tdt'] = testtr['TransactionDT'].apply(lambda x: (startdate + datetime.timedelta(seconds = x)))
testtr['thour'] = testtr.tdt.dt.hour
testtr['tdate'] = testtr.tdt.dt.date
z = traintr.groupby('tdate').isFraud.sum()
plt.figure(figsize=(14,10))
plt.title('isFraud.sum() per day')
plt.plot(z.values, linewidth=1)
plt.scatter(np.arange(z.shape[0]),z.values, s=10, alpha=0.5, c='red')
tags = []
for i in range(7):
mean = z[i:][::7].median()
tags.append(plt.axhline(mean, linewidth=1, linestyle='--', label='Day{} : {}'.format(i, np.round(mean,2))))
tags.append(plt.plot(z.rolling(7).mean().fillna(z.mean()).values, label='Weekly MA')[0])
plt.legend(handles=tags)
plt.show()
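# The dashed lines are per-weekday medians (z[i:][::7] picks every 7th day
# starting at offset i), making weekday seasonality visible at a glance.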
plt.title('# Frauds / Day')
plt.hist(z, 100)
plt.axvline(120, c='r')
plt.show()
for i in range(7):
plt.title('Day '+str(i))
plt.hist(z[i:][::7], 20)
plt.axvline(120, c='r')
plt.show()
traintr['fraudCnt'] = traintr.groupby('tdate').isFraud.transform('sum')
# Here are some columns that had >0.03 correlation
# It looks like I copied and pasted some of them twice.
fraudCnt = traintr[[
# NOTE: If we lavel encode the M columns, they also
# have decent corr ...
'fraudCnt',
'D4','D6','D10','D11','D14','D15',
'V38','V25','V45','V37','V44',
'V67','V86','V66','V99','V95','V96','V97',
'V147', 'V149', 'V99', 'V100', 'V144', 'V136', 'V139', 'V140', 'V126',
'V132', 'V101', 'V104', 'V133', 'V127', 'V102', 'V134', 'V128', 'V103',
'V105', 'V106', 'V143', 'V169', 'V157', 'V185', 'V156', 'V149', 'V158', 'V186', 'V189', 'V188',
'V190', 'V199', 'V170', 'V176', 'V175', 'V180', 'V177', 'V167', 'V181',
'V178', 'V182', 'V168', 'V179', 'V183', 'V165', 'V164', 'V229', 'V217', 'V231', 'V243', 'V232', 'V233', 'V226', 'V218', 'V219',
'V216', 'V210', 'V236', 'V237', 'V221', 'V215', 'V222', 'V234', 'V199',
'V200', 'V201', 'V230', 'V209', 'V242', 'V244', 'V203', 'V228', 'V246',
'V202', 'V211', 'V212', 'V204', 'V213', 'V274', 'V275', 'V273', 'V259', 'V297', 'V294', 'V293', 'V279', 'V295',
'V280', 'V296', 'V298', 'V299', 'V258', 'V257', 'V320', 'V306', 'V317', 'V316', 'V307', 'V325', 'V308', 'V318', 'V336',
'V299', 'V339', 'V335', 'V338', 'V326', 'V331', 'V322', 'V327', 'V332',
'V333', 'V323', 'V328', 'V324', 'V329', 'V330'
]].corr().fraudCnt.abs().sort_values()
fraudCnt
introspect = pd.DataFrame({
'col': fraudCnt.index,
'correlation': fraudCnt.values,
'nas': traintr[fraudCnt.index].isna().sum().values/traintr.shape[0]
})
introspect
introspect[introspect.nas>0.286047].sort_values(['nas','correlation'])
# These are the columns with the smallest %nans yet the highest correlation
# to daily fraud count:
# V299, V106, V296, D10, V25, V66, D11
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Above we notice that the number of frauds per day seems to stay pretty stable throughout the trainset
Step2: Correlation to daily isFraud.sum()?
Step3: It is no surprise to me that variables with high nan ratio (sparse values) have good correlation with isFraud.sum(). We should look for those variables that have a low nan count but high correlation and research them further...
|
12,619
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import random as rnd
import pandas as pd
import numpy as np
import time
import datetime
import calendar
# fix what is missing with the datetime/time/calendar package
def add_months(sourcedate,months):
month = sourcedate.month - 1 + months
year = int(sourcedate.year + month / 12 )
month = month % 12 + 1
day = min(sourcedate.day,calendar.monthrange(year, month)[1])
return datetime.date(year,month,day)
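# calendar.monthrange clamps the day-of-month above, so e.g. Jan 31 + 1 month
# lands on Feb 28/29 instead of raising a ValueError.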
# measure how long it takes to run the script
startit = time.time()
dtstartit = datetime.datetime.now()
class Seller():
def __init__(self, name):
self.name = name
self.wta = []
self.step = 0
self.prod = 2000
self.lb_price = 10
self.lb_multiplier = 0
self.ub_price = 20
self.ub_multiplier = 0
self.init_reserve = 500000
self.reserve = 500000
#multiple market idea, also 'go away from market'
self.subscr_market = {}
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
self.update_price()
n = self.prod
l = self.lb_price + self.lb_multiplier
u = self.ub_price + self.ub_multiplier
wta = []
for i in range(n):
p = rnd.uniform(l, u)
wta.append(p)
if len(wta) < self.reserve:
self.wta = wta
else:
self.wta = wta[0:(self.reserve-1)]
self.prod = self.reserve
def get_name(self):
return self.name
def get_asks(self):
return self.wta
def extract(self, cur_extraction):
if self.reserve > 0:
self.reserve = self.reserve - cur_extraction
else:
self.prod = 0
# production costs rise a 100%
def update_price(self):
depletion = (self.init_reserve - self.reserve) / self.init_reserve
self.ub_multiplier = int(self.ub_price * depletion)
self.lb_multiplier = int(self.lb_price * depletion)
class Buyer():
def __init__(self, name):
self.name = name
self.type = 0
self.wtp = []
self.step = 0
self.base_demand = 0
self.max_demand = 0
self.lb_price = 10
self.ub_price = 20
# the supplier has n quantities that they can buy
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
n = int(self.consumption(self.step))
l = self.lb_price
u = self.ub_price
wtp = []
for i in range(n):
p = rnd.uniform(l, u)
wtp.append(p)
self.wtp = wtp
# gets a little too obvious
def get_name(self):
return self.name
# return list of willingness to pay
def get_bids(self):
return self.wtp
def consumption(self, x):
# make it initialise to seller
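# Seasonal demand: a cosine with a 12-step (monthly) period, oscillating
# between base_demand and base_demand + max_demand.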
b = self.base_demand
m = self.max_demand
y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))
return(y)
def update_price(self):
if self.type == 1: #home
self.lb_price = 20
self.ub_price = 40
if self.type == 2: # elec
self.lb_price = 10
self.ub_price = 20
if self.type == 3: #indu
self.lb_price = 11
self.ub_price = 21
# the book is an object of the market used for the clearing procedure
class Book():
def __init__(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
def set_asks(self,seller_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for seller in seller_list:
seller_name = seller.get_name()
seller_price = seller.get_asks()
for price in seller_price:
self.ledger=self.ledger.append({"role":"seller","name":seller_name,"price":price,"cleared":"in process"},
ignore_index=True)
def set_bids(self,buyer_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for buyer in buyer_list:
buyer_name = buyer.get_name()
buyer_price = buyer.get_bids()
for price in buyer_price:
self.ledger=self.ledger.append({"role":"buyer","name":buyer_name,"price":price,"cleared":"in process"},
ignore_index=True)
def update_ledger(self,ledger):
self.ledger = ledger
def get_ledger(self):
return self.ledger
def clean_ledger(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
class Market():
def __init__(self):
self.count = 0
self.last_price = ''
self.book = Book()
self.b = []
self.s = []
self.buyer_list = []
self.seller_list = []
self.buyer_dict = {}
self.seller_dict = {}
self.ledger = ''
def update_seller(self):
for i in self.seller_dict:
self.seller_dict[i].step += 1
self.seller_dict[i].set_quantity()
def update_buyer(self):
for i in self.buyer_dict:
self.buyer_dict[i].step += 1
self.buyer_dict[i].set_quantity()
def add_buyer(self,buyer):
self.b.append(buyer)
self.buyer_list.append(buyer)
def add_seller(self,seller):
self.s.append(seller)
self.seller_list.append(seller)
def set_book(self):
self.book.set_bids(self.buyer_list)
self.book.set_asks(self.seller_list)
#def get_ledger(self):
# self.ledger = self.book.get_ledger()
# return self.ledger
def get_bids(self):
# this is a data frame
ledger = self.book.get_ledger()
rows= ledger.loc[ledger['role'] == 'buyer']
# this is a series
prices=rows['price']
# this is a list
bids = prices.tolist()
return bids
def get_asks(self):
# this is a data frame
ledger = self.book.get_ledger()
rows = ledger.loc[ledger['role'] == 'seller']
# this is a series
prices=rows['price']
# this is a list
asks = prices.tolist()
return asks
# return the price at which the market clears
# this fails because there are more buyers than sellers
def get_clearing_price(self):
# buyer makes a bid starting with the buyer which wants it most
b = self.get_bids()
s = self.get_asks()
# highest to lowest
self.b=sorted(b, reverse=True)
# lowest to highest
self.s=sorted(s, reverse=False)
# find out whether there are more buyers or sellers
# then drop the excess buyers or sellers; they won't compete
n = len(b)
m = len(s)
# there are more sellers than buyers
# drop off the highest priced sellers
if (m > n):
s = s[0:n]
matcher = n
# There are more buyers than sellers
# drop off the lowest bidding buyers
else:
b = b[0:m]
matcher = m
# It's possible that not all items sold actually clear the market here
count = 0
for i in range(matcher):
if (self.b[i] > self.s[i]):
count +=1
self.last_price = self.b[i]
# copy count to market object
self.count = count
return self.last_price
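# Worked example (a sketch, not executed): with bids sorted to [5, 4, 3] and
# asks sorted to [2, 3, 6], the pairs (5, 2) and (4, 3) trade while (3, 6)
# does not, so count == 2 and the clearing price is the last matched bid, 4.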
# TODO: Annotate the ledger
def annotate_ledger(self,clearing_price):
ledger = self.book.get_ledger()
for index, row in ledger.iterrows():
if (row['role'] == 'seller'):
if (row['price'] < clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
else:
if (row['price'] > clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
self.book.update_ledger(ledger)
def get_units_cleared(self):
return self.count
def clean_ledger(self):
self.ledger = ''
self.book.clean_ledger()
def run_it(self):
self.pre_clearing_operation()
self.clearing_operation()
self.after_clearing_operation()
#pre clearing empty out the last run and start
# clean ledger is kind of sloppy, rewrite functions to override the ledger
def pre_clearing_operation(self):
self.clean_ledger()
self.update_buyer()
self.update_seller()
def clearing_operation(self):
self.set_book()
clearing_price = self.get_clearing_price()
self.annotate_ledger(clearing_price)
def after_clearing_operation(self):
for i in self.seller_dict:
name = self.seller_dict[i].name
cur_extract = len(self.book.ledger[(self.book.ledger.cleared == 'True') &
(self.book.ledger.name == name)])
self.seller_dict[i].extract(cur_extract)
class Observer():
def __init__(self, x, y, z):
self.init_buyer = x
self.init_seller = y
self.maxrun = z
self.hist_book = []
self.buyer_dict = {}
self.seller_dict = {}
self.timetick = 0
self.gas_market = ''
self.reserve = []
def set_buyer(self, buyer_info):
for name in buyer_info:
self.buyer_dict[name] = Buyer('%s' % name)
self.buyer_dict[name].base_demand = buyer_info[name]['b']
self.buyer_dict[name].max_demand = buyer_info[name]['m']
self.buyer_dict[name].lb_price = buyer_info[name]['lb_price']
self.buyer_dict[name].ub_price = buyer_info[name]['ub_price']
def set_seller(self, seller_info):
for name in seller_info:
self.seller_dict[name] = Seller('%s' % name)
self.seller_dict[name].prod = seller_info[name]['prod']
self.seller_dict[name].lb_price = seller_info[name]['lb_price']
self.seller_dict[name].ub_price = seller_info[name]['ub_price']
self.seller_dict[name].reserve = seller_info[name]['reserve']
self.seller_dict[name].init_reserve = seller_info[name]['reserve']
def get_reserve(self):
reserve = []
for name in self.seller_dict:
reserve.append(self.seller_dict[name].reserve)
return reserve
def set_market(self):
self.gas_market = Market()
# add suppliers and buyers to this market
for supplier in self.seller_dict.values():
self.gas_market.add_seller(supplier)
for buyer in self.buyer_dict.values():
self.gas_market.add_buyer(buyer)
self.gas_market.seller_dict = self.seller_dict
self.gas_market.buyer_dict = self.buyer_dict
def run_it(self):
# Timing
# time initialising
startit_init = time.time()
#initialise, setting up all the agents
first_run = True
if first_run:
self.set_buyer(self.init_buyer)
self.set_seller(self.init_seller)
self.set_market()
first_run=False
# time init stop
stopit_init = time.time() - startit_init
print('%s : init' % stopit_init)
for period in range(self.maxrun):
# time the period
startit_period = time.time()
self.timetick += 1
print('#######################################')
period_now = add_months(period_null, self.timetick-1)
print(period_now.strftime('%Y-%b'))
# real action on the market
self.gas_market.run_it()
# data collection
p_clearing = self.gas_market.last_price
q_sold = self.gas_market.count
self.reserve.append([period_now.strftime('%Y-%b'),*self.get_reserve()])
# recording the step_info
# since this operation can take quite a while, print after every operation
period_time = time.time() - startit_period
print('%s : seconds to clear period' % period_time)
self.hist_book.append([period_now.strftime('%Y-%b'), p_clearing, q_sold])
# Show some real consumption data, for more data see folder data analytics
# read monthly consumption data of 2010 into a dataframe
df = pd.read_csv('2010cbstestrun.csv', header=0, index_col=0)
df = df.transpose()
#plot the 2010 monthly consumption data
df.plot();
df
# make initialization dictionary
init_buyer = {'elec':{'b':400, 'm' : 673, 'lb_price': 10, 'ub_price' : 20},
'indu':{'b':400, 'm':1171, 'lb_price': 10, 'ub_price' : 20},
'home':{'b': 603, 'm': 3615, 'lb_price': 10, 'ub_price' : 20}}
init_seller = {'NL' : {'prod': 2000, 'lb_price': 10, 'ub_price' : 20, 'reserve': 50000},
'RU' : {'prod': 2000, 'lb_price': 15, 'ub_price' : 30, 'reserve': 500000}}
# make a history book to record every timestep
hist_book = []
# set the starting time
period_null= datetime.date(2010,1,1)
# create observer and run the model
# first data about buyers then sellers and then model ticks
years = 10
timestep = 12
obser1 = Observer(init_buyer, init_seller, years*timestep)
obser1.run_it()
#get the info from the observer
hist_book = obser1.hist_book
# recording the total run
def write_to_csv(hist_book):
f = open('hist_book.csv', 'a')
for item in hist_book:
f.write('%s,%s\n' % (item[0], item[1]))
f.close()
#write_to_csv(hist_book)
# make a dataframe of clearing prices
df_hb = pd.DataFrame(hist_book)
df_hb = df_hb.set_index(0)
df_hb.index.name = 'month'
df_hb.rename(columns={1: 'price', 2: 'quantity'}, inplace=True)
# timeit
stopit = time.time()
dtstopit = datetime.datetime.now()
print('it took us %s seconds to get to this conclusion' % (stopit-startit))
print('in another notation (h:m:s) %s'% (dtstopit - dtstartit))
# print the run results
price = df_hb['price']
fig = price.plot()
plt.ylabel('€ / unit')
plt.show()
quantity = df_hb['quantity']
fig = quantity.plot()
plt.ylabel('quantity')
plt.show()
# print the time of last run
print('last run of this notebook:')
time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
#df_hb
df_res = pd.DataFrame(obser1.reserve, columns=['time', *[i for i in init_seller]])
df_res = df_res.set_index('time')
df_res.plot();
df_res['NL']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classes for buyers and sellers
Step2: Construct the market
Step3: Observer
Step4: Example Market
Step5: run the model
Step6: Operations Research Formulation
Step7: Time of last run
|
12,620
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, recall_score
df = pd.read_table('data/preprocessed.tsv', usecols=['title', 'description', 'selected'])
df.fillna(value="", inplace=True)
y = df['selected'].astype(int).values
corpus = df['title']
vect = TfidfVectorizer(sublinear_tf=True, stop_words='english')
X = vect.fit_transform(corpus)
pd.DataFrame(X.toarray(), columns=vect.get_feature_names()).head()
svd = TruncatedSVD(n_components=250)
X = svd.fit_transform(X)
pd.DataFrame(X).head()
gnb = GaussianNB()
gnb.fit(X, y)
predictions = gnb.predict(X)
print((predictions == y).sum() / len(y)) # fraction of correctly classified documents
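# The sklearn metrics imported above report the same information more
# idiomatically (a sketch using the existing y and predictions):
print(accuracy_score(y, predictions))
print(recall_score(y, predictions))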
# Retrieve the corpus from the dataset
# Obtain the TD Matrix
# Reduce the dimensionality of the TD matrix to 250
# Train the classifier
# Test the classifier
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Training & Prediction pipeline
Step2: Text Vectorization & The TD Matrix
Step3: Dimensionality Reduction
Step4: Training the Classifier
Step5: Testing the classifier
Step6: Exercise 1
|
12,621
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import iris
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
iris.FUTURE.netcdf_promote = True
filepath = os.path.join(os.path.pardir,'data','data.nc') # ../data/data.nc on Unix
f = iris.load(filepath)
print(f)
u, v = f[2], f[4]
print(u)
print('\nType of u variable: {}'.format(type(u)))
wspd = (u**2 + v**2)**0.5
wspd.rename('wind_magnitude')
print(wspd)
(123*u*u.coord('time')/u.coord('pressure_level')).units
sublon = iris.Constraint(longitude=lambda cell: 120 <= cell <= 160)
sublat = iris.Constraint(latitude=lambda cell: 30 <= cell <= 60)
wspd_subset = wspd.extract(sublon & sublat)
print(wspd_subset)
import shapely.geometry as sgeom
fig, ax = plt.subplots(subplot_kw=dict(projection=ccrs.PlateCarree()))
ax.coastlines()
extent_box = sgeom.box(120,30,160,60)
ax.add_geometries([extent_box], ccrs.PlateCarree(), color='red', alpha=0.5, edgecolor='red', linewidth=2)
print('Subset location')
from iris.analysis import trajectory
pnts = [{'longitude': 155, 'latitude': 35}, # start
{'longitude': 125, 'latitude': 55} # end
]
traj = trajectory.Trajectory(pnts, sample_count=100)
traj
lon = [d['longitude'] for d in traj.sampled_points]
lat = [d['latitude'] for d in traj.sampled_points]
sampled_points = [('longitude', lon),
('latitude', lat)]
section = trajectory.interpolate(wspd_subset, sampled_points)
print(section)
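# Sanity check (a sketch): the section should start and end at the trajectory
# endpoints requested above (about 155E, 35N to 125E, 55N).
print(section.coord('longitude').points[[0, -1]])
print(section.coord('latitude').points[[0, -1]])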
lon, lat = wspd_subset.coord('longitude').points, wspd_subset.coord('latitude').points
seclon, seclat = section.coord('longitude').points, section.coord('latitude').points
ilev = 0
it = 0
fig = plt.figure(figsize=(30,10))
ax1 = fig.add_subplot(121, projection=ccrs.PlateCarree())
ax1.coastlines('50m')
ax1.contourf(lon, lat, wspd_subset.data[it,ilev,...], cmap=plt.cm.viridis)
#--------Fancy formatting---------
gl = ax1.gridlines(crs=ccrs.PlateCarree(), # using the same projection
draw_labels=True, # add labels
linewidth=2, color='gray', alpha=0.5, linestyle='--') # grid line specs
# Remove labels above and on the right of the map (note that Python allows chained assignment)
gl.xlabels_top = gl.ylabels_right = False
# Format the labels using the formatters imported from cartopy
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
#---------------------------------
ax1.plot(seclon, seclat, color='r', linestyle='', marker='o', linewidth=3)
ax1.set_title('Pressure level: {}hPa'.format(wspd_subset.coord('pressure_level').points[ilev]))
ax2 = fig.add_subplot(122)
ax2.contourf(np.arange(section.data.shape[2]), section.coord('pressure_level').points, section.data[it,...], cmap=plt.cm.viridis)
ax2.invert_yaxis()
ax2.set_ylabel('Pressure, hPa')
ax2.set_xlabel('Distance along the section, points')
fig.suptitle('Wind speed at {}'.format(wspd_subset.coord('time').units.num2date(wspd_subset.coord('time').points[0]).strftime('%Y-%b-%d')))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Don't bother me with warnings!
Step2: Read the NetCDF data file
Step3: Use the simplest loading method to open a NetCDF file as an iris.cube.CubeList variable f.
Step4: Let's take the horizontal wind components from this list
Step5: Next, we construct a wind speed variable
Step6: Note that we are performing arithmetic operations on an iris.cube.Cube, and the units of the derived variable are also correct. Another random example of the iris handling units conversions
Step7: Subset the cube
Step8: iris trajectory
Step9: Define the starting and ending points of a cross-section
Step10: Initialise a Trajectory with list of dictionaries pnts and set the number of sample positions to use along the trajectory to 100.
Step11: Now the linearly spaced points of the trajectory are stored in its attribute sampled_points, which is a list of dictionaries. We could have got the same result just by applying the numpy.linspace function, but this way is more convenient. However, we have to rearrange this data structure so that it can be fed to the interpolate() function
Step12: We then can easily dissect our cube of wind speed along the defined trajectory.
Step13: Note that the result is also an iris cube whose dimensional coordinates not used in the interpolation are preserved, as well as all other metadata. Also, the longitude and latitude coordinates moved to the auxiliary coordinate group.
Step14: We can also use the coordinates of the section to mark its location on a map.
Step15: And then, we create a figure with two subplots
|
12,622
|
<ASSISTANT_TASK:>
Python Code:
#The ibmseti package contains some useful tools to faciliate reading the data.
#The `ibmseti` package version 1.0.5 works on Python 2.7.
# !pip install --user ibmseti
#A development version runs on Python 3.5.
# !pip install --user ibmseti==2.0.0.dev5
# If running on DSX, YOU WILL NEED TO RESTART YOUR SPARK KERNEL to use a newly installed Python Package.
# Click Kernel -> Restart above!
import ibmseti
import os
import zipfile
!ls my_data_folder/basic4
mydatafolder = 'my_data_folder' # assumed local data folder, matching the listing above
zz = zipfile.ZipFile(mydatafolder + '/' + 'basic4.zip')
basic4list = zz.namelist()
firstfile = basic4list[0]
print firstfile
import ibmseti
aca = ibmseti.compamp.SimCompamp(zz.open(firstfile).read()) # ZipFile.open only accepts mode 'r'
# This data file is classified as a 'squiggle'
aca.header()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
## ibmseti.compamp.SimCompamp has a method to calculate the spectrogram for you (without any signal processing applied to the time-series data)
spectrogram = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
complex_data = aca.complex_data()
#complex valued time-series
complex_data
complex_data = complex_data.reshape(32, 6144)
complex_data
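# Layout note (a sketch): the 196608 complex samples are split into 32 rows of
# 6144 samples; an FFT of each row then yields a spectrogram with 32 time bins
# and 6144 frequency bins, matching get_spectrogram() above.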
#Apply a Hanning Window
complex_data = complex_data * np.hanning(complex_data.shape[1])
complex_data
# Build Spectogram & Plot
cpfft = np.fft.fftshift( np.fft.fft(complex_data), 1)
spectrogram = np.abs(cpfft)**2
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: No Spark Here
Step2: Assume you have the data in a local folder
Step3: Use ibmseti for convenience
Step4: The Goal
Step5: 2. Build the spectrogram yourself
|
12,623
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from urllib2 import Request, urlopen, URLError
from lxml import html
import time
from netCDF4 import Dataset
import datetime
import calendar
from collections import OrderedDict
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import HoverTool
from bokeh.models import LinearAxis, Range1d, CustomJS
from bokeh.models.widgets import Panel, Tabs
from bokeh.io import output_notebook, show, output_file, vplot, hplot
import bokeh
#output_notebook()
def get_data_array(data_array):
if type(data_array.__array__()) is np.ma.masked_array:
return data_array.__array__().data
else:
return data_array.__array__()
def get_qc_variable_name(variable):
try:
qc_variable_name = variable.ancillary_variables
except AttributeError:
# print "No QC variable found for " + variable.name
qc_variable_name = None
return qc_variable_name
def get_pandas_timestamp_series(datetime_array):
out = pd.Series(np.zeros(len(datetime_array)))
counter = 0
for i in datetime_array:
out[counter] = pd.tslib.Timestamp(i)
counter += 1
return out
def days_to_seconds(days):
return int(days) * 24 * 60 * 60
def get_str_time(x): return str(x)
def totimestamp(dt, epoch=datetime.datetime(1970,1,1)):
td = dt - epoch
# return td.total_seconds()
return (td.microseconds + (td.seconds + td.days * 86400) * 10**6) / 10**6
def get_mooring_stations(url):
name_list = []
end_URLBuilder = []
req = Request(url)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
URLBuilder = []
tree = html.fromstring(response.read())
link_path = tree.xpath('//a')
for x in range(1, len(link_path)):
URLBuilder.append(link_path[x].values())
URLLister = []
for n in range(0, len(URLBuilder) - 4):
string = str(URLBuilder[n])
idx = string.find("/")
url = "http://thredds.socib.es/thredds/catalog/mooring/weather_station/" + URLBuilder[n][0][0:idx - 1] + "/L1/catalog.html"
name = URLBuilder[n][0][0:idx - 2]
req = Request(url)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
URLLister.append(url)
name_list.append(name)
for m in URLLister:
req = Request(m)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
tree = html.fromstring(response.read())
link_path = tree.xpath('//a')
for x in range(1, len(link_path)):
string = str(link_path[x].values())
idx = string.find("=")
end_URLBuilder.append("http://thredds.socib.es/thredds/dodsC/" + str(
link_path[x].values()[0][idx - 1:len(string)]))
break
return name_list, end_URLBuilder
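# Usage sketch: the function returns two parallel lists, e.g.
# names, links = get_mooring_stations('http://thredds.socib.es/thredds/catalog/mooring/weather_station/catalog.html')
# where links[i] is the OPeNDAP URL of the station named names[i].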
def draw_data(links, desired_start_time, station_names):
global VARIABLES_OF_INTEREST
counter = 0
output_stations = []
for station in links:
root = Dataset(station)
time = get_data_array(root.variables["time"])
idx = time >= desired_start_time
if not np.any(idx):
counter += 1
continue
variables = root.get_variables_by_attributes(standard_name=lambda n: n in VARIABLES_OF_INTEREST)
time = time[idx]
subplot = []
variable_names = []
for v in variables:
try:
qc_data = get_data_array(root.variables[get_qc_variable_name(v)])
qc_data = qc_data[idx]
bad_idx = get_data_array(qc_data) != 1
except KeyError:
print "No QC found for " + v.name
v_name = v.name
variable_names.append(v_name)
v = get_data_array(v)
v = v[idx]
conv_time = get_pandas_timestamp_series([datetime.datetime.fromtimestamp(ts) for ts in time])
subplot.append(get_bokeh_grid_figure(v, qc_data, conv_time, station_names[counter]))
sub_counter = 0
my_tabs = []
for sp in subplot:
my_tabs.append(Panel(child=sp, title=variable_names[sub_counter]))
sub_counter += 1
p = Tabs(tabs=my_tabs)
output_stations.append(p)
counter += 1
amount_stations = len(output_stations)
rest = amount_stations % 2
verticals = []
if amount_stations >= 2:
verticals.append(hplot(output_stations[0], output_stations[1]))
elif amount_stations == 1:
verticals.append(hplot(output_stations[0]))
else:
print("No stations to plot (PerformQC.draw_bokeh()).")
return 1
for i in range(1, int(amount_stations/2)):
verticals.append(hplot(output_stations[i*2], output_stations[i*2+1]))
if rest > 0:
verticals.append(output_stations[-1])
show(vplot(*verticals))
def get_bokeh_grid_figure(data, qc, converted_time, variable_name):
time_strings = map(get_str_time, converted_time)
hover = HoverTool(names=["data"])
fig = figure(width=800, plot_height=300, title=variable_name, tools=["pan, box_zoom, xwheel_zoom, save, reset, resize", hover], x_axis_type="datetime")
source = ColumnDataSource(
data=dict(
time=time_strings,
data=data,
qc=qc
)
)
# data line
fig.line(converted_time, data, color="navy", alpha=0.5, name="data", source=source)
# data points
fig.square(converted_time, data, color="navy", alpha=0.5)
fig.extra_y_ranges = {"foo": Range1d(start=0, end=10)}
fig.add_layout(LinearAxis(y_range_name="foo"), 'right')
fig.line(converted_time, qc, color="green", alpha=0.5, y_range_name="foo")
jscode = """
range.set('start', parseInt(%s));
range.set('end', parseInt(%s));
"""
fig.extra_y_ranges['foo'].callback = CustomJS(
args=dict(range=fig.extra_y_ranges['foo']),
code=jscode % (fig.extra_y_ranges['foo'].start,
fig.extra_y_ranges['foo'].end)
)
pan_tool = fig.select(dict(type=bokeh.models.PanTool))
pan_tool.dimensions = ["width"]
hover = fig.select(dict(type=HoverTool))
hover.tooltips = OrderedDict([
('time', '@time'),
('value', '@data{0.0}'),
('qc', '@qc')
])
# check for ranges, if they are nan
if (np.isnan(np.nanmin(data)) & np.isnan(np.nanmax(data))) or (np.nanmin(data) == np.nanmax(data)):
bottom_y_range = 0
top_y_range = 10
else:
# add a 10% buffer to the max ranges
temp_min = np.nanmin(data)
temp_max = np.nanmax(data)
temp_diff = abs(temp_max-temp_min)
temp_thresh = round(temp_diff*0.1, 3)
bottom_y_range = temp_min - temp_thresh
top_y_range = temp_max + temp_thresh
fig.y_range = Range1d(bottom_y_range, top_y_range)
translate_time = converted_time.apply(lambda x: x.to_pydatetime())
converted_time_backward = map(totimestamp, translate_time)
source = ColumnDataSource({'x': converted_time_backward, 'y': data})
jscode = """
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
}
var data = source.get('data');
var start = yrange.get('start');
var end = yrange.get('end');
var time_start = xrange.get('start')/1000;
var time_end = xrange.get('end')/1000;
var pre_max_old = end;
var pre_min_old = start;
var time = data['x'];
var pre = data['y'];
t_idx_start = time.filter(function(st){return st>=time_start})[0];
t_idx_start = time.indexOf(t_idx_start);
t_idx_end = time.filter(function(st){return st>=time_end})[0];
t_idx_end = time.indexOf(t_idx_end);
var pre_interval = pre.slice(t_idx_start, t_idx_end);
pre_interval = pre_interval.filter(function(st){return !isNaN(st)});
var pre_max = Math.max.apply(null, pre_interval);
var pre_min = Math.min.apply(null, pre_interval);
var ten_percent = (pre_max-pre_min)*0.1;
pre_max = pre_max + ten_percent;
pre_min = pre_min - ten_percent;
if((!isNumeric(pre_max)) || (!isNumeric(pre_min))) {
pre_max = pre_max_old;
pre_min = pre_min_old;
}
yrange.set('start', pre_min);
yrange.set('end', pre_max);
console.log(yrange.get('end'))
source.trigger('change');
"""
fig.y_range.callback = CustomJS(
args=dict(source=source, yrange=fig.y_range, xrange=fig.x_range), code=jscode)
fig.x_range.callback = CustomJS(
args=dict(source=source, yrange=fig.y_range, xrange=fig.x_range), code=jscode)
return fig
VARIABLES_OF_INTEREST = [
"sea_water_temperature",
"air_temperature",
"sea_surface_wave_from_direction",
"sea_surface_wave_significant_height",
"wind_speed",
"wind_from_direction",
"wind_speed_of_gust",
"water_surface_height_above_reference_datum",
"air_pressure",
"sea_water_speed",
"direction_of_sea_water_velocity",
"sea_water_salinity",
"relative_humidity"]
station_names, station_links = get_mooring_stations('http://thredds.socib.es/thredds/catalog/mooring/weather_station/catalog.html')
# get latest x days
days = 2
html_file = 'bokeh_latest_data.html'
seconds = days_to_seconds(days)
dt = datetime.datetime.now()
desired_start_time = calendar.timegm(dt.utctimetuple()) - seconds
output_file(html_file)
draw_data(station_links, desired_start_time, station_names)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you want the output to be shown within the Jupyter notebook, this line must be uncommented. However, since the generated HTML file will be opened in a new window, this is not really necessary.
Step2: Define data sources
Step3: Note that these scripts differ from the socib mooring station report generation tool. Here, we use simple web scraping of the socib thredds server.
Step6: Here, we define the bokeh plotting parameters. Also, we create a JavaScript callback to automatically adjust the y-axis to the current zoom extent.
Step7: Also, we have to define the variables we want to plot. In this case, we just used the "List of important parameters" from the socib DataDiscovery service and added the relative humidity to it (since we will plot weather stations here).
Step8: Get latest data
|
12,624
|
<ASSISTANT_TASK:>
Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
########################
# find impulse response of an RC filter
########################
def get_rc_ir(K, n_up, t_symbol, r):
'''
Determines coefficients of an RC filter
Formula out of: K.-D. Kammeyer, Nachrichtenübertragung
At poles, l'Hospital was used
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter coefficients
'''
# check that IR length is odd
assert K % 2 == 1, 'Length of the impulse response should be an odd number'
# map zero r to close-to-zero
if r == 0:
r = 1e-32
# initialize output length and sample time
rc = np.zeros( K )
t_sample = t_symbol / n_up
# time indices and sampled time
k_steps = np.arange( -(K-1) / 2.0, (K-1) / 2.0 + 1 )
t_steps = k_steps * t_sample
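# Note: k below runs over negative and positive offsets; numpy's negative
# indexing places those taps at the end of rc, leaving the finished pulse
# centred at index (K-1)/2.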
for k in k_steps.astype(int):
if t_steps[k] == 0:
rc[ k ] = 1. / t_symbol
elif np.abs( t_steps[k] ) == t_symbol / ( 2.0 * r ):
rc[ k ] = r / ( 2.0 * t_symbol ) * np.sin( np.pi / ( 2.0 * r ) )
else:
rc[ k ] = np.sin( np.pi * t_steps[k] / t_symbol ) / np.pi / t_steps[k] \
* np.cos( r * np.pi * t_steps[k] / t_symbol ) \
/ ( 1.0 - ( 2.0 * r * t_steps[k] / t_symbol )**2 )
return rc
# constellation points of modulation
M = 4
constellation_points = [ np.exp( 1j * 2 * np.pi * m / M + 1j * np.pi / M ) for m in range( M ) ]
# symbol time and number of symbols
t_symb = 1.0
n_symb = 100
# get filter impulse response
r = 0.33
n_up = 16 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus-minus in both directions)
K_filt = 2 * syms_per_filt * n_up + 1 # length of the fir filter
# generate random vector and modulate the specified modulation scheme
data = np.random.randint( M, size = n_symb )
s = [ constellation_points[ d ] for d in data ]
# prepare sequence to be filtered
s_up = np.zeros( n_symb * n_up, dtype=complex )
s_up[ : : n_up ] = s
# get RC pulse
rc = get_rc_ir( n_up * syms_per_filt * 2 + 1, n_up, t_symb, r )
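# Sanity check (a sketch): an RC pulse is zero at every non-zero integer symbol
# time, so sampling rc at the symbol rate should leave a single non-zero tap.
assert np.allclose(np.delete(rc[::n_up], syms_per_filt), 0, atol=1e-9)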
# pulse-shaping
s_rc = np.convolve( rc, s_up )
# extracting real and imaginary part
s_rc_I = np.real( s_rc )
s_rc_Q = np.imag( s_rc )
# generating OQPSK by relatively shifting I and Q component
s_oqpsk = s_rc_I[ : - n_up//2 ] + 1j * s_rc_Q[ n_up//2 : ]
# plotting
plt.subplot(121)
plt.plot( np.real( s_rc[syms_per_filt*n_up:-syms_per_filt*n_up] ), np.imag( s_rc[syms_per_filt*n_up:-syms_per_filt*n_up] ), linewidth=2.0, c=(0,0.59,0.51) )
plt.grid( True )
plt.xlabel( '$\mathrm{Re}\\{s(t)\\}$' )
plt.ylabel(' $\mathrm{Im}\\{s(t)\\}$' )
plt.gca().set_aspect('equal', adjustable='box')
plt.title( 'QPSK signal' )
plt.subplot(122)
plt.plot( np.real( s_oqpsk[syms_per_filt*n_up:-syms_per_filt*n_up] ), np.imag( s_oqpsk[syms_per_filt*n_up:-syms_per_filt*n_up] ), linewidth=2.0, c=(0,0.59,0.51) )
plt.grid( True )
plt.xlabel( '$\mathrm{Re}\\{s(t)\\}$' )
plt.ylabel(' $\mathrm{Im}\\{s(t)\\}$' )
plt.gca().set_aspect('equal', adjustable='box')
plt.title( 'OQPSK signal' )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function for determining the impulse response of an RC filter
Step2: Parameters
Step3: Get QPSK and OQPSK signal
Step4: Plotting
|
12,625
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import nibabel
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
import mne
from mne.transforms import apply_trans
from mne.io.constants import FIFF
data_path = mne.datasets.sample.data_path()
subjects_dir = os.path.join(data_path, 'subjects')
subject = 'sample'
t1_fname = os.path.join(subjects_dir, subject, 'mri', 'T1.mgz')
t1 = nibabel.load(t1_fname)
t1.orthoview()
data = np.asarray(t1.dataobj)
print(data.shape)
print(t1.affine)
vox = np.array([122, 119, 102])
xyz_ras = apply_trans(t1.affine, vox)
print('Our voxel has real-world coordinates {}, {}, {} (mm)'
.format(*np.round(xyz_ras, 3)))
ras_coords_mm = np.array([1, -17, -18])
inv_affine = np.linalg.inv(t1.affine)
i_, j_, k_ = np.round(apply_trans(inv_affine, ras_coords_mm)).astype(int)
print('Our real-world coordinates correspond to voxel ({}, {}, {})'
.format(i_, j_, k_))
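# Round-trip check (a sketch): pushing the voxel back through the affine
# should recover the scanner RAS coordinates we started from, up to rounding.
print(np.round(apply_trans(t1.affine, [i_, j_, k_]), 1))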
def imshow_mri(data, img, vox, xyz, suptitle):
"""Show an MRI slice with a voxel annotated."""
i, j, k = vox
fig, ax = plt.subplots(1, figsize=(6, 6))
codes = nibabel.orientations.aff2axcodes(img.affine)
# Figure out the title based on the code of this axis
ori_slice = dict(P='Coronal', A='Coronal',
I='Axial', S='Axial',
L='Sagittal', R='Sagittal')
ori_names = dict(P='posterior', A='anterior',
I='inferior', S='superior',
L='left', R='right')
title = ori_slice[codes[0]]
ax.imshow(data[i], vmin=10, vmax=120, cmap='gray', origin='lower')
ax.axvline(k, color='y')
ax.axhline(j, color='y')
for kind, coords in xyz.items():
annotation = ('{}: {}, {}, {} mm'
.format(kind, *np.round(coords).astype(int)))
text = ax.text(k, j, annotation, va='baseline', ha='right',
color=(1, 1, 0.7))
text.set_path_effects([
path_effects.Stroke(linewidth=2, foreground='black'),
path_effects.Normal()])
# reorient view so that RAS is always rightward and upward
x_order = -1 if codes[2] in 'LIP' else 1
y_order = -1 if codes[1] in 'LIP' else 1
ax.set(xlim=[0, data.shape[2] - 1][::x_order],
ylim=[0, data.shape[1] - 1][::y_order],
xlabel=f'k ({ori_names[codes[2]]}+)',
ylabel=f'j ({ori_names[codes[1]]}+)',
title=f'{title} view: i={i} ({ori_names[codes[0]]}+)')
fig.suptitle(suptitle)
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
return fig
imshow_mri(data, t1, vox, {'Scanner RAS': xyz_ras}, 'MRI slice')
Torig = t1.header.get_vox2ras_tkr()
print(t1.affine)
print(Torig)
xyz_mri = apply_trans(Torig, vox)
imshow_mri(data, t1, vox, dict(MRI=xyz_mri), 'MRI slice')
fiducials = mne.coreg.get_mni_fiducials(subject, subjects_dir=subjects_dir)
nasion_mri = [d for d in fiducials if d['ident'] == FIFF.FIFFV_POINT_NASION][0]
print(nasion_mri) # note it's in Freesurfer MRI coords
nasion_mri = nasion_mri['r'] * 1000 # meters → millimeters
nasion_vox = np.round(
apply_trans(np.linalg.inv(Torig), nasion_mri)).astype(int)
imshow_mri(data, t1, nasion_vox, dict(MRI=nasion_mri),
'Nasion estimated from MRI transform')
info = mne.io.read_info(
os.path.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
nasion_head = [d for d in info['dig'] if
d['kind'] == FIFF.FIFFV_POINT_CARDINAL and
d['ident'] == FIFF.FIFFV_POINT_NASION][0]
print(nasion_head) # note it's in "head" coordinates
trans = mne.read_trans(
os.path.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif'))
# first we transform from head to MRI, and *then* convert to millimeters
nasion_dig_mri = apply_trans(trans, nasion_head['r']) * 1000
# ...then we can use Torig to convert MRI to voxels:
nasion_dig_vox = np.round(
apply_trans(np.linalg.inv(Torig), nasion_dig_mri)).astype(int)
imshow_mri(data, t1, nasion_dig_vox, dict(MRI=nasion_dig_mri),
'Nasion transformed from digitization')
fname = os.path.join(subjects_dir, subject, 'surf', 'rh.white')
rr_mm, tris = mne.read_surface(fname)
print(f'rr_mm.shape == {rr_mm.shape}')
print(f'tris.shape == {tris.shape}')
print(f'rr_mm.max() = {rr_mm.max()}') # just to show that we are in mm
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(600, 600), bgcolor='w', scene=False)
gray = (0.5, 0.5, 0.5)
renderer.mesh(*rr_mm.T, triangles=tris, color=gray)
view_kwargs = dict(elevation=90, azimuth=0)
mne.viz.set_3d_view(
figure=renderer.figure, distance=350, focalpoint=(0., 0., 40.),
**view_kwargs)
renderer.show()
rr_vox = apply_trans(np.linalg.inv(Torig), rr_mm)
fig = imshow_mri(data, t1, vox, {'Scanner RAS': xyz_ras}, 'MRI slice')
# Based on how imshow_mri works, the "X" here is the last dim of the MRI vol,
# the "Y" is the middle dim, and the "Z" is the first dim, so now that our
# points are in the correct coordinate frame, we need to ask matplotlib to
# do a tricontour slice like:
fig.axes[0].tricontour(rr_vox[:, 2], rr_vox[:, 1], tris, rr_vox[:, 0],
levels=[vox[0]], colors='r', linewidths=1.0,
zorder=1)
renderer_kwargs = dict(bgcolor='w', smooth_shading=False)
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(800, 400), scene=False, **renderer_kwargs)
curvs = [
(mne.surface.read_curvature(os.path.join(
subjects_dir, subj, 'surf', 'rh.curv'),
binary=False) > 0).astype(float)
for subj in ('sample', 'fsaverage') for _ in range(2)]
fnames = [os.path.join(subjects_dir, subj, 'surf', surf)
for subj in ('sample', 'fsaverage')
for surf in ('rh.white', 'rh.sphere')]
y_shifts = [-450, -150, 450, 150]
z_shifts = [-40, 0, -30, 0]
for name, y_shift, z_shift, curv in zip(fnames, y_shifts, z_shifts, curvs):
this_rr, this_tri = mne.read_surface(name)
this_rr += [0, y_shift, z_shift]
renderer.mesh(*this_rr.T, triangles=this_tri, color=None, scalars=curv,
colormap='copper_r', vmin=-0.2, vmax=1.2)
zero = [0., 0., 0.]
width = 50.
y = np.sort(y_shifts)
y = (y[1:] + y[:-1]) / 2. - width / 2.
renderer.quiver3d(zero, y, zero,
zero, [1] * 3, zero, 'k', width, 'arrow')
view_kwargs['focalpoint'] = (0., 0., 0.)
mne.viz.set_3d_view(figure=renderer.figure, distance=1000, **view_kwargs)
renderer.show()
cyan = '#66CCEE'
purple = '#AA3377'
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(800, 800), scene=False, **renderer_kwargs)
fnames = [os.path.join(subjects_dir, subj, 'surf', 'rh.sphere')
for subj in ('sample', 'fsaverage')]
colors = [cyan, purple]
for name, color in zip(fnames, colors):
this_rr, this_tri = mne.read_surface(name)
renderer.mesh(*this_rr.T, triangles=this_tri, color=color,
representation='wireframe')
mne.viz.set_3d_view(figure=renderer.figure, distance=20, **view_kwargs)
renderer.show()
src = mne.read_source_spaces(os.path.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
print(src)
blue = '#4477AA'
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(800, 800), scene=False, **renderer_kwargs)
rr_sph, _ = mne.read_surface(fnames[0])
for tris, color in [(src[1]['tris'], cyan), (src[1]['use_tris'], blue)]:
renderer.mesh(*rr_sph.T, triangles=tris, color=color,
representation='wireframe')
mne.viz.set_3d_view(figure=renderer.figure, distance=20, **view_kwargs)
renderer.show()
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(800, 400), scene=False, **renderer_kwargs)
y_shifts = [-125, 125]
tris = [src[1]['tris'], src[1]['use_tris']]
for y_shift, tris in zip(y_shifts, tris):
this_rr = src[1]['rr'] * 1000. + [0, y_shift, -40]
renderer.mesh(*this_rr.T, triangles=tris, color=None, scalars=curvs[0],
colormap='copper_r', vmin=-0.2, vmax=1.2)
renderer.quiver3d([0], [-width / 2.], [0], [0], [1], [0], 'k', width, 'arrow')
mne.viz.set_3d_view(figure=renderer.figure, distance=400, **view_kwargs)
renderer.show()
brain = mne.viz.Brain('sample', 'lh', 'white', subjects_dir=subjects_dir,
background='w')
xyz = np.array([[-55, -10, 35]])
brain.add_foci(xyz, hemi='lh', color='k')
brain.show_view('lat')
mri_mni_trans = mne.read_talxfm(subject, subjects_dir)
print(mri_mni_trans)
xyz_mni = apply_trans(mri_mni_trans, xyz / 1000.) * 1000.
print(np.round(xyz_mni, 1))
brain = mne.viz.Brain('fsaverage', 'lh', 'white', subjects_dir=subjects_dir,
background='w')
brain.add_foci(xyz_mni, hemi='lh', color='k')
brain.show_view('lat')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MRI coordinate frames
Step2: Notice that the axes in the
Step3: These data are voxel intensity values. Here they are unsigned integers in the
Step4: If you have a point (x, y, z) in scanner-native RAS space and you want
Step6: Let's write a short function to visualize where our voxel lies in an
Step7: Notice that the axis scales (i, j, and k) are still in voxels
Step8: Knowing these relationships and being mindful about transformations, we
Step9: When we print the nasion, it displays as a DigPoint and shows its
Step10: We can also take the digitization point from the MEG data, which is in the
Step11: .. sidebar
Step12: Using FreeSurfer's surface reconstructions
Step13: Let's actually plot it
Step14: We can also plot the mesh on top of an MRI slice. The mesh surfaces are
Step15: This is the method used by
Step16: Let's look a bit more closely at the spherical alignment by overlaying the
Step17: You can see that the fsaverage (purple) mesh is uniformly spaced, and the
Step18: We can also then look at how these two meshes compare by plotting the
Step19: Warning: some source space vertices can be removed during forward computation.
Step20: We can take this point and transform it to MNI space
Step21: And because fsaverage is special in that it's already in MNI space
|
12,626
|
<ASSISTANT_TASK:>
Python Code:
import os
path_to_file = os.path.join(os.pardir, 'data', 'new.nc')
from __future__ import division, print_function # py2to3 compatibility
import netCDF4 as nc
import numpy as np
print('NetCDF package version: {}'.format(nc.__version__))
try:
ncfile.close()
except:
pass
# another way of checking this:
# if ncfile.isopen():
# ncfile.close()
ncfile = nc.Dataset(path_to_file, mode='w',
format='NETCDF4_CLASSIC')
print(ncfile)
nlat = 73
nlon = 144
lat_dim = ncfile.createDimension('lat', nlat) # latitude axis
lon_dim = ncfile.createDimension('lon', nlon) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis
for dim in ncfile.dimensions.items():
print(dim)
ncfile.author = 'UEA Python Group'
ncfile.title='My model data'
print(ncfile)
ncfile.some_unnecessary_attribute = '123456'
ncfile.delncattr('some_unnecessary_attribute')
# Define two variables with the same names as dimensions,
# a conventional way to define "coordinate variables".
lat = ncfile.createVariable('lat', np.float32, ('lat',))
lat.units = 'degrees_north'
lat.long_name = 'latitude'
#
lon = ncfile.createVariable('lon', np.float32, ('lon',))
lon.units = 'degrees_east'
lon.long_name = 'longitude'
#
time = ncfile.createVariable('time', np.float64, ('time',))
time.units = 'hours since 1800-01-01'
time.long_name = 'time'
temp = ncfile.createVariable('temp', np.float64,
('time', 'lat', 'lon')) # note: unlimited dimension is leftmost
temp.units = 'K' # degrees Kelvin
temp.standard_name = 'air_temperature' # this is a CF standard name
print(temp)
print("Some pre-defined attributes for variable temp:\n")
print("temp.dimensions:", temp.dimensions)
print("temp.shape:", temp.shape)
print("temp.dtype:", temp.dtype)
print("temp.ndim:", temp.ndim)
# Write latitudes, longitudes.
# Note: the ":" is necessary in these "write" statements
lat[:] = -90. + (180 / (nlat - 1)) * np.arange(nlat) # south pole to north pole, 2.5 degree spacing
lon[:] = (360 / nlon) * np.arange(nlon) # Greenwich meridian eastward, 2.5 degree spacing
ntimes = 5 # 5 Time slices to begin with
# create a 3D array of random numbers
data_arr = np.random.uniform(low=280, high=330, size=(ntimes, nlat, nlon))
# Write the data. This writes the whole 3D netCDF variable all at once.
temp[:] = data_arr # Appends data along unlimited dimension
# create a 2D array of random numbers
data_slice = np.random.uniform(low=270, high=290, size=(nlat, nlon))
temp[5, :, :] = data_slice # Appends the 6th time slice
print(" Wrote more data, temp.shape is now ", temp.shape)
print(time)
times_arr = time[:]
print(type(times_arr), times_arr)
import datetime as dt
from netCDF4 import date2num, num2date
# 1st 6 days of October.
dates = [dt.datetime(2016, 10, 1, 0),
dt.datetime(2016, 10, 2, 0),
dt.datetime(2016, 10, 3, 0),
dt.datetime(2016, 10, 4, 0),
dt.datetime(2016, 10, 5, 0),
dt.datetime(2016, 10, 6, 0)]
print('\n'.join([str(i) for i in dates]))
times = date2num(dates, time.units)
print(times, time.units) # numeric values
time[:] = times
# read time data back, convert to datetime instances, check values.
print(num2date(time[:], time.units))
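# The same conversion also works for a single value (a sketch):
print(num2date(time[0], time.units)) # -> 2016-10-01 00:00:00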
# first print the Dataset object to see what we've got
print(ncfile)
# close the Dataset.
ncfile.close()
!ncdump -h ../data/new.nc
ncfile = nc.Dataset(path_to_file, 'a')
temp_ave = ncfile.createVariable('zonal_mean_temp',
np.float64, ('time', 'lat'))
temp_ave.units = 'K'
temp_ave.standard_name = 'zonally_averaged_air_temperature'
print(temp_ave)
temp = ncfile.variables['temp'][:]
print(temp.shape)
ave_arr = np.mean(temp[:], axis=2)
print(ave_arr.shape)
temp_ave[:] = ave_arr # again, note the square brackets!
ncfile.close()
import matplotlib.pyplot as plt
%matplotlib inline
ncfile = nc.Dataset(path_to_file, 'r')
# get_variables_by_attributes() is available only in newer versions of the
# netCDF4 library, hence the guard; print the matches when it is available.
try:
print(ncfile.get_variables_by_attributes(units='K'))
print(ncfile.get_variables_by_attributes(ndim=1))
except AttributeError:
pass
t = ncfile.variables['zonal_mean_temp']
lats = ncfile.variables['lat']
times = ncfile.variables['time']
dt = num2date(times[:], times.units)
fig, ax = plt.subplots(figsize=(10, 6))
p = ax.contourf(lats[:], dt, t[:], cmap='inferno')
cb = fig.colorbar(p, ax=ax)
ax.tick_params(labelsize=20)
ax.set_xlabel(lats.long_name, fontsize=22)
ax.set_ylabel(times.long_name, fontsize=22)
ax.set_title('{} ({})'.format(t.standard_name.replace('_', ' '), t.units), fontsize=20)
print('Here is the plot')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: mode='r' is the default.
Step2: Just to be safe, make sure dataset is not already open
Step3: Creating dimensions
Step4: Creating attributes
Step5: You can also easily delete a netCDF attribute of a Dataset by using the delncattr method
Step6: Creating variables
Step7: Define a 3D variable to hold the data
Step8: Pre-defined variable attributes (read only)
Step9: Writing data
Step10: You can just treat a netCDF Variable object like a numpy array and assign values to it.
Step11: Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable temp, but the data are missing.
Step12: Dashes indicate masked values (where data have not yet been written).
Step13: Closing a netCDF file
Step14: Check again using ncdump utility
Step15: Appending data to NetCDF dataset
Step16: Create an averaged array using the existing "air_temperature" field
Step17: Write the data
Step18: Open the resulting dataset and plot some data
Step19: Open the file for reading
Step20: First, try these handy methods of extracting variables
Step21: References
|
12,627
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
cookbook_df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]})
cookbook_df['BBB']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: dictionary-like operations
|
12,628
|
<ASSISTANT_TASK:>
Python Code:
# Here we'll import data processing libraries like Numpy and Tensorflow
import numpy as np
import tensorflow as tf
# Use matplotlib for visualizing the model
from matplotlib import pyplot as plt
# Here we'll show the currently installed version of TensorFlow
print(tf.__version__)
# Creates a constant tensor from a tensor-like object.
x = tf.constant([2, 3, 4])
x
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
# The .assign() method will assign the value to the reference object.
x.assign(45.8)
x
# The .assign_add() method will update the reference object by adding a value to it.
x.assign_add(4)
x
# The .assign_sub() method will update the reference object by subtracting a value from it.
x.assign_sub(3)
x
# Creates a constant tensor from a tensor-like object.
a = tf.constant([5, 3, 8]) # TODO 1a
b = tf.constant([3, -1, 2])
# Using the .add() method components of a tensor will be added.
c = tf.add(a, b)
d = a + b
# Let's output the value of `c` and `d`.
print("c:", c)
print("d:", d)
# Creates a constant tensor from a tensor-like object.
a = tf.constant([5, 3, 8]) # TODO 1b
b = tf.constant([3, -1, 2])
# Using the .multiply() method components of a tensor will be multiplied.
c = tf.multiply(a, b)
d = a * b
# Let's output the value of `c` and `d`.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
# Let's output the value of `b`.
print("b:", b)
# native python list
a_py = [1, 2]
b_py = [3, 4]
# Using the .add() method components of a tensor will be added.
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
# Using the .add() method components of a tensor will be added.
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
# Using the .add() method components of a tensor will be added.
tf.add(a_tf, b_tf)
# Here using the .numpy() method we'll convert a `native TF tensor` to a `NumPy array`.
a_tf.numpy()
# Creates a constant tensor from a tensor-like object.
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
# Let's output the value of `X` and `Y`.
print("X:{}".format(X))
print("Y:{}".format(Y))
# Creates a constant tensor from a tensor-like object.
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
# Let's output the value of `X_test` and `Y_test`.
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
# .numpy().mean() computes the arithmetic mean of the tensor's values as a NumPy scalar.
y_mean = Y.numpy().mean()
# Let's define predict_mean() function.
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
# Let's evaluate the baseline loss on the test set.
errors = (Y_hat - Y_test)**2
loss = tf.reduce_mean(errors)
loss.numpy()
# Let's define loss_mse() function which is taking arguments as coefficients of the model
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
# Let's define compute_gradients() procedure for computing the loss gradients with respect to the model weights:
# TODO 2
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
# Let's output the value of `dw0`.
print("dw0:", dw0.numpy())
# Let's output the value of `dw1`.
print("dw1", dw1.numpy())
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = compute_gradients(X, Y, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X, Y, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
# Here we can compare the test loss for this linear regression to the test loss from the baseline model.
# Its output will always be the mean of the training set:
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(X, Y)
# Let's make_features() procedure.
def make_features(X):
# The tf.ones_like() method will create a tensor of all ones that has the same shape as the input.
f1 = tf.ones_like(X)
f2 = X
# The tf.square() method will compute square of input tensor element-wise.
f3 = tf.square(X)
# The tf.sqrt() method will compute element-wise square root of the input tensor.
f4 = tf.sqrt(X)
# The tf.exp() method will compute exponential of input tensor element-wise.
f5 = tf.exp(X)
# The tf.stack() method will stacks a list of rank-R tensors into one rank-(R+1) tensor.
return tf.stack([f1, f2, f3, f4, f5], axis=1)
# Let's define predict() procedure that will remove dimensions of size 1 from the shape of a tensor.
def predict(X, W):
return tf.squeeze(X @ W, -1)
# Let's define loss_mse() procedure that will evaluate the mean of elements across dimensions of a tensor.
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
# Let's define compute_gradients() procedure for computing the loss gradients.
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, W)
return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = .02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
# The .figure() method will create a new figure, or activate an existing figure.
plt.figure()
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
# The .legend() method will place a legend on the axes.
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Operations on Tensors
Step2: Point-wise operations
Step3: NumPy Interoperability
Step4: Linear Regression
Step5: Let's also create a test dataset to evaluate our models
Step6: Loss Function
Step7: Using mean squared error, our loss is
Step8: This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Step9: Gradient Function
Step10: Training Loop
Step11: This is indeed much better!
|
12,629
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib nbagg
import matplotlib.pyplot as plt
import sys
import matplotlib
import numpy as np
from NuPyCEE import sygma as s
from NuPyCEE import omega as o
from NuPyCEE import stellab
from NuPyCEE import read_yields as ry
table='yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
# Initial metallicity 0
#includes pop3_table='yield_tables/popIII_heger10.txt',
iniZ = 0.0
# Original yields
o_NG = o.omega(galaxy='milky_way', table=table, \
special_timesteps=60, exp_ml=1.0, mass_frac_SSP=0.35, nb_1a_per_m=1.5e-3, DM_evolution=True, sfe=0.04,\
t_sf_z_dep=0.3, mass_loading=1.02, iniZ=iniZ)
# Initial metallicity 0
#includes pop3_table='yield_tables/popIII_heger10.txt',
iniZ = 0.0
#turn on net yield capability
yield_interp='wiersma'
#yield input not net yields
netyields_on=False
#should not matter
wiersmamod=False
Z_trans=-1
# Original yields
o_NG_net = o.omega(galaxy='milky_way', table=table, \
special_timesteps=60, exp_ml=1.0, mass_frac_SSP=0.35, nb_1a_per_m=1.5e-3, DM_evolution=True, sfe=0.04,\
t_sf_z_dep=0.3, mass_loading=1.02, iniZ=iniZ,yield_interp=yield_interp,netyields_on=netyields_on,\
Z_trans=0.0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Default setup - total yields
Step2: Setup with total yields as input but net yields are calculated in the code and then applied
|
12,630
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
data = pd.read_csv('data.csv')
data.shape
X = data.drop('Grant.Status', 1)
y = data['Grant.Status']
data.head()
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
data.dropna().shape
def calculate_means(numeric_data):
means = np.zeros(numeric_data.shape[1])
for j in range(numeric_data.shape[1]):
to_sum = numeric_data.iloc[:,j].copy() # copy so the caller's DataFrame is not scaled in place
indices = np.nonzero(~numeric_data.iloc[:,j].isnull())[0]
correction = np.amax(to_sum[indices])
to_sum /= correction
for i in indices:
means[j] += to_sum[i]
means[j] /= indices.size
means[j] *= correction
return pd.Series(means, numeric_data.columns)
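# Note (a sketch): the max-scaling above is only a numerical safeguard and
# cancels out, so the result equals the NaN-skipping column means,
# i.e. X[numeric_cols].mean(axis=0).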
# place your code here
X_real_zeros = X[numeric_cols].replace(np.nan, 0)
X_real_mean = X[numeric_cols].fillna(calculate_means(X[numeric_cols]))
X_cat = X[categorical_cols].replace(np.nan, 'NA').astype(str)
from sklearn.linear_model import LogisticRegression as LR
from sklearn.feature_extraction import DictVectorizer as DV
categorial_data = pd.DataFrame({'sex': ['male', 'female', 'male', 'female'],
'nationality': ['American', 'European', 'Asian', 'European']})
print('Original data:\n')
print(categorial_data)
encoder = DV(sparse = False)
encoded_data = encoder.fit_transform(categorial_data.T.to_dict().values())
print('\nEncoded data:\n')
print(encoded_data)
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
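# Sanity check (a sketch): one-hot encoding keeps the number of rows and
# expands the columns to one indicator per (feature, category) pair.
print(X_cat.shape, '->', X_cat_oh.shape)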
from sklearn.cross_validation import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores = [[item[0]['C'],
item[1],
(np.sum((item[2]-item[1])**2)/(item[2].size-1))**0.5] for item in optimizer.grid_scores_]
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
auc = (auc_1 + auc_2)/2
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(str(auc))
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
# place your code here
# step 1
X_train_real_zeros_cat_oh = np.hstack((X_train_real_zeros, X_train_cat_oh))
X_train_real_mean_cat_oh = np.hstack((X_train_real_mean , X_train_cat_oh))
# step 2
def train_linreg(X_train, y_train, param_grid, cv):
estimator = GridSearchCV(LogisticRegression(), param_grid, cv=cv)
estimator.fit(X_train, y_train)
return estimator
optimizer_zeros_cat = train_linreg(X_train=X_train_real_zeros_cat_oh,
y_train=y_train,
param_grid=param_grid,
cv=cv)
optimizer_mean_cat = train_linreg(X_train=X_train_real_mean_cat_oh,
y_train=y_train,
param_grid=param_grid,
cv=cv)
# step 3
plot_scores(optimizer_zeros_cat)
plot_scores(optimizer_mean_cat)
# step 4
X_test_real_zeros_cat_oh = np.hstack((X_test_real_zeros, X_test_cat_oh))
X_test_real_mean_cat_oh = np.hstack((X_test_real_mean , X_test_cat_oh))
roc_auc_zeros_cat = roc_auc_score(y_true=y_test,
y_score=optimizer_zeros_cat.predict_proba(X_test_real_zeros_cat_oh)[:,1])
roc_auc_mean_cat = roc_auc_score(y_true=y_test,
y_score=optimizer_mean_cat.predict_proba(X_test_real_mean_cat_oh)[:,1])
# step 5
write_answer_1(roc_auc_zeros_cat, roc_auc_mean_cat)
print('roc_auc_zeros = ', roc_auc_zeros_cat,
'roc_auc_mean = ', roc_auc_mean_cat)
from pandas.tools.plotting import scatter_matrix
data_numeric = pd.DataFrame(X_train_real_zeros, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
from sklearn.preprocessing import StandardScaler
# place your code here
StSc = StandardScaler()
X_train_real_scaled = StSc.fit_transform(X_train_real_zeros)
X_test_real_scaled = StSc.transform(X_test_real_zeros)
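# Quick sanity check (illustrative, not part of the assignment): after
# StandardScaler, each training column should have mean ~0 and std ~1.
print(np.allclose(X_train_real_scaled.mean(axis=0), 0),
      np.allclose(X_train_real_scaled.std(axis=0), 1))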
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
# place your code here
X_train_real_scaled_cat_oh = np.hstack((X_train_real_scaled, X_train_cat_oh))
estimator = LogisticRegression()
optimizer_scaled_cat = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_scaled_cat.fit(X_train_real_scaled_cat_oh, y_train)
plot_scores(optimizer_scaled_cat)
X_test_real_scaled_cat_oh = np.hstack((X_test_real_scaled, X_test_cat_oh))
auc_scaled_cat = roc_auc_score(y_true=y_test, y_score=optimizer_scaled_cat.predict_proba(X_test_real_scaled_cat_oh)[:,1])
print('auc_scaled =', auc_scaled_cat)
write_answer_2(auc_scaled_cat)
np.random.seed(0)
# Sample data from the first Gaussian
data_0 = np.random.multivariate_normal([0,0], [[0.5,0],[0,0.5]], size=40)
# ... and from the second
data_1 = np.random.multivariate_normal([0,1], [[0.5,0],[0,0.5]], size=40)
# For training, take 20 objects from the first class and 10 from the second
example_data_train = np.vstack([data_0[:20,:], data_1[:10,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((10))])
# For testing: 20 from the first and 30 from the second
example_data_test = np.vstack([data_0[20:,:], data_1[10:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((30))])
# Define the coordinate grid on which to evaluate the classification regions
xx, yy = np.meshgrid(np.arange(-3, 3, 0.02), np.arange(-3, 3, 0.02))
# Train the regression without class balancing
optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
# Build the regression's predictions over the grid
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
# Compute the AUC
auc_wo_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('Without class weights')
plt.show()
print('AUC: %f'%auc_wo_class_weights)
# For the second regression, pass class_weight='balanced' to LogisticRegression
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_w_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC: %f'%auc_w_class_weights)
print(np.sum(y_train==0))
print(np.sum(y_train==1))
def write_answer_3(auc_1, auc_2):
auc = (auc_1 + auc_2) / 2
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(str(auc))
# place your code here
estimator = LogisticRegression(class_weight='balanced')
optimizer_scaled_cat_b = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_scaled_cat_b.fit(X_train_real_scaled_cat_oh, y_train)
plot_scores(optimizer_scaled_cat_b)
X_test_real_scaled_cat_oh = np.hstack((X_test_real_scaled, X_test_cat_oh))
auc_scaled_cat_b = roc_auc_score(y_true=y_test, y_score=optimizer_scaled_cat_b.predict_proba(X_test_real_scaled_cat_oh)[:,1])
print('auc_scaled_balanced =', auc_scaled_cat_b)
np.random.seed(0)
indices_to_add = np.random.randint(432, size=432)
X_train_to_add = X_train_real_scaled_cat_oh[y_train.as_matrix() == 1,:][indices_to_add,:]
X_train_with_add = np.vstack((X_train_real_scaled_cat_oh, X_train_to_add))
y_train_with_add = np.hstack((y_train, np.ones(432)))
estimator = LogisticRegression()
optimizer_with_add = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_with_add.fit(X_train_with_add, y_train_with_add)
plot_scores(optimizer_with_add)
X_test_real_scaled_cat_oh = np.hstack((X_test_real_scaled, X_test_cat_oh))
auc_with_add = roc_auc_score(y_true=y_test, y_score=optimizer_with_add.predict_proba(X_test_real_scaled_cat_oh)[:,1])
print('auc_with_add =', auc_with_add)
write_answer_3(auc_scaled_cat_b, auc_with_add)
print('AUC ROC for classifier without weighted classes', auc_wo_class_weights)
print('AUC ROC for classifier with weighted classes: ', auc_w_class_weights)
# Split each class evenly between the training and test samples
example_data_train = np.vstack([data_0[:20,:], data_1[:20,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((20))])
example_data_test = np.vstack([data_0[20:,:], data_1[20:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((20))])
# Train the classifier
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_stratified = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC ROC for stratified samples: ', auc_stratified)
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
# place your code here
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0, stratify=y)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0, stratify=y)
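# Sanity check (illustrative, not required by the assignment): with
# stratify=y the class ratio is preserved almost exactly in both splits.
print(y_train.mean(), y_test.mean())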
StSc = StandardScaler()
X_train_zeros_scaled = StSc.fit_transform(X_train_real_zeros)
X_test_zeros_scaled = StSc.transform(X_test_real_zeros)
X_train_zeros_scaled_cat_oh = np.hstack((X_train_zeros_scaled, X_train_cat_oh))
X_test_zeros_scaled_cat_oh = np.hstack((X_test_zeros_scaled, X_test_cat_oh))
estimator = LogisticRegression(class_weight='balanced')
optimizer_balanced = GridSearchCV(estimator, param_grid, cv=cv)
optimizer_balanced.fit(X_train_zeros_scaled_cat_oh, y_train)
plot_scores(optimizer_balanced)
auc_balanced = roc_auc_score(y_true=y_test, y_score=optimizer_balanced.predict_proba(X_test_zeros_scaled_cat_oh)[:,1])
print('auc_balanced =', auc_balanced)
write_answer_4(auc_balanced)
from sklearn.preprocessing import PolynomialFeatures
# Initialize the class that performs the transformation
transform = PolynomialFeatures(2)
# Fit the transformation on the training set, then apply it to the test set
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
# Note the fit_intercept=False parameter
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('With class weights')
plt.show()
print(example_data_train_poly.shape)
transform = PolynomialFeatures(11)
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('Corrected class weights')
plt.show()
print(example_data_train_poly.shape)
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
# place your code here
transform = PolynomialFeatures(2)
X_train_real_zeros_poly = transform.fit_transform(X_train_real_zeros)
X_test_real_zeros_poly = transform.transform(X_test_real_zeros)
StSc = StandardScaler()
X_train_zeros_scaled = StSc.fit_transform(X_train_real_zeros_poly)
X_test_zeros_scaled = StSc.transform(X_test_real_zeros_poly)
X_train_poly = np.hstack((X_train_zeros_scaled, X_train_cat_oh))
X_test_poly = np.hstack((X_test_zeros_scaled, X_test_cat_oh))
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(X_train_poly, y_train)
roc = roc_auc_score(np.array(y_test), optimizer.predict_proba(X_test_poly)[:, 1])
print(roc)
write_answer_5(roc)
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
# place your code here
StSc = StandardScaler()
X_train_real_scaled = StSc.fit_transform(X_train_real_zeros)
X_test_real_scaled = StSc.transform(X_test_real_zeros)
X_train_scaled = np.hstack((X_train_real_scaled, X_train_cat_oh))
estimator = LogisticRegression(class_weight='balanced', penalty='l1')
optimizer_scaled = GridSearchCV(estimator=estimator, param_grid=param_grid, cv=cv)
optimizer_scaled.fit(X_train_scaled, y_train)
plot_scores(optimizer_scaled)
X_test_scaled = np.hstack((X_test_real_scaled, X_test_cat_oh))
auc_scaled_cat = roc_auc_score(y_true=y_test, y_score=optimizer_scaled.predict_proba(X_test_scaled)[:,1])
results = []
for i in range(optimizer_scaled.best_estimator_.coef_.shape[1]):
if optimizer_scaled.best_estimator_.coef_[:, i] == 0 and i < 14:
results.append(i)
print(results)
write_answer_6(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset description
Step2: Extract the target variable Grant.Status from the dataset and denote it y
Step3: Logistic regression theory
Step4: The dataset clearly contains both numeric and categorical features. Let's collect lists of their names
Step5: It also contains missing values. An obvious solution would be to drop every row with at least one missing value. Let's do that
Step6: Clearly we would then throw away almost all the data, so this approach will not work here.
Step7: Transforming categorical features.
Step8: As you can see, the first three columns encode the country and the next two encode sex; identical sample elements produce fully identical rows. The example also shows that this encoding greatly increases the number of features while fully preserving the information, including the presence of missing values (which simply become one of the binary features in the transformed data).
Step9: To build a quality metric for the trained model, we need to split the original dataset into training and test samples.
Step10: Class description
Step11: Scaling the real-valued features.
Step12: As the plots show, the features differ strongly in magnitude (note the ranges of the x and y axes). For plain regression this does not affect model quality, since smaller-magnitude features simply receive larger weights, but with regularization, which penalizes the model for large weights, the regression typically starts to perform worse.
Step13: Comparing feature spaces.
Step14: As the plots show, we have not changed the properties of the feature space
Step24: Class balancing.
Step25: As you can see, in the second case the classifier finds a separating surface closer to the true one, i.e. it overfits less. So always pay attention to class balance in the training sample.
Step26: Clearly it is not.
Step27: Sample stratification.
Step30: How well do these numbers really reflect the algorithm's quality, given that the test sample is just as imbalanced as the training one? We already know that logistic regression is sensitive to class balance in the training sample, so here it will give deliberately understated results on the test set. The classifier's test metric would be far more meaningful if the objects were split evenly between the samples
Step31: As you can see, after this procedure the classifier's output changed only slightly, while the quality improved. Depending on how you initially split the data into train and test, the final test metric after a balanced split may go up or down, but it is far more trustworthy, since it accounts for how the classifier actually behaves. This approach is a special case of so-called stratification.
Step35: You have now worked through the main data-preprocessing stages for linear classifiers.
Step36: Clearly this data transformation already allows building nonlinear separating surfaces that can adapt more finely to the data and capture more complex dependencies. The number of features in the new model
Step37: At the same time, this method makes the model more prone to overfitting, because the number of features grows quickly with the degree $p$. Consider an example with $p=11$
Step38: The number of features in this model
Step39: Task 5. Transforming real-valued features.
Step40: Lasso regression.
|
12,631
|
<ASSISTANT_TASK:>
Python Code:
# This model training code is directly from:
# https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
'''Trains an LSTM model on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
# Notes
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
import shap
# we use the first 100 training examples as our background dataset to integrate over
explainer = shap.DeepExplainer(model, x_train[:100])
# explain the first 10 predictions
# explaining each prediction requires 2 * background dataset size runs
shap_values = explainer.shap_values(x_test[:10])
# init the JS visualization code
shap.initjs()
# transform the indexes to words
import numpy as np
words = imdb.get_word_index()
num2word = {}
for w in words.keys():
num2word[words[w]] = w
x_test_words = np.stack([np.array(list(map(lambda x: num2word.get(x, "NONE"), x_test[i]))) for i in range(10)])
# plot the explanation of the first prediction
# Note the model is "multi-output" because it is rank-2 but only has one column
shap.force_plot(explainer.expected_value[0], shap_values[0][0], x_test_words[0])
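# Optional extension (an illustrative sketch, not part of the original
# example): list the five tokens with the largest absolute SHAP attribution
# for the first review.
top = np.argsort(-np.abs(shap_values[0][0]))[:5]
for i in top:
    print(x_test_words[0][i], shap_values[0][0][i])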
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explain the model with DeepExplainer and visualize the first prediction
|
12,632
|
<ASSISTANT_TASK:>
Python Code:
# Set Path
import sys
sys.path.append('../../src/')
%autoreload 2
# Import Libraries
from fem import Function
from fem import QuadFE
from fem import DofHandler
from fem import Kernel
from fem import Basis
from fem import Form
from fem import Assembler
from fem import LinearSystem
from plot import Plot
from mesh import convert_to_array
from mesh import QuadMesh
from mesh import Mesh1D
import matplotlib.pyplot as plt
import scipy.sparse as sp
import numpy as np
% matplotlib inline
plt.rcParams['figure.figsize'] = [7, 7]
#
# Define the element
#
Q1 = QuadFE(2, 'Q1')
mesh1 = QuadMesh(resolution=(2,2))
mesh2 = QuadMesh(resolution=(2,2))
mesh2.cells.get_child(2).mark(1)
mesh2.cells.refine(refinement_flag=1)
#
# Weak form
#
# Kernel functions
one = Function(1, 'constant')
zero = Function(0, 'constant')
# Basis functions
u = Basis(Q1, 'u')
ux = Basis(Q1, 'ux')
uy = Basis(Q1, 'uy')
# Forms
ax = Form(kernel=Kernel(one), trial=ux, test=ux)
ay = Form(kernel=Kernel(one), trial=uy, test=uy)
L = Form(kernel=Kernel(zero), test=u)
# Assembler for mesh1
assembler1 = Assembler([ax, ay, L], mesh1)
assembler1.assemble()
# Assembler for mesh2
assembler2 = Assembler([ax,ay,L], mesh2)
assembler2.assemble()
# Get dofhandlers
dh1 = assembler1.dofhandlers['Q1']
dh2 = assembler2.dofhandlers['Q1']
# Plotting mesh 1
plot = Plot()
plot.mesh(mesh1, dofhandler=dh1, dofs=True)
# Plotting mesh 2
plot = Plot()
plot.mesh(mesh2, dofhandler=dh2, dofs=True)
# Assembled matrices
# Mesh1
# bilinear
rows = assembler1.af[0]['bilinear']['rows']
cols = assembler1.af[0]['bilinear']['cols']
vals = assembler1.af[0]['bilinear']['vals']
dofs = assembler1.af[0]['bilinear']['row_dofs']
A1 = sp.coo_matrix((vals, (rows, cols)))
A1 = A1.todense()
# linear
b1 = assembler1.af[0]['linear']['vals']
# number of dofs
n = len(dofs)
# Print
print('Mesh 1')
print('A1 = \n', 6*A1)
print('b1 = \n', 6*b1)
print('n_dofs=', n)
print('='*60)
#
# Mesh2
#
# bilinear
rows = assembler2.af[0]['bilinear']['rows']
cols = assembler2.af[0]['bilinear']['cols']
vals = assembler2.af[0]['bilinear']['vals']
dofs = assembler2.af[0]['bilinear']['row_dofs']
A2 = sp.coo_matrix((vals, (rows, cols)))
A2 = A2.todense()
# linear
b2 = assembler1.af[0]['linear']['vals']
# number of dofs
n = len(dofs)
# Print
print('Mesh 2')
print('A2 = \n', 6*A2)
print('b2 = \n', 6*b2)
print('n_dofs=', n)
print(A1[np.ix_([0,1,4,5,8],[0,1,4,5,8])] - A2[np.ix_([0,1,4,5,8],[0,1,4,5,8])])
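# Side note (illustrative): np.ix_ builds an open mesh, so M[np.ix_(r, c)]
# extracts the sub-matrix with rows r and columns c. A tiny demo:
demo = np.arange(9).reshape(3, 3)
print(demo[np.ix_([0, 2], [0, 2])])  # the four corner entries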
# System for mesh1
system1 = LinearSystem(assembler1)
# Check that it's the same as before
assert np.allclose(A1, system1.A().todense())
# Mark Dirichlet Regions
f_left = lambda x,dummy: np.abs(x)<1e-9
f_right = lambda x,dummy: np.abs(x-1)<1e-9
# Mesh 1
mesh1.mark_region('left', f_left, on_boundary=True)
mesh1.mark_region('right', f_right, on_boundary=True)
# Mesh 2
mesh2.mark_region('left', f_left, on_boundary=True)
mesh2.mark_region('right', f_right, on_boundary=True)
#
# Check that we get the correct vertices back
#
for side in ['left', 'right']:
# mesh1
print('mesh1: ', side)
for v in mesh1.get_region(side, entity_type='vertex', \
on_boundary=True, return_cells=False):
print(v.coordinates())
print('')
# mesh2
print('mesh2: ', side)
for v in mesh2.get_region(side, entity_type='vertex', \
on_boundary=True, return_cells=False):
print(v.coordinates())
print('\n\n')
#
# Extract Dirichlet conditions (uncompressed format)
#
system1a = LinearSystem(assembler1, compressed=False)
print('System matrix and vector before left Dirichlet nodes')
print('6A = \n', 6*system1a.A().todense())
print('6b = \n', 6*system1a.b() )
print('Extracting Dirichlet nodes on left')
system1a.extract_dirichlet_nodes('left', 0)
print('')
print('6A = \n', 6*system1a.A().todense())
print('6b = \n', 6*system1a.b() )
print('\n\n')
print('Extracting Dirichlet nodes on right')
system1a.extract_dirichlet_nodes('right',1)
print('')
print('6A = \n', 6*system1a.A().todense())
print('6b = \n', 6*system1a.b() )
#
# Extract Dirichlet conditions (compressed format)
#
system1b = LinearSystem(assembler1, compressed=True)
print('System matrix and vector before left Dirichlet nodes')
print('6A = \n', 6*system1b.A().todense())
print('6b = \n', 6*system1b.b() )
print('Extracting Dirichlet nodes on left')
system1b.extract_dirichlet_nodes('left', 0)
print('')
print('6A = \n', 6*system1b.A().todense())
print('6b = \n', 6*system1b.b() )
print('\n\n')
print('Extracting Dirichlet nodes on right')
system1b.extract_dirichlet_nodes('right',1)
print('')
print('6A = \n', 6*system1b.A().todense())
print('6b = \n', 6*system1b.b() )
#
# Check solutions
#
system1a.solve()
u1a = system1a.sol(as_function=True)
plot = Plot()
plot.wire(u1a)
#
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We test the system
Step2: Since we have already tested the assembly, we focus here on the linear system. In particular
Step3: To test extract_hanging_nodes and resolve_hanging_nodes we construct a simple mesh with hanging_nodes.
Step4: For the assembly, we must define the bilinear form
Step5: Let's visualize the meshes.
Step6: It looks like the following dofs from mesh1 and mesh2 are equivalent
Step7: Check that A1 and A2 coincide when restricting to the nodes
Step8: Mark Dirichlet Regions on Meshes
Step9: Now extract Dirichlet nodes
|
12,633
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell to set up the notebook.
import numpy as np
import pandas as pd
import seaborn as sns
import scipy as sci
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import patches, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
from client.api.notebook import Notebook
ok = Notebook('lab07.ok')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
factorial = sci.misc.factorial # so you don't have to look it up
def likelihood(n, p, x):
...
def log_likelihood(n, p, x):
...
def highest_likelihood(n, x):
...
n_widget = widgets.FloatSlider(min=1, max=20, step=1, value=20)
x_widget = widgets.FloatSlider(min=0, max=20, step=1, value=5)
# We want to make sure x <= n, otherwise we get into trouble
def update_x_range(*args):
x_widget.max = n_widget.value
n_widget.observe(update_x_range, 'value')
def plot_likelihood(n, x, plot_log=False):
# values of p are on the x-axis.
# We plot every value from 0.01 to 0.99
pvals = np.arange(1, 100)/100
# values of either Likelihood(p) or log(Likelihood(p))
# are on the y-axis, depending on the method
if plot_log:
yvals = ...
else:
yvals = ...
plt.plot(pvals, yvals)
# Put a line where L(p) is maximized and print the value p*
p_star = highest_likelihood(n, x)
plt.axvline(p_star, lw=1.5, color='r', ls='dashed')
plt.text(p_star + 0.01, min(yvals), 'p*=%.3f' % (p_star))
plt.xlabel('p')
if plot_log:
plt.ylabel('lik(p)')
plt.title("log(Likelihood(p)), if X ~ bin(n, p) = k")
else:
plt.ylabel('L(p)')
plt.title("Likelihood of p, if X ~ bin(n, p) = k")
plt.show()
interact(plot_likelihood, n=n_widget, x=x_widget, log=False);
def btype_likelihood(pa, pb, po, O, A, B, AB):
...
def btype_log_likelihood(pa, pb, po, O, A, B, AB):
...
def plot_surface_3d(X, Y, Z, orient_x = 45, orient_y = 45):
highest_Z = max(Z.reshape(-1,1))
lowest_Z = min(Z.reshape(-1,1))
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z,
cmap=cm.coolwarm,
linewidth=0,
antialiased=False,
rstride=5, cstride=5)
ax.zaxis.set_major_locator(LinearLocator(5))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.view_init(orient_y, orient_x)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title("log(Likelihood(p_a, p_b))")
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.show()
O = ...
A = ...
B = ...
AB = ...
def plot_btype_likelihood_3d(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = ...
likelihoods = ...
plot_surface_3d(pa, pb, likelihoods)
interact(plot_btype_likelihood_3d, O=O, A=A, B=B, AB=AB);
O2 = ...
A2 = ...
B2 = ...
AB2 = ...
X = ...
Y = ...
def plot_btype_likelihood_3d_oriented(O, A, B, AB, X, Y):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = ...
likelihoods = ...
plot_surface_3d(pa, pb, likelihoods, orient_x=X, orient_y=Y)
interact(plot_btype_likelihood_3d_oriented, O=O2, A=A2, B=B2, AB=AB2, X=X, Y=Y);
O3 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A3 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B3 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB3 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_log_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB)
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("log(Likelihood(p_a, p_b))")
plt.show()
interact(plot_btype_log_likelihood_heatmap, O=O3, A=A3, B=B3, AB=AB3);
O4 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A4 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B4 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB4 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 100)/100
pb = np.arange(1, 100)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_likelihood(pa, pb, po, O, A, B, AB)
likelihoods[(pa + pb) > 1] = 0 # Don't plot impossible probability pairs
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("Likelihood(p_a, p_b)")
plt.show()
interact(plot_btype_likelihood_heatmap, O=O4, A=A4, B=B4, AB=AB4);
O5 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A5 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B5 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB5 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def maximize_btype_likelihood(O, A, B, AB):
def flipped_btype_fixed_params(params):
# "params" is a list containing p_a, p_b, p_o
pa, pb, po = params
# We wish to return a value which is minimized when the log-likelihood is maximized...
# What function would accomplish this?
...
# We need to provide an initial guess at the solution
initial_guess = [1/3, 1/3, 1/3]
# Each variable is bounded between zero and one
# sci.optimize.minimize seems to dislike exact zero bounds, though, so we use 10^-6
bnds = ((1e-6, 1), (1e-6, 1), (1e-6, 1))
# An additional constraint on our parameters - they must sum to one
# The minimizer will only check params where constraint_fn(params) = 0
def constraint_fn(params):
# "params" is a list containing p_a, p_b, p_o
return sum(params) - 1
constraint = ({'type': 'eq', 'fun': constraint_fn},)
pa, pb, po = sci.optimize.minimize(flipped_btype_fixed_params,
x0=initial_guess,
bounds=bnds,
constraints=constraint).x
return "pa* = %.3f, pb* = %.2f, po* = %.3f" % (pa, pb, po)
interact(maximize_btype_likelihood, O=O5, A=A5, B=B5, AB=AB5);
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Today's lab reviews Maximum Likelihood Estimation, and introduces interctive plotting in the jupyter notebook.
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step5: Part 2
Step6: What's the log-likelihood? As before, don't just use np.log(btype_likelihood).
Step7: Question 6
Step8: Now, complete the plot_btype_likelihood_3d function.
Step9: Question 7
Step10: We also can make some 2d color plots, to get a better view of exactly where our values are maximized. As in the 3d plots, redder colors refer to higher likelihoods.
Step11: As with the binomial, the likelihood has a "sharper" distribution than the log-likelihood. So, plotting the likelihood, we can see our maximal point with greater clarity.
Step12: Question 8
Step13: Submitting your assignment
|
12,634
|
<ASSISTANT_TASK:>
Python Code:
X, y = puzzleData(puzzle=0, n=25)
residualPuzzle1D(X, y, hint=True)
x, y = puzzleData(puzzle=1, n=25)
X = x
# Add a new feature as a column of X
# with X = np.column_stack((x, #TODO))
residualPuzzle1D(X, y, hint=False)
x, y = puzzleData(puzzle=2, n=25)
X = x
# Add a new feature as a column of X
# with X = np.column_stack((x, #TODO))
residualPuzzle1D(X, y, hint=False)
import numpy as np
D = np.genfromtxt('data/Advertising.csv', delimiter=',')
X, y = D[:,:3], D[:,3]
from sklearn.linear_model import LinearRegression
regrTV = LinearRegression()
regrTV.fit(X[:,0].reshape(-1,1), y)
print "sales = ", regrTV.intercept_, " + ", regrTV.coef_[0], " x TV"
regrRadio = LinearRegression()
regrRadio.fit(X[:,1].reshape(-1,1), y)
print "sales = ", regrRadio.intercept_, " + ", regrRadio.coef_[0], " x Radio"
regrNews = LinearRegression()
regrNews.fit(X[:,2].reshape(-1,1), y)
print "sales = ", regrNews.intercept_, " + ", regrNews.coef_[0], " x Newspaper"
import matplotlib.pyplot as plt
%matplotlib inline
mycolors = {"blue": "steelblue", "red": "#a76c6e", "green": "#6a9373"}
fig = plt.figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
axes = [ax1, ax2, ax3]
regrs = [regrTV, regrRadio, regrNews]
features = ["TV", "Radio", "Newspaper"]
ax1.set_ylim(0,30)
ax1.set_ylabel("Sales")
for jj, (ax, regr, feat) in enumerate(zip(axes, regrs, features)):
ax.grid(True)
ax.set_xlim(0, np.max(X[:,jj]))
ax.set_xlabel(feat, fontsize=20)
ax.scatter(X[:,jj], y, s=25, color=mycolors["blue"], alpha=0.7)
x = np.linspace(0, np.max(X[:,jj]))
ax.plot(x, regr.intercept_ + regr.coef_[0]*x, lw=3, color=mycolors["blue"])
regrAll = LinearRegression()
regrAll.fit(X, y)
print "sales = ", regrAll.intercept_, " + ", regrAll.coef_[0], " x TV + ", \
regrAll.coef_[1], " x Radio + ", \
regrAll.coef_[2], " x Newspaper"
print np.corrcoef(X.T, y)
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
mycolors = {"blue": "steelblue", "red": "#a76c6e", "green": "#6a9373"}
def puzzleData(puzzle, n=25):
if puzzle < 0 or puzzle > 2:
print "Puzzles must be numbered 0-2. Defaulting to 0."
puzzle = 0
np.random.seed(1237)
X = np.linspace(0,1,n) + .05 * np.random.rand(n)
    if puzzle == 0:
        return X, 0.5 + 0.75 * X + .5*np.random.rand(n)
    elif puzzle == 1:
        return X, 0.25 - X + X*X + .1*np.random.rand(n)
    elif puzzle == 2:
        return X, 2*(3*(2*X-1.2)**3 + 2*(2*X-1.2)**2 - (2*X-1.2)) + 1.5*np.random.rand(n)
def residualPuzzle1D(X, y, hint=False):
regr = LinearRegression()
if len(X.shape) == 1:
X = X.reshape(-1,1)
regr.fit(X, y)
yhat = regr.intercept_ * np.ones(y.shape)
for ii, coef in enumerate(regr.coef_):
yhat += coef*X[:,ii]
res = yhat - y
fig = plt.figure(figsize=(14,6))
ax1 = fig.add_subplot(121)
ax1.scatter(X[:,0], res, color=mycolors["green"], s=100)
rmax = np.max(abs(res))
xmin = np.min(X[:,0])
xmax = np.max(X[:,0])
ax1.set_xlim([xmin-.05, xmax+.05])
ax1.set_ylim([-2*rmax, 2*rmax])
ax1.plot([-.5,1.5], [0,0], color="black", lw=3)
ax1.set_xlabel(r"x", fontsize=20)
ax1.set_ylabel(r"res", fontsize=20)
if hint:
ax2 = fig.add_subplot(122)
ax2.scatter(X[:,0], y, color=mycolors["blue"], s=100)
ax2.plot(X[:,0], yhat, color=mycolors["red"], lw=3)
ax2.set_xlim([-0.05,1.05])
ax2.set_xlabel("x", fontsize=20)
ax2.set_ylabel("y", fontsize=20)
print "RSS = ", np.linalg.norm(res)**2
from IPython.core.display import HTML
HTML("""
<style>
.MathJax nobr>span.math>span{border-left-width:0 !important};
</style>
""")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q
Step2: Q
Step3:
Step4: Each row in the data matrix $X$ contains advertising budgets for a particular market. The first through third columns report advertising budgets for TV, Radio, and Newspaper, respectively, in units of thousands of dollars. The entries of $y$ give the product sales for each market (in thousands of products sold).
Step5: You can visualize the three linear regressions by running the code below.
Step6: Q
Step7: Q
Step8: Based on the correlation of the features, can you explain the disparity in model coefficients in the single linear regression and multiple linear regression settings?
|
12,635
|
<ASSISTANT_TASK:>
Python Code:
# Creating a class called Bike
class Bike:
pass
# An 'instance' of a bike
my_bike = Bike()
type(my_bike)
class Bike:
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
# Instantiating a Bike Object
woo = Bike(2, 4, 5)
woo.speed
woo.wheel
woo.weight
class Bike:
# __init__() function
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
# A method calculates the max weight of a person on the bike
def max_weight(self, rider_weight):
max_weight = rider_weight * self.weight
return max_weight
# Another method
def some_method(self):
pass
woo = Bike(2, 4, 5)
woo.max_weight(30)
class Bike():
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
def __str__(self):
return "Bike Speed: {} Wheel Size: {} Weight: {}".format(self.speed, self.wheel, self.weight)
woo = Bike(3, 4, 5)
print(woo)
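# A small optional sketch (not part of the original lesson): __repr__ is the
# companion special method to __str__, intended for an unambiguous,
# developer-facing representation (e.g. what a bare expression displays in a
# notebook cell).
class Bike:
    def __init__(self, speed, wheel, weight):
        self.speed = speed
        self.wheel = wheel
        self.weight = weight
    def __repr__(self):
        return "Bike({}, {}, {})".format(self.speed, self.wheel, self.weight)
Bike(3, 4, 5)  # displays: Bike(3, 4, 5)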
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you do not already know, the word "instantiation" means to create a version of an object. Here is how we would instantiate a bike.
Step2: Now, my_bike is an object reference to "Bike". This means that the variable doesn't actually hold the object in memory, but simply points to it.
Step3: What just happened? We created the init method in Bike, and provided it four parameters
Step4: The instantiation checks out. Here's what happened
Step5: Methods
Step6: Special Methods
|
12,636
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.iolib.table import SimpleTable, default_txt_fmt
np.random.seed(1024)
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6 // 10 :] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:, [0, 1]]
mod_wls = sm.WLS(y, X, weights=1.0 / (w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
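# Illustrative comparison (not in the original example): distance of each fit
# from the true const and slope used to generate the data. Note both models
# are intentionally misspecified -- the quadratic term was dropped from X.
print(np.abs(res_ols.params - beta[:2]))
print(np.abs(res_wls.params - beta[:2]))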
se = np.vstack(
[
[res_wls.bse],
[res_ols.bse],
[res_ols.HC0_se],
[res_ols.HC1_se],
[res_ols.HC2_se],
[res_ols.HC3_se],
]
)
se = np.round(se, 4)
colnames = ["x1", "const"]
rownames = ["WLS", "OLS", "OLS_HC0", "OLS_HC1", "OLS_HC3", "OLS_HC3"]
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb, X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
pred_ols = res_ols.get_prediction()
iv_l_ols = pred_ols.summary_frame()["obs_ci_lower"]
iv_u_ols = pred_ols.summary_frame()["obs_ci_upper"]
pred_wls = res_wls.get_prediction()
iv_l = pred_wls.summary_frame()["obs_ci_lower"]
iv_u = pred_wls.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="Data")
ax.plot(x, y_true, "b-", label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, "r--")
ax.plot(x, iv_u_ols, "r--", label="OLS")
ax.plot(x, iv_l_ols, "r--")
# WLS
ax.plot(x, res_wls.fittedvalues, "g--.")
ax.plot(x, iv_u, "g--", label="WLS")
ax.plot(x, iv_l, "g--")
ax.legend(loc="best")
resid1 = res_ols.resid[w == 1.0]
var1 = resid1.var(ddof=int(res_ols.df_model) + 1)
resid2 = res_ols.resid[w != 1.0]
var2 = resid2.var(ddof=int(res_ols.df_model) + 1)
w_est = w.copy()
w_est[w != 1.0] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1.0 / ((w_est ** 2))).fit()
print(res_fwls.summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: WLS Estimation
Step2: WLS knowing the true variance ratio of heteroscedasticity
Step3: OLS vs. WLS
Step4: Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors
Step5: Calculate OLS prediction interval
Step6: Draw a plot to compare predicted values in WLS and OLS
Step7: Feasible Weighted Least Squares (2-stage FWLS)
|
12,637
|
<ASSISTANT_TASK:>
Python Code:
!wget https://raw.githubusercontent.com/rodluger/tutorials/master/gps/data/sample_transit.txt
!mv *.txt data/
import numpy as np
from scipy.linalg import cho_factor
def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
    """
    Return the ``N x M`` exponential squared
    covariance matrix between time vectors `t1`
    and `t2`. The kernel has amplitude `A` and
    lengthscale `l`.
    """
if t2 is None:
t2 = t1
T2, T1 = np.meshgrid(t2, t1)
return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Return the log of the GP likelihood of the
    data `y(t)` given uncertainty `sigma` and
    an Exponential Squared Kernel with amplitude `A`
    and length scale `l`.
    """
# The covariance and its determinant
npts = len(t)
kernel = ExpSquaredKernel
K = kernel(t, A=A, l=l) + sigma ** 2 * np.eye(npts)
# The marginal log likelihood
log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
log_like -= 0.5 * np.linalg.slogdet(K)[1]
log_like -= 0.5 * npts * np.log(2 * np.pi)
return log_like
def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):
    """
    Generate samples from a multivariate gaussian
    specified by covariance ``S`` and mean ``mu``.
    (We derived these equations in Day 1, Notebook 01, Exercise 7.)
    """
npts = S.shape[0]
L, _ = cho_factor(S + eps * np.eye(npts), lower=True)
L = np.tril(L)
u = np.random.randn(npts, ndraws)
x = np.dot(L, u) + mu[:, None]
return x.T
def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):
    """
    Compute the mean vector and covariance matrix of a GP
    at times `t_test` given training points `y_train(t_train)`.
    The training points have uncertainty `sigma` and the
    kernel is assumed to be an Exponential Squared Kernel
    with amplitude `A` and lengthscale `l`.
    """
# Compute the required matrices
kernel = ExpSquaredKernel
    Stt = kernel(t_train, A=A, l=l)
    Stt += sigma ** 2 * np.eye(Stt.shape[0])
    Spp = kernel(t_test, A=A, l=l)
    Spt = kernel(t_test, t_train, A=A, l=l)
# Compute the mean and covariance of the GP
mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))
return mu, S
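# Illustrative usage sketch (the synthetic data below is an assumption, not
# from the original notebook): condition the GP on noisy training points,
# then draw posterior samples at the test times.
t_train = np.linspace(0, 10, 20)
y_train = np.sin(t_train) + 0.1 * np.random.randn(20)
t_test = np.linspace(0, 10, 100)
mu, S = compute_gp(t_train, y_train, t_test, sigma=0.1, A=1.0, l=1.0)
samples = draw_from_gaussian(mu, S, ndraws=3)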
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Benchmarking our implementation
Step6:
|
12,638
|
<ASSISTANT_TASK:>
Python Code:
# YOUR ACTION REQUIRED:
# Execute this cell first using <CTRL-ENTER> and then using <SHIFT-ENTER>.
# Note the difference in which cell is selected after execution.
print('Hello world!')
# YOUR ACTION REQUIRED:
# Execute only the first print statement by selecting the first line and pressing
# <CTRL-SHIFT-ENTER>.
print('Only print this line.')
print('Avoid printing this line.')
def xor_str(a, b):
return ''.join([chr(ord(a[i % len(a)]) ^ ord(b[i % len(b)]))
for i in range(max(len(a), len(b)))])
# YOUR ACTION REQUIRED:
# Try to find the correct value for the variable below.
workshop_secret = 'Tensorflow rocks' #workshop_secret = '(replace me!)'
xor_str(workshop_secret,
'\x03\x00\x02\x10\x00\x1f\x03L\x1b\x18\x00\x06\x07\x06K2\x19)*S;\x17\x08\x1f\x00\x05F\x1e\x00\x14K\x115\x16\x07\x10\x1cR1\x03\x1d\x1cS\x1a\x00\x13J')
# Hint: You might want to checkout the ../solutions directory
# (you should already have opened this directory in a browser tab :-)
# We must call this "magic" before importing TensorFlow. We will explain
# further down what "magics" (starting with %) are.
%tensorflow_version 2.x
# Include basic dependencies and display the tensorflow version.
import tensorflow as tf
tf.__version__
# Print the current working directory and list all files in it.
!pwd
!ls
# Especially useful: Installs new packages.
!pip install qrcode
import qrcode
qrcode.make('Colab rocks!')
# YOUR ACTION REQUIRED:
# Set the cursor to after tf.one and press <CTRL-SPACE>.
# On Mac, only <OPTION-ESCAPE> might work.
tf.one_hot #tf.one
# YOUR ACTION REQUIRED:
# Complete the command to `tf.maximum` and then add the opening bracket "(" to
# see the function documentation.
tf.maximum([1, 2, 3], [2, 2, 2]) #tf.maximu
tf.maximum?
test_dict = {'key0': 'Tensor', 'key1': 'Flow'}
test_dict?
# Display how long the system has been running.
# Note : this shows "0 users" because no user is logged in via SSH.
!uptime
# Display available and used memory.
!free -h
print("-"*70)
# Display the CPU specification.
!lscpu
print("-"*70)
# Display the GPU specification (if available).
!(nvidia-smi | grep -q "has failed") && echo "No GPU found!" || nvidia-smi
# Display the Matplotlib outputs within a cell's output.
%matplotlib inline
import numpy as np
from matplotlib import pyplot
# Create a randomized scatterplot using matplotlib.
x = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.3, size=len(x))
y = np.sin(x * 7) + noise
pyplot.scatter(x, y)
# Load an example dataset.
from vega_datasets import data
cars = data.cars()
# Plot the dataset, referencing dataframe column names.
import altair as alt
alt.Chart(cars).mark_point().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon']
).interactive()
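# Illustrative variation (assumes the same cars dataframe): aggregate rather
# than plot raw points -- mean MPG per origin as a bar chart.
alt.Chart(cars).mark_bar().encode(x='Origin', y='average(Miles_per_Gallon)')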
%%sh
echo "This is a shell script!"
# List all running VM processes.
ps -ef
echo "Done"
# Embed custom HTML directly into a cell's output.
%%html
<marquee>HTML rocks</marquee>
n = 1000000
%time list1 = [i for i in range(n)]
print("")
%time list2 = [i for i in range(int(n/2))]
%%time
n = 1000000
list1 = [i for i in range(n)]
list2 = [i for i in range(int(n/2))]
from google.colab import auth
auth.authenticate_user()
!gsutil ls gs://amld-datasets/zoo_img | head
# Note: This cell hangs if you forget to call auth.authenticate_user() above.
tf.io.gfile.glob('gs://amld-datasets/zoo_img/*')[:10]
# YOUR ACTION REQUIRED:
# Explore existing snippets by going to the `Code snippets` section.
# Click on the <> button on the left sidebar to open the snippets.
# Alternatively, you can press `<CTRL><ALT><P>` (or `<COMMAND><OPTION><P>` for
# OS X).
from google.colab import snippets
# snippets.register('https://colab.research.google.com/drive/1OFSjEmqC-UC66xs-LR7-xmgkvxYTrAcN')
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def mymagic(line_content, cell_content=None):
print('line_content="%s" cell_content="%s"' % (line_content, cell_content))
%mymagic Howdy Alice!
%%mymagic simple question
Howdy Alice!
how are you?
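# Another handy built-in cell magic (illustrative; %%writefile ships with
# IPython): it saves a cell's body to a file on the VM, e.g.
# %%writefile hello.py
# print('Hello from a file!')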
#@title Execute me
# Hidden cell content.
print("Double click the cell to see its content.")
# Form example mostly taken from "Adding form fields" Snippet.
#@title Example form
#@markdown Specify some test data and execute this cell.
string_type = 'test_string' #@param {type: "string"}
slider_value = 145 #@param {type: "slider", min: 100, max: 200}
number = 1339 #@param {type: "number"}
date = '2019-01-26' #@param {type: "date"}
pick_me = "a" #@param ['a', 'b', 'c']
#@markdown ---
print("Submitted data:")
print(string_type, slider_value, number, date, pick_me)
# YOUR ACTION REQUIRED:
# Execute this cell, print the variable contents of a, b and exit the debugger.
%pdb on
a = 67069 / 47 - 0x5a
b = a - 0x539
#c = a / b # Will throw an exception.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can also only execute one single statement in a cell.
Step2: What to do if you get stuck
Step3: Importing TensorFlow
Step4: Running shell commands
Step5: Autocompletion and docstrings
Step6: In addition, you can also display docstrings to see the function signature and possible parameters.
Step7: Alternatively, you might also inspect function details with docstrings if available by appending a "?".
Step8: Note
Step9: Runtimes
Step10: As can be seen, the machine has been allocated just very recently for our purposes.
Step11: Plotting
Step12: Altair
Step13: Notebook Magics
Step14: Line magics
Step15: Note
Step16: Data handling
Step17: List a subset of the contained files using the gsutil tool.
Step18: Conveniently, TensorFlow natively supports multiple file systems such as
Step19: Snippets
Step20: We have created some default snippets for this workshop in
Step21: Pro tip
Step22: Forms
Step23: Interactive debugging
|
12,639
|
<ASSISTANT_TASK:>
Python Code:
%%bash
pip freeze | grep tensor
!pip3 install tensorflow-hub==0.7.0
!pip3 install --upgrade tensorflow==1.15.3
!pip3 install google-cloud-bigquery==1.10
import os
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import shutil
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.15.3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
categories_list = open("categories.txt").read().splitlines()
authors_list = open("authors.txt").read().splitlines()
content_ids_list = open("content_ids.txt").read().splitlines()
mean_months_since_epoch = 523
embedded_title_column = hub.text_embedding_column(
key="title",
module_spec="https://tfhub.dev/google/nnlm-de-dim50/1",
trainable=False)
content_id_column = tf.feature_column.categorical_column_with_hash_bucket(
key="content_id",
hash_bucket_size= len(content_ids_list) + 1)
embedded_content_column = tf.feature_column.embedding_column(
categorical_column=content_id_column,
dimension=10)
author_column = tf.feature_column.categorical_column_with_hash_bucket(key="author",
hash_bucket_size=len(authors_list) + 1)
embedded_author_column = tf.feature_column.embedding_column(
categorical_column=author_column,
dimension=3)
category_column_categorical = tf.feature_column.categorical_column_with_vocabulary_list(
key="category",
vocabulary_list=categories_list,
num_oov_buckets=1)
category_column = tf.feature_column.indicator_column(category_column_categorical)
months_since_epoch_boundaries = list(range(400,700,20))
months_since_epoch_column = tf.feature_column.numeric_column(
key="months_since_epoch")
months_since_epoch_bucketized = tf.feature_column.bucketized_column(
source_column = months_since_epoch_column,
boundaries = months_since_epoch_boundaries)
crossed_months_since_category_column = tf.feature_column.indicator_column(tf.feature_column.crossed_column(
keys = [category_column_categorical, months_since_epoch_bucketized],
hash_bucket_size = len(months_since_epoch_boundaries) * (len(categories_list) + 1)))
feature_columns = [embedded_content_column,
embedded_author_column,
category_column,
embedded_title_column,
crossed_months_since_category_column]
record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]]
column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"]
label_key = "next_content_id"
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column,record_defaults=record_defaults)
features = dict(zip(column_keys, columns))
label = features.pop(label_key)
return features, label
# Create list of files that match pattern
file_list = tf.io.gfile.glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
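# Optional smoke test (a sketch; assumes TF 1.x graph mode as above and that
# training_set.csv already exists, as the cells below do):
with tf.Session() as sess:
    feats, lab = read_dataset("training_set.csv",
                              tf.estimator.ModeKeys.EVAL, batch_size=2)()
    f, l = sess.run([feats, lab])
    print(sorted(f.keys()), l)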
def model_fn(features, labels, mode, params):
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
predicted_classes = tf.argmax(logits, 1)
from tensorflow.python.lib.io import file_io
with file_io.FileIO('content_ids.txt', mode='r') as ifp:
content = tf.constant([x.rstrip() for x in ifp])
predicted_class_names = tf.gather(content, predicted_classes)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'class_names' : predicted_class_names[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
labels = table.lookup(labels)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Compute evaluation metrics.
accuracy = tf.metrics.accuracy(labels=labels,
predictions=predicted_classes,
name='acc_op')
top_10_accuracy = tf.metrics.mean(tf.nn.in_top_k(predictions=logits,
targets=labels,
k=10))
metrics = {
'accuracy': accuracy,
'top_10_accuracy' : top_10_accuracy}
tf.summary.scalar('accuracy', accuracy[1])
tf.summary.scalar('top_10_accuracy', top_10_accuracy[1])
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=loss, eval_metric_ops=metrics)
# Create training op.
assert mode == tf.estimator.ModeKeys.TRAIN
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
outdir = 'content_based_model_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
#tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
estimator = tf.estimator.Estimator(
model_fn=model_fn,
model_dir = outdir,
params={
'feature_columns': feature_columns,
'hidden_units': [200, 100, 50],
'n_classes': len(content_ids_list)
})
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN),
max_steps = 2000)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 30,
throttle_secs = 60)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
%%bash
head -5 training_set.csv > first_5.csv
head first_5.csv
awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids
output = list(estimator.predict(input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT)))
import numpy as np
recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output]
content_ids = open("first_5_content_ids").read().splitlines()
from google.cloud import bigquery
recommended_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = "{}"
LIMIT 1""".format(recommended_content_ids[0])
current_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = "{}"
LIMIT 1""".format(content_ids[0])
recommended_title = bigquery.Client().query(recommended_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
current_title = bigquery.Client().query(current_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
print("Current title: {} ".format(current_title))
print("Recommended title: {}".format(recommended_title))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's make sure we install the necessary version of tensorflow-hub. After doing the pip install below, click "Restart the kernel" on the notebook so that the Python environment picks up the new packages.
Step2: Note
Step3: Build the feature columns for the model.
Step4: In the cell below we'll define the feature columns to use in our model. If necessary, remind yourself of the various feature column types (a hedged sketch is included after this step list).
Step5: Create the input function.
Step6: Create the model and train/evaluate
Step7: Train and Evaluate
Step8: This takes a while to complete but in the end, I get about 30% top 10 accuracy.
Step9: Recall, to make predictions on the trained model we pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file we created above.
Step12: Finally, we map the content id back to the article title. Let's compare our model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned.
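As a reference for Step 4, here is a minimal sketch of feature columns for a model like this one. The column keys, vocabulary list, bucket size, and embedding dimension below are illustrative assumptions, not the notebook's actual definitions; only standard tf.feature_column APIs are used.

import tensorflow as tf

embedded_content = tf.feature_column.embedding_column(
    tf.feature_column.categorical_column_with_hash_bucket(
        key="content_id", hash_bucket_size=1000),  # assumed bucket size
    dimension=10)                                  # assumed embedding size
category_column = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        key="category",
        vocabulary_list=["News", "Lifestyle", "Stars & Kultur"]))  # assumed vocabulary
months_since_epoch = tf.feature_column.numeric_column("months_since_epoch")
feature_columns = [embedded_content, category_column, months_since_epoch]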
|
12,640
|
<ASSISTANT_TASK:>
Python Code:
# Install CLU & Flax.
!pip install -q clu flax
example_directory = 'examples/seq2seq'
editor_relpaths = ('train.py', 'input_pipeline.py', 'models.py')
repo, branch = 'https://github.com/google/flax', 'main'
# (If you run this code in Jupyter[lab], then you're already in the
# example directory and nothing needs to be done.)
#@markdown **Fetch newest Flax, copy example code**
#@markdown
#@markdown **If you select no** below, then the files will be stored on the
#@markdown *ephemeral* Colab VM. **After some time of inactivity, this VM will
#@markdown be restarted an any changes are lost**.
#@markdown
#@markdown **If you select yes** below, then you will be asked for your
#@markdown credentials to mount your personal Google Drive. In this case, all
#@markdown changes you make will be *persisted*, and even if you re-run the
#@markdown Colab later on, the files will still be the same (you can of course
#@markdown remove directories inside your Drive's `flax/` root if you want to
#@markdown manually revert these files).
if 'google.colab' in str(get_ipython()):
import os
os.chdir('/content')
# Download Flax repo from Github.
if not os.path.isdir('flaxrepo'):
!git clone --depth=1 -b $branch $repo flaxrepo
# Copy example files & change directory.
mount_gdrive = 'no' #@param ['yes', 'no']
if mount_gdrive == 'yes':
DISCLAIMER = 'Note : Editing in your Google Drive, changes will persist.'
from google.colab import drive
drive.mount('/content/gdrive')
example_root_path = f'/content/gdrive/My Drive/flax/{example_directory}'
else:
DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'
example_root_path = f'/content/{example_directory}'
from IPython import display
display.display(display.HTML(
f'<h1 style="color:red;" class="blink">{DISCLAIMER}</h1>'))
if not os.path.isdir(example_root_path):
os.makedirs(example_root_path)
!cp -r flaxrepo/$example_directory/* "$example_root_path"
os.chdir(example_root_path)
from google.colab import files
for relpath in editor_relpaths:
s = open(f'{example_root_path}/{relpath}').read()
open(f'{example_root_path}/{relpath}', 'w').write(
f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s)
files.view(f'{example_root_path}/{relpath}')
# Note : In Colab, above cell changed the working directory.
!pwd
from absl import app
app.parse_flags_with_usage(['seq2seq'])
from absl import logging
logging.set_verbosity(logging.INFO)
import jax
# Local imports from current directory - auto reload.
# Any changes you make to the three imported files will appear automatically.
%load_ext autoreload
%autoreload 2
import input_pipeline
import models
import train
# Examples are generated on the fly.
ctable = input_pipeline.CharacterTable('0123456789+= ')
list(ctable.generate_examples(5))
batch = ctable.get_batch(5)
# A single query (/answer) is one-hot encoded.
batch['query'][0]
# Note how CTABLE encodes PAD=0, EOS=1, '0'=2, '1'=3, ...
ctable.decode_onehot(batch['query'][:1])
# Get a live update during training - use the "refresh" button!
# (In Jupyter[lab] start "tensorboard" in the local directory instead.)
if 'google.colab' in str(get_ipython()):
%load_ext tensorboard
%tensorboard --logdir=./workdirs
import time
workdir = f'./workdirs/{int(time.time())}'
# Train 2k steps & log 20 times.
app.parse_flags_with_usage([
'seq2seq',
'--num_train_steps=2000',
'--decode_frequency=100',
])
state = train.train_and_evaluate(workdir=workdir)
if 'google.colab' in str(get_ipython()):
#@markdown You can upload the training results directly to https://tensorboard.dev
#@markdown
#@markdown Note that everybody with the link will be able to see the data.
upload_data = 'yes' #@param ['yes', 'no']
if upload_data == 'yes':
!tensorboard dev upload --one_shot --logdir ./workdirs --name 'Flax examples/seq2seq (Colab)'
inputs = ctable.encode_onehot(['2+40'])
# batch, max_length, vocab_size
inputs.shape
# Using different random seeds generates different samples.
preds = train.decode(state.params, inputs, jax.random.PRNGKey(0), ctable)
ctable.decode_onehot(preds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Dataset
Step3: Training
Step4: Inference
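A sketch extending Step 4: since decoding is stochastic, different PRNG seeds yield different samples. This reuses train.decode and ctable exactly as in the code above; the seed values are arbitrary.

for seed in range(3):
    preds = train.decode(state.params, inputs, jax.random.PRNGKey(seed), ctable)
    print(ctable.decode_onehot(preds))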
|
12,641
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
12,642
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import StringIO
import zipfile
import urllib
from __future__ import division, print_function
matplotlib.style.use('fivethirtyeight')
%matplotlib inline
# Download and extract the 2015 FARS file
output = StringIO.StringIO()
output.write(urllib.urlopen('ftp://ftp.nhtsa.dot.gov/FARS/2015/National/FARS2015NationalCSV.zip').read())
fars_zip = zipfile.ZipFile(output)
fars_zip.extract('accident.csv')
# Read the data file into a pandas DataFrame
df_fatality = pd.read_csv('accident.csv')
# Number of traffic fatalities in the US in 2015 using pandas DataFrame sum function
total_traffic_fatalities = df_fatality.FATALS.sum()
print("2015 Traffic Fatalities: ", total_traffic_fatalities)
# Get the rates
df_cdc = pd.read_csv('data/cdc_injuries_2015.txt',delimiter='\t')
df_cdc['Rate'] = df_cdc['Deaths'] / (df_cdc['Population'] / 100000)
# Create the series for plotting
df_cdc_rates = df_cdc.set_index('Injury Mechanism & All Other Leading Causes')['Rate']\
.dropna()\
.sort_values()
# Plot the top 10
plt.figure(figsize=(12,6))
df_cdc_rates.iloc[-10:].plot(kind='barh',
title='Motor Vehicles are Third-Leading Cause of Death Due to Injury')
plt.xlabel('Deaths per 100k people, 2015')
plt.ylabel('')
plt.show()
df_who = pd.read_csv('data/who_2013_traffic_deaths.csv', index_col=0, skiprows=1,
names=['Country', 'Deaths', 'Death Rate'])
plt.figure(figsize=(12,6))
# group of peer countries
country_group = ['Australia', 'Canada', 'France', 'Germany', 'Japan',
'United Kingdom of Great Britain and Northern Ireland', 'United States of America']
# labels for plot
country_labels = ['Australia', 'Canada', 'France', 'Germany', 'Japan',
'UK', 'USA']
ax = df_who.loc[country_group]['Death Rate'].plot(kind='bar')
plt.ylabel("2013 Traffic Deaths / 100,000 people")
plt.title("US Traffic Death Rates Higher Than Those of Peer Group")
plt.xticks(np.arange(len(country_group)), country_labels, rotation=0)
plt.xlabel('')
rects = ax.patches
def autolabel(rects):
'''Attach some labels.'''
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., height - .3, '%0.1f'%height,
ha='center', va='top', fontsize=14, color='w')
autolabel(rects)
plt.show()
# Load FARS fatality time series
df_annual = pd.read_csv('data/fars_annual_trend.txt',delimiter='\t')
df_annual['Year'] = pd.to_datetime(df_annual['Year'], format='%Y')
series_annual = df_annual.set_index('Year')['Fatality Rate per 100,000 Population']
# Add 2015 per capita, US 2015 Population available here:
# https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=PEP_2015_PEPANNRES&src=pt
series_annual[pd.to_datetime('2015-01-01')] = 35092 / (321418820 / 100000)
plt.figure(figsize=(12,6))
series_annual.plot()
plt.ylim(0)
plt.title('US Fatality Rate Declined in Mid-2000\'s')
plt.ylabel('Deaths per 100k people')
plt.show()
f, axarr = plt.subplots(1,2,figsize=(12,4))
df_annual['VMT (Trillions)'] = df_annual['Vehicle Miles Traveled (Billions)'] / 1000
df_annual.set_index('Year')['VMT (Trillions)'].plot(ax=axarr[0], title='Total VMT in the US is Leveling Off', color='black')
axarr[0].set_ylim(0)
axarr[0].set_xlabel('')
axarr[0].set_ylabel('Annual VMT (Trillions)')
df_annual.set_index('Year')['Fatality Rate per 100 Million VMT'].plot(ax=axarr[1], title='Fatality Rate per VMT is Declining',
)
axarr[1].set_xlabel('')
axarr[1].set_ylim(0)
axarr[1].set_ylabel('Deaths per 100M VMT')
plt.show()
# Number of fatalities in crashes involving a drunken driver
df_dd = df_fatality.FATALS[df_fatality.DRUNK_DR >= 1].sum()
print("Fatalities involving a drunk driver: ", df_dd)
print("Percent of total traffic fatalities involving drunk driver: ",
'{0:.1f}%'.format(df_dd / total_traffic_fatalities * 100))
# pandas DataFrame pivot by hour that crash occurred and drunk driving
fatal_pivot = df_fatality.pivot_table(index=['HOUR'], columns=['DRUNK_DR'],
values='FATALS', aggfunc=np.sum).fillna(0)
# Sum the total number of drunk drivers involved
fatal_pivot['DRUNK_DR_SUM'] = fatal_pivot[[1,2,3]].sum(axis=1)
fp = fatal_pivot[[0,'DRUNK_DR_SUM']].iloc[:-1].copy()
fp.columns = ['No Drunk Driver', 'Drunk Driver']
plt.rcParams['figure.figsize'] = (12,6)
fp.plot()
plt.title('Drunk Driving Fatalities Peak in the Late Evening/Early Morning Hours')
plt.ylabel('Total Fatalities, 2015')
plt.xlabel('Hour')
plt.show()
# Now look at day of week
fatal_pivot = df_fatality.pivot_table(index=['DAY_WEEK'],columns=['DRUNK_DR'],
values='FATALS', aggfunc=np.sum)
# Sum the total number of drunk drivers involved
fatal_pivot['DRUNK_DR_SUM'] = fatal_pivot[[1,2,3]].sum(axis=1)
fp = fatal_pivot[[0,'DRUNK_DR_SUM']].copy()
fp.columns = ['No Drunk Driver', 'Drunk Driver']
# Days of week are indexed 1=Sunday, 2=Monday, ..., 6=Saturday
labels=['Sun','Mon','Tue','Wed','Thu','Fri','Sat']
fp.index = labels
fp.plot(kind='bar')
plt.xticks(rotation=0)
plt.ylabel('Total Fatalities, 2015')
plt.title('Drunk Driving Fatalities Peak on Weekends')
plt.show()
weather_group = df_fatality.groupby(['WEATHER']).sum()['FATALS']
labels = ['Clear', 'Rain', 'Sleet/Hail', 'Snow', 'Fog, Smog, Smoke',
'Severe Crosswinds', 'Blowing Sand, Soil, Dirt', 'Other',
'Cloudy', 'Blowing Snow', 'Freezing Rain or Drizzle', 'Not Reported', 'Unknown']
weather_group.index = labels
(weather_group.sort_values() / weather_group.sum()).plot(kind='barh')
plt.title('Most Crashes Occur in Clear Weather Conditions')
plt.xlabel('Proportion of Total Crashes, 2015')
plt.show()
# pandas groupby on LGT_COND column
light_group = df_fatality.groupby(['LGT_COND']).sum()['FATALS']
labels = ['Daylight','Dark - Not Lighted', 'Dark - Lighted',
'Dawn', 'Dusk', 'Dark - Unknown Lighting', 'Other',
'Not Reported', 'Unknown']
light_group.index = labels
(light_group.sort_values() / light_group.sum()).plot(kind='barh')
plt.title('Fatal Crashes are Evenly Split Between Daylight and Darkness')
plt.xlabel('Proportion of Total Crashes, 2015')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motor Vehicles Are Third-Leading Cause of Death Due to Injury
Step2: There were 35,092 traffic fatalities in the U.S. in 2015, or about 11 for every 100,000 people (a quick arithmetic check follows this step list). To put that in perspective, 39,260 women died from breast cancer and 29,720 men died from prostate cancer in 2013, according to the American Cancer Society. The fight against these cancers generates a lot of public awareness and fundraising. For example, in Chicago the lights on top of skyscrapers turn pink for a month every year. Contrast that with a general public apathy toward the number of people dying in traffic crashes at rates comparable to the most common forms of cancer.
Step4: Motor vehicle traffic is the third longest bar on the plot. Drug-related deaths make up the majority of poisoning deaths, and this number has increased substantially in recent years.
Step5: The U.S. does not compare favorably at all against other wealthy countries with large populations. Even other countries with high automobile share, such as Australia and Canada, have nearly half the traffic death rate of the U.S.. The U.S. is wealthier by GDP per capita than the other nations in the chart, so why is our rate of traffic deaths so much higher?
Step6: The fatality rate has declined significantly since the early 1990's, with a sharp decrease in the second half of the 2000's.
Step7: The absolute number of fatalities has declined, but so has the fatality rate per vehicle miles traveled (VMT), which indicates that we are making progress towards safer roads. Since 1994, the fatality rate has dropped while VMT increased. In recent years, Americans are driving less, with several year-over-year decreases in CMTd since the mid-2000's. The continued decline in the fatality rate - even with a decreasing denominator - is an encouraging sign.
Step8: Nearly a third of all traffic fatalities involve a drunk driver. Despite all the education and public campaigns and increased enforcement, drunk driving is still taking a massive toll on human life every year.
Step9: Clearly the late evening and early morning hours show high levels of drunken driving activity. Fatalities caused by drunken drivers are nearly double those caused by sober drivers between the hours of 2
Step10: As you might expect, drunk driving fatalities peak substantially on the weekends, with non-drunk fatalities remaining relatively consistent across all days of week.
Step11: The majority of fatalities occur with no weather affecting visibility. Rain is the only precipitation form that shows up significantly. Perhaps people reduce driving during adverse conditions or drive more cautiously - leading to fewer deaths.
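The arithmetic check promised in Step 2, using the fatality count and the 2015 census population estimate that appear in the code above:

deaths = 35092
population = 321418820
print(round(deaths / (population / 1e5), 1))  # ~10.9 deaths per 100k people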
|
12,643
|
<ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 10
rho = 0.9
phi = 0.95
## makes sigma comparable with the estimate from statsmodels
errors = pints.noise.arma11(rho, phi, noise / np.sqrt((1-rho**2) / (1 + 2 * rho * phi + phi**2)), len(org_values))
values = org_values + errors
# Show the noisy data
plt.figure()
plt.plot(times, org_values)
plt.plot(times, values)
plt.xlabel('time')
plt.ylabel('y')
plt.legend(['true', 'observed'])
plt.show()
from statsmodels.tsa.arima.model import ARIMA
model = toy.LogisticModel()
class ARIMALogLikelihood(pints.ProblemLogLikelihood):
def __init__(self, problem, arima_order):
super(ARIMALogLikelihood, self).__init__(problem)
self._nt = len(self._times) - 1
self._no = problem.n_outputs()
if len(arima_order) != 3:
raise ValueError("ARIMA (p, d, q) orders must be tuple of length 3.")
self._arima_order = arima_order
p = arima_order[0]
d = arima_order[1]
q = arima_order[2]
self._p = p
self._q = q
self._d = d
self._n_parameters = problem.n_parameters() + (p + q + 1) * self._no
self._m = (self._p + self._q + 1) * self._no
def __call__(self, x):
# convert x to list to make it easier to append
# nuisance params
x = x.tolist()
# p AR params; q MA params
m = self._m
# extract noise model params
parameters = x[-m:]
sol = self._problem.evaluate(x[:-m])
model = ARIMA(endog=self._values,
order=self._arima_order,
exog=sol)
# in statsmodels, parameters are variances
# rather than std. deviations, so square
sigma2 = parameters[-1]**2
parameters = parameters[:-1] + [sigma2]
# first param is trend (if model not differenced),
# second is coefficient on ODE soln
# see model.param_names
if self._d == 0:
full_params = [0, 1] + parameters
else:
full_params = [1] + parameters
return model.loglike(full_params)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = ARIMALogLikelihood(problem, arima_order=(1, 0, 1))
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, 0, 0, noise * 0.1],
[0.02, 600, 1, 1, noise * 100],
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters = np.array(real_parameters + [rho, phi, 10])
xs = [
real_parameters * 1.05,
real_parameters * 1,
real_parameters * 1.025
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains,
ref_parameters=real_parameters,
parameter_names=[r'$r$', r'$k$', r'$\rho$', r'$\phi$', r'$\sigma$'])
# Discard warm up
chains = chains[:, 2000:, :]
# Look at distribution in chain 0
pints.plot.pairwise(chains[0],
kde=False,
ref_parameters=real_parameters,
parameter_names=[r'$r$', r'$k$', r'$\rho$', r'$\phi$', r'$\sigma$'])
# Show graphs
plt.show()
results = pints.MCMCSummary(chains=chains,
parameter_names=["r", "k", "rho", "phi", "sigma"])
print(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ARMA errors
Step2: Perform Bayesian inference using statsmodels' ARIMA Kalman filter
Step3: Look at results.
Step4: Look at results. Note that 'sigma' will be different to the value used to generate the data, due to a different definition.
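A minimal sketch of the "different definition" noted in Step 4, assuming the standard ARMA(1,1) stationarity identity Var(X) = sigma_innov**2 * (1 + 2*rho*phi + phi**2) / (1 - rho**2), and that pints.noise.arma11 takes the marginal standard deviation:

import numpy as np
rho, phi, sigma_innov = 0.9, 0.95, 10.0
marginal_std = sigma_innov * np.sqrt((1 + 2 * rho * phi + phi ** 2) / (1 - rho ** 2))
print(marginal_std)  # the scale factor applied when generating the noise above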
|
12,644
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
import xarray as xr
import scipy.integrate as sp #Gives access to the ODE integration package
from climlab.utils.thermo import pseudoadiabat
def generate_idealized_temp_profile(SST, plevs, Tstrat=200):
'''Generates an idealized temperature profile with specified SST and Tstrat'''
solution = sp.odeint(pseudoadiabat, SST, np.flip(plevs))
temp = solution.reshape(-1)
temp[np.where(temp<Tstrat)] = Tstrat
return np.flip(temp) # need to re-invert the pressure axis
def make_idealized_column(SST, num_lev=100, Tstrat=200):
# Set up a column state
state = climlab.column_state(num_lev=num_lev, num_lat=1)
# Extract the pressure levels
plevs = state['Tatm'].domain.axes['lev'].points
# Set the SST
state['Ts'][:] = SST
# Set the atmospheric profile to be our idealized profile
state['Tatm'][:] = generate_idealized_temp_profile(SST=SST, plevs=plevs, Tstrat=Tstrat)
return state
state = make_idealized_column(300)
# Plot the profile
fig, ax = plt.subplots(dpi=100)
state['Tatm'].to_xarray().plot(ax=ax, y='lev', yincrease=False)
ax.set_xlabel("Temperature (K)")
ax.set_ylabel("Pressure (hPa)")
ax.grid()
h2o = climlab.radiation.water_vapor.ManabeWaterVapor(state=state,
relative_humidity=0.8)
fig, ax = plt.subplots(dpi=100)
h2o.q.to_xarray().plot(ax=ax, y='lev', yincrease=False)
ax.set_xlabel("Specific humidity (g/g)")
ax.set_ylabel("Pressure (hPa)")
ax.grid()
absorber_vmr = {'CO2':280/1e6,
'CH4':0.,
'N2O':0.,
'O2':0.,
'CFC11':0.,
'CFC12':0.,
'CFC22':0.,
'CCL4':0.,
'O3':0.}
# RRTMG radiation
rad = climlab.radiation.RRTMG_LW(state=state, specific_humidity=h2o.q,
icld=0, # Clear-sky only!
return_spectral_olr=False, # Just return total OLR
absorber_vmr = absorber_vmr)
rad.compute_diagnostics()
rad.OLR
def calc_olr(SST, CO2ppmv, return_spectral_olr=False, RH=0.8, Tstrat=200, qStrat=5e-06):
# Couple water vapor to radiation
## climlab setup
# create surface and atmosperic domains
state = make_idealized_column(SST, Tstrat=Tstrat)
# fixed relative humidity
# Note we pass the qStrat parameter here, which sets a minimum specific humidity
# Set RH=0. and qStrat=0. for fully dry column
h2o = climlab.radiation.water_vapor.ManabeWaterVapor(state=state,
relative_humidity=RH,
qStrat=qStrat,
)
absorber_vmr['CO2'] = CO2ppmv/1e6
# RRTMG radiation
rad = climlab.radiation.rrtm.rrtmg_lw.RRTMG_LW(state=state, specific_humidity=h2o.q,
icld=0, # Clear-sky only!
return_spectral_olr=return_spectral_olr,
absorber_vmr = absorber_vmr)
rad.compute_diagnostics()
return rad
# Test this gives the same as before...
calc_olr(SST=300, CO2ppmv=280).OLR
%%time
n=20
OLRS = np.zeros((n,n))
temparray = np.linspace(280, 290, n)
co2array = np.linspace(280, 1200, n)
for idx1, temp in enumerate(temparray):
for idx2, co2 in enumerate(co2array):
OLRS[idx1, idx2] = calc_olr(temp, co2).OLR
da = xr.DataArray(OLRS, dims=['temp', 'co2'],
coords={'temp':temparray,
'co2':co2array},
)
fig, ax = plt.subplots(dpi=100)
p = da.plot.contourf(ax=ax,
cmap='viridis',
levels=20,
add_colorbar=False)
fig.colorbar(p, label="OLR (W m$^{-2}$)")
ax.set_xlabel("$CO_{2}$ (ppmv)")
ax.set_ylabel("SST (K)")
# To do this, we'll run the model with the idealized temperature profile
# but not include the effects of water vapour (i.e., set RH=0 and qStrat=0)
# We've already set all other absorbing species to 0.
rad1 = calc_olr(SST=300, CO2ppmv=0., RH=0., return_spectral_olr=True, qStrat=0.)
# check that the different OLRs match up...
print(rad1.OLR_spectral.to_xarray().sum('wavenumber').values)
print(rad1.OLR)
wavenumbers = np.linspace(0.1, 3000) # don't start from zero to avoid divide by zero warnings
# Centers and Widths of the spectral bands, cm-1
spectral_centers = rad1.OLR_spectral.domain.axes['wavenumber'].points
spectral_widths = rad1.OLR_spectral.domain.axes['wavenumber'].delta
def planck_curve(wavenumber, T):
'''Return the Planck curve in units of W/m2/cm-1
Inputs: wavenumber in cm-1
temperature T in units of K'''
# 100pi factor converts from steradians/m to 1/cm
return (climlab.utils.thermo.Planck_wavenumber(wavenumber, T)*100*np.pi)
def make_planck_curve(ax, T, color='orange'):
'''Plot the Planck curve (W/m2/cm-1) on the given ax object'''
ax.plot(wavenumbers, planck_curve(wavenumbers, T),
lw=2, color=color, label="Planck curve, {}K".format(T))
def make_planck_feedback(ax, T, color='orange'):
'''Plot the Planck spectral feedback parameter (mW/m2/cm-1/K) on the given ax object'''
ax.plot(wavenumbers, (planck_curve(wavenumbers, T+1)-planck_curve(wavenumbers, T))*1000,
lw=2, color=color, label="Planck feedback, {}K".format(T))
def make_rrtmg_spectrum(ax, OLR_spectral, color='blue', alpha=0.5, label='RRTMG - 300K'):
# Need to normalize RRTMG spectral outputs by width of each wavenumber band
ax.bar(spectral_centers, np.squeeze(OLR_spectral)/spectral_widths,
width=spectral_widths, color=color, edgecolor='black', alpha=alpha, label=label)
# Plot!
fig, ax = plt.subplots(dpi=100)
make_planck_curve(ax, 300, color='orange')
make_rrtmg_spectrum(ax, rad1.OLR_spectral, label='RRTMG - 300K')
ax.legend(frameon=False)
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel("TOA flux (W/m$^{2}$/cm$^{-1}$)")
ax.grid()
# Same calculation as above but with some well-mixed CO2 in the column
rad2 = calc_olr(SST=300, CO2ppmv=10, RH=0., qStrat=0., return_spectral_olr=True, )
rad3 = calc_olr(SST=300, CO2ppmv=280, RH=0., qStrat=0., return_spectral_olr=True, )
fig, ax = plt.subplots(dpi=100)
make_planck_curve(ax, 300, color='orange')
make_rrtmg_spectrum(ax, rad1.OLR_spectral, label='RRTMG - 300K, 0ppmv CO2', color='blue')
make_rrtmg_spectrum(ax, rad2.OLR_spectral, label='RRTMG - 300K, 10ppmv CO2', color='orange')
make_rrtmg_spectrum(ax, rad3.OLR_spectral, label='RRTMG - 300K, 280ppmv CO2', color='green')
ax.legend(frameon=False)
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel("TOA flux (W/m$^{2}$/cm$^{-1}$)")
ax.grid()
# Our calc_olr() function handles water vapor by setting the RH parameter
rad4 = calc_olr(SST=300, CO2ppmv=0., RH=0.8, return_spectral_olr=True, )
fig, ax = plt.subplots(dpi=100, figsize=(7,4))
make_planck_curve(ax, 300, color='orange')
make_rrtmg_spectrum(ax, rad1.OLR_spectral, label="RRTMG - 300K, 0ppmv CO2", color='blue')
make_rrtmg_spectrum(ax, rad4.OLR_spectral, label="RRTMG - 300K, water vapour, 0ppmv CO2", color='orange')
ax.legend(frameon=False, loc='upper right')
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel("TOA flux (W/m$^{2}$/cm$^{-1}$)")
ax.grid()
SSTcolors = {320: 'green',
300: 'orange',
280: 'blue',
}
rad = {}
for SST in SSTcolors:
rad[SST] = calc_olr(SST=SST, CO2ppmv=0., RH=0.8, return_spectral_olr=True, )
# Plot!
fig, ax = plt.subplots(dpi=100, figsize=(7,4))
for SST in SSTcolors:
make_planck_curve(ax, SST, color=SSTcolors[SST])
make_rrtmg_spectrum(ax, rad[SST].OLR_spectral,
label="RRTMG - {}K, water vapour, no CO2".format(SST),
color=SSTcolors[SST])
ax.set_xlim(0, 4000)
ax.legend(frameon=False, loc='upper right')
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel("TOA flux (W/m$^{2}$/cm$^{-1}$)")
ax.grid()
feedback = {}
for SST in SSTcolors:
# Calculate perturbation (+1K) state diagnostics
rad_p1 = calc_olr(SST=SST+1, CO2ppmv=0., RH=0.8, return_spectral_olr=True, )
# Calculate spectral feedback parameter
feedback[SST] = (rad_p1.OLR_spectral-rad[SST].OLR_spectral)
# Plot!
fig, ax = plt.subplots(dpi=100, figsize=(7,4))
SST=280
make_planck_feedback(ax, SST, color=SSTcolors[SST])
make_rrtmg_spectrum(ax, feedback[SST]*1000,
label="RRTMG - {}K, water vapour, no CO2".format(SST),
color=SSTcolors[SST])
ax.set_xlim(0, 4000)
ax.set_ylim(-0.5, 6)
ax.legend(frameon=False, loc='upper right')
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel(r"$\lambda_{\nu}$ (mW/m$^{2}$/cm$^{-1}/K$)")
ax.grid()
# Plot!
fig, ax = plt.subplots(dpi=100, figsize=(7,4))
SST=300
make_planck_feedback(ax, SST, color=SSTcolors[SST])
make_rrtmg_spectrum(ax, feedback[SST]*1000,
label="RRTMG - {}K, water vapour, no CO2".format(SST),
color=SSTcolors[SST])
ax.set_xlim(0, 4000)
ax.set_ylim(-0.5, 6)
ax.legend(frameon=False, loc='upper right')
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel(r"$\lambda_{\nu}$ (mW/m$^{2}$/cm$^{-1}/K$)")
ax.grid()
# Plot!
fig, ax = plt.subplots(dpi=100, figsize=(7,4))
SST=320
make_planck_feedback(ax, SST, color=SSTcolors[SST])
make_rrtmg_spectrum(ax, feedback[SST]*1000,
label="RRTMG - {}K, water vapour, no CO2".format(SST),
color=SSTcolors[SST])
ax.set_xlim(0, 4000)
ax.set_ylim(-1, 6.5)
ax.legend(frameon=False, loc='upper right')
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel(r"$\lambda_{\nu}$ (mW/m$^{2}$/cm$^{-1}/K$)")
ax.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Set up idealized atmospheric profiles of temperature and humidity
Step3: Now, compute specific humidity profile using climlab.radiation.water_vapor.ManabeWaterVapor
Step4: Run the profiles through RRTMG_LW
Step5: Now, wrap it all into a simple function
Step6: Now, let's iterate over a few (SST, CO2) pairs
Step7: Okay then! As expected we can see that, all else being equal, increasing CO$_{2}$ decreases the OLR, while increasing SST increases it.
Step9: Now, let's check to see if we get the familiar Planck curve
Step10: Now, what happens when we include $CO_{2}$?
Step11: As we saw before, including $CO_{2}$ in the radiative transfer calculation reduces the total OLR (i.e., the spectral integral over what we've plotted). This happens predominantly due to absorption at the center of the $15 \mu\mathrm{m}$ $CO_{2}$ band (around $667.5 \mathrm{cm}^{-1}$).
Step13: Water vapour clearly also influences the OLR spectrum quite a bit! Two interesting things to note
Step14: Nice!
Step16: At low temperatures, the feedback parameter in the window region is close the the Planck feedback, indicating efficient emission to space from these wavenumbers.
Step19: At higher temperatures, water vapour becomes optically thick in the window region, causing the OLR to become less sensitive to changes in surface temperature. As such, the feedback parameter reduces rapidly.
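A minimal check tying Steps 16-19 back to a single number: summing the per-band values in the feedback dict from the code above gives the total clear-sky longwave feedback parameter in W/m^2/K (this assumes the feedback dict is still in scope).

import numpy as np
for SST in (280, 300, 320):
    total_lambda = float(np.sum(np.squeeze(feedback[SST])))
    print(SST, total_lambda)
# Expect the total to shrink with SST as the water-vapour window closes.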
|
12,645
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
wiki = graphlab.SFrame('people_wiki.gl')
wiki
wiki['URI'][1]
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
wiki[wiki['name'] == 'Barack Obama'][['word_count']].stack('word_count', new_column_name=['word','count']).sort('count',ascending=False)
def top_words(name):
'''Get a table of the most frequent words in the given person's wikipedia page.'''
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
combined_words = obama_words.join(barrio_words, on='word')
combined_words
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
combined_words.sort('Obama', ascending=False)
obama_words = top_words('Barack Obama')
common_words = list(obama_words[:5]['word'])
type(common_words)
# common_words
set(common_words)
common_words = list(top_words('Barack Obama')[:5]['word']) # Barack Obama 5 largest words
print common_words
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) #using keys() method and using set() method convert list to set
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print wiki['has_top_words']
sum(wiki['has_top_words'])
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
type(wiki[33])
a = graphlab.SFrame(wiki[wiki['name']=='Barack Obama']['word_count'])[0]['X1']
b = graphlab.SFrame(wiki[wiki['name']=='George W. Bush']['word_count'])[0]['X1']
c = graphlab.SFrame(wiki[wiki['name']=='Joe Biden']['word_count'])[0]['X1']
graphlab.toolkits.distances.euclidean(a,b) # Obama and Bush
graphlab.toolkits.distances.euclidean(a,c) # Obama and Joe
graphlab.toolkits.distances.euclidean(b,c) # Bush and Joe
bush_words = top_words('George W. Bush')
obama_words.join(bush_words, on='word') \
.rename({'count' : 'Obama', 'count.1' : 'Bush'}) \
.sort('Obama', ascending = False)
obama_words.join(bush_words, on='word') \
.rename({'count' : 'Obama', 'count.1' : 'Bush'}) \
.sort('Obama', ascending = False)['word'][:10]
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
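# (Illustrative only -- the standard definition; graphlab's exact smoothing may differ)
# tf_idf(w, d) = count(w, d) * log(N / n_docs_containing(w))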
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
combination2_words = obama_tf_idf.join(schiliro_tf_idf,on='word').sort('weight',ascending=False)
combination2_words
combination2_words = combination2_words.rename({'weight':'Obama', 'weight.1':'Schiliro'})
combination2_words
combination2_words = combination2_words.sort('Obama', ascending=False)
combination2_words
common_words = set(list(combination2_words[:5]['word']))
common_words
# common_words = common_words
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys())
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print wiki['has_top_words'] # YOUR CODE HERE
sum(wiki['has_top_words'])
obama = wiki[wiki['name'] == 'Barack Obama']['tf_idf'][0]
biden = wiki[wiki['name'] == 'Joe Biden']['tf_idf'][0]
graphlab.toolkits.distances.euclidean(obama, biden)
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
def compute_length(row):
return len(row['text'])
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama = wiki[wiki['name'] == 'Barack Obama']
obama
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
model2_tf_idf.query(obama, label='name', k=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Wikipedia dataset
Step2: Extract word count vectors
Step3: Find nearest neighbors
Step4: Let's look at the top 10 nearest neighbors by performing the following query
Step6: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Step7: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data
Step8: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
Step9: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
Step10: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Step11: Checkpoint. Check your has_top_words function on two random articles
Step12: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Step13: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words,
Step14: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
Step15: Let's determine whether this list makes sense.
Step16: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
Step17: The first 10 words should say
Step18: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Step19: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability
Step20: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
Step21: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
Step22: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
Step23: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Step24: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Step25: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
Step26: Now, compute the cosine distance between the Barack Obama article and this tweet
Step27: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors
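A minimal NumPy sketch (not taken from the Wikipedia data; the vectors below are made-up toy tf-idf values) of the length bias behind Step22-Step24: scaling a document vector, as happens when an article is simply longer with the same word proportions, changes its Euclidean distance to a query but leaves its cosine distance unchanged.
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([3.0, 1.0, 0.0, 2.0])      # toy query vector
doc = np.array([2.0, 1.0, 0.0, 2.0])        # a short document
longer_doc = 3.0 * doc                      # same proportions, three times the length

print(euclidean(query, doc), euclidean(query, longer_doc))              # grows with length
print(cosine_distance(query, doc), cosine_distance(query, longer_doc))  # identical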
|
12,646
|
<ASSISTANT_TASK:>
Python Code:
import sys
try:
import cplex
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install cplex
else:
!pip install --user cplex
import sys
try:
import docplex.mp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
m.maximize(12 * desk + 20 * cell)
m.print_information()
s = m.solve()
m.print_solution()
# create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new proble, we expect a result of None as the model is now infeasible
ims = im.solve()
if ims is None:
print('- model is infeasible')
overtime = m.continuous_var(name='overtime', ub=40)
ct_assembly.rhs = 400 + overtime
m.maximize(12*desk + 20 * cell - 2 * overtime)
s2 = m.solve()
m.print_solution()
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve()
# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))
m.parameters.lpmethod = 4
m.solve(log_output=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Installs DOcplex if needed
Step2: If either CPLEX or docplex were installed in the steps above, you will need to restart your jupyter kernel for the changes to be taken into account.
Step 2
Step3: Define the decision variables
Step4: Set up the constraints
Step5: Express the objective
Step6: A few remarks about how we formulated the mathematical model in Python using DOcplex
Step7: Graphical representation of a Linear Problem
Step8: In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.
Step9: Correcting infeasible models
Step10: Modify the assembly time constraint by changing its right-hand side by adding overtime.
Step11: Last, modify the objective expression to add the penalization term.
Step12: And solve again using DOcplex
Step13: Unbounded Variable vs. Unbounded model
Step14: Default optimality criteria for CPLEX optimizer
Step15: Degeneracy
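For reference, the telephone-production model assembled step by step above is the following linear program (a restatement of the objective and constraints already passed to DOcplex, with $d$ = desk and $c$ = cell production):
$$\max\; 12d + 20c \quad \text{s.t.}\quad 0.2d + 0.4c \le 400,\;\; 0.5d + 0.4c \le 490,\;\; d \ge 100,\;\; c \ge 100.$$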
|
12,647
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("../python/")
import pentoref.IO as IO
import sqlite3 as sqlite
# Create databases if required
if False: # make True if you need to create the databases from the derived data
for corpus_name in ["TAKE", "TAKECV", "PENTOCV"]:
data_dir = "../../../pentoref/{0}_PENTOREF".format(corpus_name)
dfwords, dfutts, dfrefs, dfscenes, dfactions = IO.convert_subcorpus_raw_data_to_dataframes(data_dir)
IO.write_corpus_to_database("{0}.db".format(corpus_name),
corpus_name, dfwords, dfutts, dfrefs, dfscenes, dfactions)
# Connect to database
CORPUS = "PENTOCV"
db = sqlite.connect("{0}.db".format(CORPUS))
cursor = db.cursor()
# get the table column header names
print("utts", [x[1] for x in cursor.execute("PRAGMA table_info(utts)")])
print("words", [x[1] for x in cursor.execute("PRAGMA table_info(words)")])
print("refs", [x[1] for x in cursor.execute("PRAGMA table_info(refs)")])
print("scenes", [x[1] for x in cursor.execute("PRAGMA table_info(scenes)")])
print("actions", [x[1] for x in cursor.execute("PRAGMA table_info(actions)")])
for row in db.execute("SELECT gameID, starttime, speaker, utt_clean FROM utts" + \
" WHERE starttime >= 200 AND starttime <= 300" + \
' AND gameID = "r8_1_1_b"' + \
" ORDER BY gameID, starttime"):
print(row)
from collections import Counter
from pentoref.IOutils import clean_utt
piece_counter = Counter()
word_counter = Counter()
word_piece_counter = Counter()
for row in db.execute("SELECT id, gameID, text, uttID FROM refs"):
#for row in db.execute("SELECT shape, colour, orientation, gridPosition, gameID, pieceID FROM scenes"):
#isTarget = db.execute('SELECT refID FROM refs WHERE gameID ="' + row[4] + '" AND pieceID ="' + row[5] + '"')
#target = False
#for r1 in isTarget:
# target = True
#if not target:
# continue
#print(r)
#shape, colour, orientation, gridPosition, gameID, pieceID = row
#piece = gridPosition #shape + "_" + colour
piece, gameID, text, uttID = row
if CORPUS in ["TAKECV", "TAKE"]:
for f in db.execute('SELECT word from words WHERE gameID ="' + str(gameID) + '"'):
#print(f)
for word in f[0].lower().split():
word_counter[word] += 1
word_piece_counter[piece+"__"+word]+=1
piece_counter[piece] += 1
elif CORPUS == "PENTOCV":
for word in clean_utt(text.lower()).split():
word_counter[word] += 1
word_piece_counter[piece+"__"+word]+=1
piece_counter[piece] += 1
good_pieces = ["X", "Y", "P", "N", "U", "F", "Z", "L", "T", "I", "W", "V", "UNK"]
print("non standard pieces", {k:v for k,v in piece_counter.items() if k not in good_pieces})
piece_counter
word_counter.most_common(20)
word_total = sum(word_piece_counter.values())
piece_total= sum(piece_counter.values())
for piece, p_count in piece_counter.items():
print("piece:", piece, p_count)
p_piece = p_count/piece_total
highest = -1
best_word = ""
rank = {}
for word, w_count in word_counter.items():
if w_count < 3:
continue
p_word = w_count / word_total
p_word_piece = word_piece_counter[piece+"__"+word] / word_total
mi = (p_word_piece/(p_piece * p_word))
rank[word] = mi
if mi > highest:
highest = mi
best_word = word
if True:
top = 5
for k, v in sorted(rank.items(), key=lambda x:x[1], reverse=True):
print(k, v)
top -=1
if top <= 0:
break
print("*" * 30)
db.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get utterances from certain time periods in each experiment or for certain episodes
Step2: Get mutual information between words used in referring expressions and properties of the referent
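A note on the score used for Step2: the quantity mi = p_word_piece / (p_piece * p_word) computed in the loop above is an empirical estimate of the (unlogged) pointwise-mutual-information ratio. The usual PMI is its logarithm,
$$\mathrm{PMI}(w, \text{piece}) = \log \frac{p(w, \text{piece})}{p(w)\,p(\text{piece})},$$
and since the logarithm is monotone, ranking words by the ratio gives the same per-piece ordering.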
|
12,648
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import RegressionEvaluator, MulticlassClassificationEvaluator
from pyspark.ml import Pipeline
from pyspark.mllib.regression import LabeledPoint
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import StringIndexer
from pyspark.mllib.evaluation import MulticlassMetrics
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit, CrossValidator
def mapLibSVM(row):
return (row[5],Vectors.dense(row[:3]))
df = spark.read \
.format("csv") \
.option("header", "true") \
.option("inferSchema", "true") \
.load("datasets/iris.data")
indexer = StringIndexer(inputCol="label", outputCol="labelIndex")
indexer = indexer.fit(df).transform(df)
indexer.show()
dfLabeled = indexer.rdd.map(mapLibSVM).toDF(["label", "features"])
dfLabeled.show()
train, test = dfLabeled.randomSplit([0.9, 0.1], seed=12345)
lr = LogisticRegression(labelCol="label", maxIter=15)
paramGrid = ParamGridBuilder()\
.addGrid(lr.regParam, [0.1, 0.001]) \
.build()
tvs = TrainValidationSplit(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(),
trainRatio=0.8)
cval = CrossValidator(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(),
numFolds=10)
result_tvs = tvs.fit(train).transform(test)
result_cval = cval.fit(train).transform(test)
preds_tvs = result_tvs.select(["prediction", "label"])
preds_cval = result_cval.select(["prediction", "label"])
# Instantiate the metrics objects
metrics_tvs = MulticlassMetrics(preds_tvs.rdd)
metrics_cval = MulticlassMetrics(preds_cval.rdd)
# Summary statistics for the TrainValidationSplit model
print("Summary Stats")
print("F1 Score = %s" % metrics_tvs.fMeasure())
print("Accuracy = %s" % metrics_tvs.accuracy)
print("Weighted recall = %s" % metrics_tvs.weightedRecall)
print("Weighted precision = %s" % metrics_tvs.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics_tvs.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics_tvs.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics_tvs.weightedFalsePositiveRate)
# Summary statistics for the CrossValidator model
print("Summary Stats")
print("F1 Score = %s" % metrics_cval.fMeasure())
print("Accuracy = %s" % metrics_cval.accuracy)
print("Weighted recall = %s" % metrics_cval.weightedRecall)
print("Weighted precision = %s" % metrics_cval.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics_cval.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics_cval.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics_cval.weightedFalsePositiveRate)
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol="label", featuresCol="features")
paramGrid = ParamGridBuilder()\
.addGrid(rf.numTrees, [1, 100]) \
.build()
cval = CrossValidator(estimator=rf,
estimatorParamMaps=paramGrid,
evaluator=MulticlassClassificationEvaluator(),
numFolds=10)
results = cval.fit(train).transform(test)
predictions = results.select(["prediction", "label"])
# Instantiate the metrics object
metrics = MulticlassMetrics(predictions.rdd)
# Summary statistics for the random forest CrossValidator model
print("Summary Stats")
print("F1 Score = %s" % metrics.fMeasure())
print("Accuracy = %s" % metrics.accuracy)
print("Weighted recall = %s" % metrics.weightedRecall)
print("Weighted precision = %s" % metrics.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics.weightedFalsePositiveRate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: Converting the categorical label to a numeric index
Step3: Defining the logistic regression model
Step4: Cross-Validation - TrainValidationSplit and CrossValidator
Step5: Training the model and predicting on the test set
Step6: Evaluating the models
Step7: Conclusion
Step8: Defining the random forest model
Step9: Cross-Validation - CrossValidator
Step10: Training the model and predicting on the test set
Step11: Evaluating the model
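For reference, the per-class quantities behind the weighted scores reported by MulticlassMetrics in the evaluation steps (Step6 and Step11) are the standard ones; the "weighted" variants average them over classes, weighted by class frequency:
$$\text{precision} = \frac{TP}{TP+FP},\qquad \text{recall} = \frac{TP}{TP+FN},\qquad F_\beta = (1+\beta^2)\,\frac{\text{precision}\cdot\text{recall}}{\beta^2\,\text{precision}+\text{recall}}.$$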
|
12,649
|
<ASSISTANT_TASK:>
Python Code:
import sys
print("Python %d.%d.%d" % (sys.version_info.major, \
sys.version_info.minor, \
sys.version_info.micro))
import numpy as np
print("NumPy %s" % np.__version__)
import scipy
import scipy.io as sio
from scipy.optimize import fmin_cg
print("SciPy %s" % scipy.__version__)
import matplotlib
import matplotlib.pyplot as plt
print("matplotlib %s" % matplotlib.__version__)
import time
# Display graph inline
%matplotlib inline
# Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format.
%config InlineBackend.figure_format = 'retina'
#%config InlineBackend.figure_format = 'PNG'
#%config InlineBackend.figure_format = 'SVG'
# For displaying 3D graph
from mpl_toolkits.mplot3d import Axes3D
movies = sio.loadmat('ex8_movies.mat')
# Rating: 1 to 5 stars
# R_ij: rating of movie i by user j
R = movies['R']
# Rating flag
# Y_ij = 0: move i has not been rated by user j
# Y_ij = 1: move i has been rated by user j
Y = movies['Y']
np.mean(Y[0, R[0] == 1])
fig = plt.figure()
ax = fig.add_subplot(111)
imshow = ax.imshow(Y, aspect='auto')
plt.title('Movie Rating by Users (1 to 5 stars)\n', linespacing=1)
plt.xlabel('Users')
plt.ylabel('Movies')
plt.colorbar(imshow)
fig.show()
def load_movie_list(filename):
movie_list = []
with open(filename, 'r') as input_fh:
for line in input_fh.readlines():
space_idx = line.find(' ')
if space_idx:
movie_list.append(line[space_idx + 1:].strip())
return movie_list
movie_list = load_movie_list('movie_ids.txt')
def normalize_rating(Y, R):
[m, n] = Y.shape
Y_mean = np.zeros([m, 1])
Y_norm = np.zeros(Y.shape)
for i in range(m):
Y_mean[i] = np.mean(Y[i, R[i] == 1])
Y_norm = Y - Y_mean
return Y_mean, Y_norm
Y_mean, Y_norm = normalize_rating(Y, R)
Y_mean
Y_norm
def compute_cost(X_Theta, args):
Y, R, Lambda, n_m, n_u, n = args
X_Theta = X_Theta.reshape([n_m + n_u, n])
X = X_Theta[:n_m,:]
Theta = X_Theta[n_m:,:]
error = (np.dot(X, Theta.transpose()) - Y) * R
J = (np.sum(error ** 2) / 2) + \
((Lambda * np.sum(Theta ** 2)) / 2) + \
((Lambda * np.sum(X ** 2)) / 2)
return J
def compute_cost_wrapper(X_Theta):
return compute_cost(X_Theta, args)
def compute_grad(X_Theta, args):
Y, R, Lambda, n_m, n_u, n = args
X_Theta = X_Theta.reshape([n_m + n_u, n])
X = X_Theta[:n_m,:]
Theta = X_Theta[n_m:,:]
error = (np.dot(X, Theta.transpose()) - Y) * R
X_grad = np.dot(error, Theta) + (Lambda * X)
Theta_grad = np.dot(error.transpose(), X) + (Lambda * Theta)
return np.vstack([X_grad, Theta_grad]).flatten()
def compute_grad_wrapper(X_Theta):
return compute_grad(X_Theta, args)
dummy_movies = sio.loadmat('ex8_movieParams.mat')
dummy_num_users = 4
dummy_num_movies = 5
dummy_num_features = 3
dummy_X = dummy_movies['X'][:dummy_num_movies, :dummy_num_features]
dummy_Theta = dummy_movies['Theta'][:dummy_num_users, :dummy_num_features]
dummy_Y = Y[:dummy_num_movies, :dummy_num_users]
dummy_R = R[:dummy_num_movies, :dummy_num_users]
Lambda = 1.5
X_Theta = np.vstack([dummy_X, dummy_Theta]).flatten()
args = np.asarray((dummy_Y, dummy_R, Lambda, dummy_num_movies, dummy_num_users, dummy_num_features))
J = compute_cost(X_Theta, args)
J
Lambda = 1.5
X_Theta = np.vstack([dummy_X, dummy_Theta])
args = np.asarray((dummy_Y, dummy_R, Lambda, dummy_num_movies, dummy_num_users, dummy_num_features))
grad = compute_grad(X_Theta, args)
grad
# Get total movies (n_m) and total users (n_u)
n_m, n_u = Y.shape
my_ratings = np.zeros([n_m, 1])
my_ratings[0] = 4
my_ratings[97] = 2
my_ratings[6] = 3
my_ratings[11] = 5
my_ratings[53] = 4
my_ratings[63] = 5
my_ratings[65] = 3
my_ratings[68] = 5
my_ratings[182] = 4
my_ratings[225] = 5
my_ratings[354] = 5
for i, rating in enumerate(my_ratings):
if rating:
print('Rated %5s for %s' % ('*' * int(rating), movie_list[i]))
my_Y = np.hstack([my_ratings, Y])
my_R = np.hstack([(my_ratings != 0).astype(int), R])
# Normalize
my_Y_mean, my_Y_norm = normalize_rating(my_Y, my_R)
# Get total movies (n_m) and total users (n_u)
n_m, n_u = my_Y.shape
# Total parameters
n = 100
# Initialize random parameters X and Theta
X_init = np.random.rand(n_m, n)
Theta_init = np.random.rand(n_u, n)
X_Theta_init = np.vstack([X_init, Theta_init]).flatten()
# Regularization
Lambda = 1.5
# Arguments
args = np.asarray((my_Y_norm, my_R, Lambda, n_m, n_u, n))
tic = time.time()
X_Theta_result = fmin_cg(compute_cost_wrapper, X_Theta_init, fprime=compute_grad_wrapper)
toc = time.time()
print('Runtime: %s seconds' % int(toc - tic))
X_Theta_init
X_Theta_result = X_Theta_result.reshape([n_m + n_u, n])
X_result = X_Theta_result[:n_m,:]
Theta_result = X_Theta_result[n_m:,:]
X_result
Theta_result
# Prediction
p = np.dot(X_result, Theta_result.transpose()) + Y_mean
p = np.rint(((p - np.min(p)) / (np.abs(np.min(p)) + np.abs(np.max(p)))) * 4) + 1
my_predictions = p[:,1].reshape(p.shape[0],1)
sorted_my_predictions = np.argsort(my_predictions, axis=0)[::-1]
# Display top ten recommendation
for i in sorted_my_predictions[:10]:
print('Rated %5s for %s' % (('*' * int(my_predictions[i])), movie_list[i]))
Y = np.asarray([[5, 5, 0, 0],
[5, 0, 0, 0],
[0, 4, 0, 0],
[0, 0, 5, 4],
[0, 0, 5, 0]])
R = np.asarray([[1, 1, 1, 1],
[1, 0, 0, 1],
[0, 1, 1, 0],
[1, 1, 1, 1],
[1, 1, 1, 0]])
movie_list = ['Love at Last',
'Romance Forever',
'Cute Puppies of Love',
'Nonstop Car Chases',
'Swords vs Karate']
def normalize_rating(Y, R):
[m, n] = Y.shape
Y_mean = np.zeros([m, 1])
Y_norm = np.zeros(Y.shape)
for i in range(m):
Y_mean[i] = np.mean(Y[i, R[i] == 1])
Y_norm = Y - Y_mean
return Y_mean, Y_norm
# Normalize
Y_mean, Y_norm = normalize_rating(Y, R)
# Get total movies (n_m) and total users (n_u)
n_m, n_u = Y.shape
# Total parameters
n = 100
# Initialize random parameters X and Theta
X_init = np.random.rand(n_m, n)
Theta_init = np.random.rand(n_u, n)
X_Theta_init = np.vstack([X_init, Theta_init]).flatten()
# Regularization
Lambda = 1.5
# Arguments
args = np.asarray((Y_norm, R, Lambda, n_m, n_u, n))
X_Theta_result = fmin_cg(compute_cost_wrapper, X_Theta_init, fprime=compute_grad_wrapper)
X_Theta_result = X_Theta_result.reshape([n_m + n_u, n])
X_result = X_Theta_result[:n_m,:]
Theta_result = X_Theta_result[n_m:,:]
# Prediction
p = np.dot(X_result, Theta_result.transpose()) + Y_mean
p = np.rint(((p - np.min(p)) / (np.abs(np.min(p)) + np.abs(np.max(p)))) * 5)
p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Modules
Step2: Display Settings
Step3: Collaborative Filtering[1]
Step4: Based on movie_ids.txt file, Toy Story (1995) movie is on the list no 1 or index 0.
Step5: Movie List
Step6: Normalize Function
Step7: Cost and Gradient Functions
Step8: Collaborative filtering gradient function
Step9: Rate Movies
Step10: Learn Rating
Step11: Testing with Small Sample
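For reference, the regularized objective implemented by compute_cost and compute_grad above (with x^(i) a row of X, theta^(j) a row of Theta, r_ij the rated flag and y_ij the rating) is
$$J(X,\Theta) = \frac{1}{2}\sum_{(i,j):\,r_{ij}=1}\big(x^{(i)}\theta^{(j)\top} - y_{ij}\big)^2 + \frac{\lambda}{2}\sum_{j}\|\theta^{(j)}\|^2 + \frac{\lambda}{2}\sum_{i}\|x^{(i)}\|^2,$$
with gradient $\partial J/\partial x^{(i)} = \sum_{j:\,r_{ij}=1}\big(x^{(i)}\theta^{(j)\top} - y_{ij}\big)\,\theta^{(j)} + \lambda x^{(i)}$ (and symmetrically for $\theta^{(j)}$), which is what the error, X_grad and Theta_grad lines compute in vectorized form.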
|
12,650
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import logging
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import hurraypy as hurray
import numpy as np
hurray.__version__
logger = logging.getLogger('hurraypy')
# console = logging.StreamHandler()
# console.setLevel(logging.DEBUG)
# console.setFormatter(logging.Formatter('%(levelname)s --- %(message)s'))
# logger.addHandler(console)
# logger.setLevel(logging.DEBUG)
logger.handlers
hurray.log.log.debug("bla")
hurray.log.log.info("bla")
# conn = hurray.connect('localhost:2222')
conn = hurray.connect('~/hurray.sock')
conn
f = conn.create_file("test.h5", overwrite=True)
f
f = conn.File("test.h5")
print(f)
with conn.File("test.h5") as f:
print(f)
f.delete()
f2 = conn.create_file("test2.h5", overwrite=True)
f2
f = f2.rename("test.h5")
f
f3 = conn.create_file("test3.h5", overwrite=True)
try:
f3.rename("test.h5")
except hurray.exceptions.DatabaseError as e:
print(e)
f4 = conn.create_file("project1/data.h5", overwrite=True)
f4
conn.list_files("project1/")
conn.list_files("")
dst = f.create_dataset("mydata", shape=(400, 300), dtype=np.float64)
dst
dst.shape, dst.dtype
dst.path
dst[:]
arr = np.linspace(0, 1, num=dst.shape[0] * dst.shape[1]).reshape(dst.shape)
arr.shape == dst.shape
dst[:] = arr
dst[:]
f
dst[10:12, 50:55]
dst[10:12, 50:55] = 999
dst[9:13, 50:55]
dst = f.require_dataset("mydata", shape=(400, 300), dtype=np.float64, exact=True)
dst[9:13, 50:55]
f.require_dataset("mydata", shape=(400, 300), dtype=np.int16, exact=True)
f.create_group("mygroup")
f.keys()
f.create_group("mygroup/subgroup")
subgrp = f["mygroup/subgroup"]
subgrp
data = np.random.random((600, 400))
dst = subgrp.create_dataset("randomdata", data=data)
dst
f.tree()
print(f.tree())
dst = f["mygroup/subgroup/randomdata"]
dst.attrs["unit"] = "celsius"
dst.attrs["max_value"] = 50
dst
dst.attrs.keys()
dst.attrs["unit"], dst.attrs["max_value"]
f.tree()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, make sure all logging messages are sent to stdout
Step2: Connecting to a hurray server
Step3: Working with files
Step4: Note that Hurray objects (files, datasets, groups) display nicely in Jupyter notebooks.
Step5: Working with existing files works like this
Step6: Deleting and renaming files is also possible
Step7: Note that the object referenced by f becomes unusable after deleting the file.
Step8: Note that rename() is not "in place". We must (re-)assign its return value.
Step9: Files can be in subdirectories
Step10: Working with datasets
Step11: A dataset has a shape and a dtype, just like NumPy arrays
Step12: It also has a path, which is the name of the dataset, prefixed by the names of containing groups. Our dataset is not contained in a group. It therefore appears under the root node / (actually, it is in a group
Step13: Let's check what data our dataset contains. Numpy-style indexing allows to read/write from/to a dataset. A [
Step14: Let's overwrite this dataset with increasing floating point numbers
Step15: Creating a dataset has increased file size
Step16: Fancy indexing allows us to read/write only portions of a dataset. In the following example, only columns 50 to 55 of rows 10 and 11 are sent over the wire
Step17: We can also overwrite the above cells using the same notation
Step18: Require ... TODO
Step19: This shoud result in an error because dtypes do not match
Step20: Working with groups
Step21: Recall that every file object is also a group and therefore acts like a dictionary. Its keys() now lists our newly created group
Step22: Let's create a subgroup (note that groups follow POSIX filesystem conventions)
Step23: Now let's put a dataset in our subgroup
Step24: Every group has a tree() method that displays sub groups and datasets as a tree.
Step25: If you're not in a notebook or ipython console, tree() will give you a text based representation
Step26: Attributes
Step27: Objects that have attributes get a red "A"
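A short recap sketch of the attribute round-trip, using only calls already shown in this walkthrough (it assumes the test.h5 file and the mygroup/subgroup/randomdata dataset created above still exist on the server; the extra attribute name is made up for illustration):
with conn.File("test.h5") as f:
    dst = f["mygroup/subgroup/randomdata"]
    dst.attrs["source"] = "np.random.random"   # write one more attribute
    for key in dst.attrs.keys():               # read all attributes back
        print(key, dst.attrs[key])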
|
12,651
|
<ASSISTANT_TASK:>
Python Code:
# numpy provides python tools to easily load comma separated files.
import numpy as np
# use numpy to load disease #1 data
d1 = np.loadtxt(open("../30_Data_ML-III/D1.csv", "rb"), delimiter=",")
# features are all rows for columns before 200
# The canonical way to name this is that X is our matrix of
# examples by features.
X1 = d1[:,:200]
# labels are in all rows at the 200th column
# The canonical way to name this is that y is our vector of
# labels.
y1 = d1[:,200]
# use numpy to load disease #2 data
d2 = np.loadtxt(open("../30_Data_ML-III/D2.csv", "rb"), delimiter=",")
# features are all rows for columns before 200
X2 = d2[:,:200]
# labels are in all rows at the 200th column
y2 = d2[:,200]
# First we need to import svms from sklearn
from sklearn.svm import SVC
# Get an SVC with default parameters as our algorithm
classifier = SVC()
# Fit the classifier to our datasets
classifier.fit(X1, y1)
# Apply the classifier back to our data and get an accuracy measure
train_score = classifier.score(X1, y1)
# Print the accuracy
print(train_score)
# Get an SVC with a high C
classifier = SVC(C = 100)
# Fit the classifier to our datasets
classifier.fit(X1, y1)
# Apply the classifier back to our data and get an accuracy measure
train_score = classifier.score(X1, y1)
# Print the accuracy
print(train_score)
import sklearn
# Import the function to split our data:
from sklearn.cross_validation import train_test_split
# Split things into training and testing - let's have 30% of our data end up as testing
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=.33)
# Get an SVC again using C = 100
classifier = SVC(C = 100)
# Fit the classifier to the training data:
classifier.fit(X1_train, y1_train)
# Now we're going to apply it to the training labels first:
train_score = classifier.score(X1_train, y1_train)
# We're also going to applying it to the testing labels:
test_score = classifier.score(X1_test, y1_test)
print("Training Accuracy: " + str(train_score))
print("Testing Accuracy: " + str(test_score))
# First, we need to import the classifier
from sklearn.tree import DecisionTreeClassifier
# Now we're going to get a decision tree classifier with the default parameters
classifier = DecisionTreeClassifier()
# The 'fit' syntax is the same
classifier.fit(X1_train, y1_train)
# As is the 'score' syntax
train_score = classifier.score(X1_train, y1_train)
test_score = classifier.score(X1_test, y1_test)
print("Training Accuracy: " + str(train_score))
print("Testing Accuracy: " + str(test_score))
# Now we're going to get a decision tree classifier with selected parameters
classifier = DecisionTreeClassifier(max_features=8, max_depth=3)
# The 'fit' syntax is the same
classifier.fit(X1_train, y1_train)
# As is the 'score' syntax
train_score = classifier.score(X1_train, y1_train)
test_score = classifier.score(X1_test, y1_test)
print("Training Accuracy: " + str(train_score))
print("Testing Accuracy: " + str(test_score))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implement an SVM!
Step2: The parts inside the parentheses give us the ability to set or change parameters. Anything with an equals sign after it has a default parameter set. In this case, the default C is set to 1.0. There's also a box that gives some description of what each parameter is (only a few of them may make sense to us right now). If we scroll to the bottom of the box, we'll get some examples provided by the helpful sklearn team, though they don't know about the names of our datasets. They'll often use the standard name X for features and y for labels.
Step3: Ouch! Only about 50% accuracy. That's painful! We learned that we could modify C to make the algorithm try to fit the data we show it better. Let's ramp up C and see what happens!
Step4: Nice! 100% accuracy. This seems like we're on the right track. What we'd really like to do is figure out how we do on held out testing data though. Fortunately, sklearn provides a helper function to make holding out some of the data easy. This function is called train_test_split and we can find its documentation. If we weren't sure where to go, the sklearn documentation has a full section on cross validation.
Step5: Now let's go ahead and train our classifier on the training data and test it on some held out test data
Step6: Nice! Now we can see that while our training accuracy is very high, our testing accuracy is much lower. We could say that our model has "overfit" to the data. We learned about overfitting before. You'll get a chance to play with this SVM a bit more below. Before we move to that though, we want to show you how easy it is to use a different classifier. You might imagine that a classifier could be composed of a cascading series of rules: if one condition holds, follow one branch; otherwise, follow the other. This type of algorithm is called a decision tree, and we're going to train one now.
Step7: Oof! That's pretty overfit! We're perfect on the training data but basically flipping a coin on the held out data. A DecisionTreeClassifier has two parameters max_features and max_depth that can really help us prevent overfitting. Let's train a very small tree (no more than 8 features) that's very short (no more than 3 deep).
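A natural follow-up (not part of the original exercise) is to pick C by k-fold cross-validation instead of a single train/test split; a minimal sketch using the same sklearn.cross_validation module imported earlier and the X1, y1 arrays loaded above:
from sklearn.cross_validation import cross_val_score
from sklearn.svm import SVC
import numpy as np

for C in [0.1, 1, 10, 100]:
    scores = cross_val_score(SVC(C=C), X1, y1, cv=5)   # 5-fold cross-validated accuracy
    print("C = " + str(C) + ", mean CV accuracy = " + str(np.mean(scores)))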
|
12,652
|
<ASSISTANT_TASK:>
Python Code:
PATH_NEWS_ARTICLES="/home/phoenix/Documents/HandsOn/Final/news_articles.csv"
ARTICLES_READ=[2,7]
NUM_RECOMMENDED_ARTICLES=5
try:
import numpy
import pandas as pd
import pickle as pk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import re
from nltk.stem.snowball import SnowballStemmer
import nltk
stemmer = SnowballStemmer("english")
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy" "pandas" "sklearn" "nltk"
import numpy
import pandas as pd
import pickle as pk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import re
from nltk.stem.snowball import SnowballStemmer
import nltk
stemmer = SnowballStemmer("english")
print('Done!')
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
news_articles.head()
#Select relevant columns and remove rows with missing values
news_articles = news_articles[['Article_Id','Title','Content']].dropna()
#articles is a list of all articles
articles = news_articles['Content'].tolist()
articles[0] #an uncleaned article
def clean_tokenize(document):
document = re.sub('[^\w_\s-]', ' ',document) #remove punctuation marks and other symbols
tokens = nltk.word_tokenize(document) #Tokenize sentences
cleaned_article = ' '.join([stemmer.stem(item) for item in tokens]) #Stemming each token
return cleaned_article
cleaned_articles = map(clean_tokenize, articles)
cleaned_articles[0] #a cleaned, tokenized and stemmed article
#Get user representation in terms of words associated with read articles
user_articles = ' '.join(cleaned_articles[i] for i in ARTICLES_READ)
user_articles
#Generate tfidf matrix model for entire corpus
tfidf_matrix = TfidfVectorizer(stop_words='english', min_df=2)
article_tfidf_matrix = tfidf_matrix.fit_transform(cleaned_articles)
article_tfidf_matrix #tfidf vector of an article
#Generate tfidf matrix model for read articles
user_article_tfidf_vector = tfidf_matrix.transform([user_articles])
user_article_tfidf_vector
user_article_tfidf_vector.toarray()
articles_similarity_score=cosine_similarity(article_tfidf_matrix, user_article_tfidf_vector)
recommended_articles_id = articles_similarity_score.flatten().argsort()[::-1]
recommended_articles_id
#Remove read articles from recommendations
final_recommended_articles_id = [article_id for article_id in recommended_articles_id
if article_id not in ARTICLES_READ ][:NUM_RECOMMENDED_ARTICLES]
final_recommended_articles_id
#Recommended Articles and their title
print 'Articles Read'
print news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']
print '\n'
print 'Recommender '
print news_articles.loc[news_articles['Article_Id'].isin(final_recommended_articles_id)]['Title']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Represent articles in terms of bag of words
Step2: 2. Represent user in terms of read articles associated words
Step3: 3. Generate TF-IDF matrix for user read articles and unread articles
Step4: 4. Calculate cosine similarity between user read articles and unread articles
Step5: 5. Get the recommended articles
|
12,653
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Code Under Test
def entropy(ps):
items = ps * np.log(ps)
if any(np.isnan(items)):
raise ValueError("Cannot compute log of ps!")
return -np.sum(items)
np.isnan([.1, .9])
# Smoke test
entropy([0.5, 0.5])
# One-shot test. Need to know the correct answer.
SMALL_VALUE = 1e-5
entropy([SMALL_VALUE, 1-SMALL_VALUE])
# Edge test. This is something that should cause an exception.
entropy([-.1, .9])
# Pattern test
print (entropy([0.5, 0.5]), entropy([1/3, 1/3, 1/3]), entropy(np.repeat(1/20, 20)))
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 1)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
self.assertEqual(1, 1)
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 1)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
self.assertEqual(1, 1)
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# Function the handles test loading
#def test_setup(argument ?):
# Implementating a pattern test. Use functions in the test.
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_equal_probability(self):
def test(count):
            """
            Invokes the entropy function for a number of values equal to count
            that have the same probability.
            :param int count:
            """
            raise RuntimeError("Not implemented.")
#
test(2)
test(20)
test(200)
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
try:
entropy([0.1, -0.5])
self.assertTrue(False)
except ValueError:
self.assertTrue(True)
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
with self.assertRaises(ValueError):
a = 1 / 0.0
entropy([0.1, -0.5])
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_oneshot(self):
self.assertEqual(geomean([1,1]), 1)
def test_oneshot2(self):
self.assertEqual(geomean([3, 3, 3]), 3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
Step2: Question
Step3: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
Step4: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Step6: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach.
Step7: Testing For Exceptions
Step8: unittest provides help with testing exceptions.
Step9: Test Files
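The closed form behind the pattern test of Step3 and Step6: for n equally likely outcomes, $p_i = 1/n$, so
$$H = -\sum_{i=1}^{n}\frac{1}{n}\log\frac{1}{n} = \log n.$$
With natural logarithms (as np.log uses) this predicts roughly 0.693, 1.099 and 3.00 for the three values printed by the pattern test above, an identity the test(count) helper can assert for any count.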
|
12,654
|
<ASSISTANT_TASK:>
Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
import deepchem as dc
import os
from deepchem.utils import download_url
download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz")
data_dir = os.path.join(dc.utils.get_data_dir())
dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz")
raw_dataset = dc.utils.save.load_from_disk(dataset_file)
print("Type of dataset is: %s" % str(type(raw_dataset)))
print(raw_dataset[:5])
#print("Shape of dataset is: %s" % str(raw_dataset.shape))
import numpy as np
import tensorflow as tf
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training the Model
|
12,655
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import re
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score
from os.path import join
from bs4 import BeautifulSoup
root_dir = '/Users/arman/kaggledata/popcorn'
dfTrain = pd.read_csv(join(root_dir,'labeledTrainData.tsv'),header=0,\
delimiter="\t",quoting=3)
dfTest = pd.read_csv(join(root_dir,'testData.tsv'), header=0,\
delimiter="\t", quoting=3 )
dfTrain.head(5)
dfTest.head(5)
dfTrain['review'][11]
target = dfTrain['sentiment']
def review_to_wordlist(review, remove_stopwords=False, split=False):
    """
    Simple text cleaning function,
    uses BeautifulSoup to extract text content from html
    removes all non-alphabet
    converts to lower case
    can remove stopwords
    can perform simple tokenization using split by whitespace
    """
review_text = BeautifulSoup(review, 'lxml').get_text()
review_text = re.sub("[^a-zA-Z]"," ", review_text)
words = review_text.lower().split()
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
if split:
return(words)
else:
return(' '.join(words))
review_to_wordlist(dfTrain['review'][11])
review_to_wordlist(dfTrain['review'][11],remove_stopwords=True)
token = review_to_wordlist(dfTrain['review'][11],remove_stopwords=True, split=True)
print(token)
dfTrain['review'] = dfTrain['review'].map(review_to_wordlist)
dfTest['review'] = dfTest['review'].map(review_to_wordlist)
train_len = len(dfTrain)
corpus = list(dfTrain['review']) + list(dfTest['review'])
tfv = TfidfVectorizer(min_df=3, max_features=None, ngram_range=(1, 2),\
use_idf=True,smooth_idf=True,sublinear_tf=True,\
stop_words = 'english')
tfv.fit(corpus)
X_all = tfv.transform(corpus)
print(X_all.shape)
train = X_all[:train_len]
test = X_all[train_len:]
Cs = [1,3,10,30,100,300]
for c in Cs:
clf = LogisticRegression(penalty='l2', dual=True, tol=0.0001,\
C=c, fit_intercept=True, intercept_scaling=1.0,\
class_weight=None, random_state=None)
print("c:",c," score:", np.mean(cross_val_score(clf, train, target,\
cv=5, scoring='roc_auc')))
clf = LogisticRegression(penalty='l2', dual=True, tol=0.0001,\
C=30, fit_intercept=True, intercept_scaling=1.0,\
class_weight=None, random_state=None)
clf.fit(train,target)
preds = clf.predict_proba(test)[:,1]
dfOut = pd.DataFrame( data={"id":dfTest["id"], "sentiment":preds} )
dfOut.to_csv(join(root_dir,'submission.csv'), index=False, quoting=3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you are missing bs4 or nltk you can install them via
Step2: Let's take a quick look at the data
Step3: In particular note that the review column has some html tags
Step4: Our target is to use the sentiment column of the training set to predict sentiment for the test set
Step6: Now we need some sort of "cleaning" processes, we simply eliminate all the non-alphabet characters and use BeautifulSoup library to extract the text content, Let's put everything together in a function
Step7: Before proceeding, let's test what our function does
Step8: and with the remove_stopwords flag on, it will give us
Step9: and with split flag on, it can actually perform a simple tokenization
Step10: Notice the words
Step11: Our corpus is all of the reviews
Step12: Now let's use sklearn's tf-idf vectorizer with unigrams and bigrams, and a log TF function (sublinear_tf=True)
Step13: We can now use the object tfv to build the tf-idf vector-space representation of the reviews, the transformation returns a sparse scipy matrix
Step14: Notice the shape of the X_all matrix
Step15: So it created about 300K numerical features! (the total count of words in the corpus + number of unique bigrams)
Step16: We now use a Logistic Regression model to fit to the numerical features, (LR is quite safe here to use for such a high number of features, to use tree based models we definitely need feature selection)
Step17: Our CV experiment suggests that c = 30 is the best choice, so we use our best model to fit to the entire train set now
Step18: and finally predicting for test set and storing the results
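A note on the sublinear_tf=True choice in Step12: the vectorizer then replaces the raw term frequency with a damped version,
$$\mathrm{tf}_{\text{sublinear}} = 1 + \log(\mathrm{tf}),$$
so a word repeated many times within a single review contributes only logarithmically more than a word appearing once.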
|
12,656
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import locale
import matplotlib.pyplot as plt
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, HoverTool
%matplotlib inline
from bokeh.plotting import output_notebook
output_notebook()
_ = locale.setlocale(locale.LC_ALL, '')
thousands_sep = lambda x: locale.format("%.2f", x, grouping=True)
#example:
print thousands_sep(1234567890.76543)
getdate_ym = lambda x: str(x.year) + "_" + str(x.month)
getdate_ymd = lambda x: str(x.month) + "/" + str(x.day) + "/" + str(x.year)
dates = pd.DatetimeIndex(['2010-10-17', '2011-05-13', "2012-01-15"])
map(getdate_ym, dates)
map(getdate_ymd, dates)
df = pd.read_csv('in/gifts_Feb2016_2.csv')
source_columns = ['donor_id', 'amount_initial', 'donation_date', 'appeal', 'fund', 'city', 'state', 'zipcode_initial', 'charitable', 'sales']
df.columns = source_columns
df.info()
strip_func = lambda x: x.strip() if isinstance(x, str) else x
df = df.applymap(strip_func)
df.replace({'appeal': {'0': ''}}, inplace=True)
df.appeal.fillna('', inplace=True)
df.fund.fillna('', inplace=True)
df.donation_date = pd.to_datetime(df.donation_date)
df.charitable = df.charitable.astype('bool')
df['zipcode'] = df.zipcode_initial.str[0:5]
fill_zipcode = lambda x: '0'*(5-len(str(x))) + str(x)
x1 = pd.DataFrame([[1, '8820'], [2, 8820]], columns=['a','b'])
x1.b = x1.b.apply(fill_zipcode)
x1
df.zipcode = df.zipcode.apply(fill_zipcode)
## Ensure that all amounts are dollar figures
df[~df.amount_initial.str.startswith('-$') & ~df.amount_initial.str.startswith('$')]
## drop row with invalid data
df.drop(df[df.donation_date == '1899-12-31'].index, axis=0, inplace=True)
df['amount_cleanup'] = df.amount_initial.str.replace(',', '')
df['amount_cleanup'] = df.amount_cleanup.str.replace('$', '')
df['amount'] = df.amount_cleanup.astype(float)
## Make sure we did not throw away valid numbers by checking with the original value
df[(df.amount == 0)].amount_initial.unique()
# There are some outliers in the data, quite a few of them are recent.
_ = plt.scatter(df[df.amount > 5000].amount.values, df[df.amount > 5000].donation_date.values)
plt.show()
# Fun little thing to try out bokeh (we can hover and detect the culprits)
def plot_data(df):
dates = map(getdate_ym, pd.DatetimeIndex(df[df.amount > 5000].donation_date))
amounts = map(thousands_sep, df[df.amount > 5000].amount)
x = df[df.amount > 5000].donation_date.values
y = df[df.amount > 5000].amount.values
donor_ids = df[df.amount > 5000].donor_id.values
states = df[df.amount > 5000].state.values
source = ColumnDataSource(
data=dict(
x=x,
y=y,
dates=dates,
amounts=amounts,
donor_ids=donor_ids,
states=states,
)
)
hover = HoverTool(
tooltips=[
("date", "@dates"),
("amount", "@amounts"),
("donor", "@donor_ids"),
("states", "@states"),
]
)
p = figure(plot_width=400, plot_height=400, title=None, tools=[hover])
p.circle('x', 'y', size=5, source=source)
show(p)
plot_data(df.query('amount > 5000'))
# All the Outliers seem to have the following properties: state == YY and specific donorid.
# Plot the remaining data outside of these to check that we caught all the outliers.
plot_data(df[~df.index.isin(df.query('state == "YY" and amount > 5000').index)])
# Outlier data
df[(df.state == 'YY') & (df.amount >= 45000)]
df[(df.state == 'YY') & (df.amount >= 45000)]\
.sort_values(by='amount', ascending=False)\
.head(6)[source_columns]\
.to_csv('out/0/outlier_data.csv')
df.drop(df[(df.state == 'YY') & (df.amount >= 45000)].index, inplace=True)
print 'After dropping the anonymous donor, total amounts from the unknown state as a percentage of all amounts is: '\
, thousands_sep(100*df[(df.state == 'YY')].amount.sum()/df.amount.sum()), '%'
## Some funds have zero amounts associated with them.
## They mostly look like costs - expense fees, transaction fees, administrative fees
## Let us examine if we can safely drop them from our analysis
df[df.amount_initial == '$0.00'].groupby(['fund', 'appeal'])['donor_id'].count()
df.drop(df[df.amount == 0].index, axis=0, inplace=True)
## What is the total amount of the negative?
print 'Total negative amount is: ', df[df.amount < 0].amount.sum()
# Add if condition to make this re-runnable
if df[df.amount < 0].amount.sum() > 0:
print 'Amounts grouped by fund and appeal, sorted by most negative amounts'
df[df.amount < 0]\
.groupby(['fund', 'appeal'])['amount',]\
.sum()\
.sort_values(by='amount')\
.to_csv('out/0/negative_amounts_sorted.csv')
df[df.amount < 0]\
.groupby(['fund', 'appeal'])['amount',]\
.sum()\
.to_csv('out/0/negative_amounts_grouped_by_fund.csv')
df.drop(df[df.amount < 0].index, axis=0, inplace=True)
df.info()
df.state.unique()
## States imported from http://statetable.com/
states = pd.read_csv('in/state_table.csv')
states.rename(columns={'abbreviation': 'state'}, inplace=True)
all_states = pd.merge(states, pd.DataFrame(df.state.unique(), columns=['state']), on='state', how='right')
invalid_states = all_states[pd.isnull(all_states.id)].state
df[df.state.isin(invalid_states)].state.value_counts().sort_index()
df[df.state.isin(['56', 'AB', 'BC', 'CF', 'Ca', 'Co', 'HY', 'IO', 'Ny', 'PR', 'UK', 'VI', 'ja'])]
%%html
<style>table {float:left}</style>
state_renames = {'Ny': 'NY', 'IO': 'IA', 'Ca' : 'CA', 'Co' : 'CO', 'CF' : 'FL', 'ja' : 'FL'}
df.replace({'state': state_renames}, inplace=True)
non_usa_states = ['ON', 'AP', 'VI', 'PR', '56', 'HY', 'BC', 'AB', 'UK', 'KA']
print 'Total amount for locations outside USA: ', sum(df[df.state.isin(non_usa_states)].amount)
#### Total amount for locations outside USA: 30710.63
df.drop(df[df.state.isin(non_usa_states)].index, axis=0, inplace=True)
print 'Percentage of amount for unknown (YY) state : {:.2f}'.format(100*df[df.state == 'YY'].amount.sum()/df.amount.sum())
print 'Total amount for the unknown state excluding outliers: ', df[(df.state == 'YY') & (df.amount < 45000)].amount.sum()
print 'Total amount for the unknown state: ', df[(df.state == 'YY')].amount.sum()
print 'Total amount: ', df.amount.sum()
print 'Pecentage of total amount from donations with no location: ', 100*sum(df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].amount)/sum(df.amount)
noloc_df = df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].copy()
df = df[~((df.city == '') & (df.state == '') & (df.zipcode_initial == ''))].copy()
print df.shape[0] + noloc_df.shape[0]
noloc_df = noloc_df.append(df[(df.state == 'YY')])
df = df[~(df.state == 'YY')]
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
noloc_df = noloc_df.append(df[(df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy')])
df = df[~((df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy'))]
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
print 'Percentage of total amount for data with City but no state: {:.3f}'.format(100*sum(df[df.state == ''].amount)/sum(df.amount))
df[((df.state == '') & (df.city != ''))][['city','zipcode','amount']].sort_values('city', ascending=True).to_csv('out/0/City_No_State.csv')
index = df[(df.donor_id == '-28K0T47RF') & (df.donation_date == '2007-11-30') & (df.city == 'Cupertino')].index
df.ix[index,'state'] = 'CA'
index = df[(df.donor_id == '9F4812A118') & (df.donation_date == '2012-06-30') & (df.city == 'San Juan')].index
df.ix[index,'state'] = 'WA'
df.ix[index,'zipcode'] = 98250
# Verified that these remaining entries are for non-US location
print 'Total amount for non-USA location: ', df[((df.state == '') & (df.city != ''))].amount.sum()
df.drop(df[((df.state == '') & (df.city != ''))].index, inplace=True)
print 'Percentage of total amount for data with valid US state, but no city, zipcode: {:.3f}'.format(100*sum(df[(df.city == '') & (df.zipcode_initial == '')].amount)/sum(df.amount))
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
stateonly_df = df[(df.city == '') & (df.zipcode_initial == '')].copy()
stateonly_df.state = ''
## Move the rows with just the state over to the noloc_df dataset
noloc_df = pd.concat([noloc_df, stateonly_df])
df = df[~((df.city == '') & (df.zipcode_initial == ''))].copy()
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
print 100*sum(df[df.city == ''].amount)/sum(df.amount)
print len(df[df.city == '']), len(df[df.zipcode_initial == ''])
print sum(df[df.city == ''].amount), sum(df[df.zipcode_initial == ''].amount)
print sum(df[(df.city == '') & (df.zipcode_initial != '')].amount),\
sum(df[(df.city != '') & (df.zipcode_initial == '')].amount)
print sum(df.amount)
## Zip codes from ftp://ftp.census.gov/econ2013/CBP_CSV/zbp13totals.zip
zipcodes = pd.read_csv('in/zbp13totals.txt', dtype={'zip': object})
zipcodes = zipcodes[['zip', 'city', 'stabbr']]
zipcodes = zipcodes.rename(columns = {'zip':'zipcode', 'stabbr': 'state', 'city': 'city'})
zipcodes.city = zipcodes.city.str.title()
zipcodes.zipcode = zipcodes.zipcode.astype('str')
## If we know the zip code, we can populate the city by using the zipcodes data
df.replace({'city': {'': np.nan}, 'state': {'': np.nan}}, inplace=True)
## Set the index correctly for update to work. Then reset it back.
df.set_index(['zipcode'], inplace=True)
zipcodes.set_index(['zipcode'], inplace=True)
df.update(zipcodes, join='left', overwrite=False, raise_conflict=False)
df.reset_index(drop=False, inplace=True)
zipcodes.reset_index(drop=False, inplace=True)
zipcodesdetail = pd.read_csv('in/zip_code_database.csv')
zipcodesdetail = zipcodesdetail[zipcodesdetail.country == 'US'][['zip', 'primary_city', 'county', 'state', 'timezone', 'latitude', 'longitude']]
zipcodesdetail = zipcodesdetail.rename(columns = {'zip':'zipcode', 'primary_city': 'city'})
# The zip codes dataset has quite a few missing values. Filling in what we need for now.
# If this happens again, search for a different data source!!
zipcodesdetail.loc[(zipcodesdetail.city == 'Frisco') & (zipcodesdetail.state == 'TX') & (pd.isnull(zipcodesdetail.county)), 'county'] = 'Denton'
# Strip the ' County' portion from the county names
def getcounty(county):
if pd.isnull(county):
return county
elif county.endswith(' County'):
return county[:-7]
else:
return county
zipcodesdetail.county = zipcodesdetail['county'].apply(getcounty)
zipcodesdetail.zipcode = zipcodesdetail.zipcode.apply(fill_zipcode)
newcols = np.array(list(set(df.columns).union(zipcodesdetail.columns)))
df = pd.merge(df, zipcodesdetail, on=['state', 'city', 'zipcode'], how='inner', suffixes=('_x', ''))[newcols]
# For some reason, the data types are being reset. So setting them back to their expected data types.
df.donation_date = df.donation_date.apply(pd.to_datetime)
df.charitable = df.charitable.apply(bool)
df.amount = df.amount.apply(int)
all_zipcodes = pd.merge(df, zipcodes, on='zipcode', how='left')
all_zipcodes[pd.isnull(all_zipcodes.city_x)].head()
## There seems to be only one row with an invalid zip code. Let's drop it.
df.drop(df[df.zipcode_initial.isin(['GU214ND','94000'])].index, axis=0, inplace=True)
print 'No state: count of rows: ', len(df[df.state == ''].amount),\
'Total amount: ', sum(df[df.state == ''].amount)
print 'No zipcode: count of rows: ', len(df[df.zipcode == ''].amount),\
'Total amount: ', sum(df[df.zipcode == ''].amount)
print 'No city: count of rows: ', len(df[df.city == ''].amount),\
'Total amount: ', sum(df[df.city == ''].amount)
# Examining data - top 10 states by amount and number of donors
print df.groupby('state')['amount',].sum().sort_values(by='amount', ascending=False)[0:10]
print df.groupby('state')['donor_id',].count().sort_values(by='donor_id', ascending=False)[0:10]
print noloc_df.state.unique()
print noloc_df.city.unique()
print noloc_df.zipcode.unique()
noloc_df['city'] = ''
noloc_df['state'] = ''
noloc_df['zipcode'] = ''
print df.shape[0] + noloc_df.shape[0]
df.shape, noloc_df.shape
# The input data has the latest zip code for each donor. So we cannot observe any movement even if there was any since
# all donations by a given donor will only have the same exact zipcode.
x1 = pd.DataFrame(df.groupby(['donor_id','zipcode']).zipcode.nunique())
x1[x1.zipcode != 1]
# The noloc_df and the df with location values have no donors in common - so we cannot use the donor
# location information from df to detect the location in noloc_df.
set(df.donor_id.values).intersection(noloc_df.donor_id.values)
df.rename(columns={'donation_date': 'activity_date'}, inplace=True)
df['activity_year'] = df.activity_date.apply(lambda x: x.year)
df['activity_month'] = df.activity_date.apply(lambda x: x.month)
df['activity_dow'] = df.activity_date.apply(lambda x: x.dayofweek)
df['activity_ym'] = df['activity_date'].map(lambda x: 100*x.year + x.month)
df['activity_yq'] = df['activity_date'].map(lambda x: 10*x.year + (x.month-1)//3)
df['activity_ymd'] = df['activity_date'].map(lambda x: 10000*x.year + 100*x.month + x.day)
# Drop the zipcode_initial (for privacy reasons)
df.drop('zipcode_initial', axis=1, inplace=True)
!mkdir -p out/0
df.to_pickle('out/0/donations.pkl')
noloc_df.to_pickle('out/0/donations_noloc.pkl')
df[df.donor_id == '_1D50SWTKX'].sort_values(by='activity_date').tail()
df.columns
df.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load csv
Step2: Address nan column values
Step3: Change column types and drop unused columns
Step4: Cleanup amounts
Step5: Outlier data
Step6: Exchanged emails with Anil and confirmed the decision to drop the outlier: the anonymous donor with the $9.5 million donation.
Step7: Amounts with zero values
Step8: Dropping rows with zero amounts (after confirmation with SEF office)
Step9: Negative amounts
Step10: Dropping rows with negative amounts (after confirmation with SEF office)
Step11: Investigate invalid state codes
Step12: Explanation for invalid state codes
Step13: Dropping data for non-US locations
Step14: Investigate donations with state of YY
Step15: We will add these donations to the noloc_df below (which holds the donations that have empty strings for the city/state/zipcode).
Step16: Investigate City in ('YYY','yyy')
Step17: Investigate empty state but non-empty city
Step18: By visually examining the cities for rows that don't have a state, we can see that, except for two entries, all the cities are in Canada, India, or other countries. So we will correct those two entries and drop all the other rows, as they are not relevant to the USA.
Step19: Investigate empty city and zipcode but valid US state
Step20: Investigating empty city and empty state with non-empty zip code
Step21: Investigate invalid zip codes
Step22: Final check on all location data to confirm that we have no rows with empty state, city or location
Step23: All done! Let's save our dataframes for the next stage of processing
|
12,657
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
np.random.seed(9876789)
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x ** 2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
X = sm.add_constant(X)
y = np.dot(X, beta) + e
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
print("Parameters: ", results.params)
print("R2: ", results.rsquared)
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x - 5) ** 2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.0]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
res = sm.OLS(y, X).fit()
print(res.summary())
print("Parameters: ", res.params)
print("Standard errors: ", res.bse)
print("Predicted values: ", res.predict())
pred_ols = res.get_prediction()
iv_l = pred_ols.summary_frame()["obs_ci_lower"]
iv_u = pred_ols.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="data")
ax.plot(x, y_true, "b-", label="True")
ax.plot(x, res.fittedvalues, "r--.", label="OLS")
ax.plot(x, iv_u, "r--")
ax.plot(x, iv_l, "r--")
ax.legend(loc="best")
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
# dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = pd.get_dummies(groups).values
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:, 1:]))
X = sm.add_constant(X, prepend=False)
beta = [1.0, 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
print(X[:5, :])
print(y[:5])
print(groups)
print(dummy[:5, :])
res2 = sm.OLS(y, X).fit()
print(res2.summary())
pred_ols2 = res2.get_prediction()
iv_l = pred_ols2.summary_frame()["obs_ci_lower"]
iv_u = pred_ols2.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="Data")
ax.plot(x, y_true, "b-", label="True")
ax.plot(x, res2.fittedvalues, "r--.", label="Predicted")
ax.plot(x, iv_u, "r--")
ax.plot(x, iv_l, "r--")
legend = ax.legend(loc="best")
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
print(res2.f_test("x2 = x3 = 0"))
beta = [1.0, 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
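# For a single restriction, an equivalent t test is also available
# (illustrative; uses the same formula-style hypothesis syntax):
print(res3.t_test("x2 = 0"))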
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:, i] = X[name] / np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T, norm_x)
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
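# Cross-check (illustrative): numpy's 2-norm condition number of the
# normalized design matrix should agree with the eigenvalue-based value above.
print(np.linalg.cond(norm_x))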
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print(
"Percentage change %4.2f%%\n"
* 7
% tuple(
[
i
for i in (ols_results2.params - ols_results.params)
/ ols_results.params
* 100
]
)
)
infl = ols_results.get_influence()
2.0 / len(X) ** 0.5
print(infl.summary_frame().filter(regex="dfb"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OLS estimation
Step2: Our model needs an intercept so we add a column of 1s
Step3: Fit and summary
Step4: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples
Step5: OLS non-linear curve but linear in parameters
Step6: Fit and summary
Step7: Extract other quantities of interest
Step8: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built from the get_prediction results.
Step9: OLS with dummy variables
Step10: Inspect the data
Step11: Fit and summary
Step12: Draw a plot to compare the true relationship to OLS predictions
Step13: Joint hypothesis test
Step14: You can also use formula-like syntax to test hypotheses
Step15: Small group effects
Step16: Multicollinearity
Step17: Fit and summary
Step18: Condition number
Step19: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
Step20: Dropping an observation
Step21: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
Step22: In general we may consider DBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations
|
12,658
|
<ASSISTANT_TASK:>
Python Code:
import getpass
APIKEY = getpass.getpass()
from googleapiclient.discovery import build
speech_service = build('speech', 'v1p1beta1', developerKey=APIKEY)
#@title Run this cell to define record_audio
# Install required libraries and packages
!pip install -qq pydub
!apt-get -qq update
!apt-get -qq install -y ffmpeg
# Define record_audio
import base64
import google.colab
import pydub
from io import BytesIO
def record_audio(file_id, framerate=16000, channels=1, file_format='flac'):
# Record webm file from Colaboratory.
audio = google.colab._message.blocking_request(
'user_media',
{
'audio': True,
'video': False,
'duration': -1
},
timeout_sec=600)
# Convert web file into in_memory file.
mfile = BytesIO(base64.b64decode(audio[audio.index(',')+1:]))
# Store webm file locally.
with open('{0}.webm'.format(file_id), 'wb') as f:
mfile.seek(0)
f.write(mfile.read())
# Open stored web file and save it as wav with sample_rate=16000
output_file = '{0}.{1}'.format(file_id, file_format)
  segment = pydub.AudioSegment.from_file('{0}.webm'.format(file_id), codec='opus')
  segment = segment.set_channels(channels)
  segment.set_frame_rate(framerate).export(output_file, format=file_format)
return output_file
audio_filename = record_audio('ja-sample', framerate=16000, channels=1)
from IPython.display import Audio
Audio(audio_filename, rate=16000)
from base64 import b64encode
from json import dumps
languageCode = 'en-US' #@param ["en-US", "ja-JP", "en-IN"]
model = 'default' #@param ["command_and_search", "phone_call", "video", "default"]
with open(audio_filename, 'rb') as audio_file:
content = b64encode(audio_file.read()).decode('utf-8')
my_audio = {
'content': content
}
my_recognition_config = {
'encoding': 'FLAC',
'sampleRateHertz': 16000,
'languageCode': languageCode,
'model': model
}
my_request_body={
'audio': my_audio,
'config': my_recognition_config,
}
response = speech_service.speech().recognize(body=my_request_body).execute()
response
for r in response["results"]:
    print('Transcript: ', r['alternatives'][0]['transcript'])
    print('Confidence: ', r['alternatives'][0]['confidence'])
my_recognition_config = {
'encoding': 'FLAC',
'sampleRateHertz': 16000,
'languageCode': languageCode,
'model': model,
'enableWordTimeOffsets': True
}
my_request_body={
'audio': my_audio,
'config': my_recognition_config,
}
response = speech_service.speech().recognize(body=my_request_body).execute()
response
for r in response["results"]:
    print('Transcript: ', r['alternatives'][0]['transcript'])
    print('Confidence: ', r['alternatives'][0]['confidence'], "\n")
for r in response["results"][0]['alternatives'][0]["words"]:
print("word: ", r["word"])
print("startTime: ", r["startTime"])
print("endTime: ", r["endTime"], "\n")
import textwrap
from googleapiclient.discovery import build
service = build('texttospeech', 'v1beta1', developerKey=APIKEY)
response = service.voices().list(
languageCode="ja_JP",
).execute()
for voice in response['voices']:
print(voice)
source_language = "ja_JP" #@param {type: "string"}
source_sentence = "Google Cloud Text-to-Speech \u3092\u4F7F\u3046\u3068\u3001\u81EA\u7136\u306A\u4F1A\u8A71\u97F3\u58F0\u3092\u5408\u6210\u3067\u304D\u307E\u3059\u3002" #@param {type:"string"}
audio_encoding = 'OGG_OPUS' #@param ['OGG_OPUS', 'LINEAR16', 'MP3']
voice_gender = 'FEMALE' #@param ['FEMALE', 'MALE', 'NEUTRAL', 'SSML_VOICE_GENDER_UNSPECIFIED']
textwrap.wrap(source_sentence)
voice_name = 'ja-JP-Wavenet-A' #@param {type: "string"}
response = service.text().synthesize(
body={
'input': {
'text': source_sentence,
},
'voice': {
'languageCode': source_language,
'ssmlGender': voice_gender,
'name': voice_name,
},
'audioConfig': {
'audioEncoding': audio_encoding,
},
}
).execute()
import base64
from IPython.display import Audio
Audio(base64.b64decode(response['audioContent']))
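# Optionally persist the synthesized audio to disk (OGG_OPUS was requested above):
with open('tts_output.ogg', 'wb') as out:
    out.write(base64.b64decode(response['audioContent']))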
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's try the Cloud Speech-to-Text API!
Step2: Preparing the audio data
Step3: Run record_audio to record some audio.
Step4: Check the recording.
Step5: Running speech recognition
Step6: Define the input audio data.
Step7: Define the RecognitionConfig.
Step8: Define the request body for the recognize method.
Step9: Call the recognize method.
Step10: Inspect the response from the recognize method.
Step11: Getting word timestamps
Step12: Call the recognize method.
Step13: Inspect the response from the recognize method.
Step14: Exercise
Step15: Listing all supported voices
Step16: Synthesizing speech from text
Step17: Listen to the synthesized audio
|
12,659
|
<ASSISTANT_TASK:>
Python Code:
pm_df = pd.read_hdf('pm_objid_stars.h5')
len(missing_is_pm_star)
len(np.where(missing_is_pm_star == 1)[0])
len(tmp_tbl)
len(np.unique(tmp_tbl.objid))
tmp_tbl
pm_objid = np.empty(0).astype(np.int64)
for mf in missing_files:
tstart = time.time()
tmp_tbl = fits.getdata(mf)
unique_objid = np.unique(tmp_tbl.objid)
missing_is_pm_star = np.isin(unique_objid, pm_df.objid.values)
pm_objid = np.append(pm_objid, unique_objid[missing_is_pm_star])
tend = time.time()
print(mf, len(pm_objid), len(np.unique(pm_objid)), tend - tstart)
len(np.unique(pm_objid))
new_pm_stars = pd.DataFrame(pm_objid, columns=['objid'])
new_pm_stars.to_hdf('pm_stars_in_ps1_missing.h5', 'd1')
# add unique command because there are a few repeats
star_objid = np.unique(np.append(pm_objid, plx_objid))
new_stars = pd.DataFrame(star_objid, columns=['objid'])
new_stars.to_hdf('stars_in_ps1_missing.h5', 'd1')
gaia_in_ps1 = pd.read_hdf('stars_in_ps1_missing.h5')
star_objid = gaia_in_ps1.objid.values
print(len(star_objid) - len(np.unique(star_objid)))
print(len(np.unique(star_objid)))
rf_files = glob.glob('../update_*.csv')
N_gaia_and_ps1 = 0
for rff in rf_files:
tstart = time.time()
rf_df = pd.read_csv(rff)
already_one = len(np.where(rf_df.score == 1)[0])
gaia_star = np.isin(rf_df.objid.values, star_objid)
gaia_and_ps1 = len(np.where(gaia_star == True)[0])
N_gaia_and_ps1 += gaia_and_ps1
update_rf_score = (gaia_star & (rf_df.score != 1))
rf_df.loc[update_rf_score, "score"] = 1
now_one = len(np.where(rf_df.score == 1)[0])
rf_df.to_csv(rff.replace('update', 'gaia_update'), index=False)
star_objid = star_objid[~np.isin(star_objid, rf_df.objid.values[gaia_star])]
tend = time.time()
print(rff, len(star_objid), gaia_and_ps1,
len(np.where(update_rf_score == 1)[0]),
tend-tstart)
N_gaia_and_ps1
len(star_objid)
gaia_only = pd.DataFrame(star_objid, columns=['objid'])
gaia_only['score'] = np.ones(len(star_objid)).astype(float)
gaia_only.head()
gaia_only.to_csv('../gaia_only_update.csv', index=False)
len(gaia_only)
len(np.unique(gaia_only.objid))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in RF classifications and replace Gaia stars with score = 1
|
12,660
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib as mpl # used sparingly
import matplotlib.pyplot as plt
pd.set_option("notebook_repr_html", False)
pd.set_option("max_rows", 10)
%matplotlib inline
from matplotlib import matplotlib_fname
matplotlib_fname()
from matplotlib import rcParams
rcParams.keys()
rcParams['font.family']
rcParams['font.family'] = 'monospace'
rcParams['font.family']
rcParams['font.family'] = 'sans-serif'
from matplotlib import rc_context
with rc_context({'font.family': 'monospace'}):
print(rcParams['font.family'])
print(rcParams['font.family'])
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.title("Title")
plt.xlabel("X")
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4, 5])
ax.set_title("Title")
plt.draw_if_interactive()
plt.plot([1, 5, 3])
plt.Figure?
fig = plt.Figure()
plt.close()
fig = plt.figure(figsize=(5, 5))
fig = plt.figure()
ax = fig.add_subplot(111)
lines = ax.plot([1, 2, 3])
text = ax.set_xlabel("X")
fig = plt.figure(figsize=(10, 5))
ax1 = fig.add_subplot(121)
ax1.plot([1, 2, 3])
ax2 = fig.add_subplot(122)
ax2.plot([3, 2, 1])
plt.xlabel??
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(np.random.randn(20), np.random.randn(20))
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(np.random.randn(20), np.random.randn(20))
ax.scatter(np.random.randn(20), np.random.randn(20), color='r')
fig
plt.plot?
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
legend = ax.legend(loc='best', fontsize=20)
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
ax.set_xlabel("X", fontsize=20)
ax.set_ylabel("Y", fontsize=20)
legend = ax.legend(loc='best', fontsize=20)
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
ax.set_xlabel("X", fontsize=20)
ax.set_ylabel("Y", fontsize=20)
ax.set_title("Title", fontsize=20)
legend = ax.legend(loc='best', fontsize=20)
fig, ax = plt.subplots(figsize=(8, 8))
ax.grid(False)
ax.tick_params(axis='y', which='major', length=15, right=False)
ax.tick_params(axis='x', which='major', length=15, top=False, direction="out", pad=15)
fig, ax = plt.subplots(figsize=(8, 8))
ax.grid(False)
ax.tick_params(axis='y', which='major', length=15, right=False)
ax.tick_params(axis='x', which='major', length=15, top=False)
ticklabels = ax.xaxis.set_ticklabels(['aaaa', 'bbbb', 'cccc',
'dddd', 'eeee', 'ffff'],
rotation=45, fontsize=15)
ax.spines
fig, ax = plt.subplots(figsize=(8, 8))
ax.tick_params(bottom=False, top=False, left=False, right=False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.grid(False)
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([]);
x, y = np.random.randn(2, 100)
x.sort()
fig, ax = plt.subplots()
ax.plot(y, 'g--')
fig, ax = plt.subplots()
ax.plot(x, y)
fig, ax = plt.subplots()
ax.plot(x, y, 'o')
x2, y2 = np.random.randn(2, 200)
x2.sort()
fig, ax = plt.subplots()
lines = ax.plot(x, y, 'o', x2, y2, 'ro', ms=8, alpha=.5)
y = pd.Series(np.random.randn(25))
y.plot()
y.cumsum().plot()
dta = pd.DataFrame({'normal': np.random.normal(size=100),
'gamma': np.random.gamma(1, size=100),
'poisson': np.random.poisson(size=100)})
ax = dta.cumsum(0).plot()
ax = dta.cumsum(0).plot(subplots=True, figsize=(10, 10))
axes = dta.cumsum(0).plot(subplots=True, figsize=(10, 10))
fig = axes[0].figure
fig.tight_layout()
axes = dta.cumsum().plot(secondary_y='normal')
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for i, ax in enumerate(axes):
variable = dta.columns[i]
ax = dta[variable].cumsum().plot(ax=ax)
ax.set_title(variable, fontsize=16)
axes[0].set_ylabel("Cumulative Sum", fontsize=14);
dta = pd.read_csv("../data/weather_nyc.csv")
dta = dta.loc[dta.year < 2015] # truncate to end of year
dta.query("year < 2015")  # equivalent selection using query (shown for comparison)
bins = [dta.temp.min(), 32, 55, 80, dta.temp.max()]
bins
labels = ["freezing", "cold", "warm", "hot"]
dta["temp_bin"] = pd.cut(dta.temp, bins, labels=labels)
try:
from scipy.constants import F2C
except ImportError: # no scipy installed
def F2C(f):
return (np.array(f) - 32)/1.8
lmap = lambda func, x : list(map(func, x))
bins = [dta.tempc.min()] + lmap(F2C, (32, 55, 80)) + [dta.tempc.max()]
bins
labels = ["freezing", "cold", "warm", "hot"]
dta["tempc_bin"] = pd.cut(dta.temp, bins, labels=labels)
dta.head()
ax = dta.groupby("temp_bin").size().plot(kind="bar")
ax = dta.groupby("temp_bin").size().plot(kind="bar", rot=0, fontsize=16, figsize=(8, 5))
ax.set_xlabel("Temperature")
ax.set_ylabel("Number of Days")
ax.set_title("Temperatures from 1995 - 2014");
dta.groupby(["season", "temp_bin"]).size().plot(kind="barh", figsize=(6, 8))
ct = pd.crosstab(dta.temp_bin, dta.season)
ct
ax = ct.plot(kind="bar", stacked=True, figsize=(12, 8), grid=False,
legend=True)
colors = plt.cm.Paired(np.linspace(0, 1, 4))
colors
ax = pd.crosstab(dta.temp_bin, dta.season).plot(kind="bar", stacked=True,
                                                figsize=(12, 8), grid=False,
                                                legend=True, color=colors, rot=0,
                                                fontsize=16)
# adjust the fontsize of the legend
legend = ax.get_legend()
for text in legend.get_texts():
text.set_fontsize(18)
legend.get_title().set_fontsize(20)
dta.temp.min()
ax = dta.temp.plot(kind="hist", bins=50)
dta.loc[dta.temp == -99, ["temp", "tempc"]] = np.nan
ax = dta.temp.plot(kind="hist", bins=50, grid=False, figsize=(10, 6))
# plot a vertical line that spans the axis
line = ax.axvline(dta.temp.mean(), color='r', lw=3, label="Mean")
# specifically add a legend
handles, labels = ax.get_legend_handles_labels()
ax.legend([handles[0]], [labels[0]], fontsize=16)
handles
def scotts_rule(x):
x = x.dropna()
std = x.std()
return 3.5 * std / (len(x)**(1./3))
def width_to_nbins(x, h):
x = x.dropna()
return int(round(x.ptp()/h))
h = scotts_rule(dta.temp)
nbins = width_to_nbins(dta.temp, h)
ax = dta.temp.plot(kind="hist", bins=nbins, grid=False, figsize=(10, 6))
# plot a vertical line that spans the axis
line = ax.axvline(dta.temp.mean(), color='r', lw=3, label="Mean")
ax = dta.temp.plot(kind='kde', grid=False, figsize=(10, 6))
ax.set_xlim(0, 100)
ax = dta.temp.plot(kind='kde', grid=False, figsize=(10, 6), color='r', lw=3)
ax = dta.temp.plot(kind="hist", bins=nbins, grid=False, figsize=(10, 6), ax=ax, normed=True, alpha=.7)
ax.set_xlim(0, 100)
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
ax.set_title(ax.get_title(), fontsize=20)
ax.xaxis.get_label().set_fontsize(18)
fig = ax.figure
# Change the size of the figure title
# http://stackoverflow.com/a/12449783/535665
fig.texts[0].set_fontsize(20)
# whitespace between axes and fig boundary
fig.subplots_adjust(top=.85)
def jitter(x, n, noise=.05):
return x + np.random.normal(0, noise, size=n)
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
ax.set_title(ax.get_title(), fontsize=20)
ax.xaxis.get_label().set_fontsize(18)
fig = ax.figure
# http://stackoverflow.com/a/12449783/535665
fig.texts[0].set_fontsize(20)
# whitespace between axes and fig boundary
fig.subplots_adjust(top=.85)
for i, season in enumerate(ax.get_xticklabels()):
y = dta.ix[dta.season == season.get_text()].temp
x = jitter(i + 1, len(y))
# there's a lot of data so turn the alpha way down (or sub-sample)
ax.plot(x, y, 'ro', alpha=.05)
baseball = pd.read_csv("../data/baseball.csv")
baseball.head()
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), s=8**2,
alpha=.7)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), s=baseball.hr*10,
alpha=.5)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c="DarkGreen", s=50)
ax = baseball.plot(kind="scatter", x="ab", y="rbi", grid=False, figsize=(8, 6), c="Blue", s=50,
ax=ax)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200);
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c=baseball.hr*10,
s=40, cmap="hot")
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200);
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c=baseball.hr*10,
s=40, cmap="hot")
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
fig = ax.figure
# colorbars are actually a separate subplot in your figure
colorbar = fig.axes[1]
colorbar.yaxis.set_tick_params(right=False);
ax = pd.plotting.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(14, 10), diagonal='hist')
ax = pd.plotting.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(14, 10), diagonal='kde')
idx = pd.to_datetime(dta.year*10000 + dta.month*100 + dta.day, format='%Y%m%d')
idx
y = dta.set_index(idx).temp
y.head()
y.index
#ax = y.plot(figsize=(12, 8))
ax = y.rolling(window=60, min_periods=1, center=True).mean().plot(figsize=(12, 8),
                                                                  label="Rolling 2-month mean")
means = y.groupby(lambda x : x.year).mean()
means.index = pd.DatetimeIndex(pd.to_datetime(means.index * 10000 + 1231, format="%Y%m%d"))
ax = means.plot(ax=ax, label="Yearly Average")
legend = ax.legend()
ax = plt.subplot2grid((2, 2), (0, 0))
with plt.rc_context(rc={"xtick.labelsize": 0,
"ytick.labelsize": 0,
"axes.facecolor": "lightgray",
"figure.figsize": (8, 8)}):
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
ax1.figure.suptitle("subplot2grid", fontsize=20)
from matplotlib.gridspec import GridSpec
with plt.rc_context(rc={"xtick.labelsize": 0,
"ytick.labelsize": 0,
"axes.facecolor": "lightgray"}):
fig, ax = plt.subplots(figsize=(8, 8))
gs = GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
# identical to ax1 = plt.subplot(gs.new_subplotspec((0,0), colspan=3))
ax2 = plt.subplot(gs[1,:-1])
ax3 = plt.subplot(gs[1:, -1])
ax4 = plt.subplot(gs[-1,0])
ax5 = plt.subplot(gs[-1,-2])
fig.suptitle("GridSpec", fontsize=20)
import seaborn as sns
tips = sns.load_dataset("tips")
tips.head()
with mpl.rc_context(rc={"legend.fontsize": "18", "axes.titlesize": "18"}):
g = sns.FacetGrid(tips, col="sex", hue="smoker", size=7)
g.map(plt.scatter, "total_bill", "tip", alpha=.7, s=80)
g.add_legend()
g._legend.get_title().set_fontsize(20)
g.axes[0, 0].title.set_fontsize(20)
g.axes[0, 0].xaxis.get_label().set_fontsize(20)
g.axes[0, 1].title.set_fontsize(20)
g.axes[0, 1].xaxis.get_label().set_fontsize(20)
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
X = dta[["temp", "season"]].dropna()
ax = sns.violinplot(X.temp, groupby=X.season)
ax = sns.violinplot(X.temp, groupby=X.season, inner='points', alpha=.5,
order=['Winter', 'Spring', 'Summer', 'Fall'])
temp95 = dta.query("year == 1995")[["temp", "month", "day"]]
temp14 = dta.query("year == 2014")[["temp", "month", "day"]]
temps = temp95.merge(temp14, on=["month", "day"], how="inner", suffixes=("_95", "_14"))
g = sns.jointplot(temps.temp_95, temps.temp_14, kind="kde", size=7, space=0)
g = sns.jointplot(temps.temp_95, temps.temp_14, kind="hex", color="#4CB391",
joint_kws={"bins": 200})
fig, ax = plt.subplots(figsize=(6, 6))
np.random.seed(0)
x, y = np.random.normal(size=(2, 200))
color, size = np.random.random((2, 200))
ax.scatter(x, y, c=color, s=500 * size, alpha=0.5, cmap="rainbow")
ax.grid(color='lightgray', alpha=0.7)
import mpld3
mpld3.display(fig)
from mpld3 import plugins
fig, ax = plt.subplots(6, 6, figsize=(6, 6))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
ax = ax[::-1]
X = baseball.loc[:, 'r':'rbi']
for i in range(6):
for j in range(6):
ax[i, j].xaxis.set_major_formatter(plt.NullFormatter())
ax[i, j].yaxis.set_major_formatter(plt.NullFormatter())
points = ax[i, j].scatter(X.values[:, j], X.values[:, i])
if i == 0:
ax[i, j].set_xlabel(X.columns[j])
ax[i, 0].set_ylabel(X.columns[i])
plugins.connect(fig, plugins.LinkedBrush(points))
mpld3.display(fig)
from IPython.display import Image, HTML
# Image("./tufte.svg")
HTML("./tufte.svg")
import os
to_colors = lambda x : x/255.
blue3 = list(map(to_colors, (24, 116, 205))) # 1874CD
wheat2 = list(map(to_colors, (238, 216, 174))) # EED8AE
wheat3 = list(map(to_colors, (205, 186, 150))) # CDBA96
wheat4 = list(map(to_colors, (139, 126, 102))) # 8B7E66
firebrick3 = list(map(to_colors, (205, 38, 38))) # CD2626
gray30 = list(map(to_colors, (77, 77, 77))) # 4D4D4D
idx = range(366)
np.where([True, False, False, True, False])[0]
yticks = range(-10, 101, 10)
ylabels = [str(i) + u"\u00b0" for i in yticks]
ylabels
with plt.xkcd():
# Based on "Stove Ownership" from XKCD by Randall Monroe
# http://xkcd.com/418/
fig = plt.figure()
ax = fig.add_axes((0.1, 0.2, 0.8, 0.7))
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(1, 31)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10), zorder=-1)
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
fig.text(0.5, 0.05,
'"Stove Ownership" from xkcd by Randall Monroe', ha='center')
with plt.xkcd():
# Based on "The data So Far" from XKCD by Randall Monroe
# http://xkcd.com/373/
fig = plt.figure()
ax = fig.add_axes((0.1, 0.2, 0.8, 0.7))
ax.bar([-0.125, 1.0-0.125], [0, 100], 0.25)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks([0, 1])
ax.set_xlim([-0.5, 1.5])
ax.set_ylim([0, 110])
ax.set_xticklabels(['CONFIRMED BY\nEXPERIMENT', 'REFUTED BY\nEXPERIMENT'])
ax.set_yticks([])
fig.suptitle("CLAIMS OF SUPERNATURAL POWERS")
fig.text(0.5, 0.01,
'"The Data So Far" from xkcd by Randall Monroe',
ha='center', )
from matplotlib.ticker import MaxNLocator
x = np.arange(20)
y = np.random.randn(20)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.xaxis.set_major_locator(MaxNLocator(nbins=8))
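# For a fixed tick spacing instead of a cap on the tick count (illustrative):
from matplotlib.ticker import MultipleLocator
fig, ax = plt.subplots()
ax.plot(x, y)
ax.xaxis.set_major_locator(MultipleLocator(5))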
x = np.arange(20)
y1 = np.random.randn(20)
y2 = np.random.randn(20)
fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(x, y1)
axes[1].plot(x, y2)
fig.tight_layout()
t = np.arange(0.01, 10.0, 0.01)
s1 = np.exp(t)
s2 = np.sin(2*np.pi*t)
fig, ax1 = plt.subplots()
ax1.plot(t, s1, 'b-')
ax1.set_xlabel('time (s)')
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('exp', color='b', fontsize=18)
for tl in ax1.get_yticklabels():
tl.set_color('b')
ax2 = ax1.twinx()
ax2.plot(t, s2, 'r.')
ax2.set_ylabel('sin', color='r', fontsize=18)
for tl in ax2.get_yticklabels():
tl.set_color('r')
fig, ax = plt.subplots()
ax.imshow(np.random.uniform(0, 1, size=(50, 50)), cmap="RdYlGn")
fig, ax = plt.subplots()
ax.set_ylabel("$\\beta^2$", fontsize=20, rotation=0, labelpad=20)
with mpl.rc_context(rc={"text.usetex": True}):
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_ylabel("$\\beta^2$", fontsize=20, rotation=0, labelpad=20)
try:
    from matplotlib.pylab import bivariate_normal
except ImportError:  # removed in matplotlib >= 3.1; minimal fallback (standard bivariate normal pdf)
    def bivariate_normal(X, Y, sigmax=1.0, sigmay=1.0, mux=0.0, muy=0.0,
                         sigmaxy=0.0):
        Xmu = X - mux
        Ymu = Y - muy
        rho = sigmaxy / (sigmax * sigmay)
        z = Xmu**2/sigmax**2 + Ymu**2/sigmay**2 - 2*rho*Xmu*Ymu/(sigmax*sigmay)
        denom = 2*np.pi*sigmax*sigmay*np.sqrt(1 - rho**2)
        return np.exp(-z/(2*(1 - rho**2))) / denom
np.random.seed(12)
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
with mpl.rc_context(rc={'xtick.direction': 'out',
'ytick.direction': 'out'}):
# Create a simple contour plot with labels using default colors. The
# inline argument to clabel will control whether the labels are draw
# over the line segments of the contour, removing the lines beneath
# the label
fig, ax = plt.subplots(figsize=(8, 8))
contours = ax.contour(X, Y, Z)
ax.clabel(contours, inline=1, fontsize=10)
fig, ax = plt.subplots()
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, fc='k', ec='k')
ax.arrow(0.25, 0, 0.5, 0.5, head_width=0, head_length=0, fc='k', ec='k')
x = np.arange(0.0, 2, 0.01)
y1 = np.sin(2*np.pi*x)
y2 = 1.2*np.sin(4*np.pi*x)
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 10))
axes[0].fill_between(x, 0, y1)
axes[0].set_ylabel('between y1 and 0')
axes[1].fill_between(x, y1, 1)
axes[1].set_ylabel('between y1 and 1')
axes[2].fill_between(x, y1, y2)
axes[2].set_ylabel('between y1 and y2')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Landscape of Plotting Libraries
Step2: Backends
Step3: This has a popular one
Step4: You can also use the rc_context context manager
Step5: Interactive Plotting with PyPlot
Step6: If using object method calls, you must call draw or draw_if_interactive to see changes
Step7: By default the plot method takes x values, then y values
Step8: What is the pyplot namespace?
Step9: Close the last made Figure, by default
Step10: You can also refer to figures by their number starting at 1
Step11: Axes
Step12: You may have guessed that you can have more than one axes on a plot
Step13: Library Plotting
Step14: You'll also notice that I assign the returns from the matplotlib object method calls to variables
Step15: Notebook aside
Step16: Exercise
Step17: Colors
Step18: Labels and Legends
Step19: You can label the X and Y axes
Step20: Label the axes with a title
Step21: Ticks and Tick Labels
Step22: You can set your own tick labels
Step23: Spines
Step24: More on plot
Step25: Plotting in Pandas vs Matplotlib
Step26: Notice that these return AxesSubplot objects, so we have our hook in to all of the powerful methods from matplotlib
Step27: Exercise
Step28: These are just matplotlib objects
Step29: We can easily add a secondary y-axis
Step30: We can also ask pandas to plot on already existing axes
Step31: Bar plots
Step32: Or equivalently
Step33: Recall that pandas.cut can be used to bin continuous data into buckets
Step34: Celsius bins
Step35: What's wrong with this graph?
Step36: Horizontal bar chart
Step37: Stacked bar chart
Step38: Matplotlib provides a variety of ColorMaps
Step39: Histograms
Step40: It's even a good exercise here! Let's turn the -99 values into NaNs.
Step41: Incidentally, pandas will handle nulls in plotting
Step42: Optimal number of bins
Step43: Density Plots
Step44: We can compare the KDE to the normalized histogram
Step45: Exercise
Step46: We can add some more information by overlaying the original data on the boxplot
Step47: Scatterplots
Step48: We can uncover more information by changing the size of the points
Step49: Or by adding color using the c keyword
Step50: c can also be a color intensity
Step51: Notice that there is a colorbar automatically
Step52: Use pd.scatter_matrix To view a large number of variables simultaenously
Step53: Plotting Time-Series
Step54: Pandas plotting is DatetimeIndex aware
Step55: GridSpec
Step56: We can have more easy, fine-grained control with subplot2grid for creating multiple subplots that span columns, for example
Step57: You can use GridSpec class directly to create the same plot
Step58: Seaborn
Step59: FacetGrid
Step60: Violin plot
Step61: We can plot the points inside the violins and re-order the seasons
Step62: Distribution plots
Step63: We can also look at a hexbin plot of the same data with the marginal distributions as histograms.
Step64: mpld3
Step65: Unfortunately, this is just a static image. Let's use mpld3 to change that. Using the display command, you get a fully interactive visualization of the figure.
Step66: Notice the toolbar on hover. You can use that to interact with the figure.
Step67: Putting it all together
Step68: This is a plot of NYC's weather in 2014 versus historical averages
Step69: You probably don't want to work with the (month, day) tuples in their present form for plotting
Step70: First, make the figure and plot the high and low bars (Hints
Step71: Annotate one of the 2014 historical lows and one of the 2014 historical highs with the appropriate text (Hint
Step72: Other frequently used plotting tricks
Step73: Tick Tricks
Step74: ColorMaps
Step75: Twinning Axes
Step76: Image Plots
Step77: $LaTeX$
Step78: Contour Plots
Step79: Arrows
Step80: Filling in plots
|
12,661
|
<ASSISTANT_TASK:>
Python Code:
### BEGIN SOLUTION
import sympy as sym
a, b, c = sym.Symbol("a"), sym.Symbol("b"), sym.Symbol("c")
sym.expand((9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c))
### END SOLUTION
### BEGIN SOLUTION
sym.expand((sym.S(2) ** (sym.S(1) / 2) + 2) ** 2 - 2 ** (sym.S(5) / 2))
### END SOLUTION
### BEGIN SOLUTION
(sym.S(1) / 8) ** (sym.S(4) / 3)
### END SOLUTION
def expand(expression):
### BEGIN SOLUTION
    """Take a symbolic expression and expand it."""
return sym.expand(expression)
### END SOLUTION
### BEGIN SOLUTION
a = sym.Symbol("a")
D = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])
### END SOLUTION
### BEGIN SOLUTION
D_inv = D.inv()
### END SOLUTION
### BEGIN SOLUTION
b = sym.Matrix([[3], [4], [1]])
sym.simplify(D.inv() @ b).subs({a: 4})
### END SOLUTION
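# Equivalent cross-check (illustrative): let sympy solve the linear system
# directly instead of forming the inverse explicitly.
sym.simplify(D.solve(b)).subs({a: 4})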
import random
def sample_experiment():
    """Returns the throw type and whether it was caught"""
### BEGIN SOLUTION
if random.random() < .25:
throw = "backhand"
probability_of_catch = .8
else:
throw = "forehand"
probability_of_catch = .9
caught = random.random() < probability_of_catch
### END SOLUTION
return throw, caught
### BEGIN SOLUTION
number_of_repetitions = 1_000_000
random.seed(0)
samples = [sample_experiment() for repetition in range(number_of_repetitions)]
probability_of_catch = sum(catch is True for throw, catch in samples) / number_of_repetitions
### END SOLUTION
### BEGIN SOLUTION
samples_with_drop = [(throw, catch) for throw, catch in samples if catch is False]
number_of_drops = len(samples_with_drop)
probability_of_forehand_given_drop = sum(throw == "forehand" for throw, catch in samples_with_drop) / number_of_drops
### END SOLUTION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b. \((2 ^ {\frac{1}{2}} + 2) ^ 2 - 2 ^ {\frac{5}{2}}\)
Step2: c. \((\frac{1}{8}) ^ {\frac{4}{3}}\)
Step4: Question 2
Step5: Question 3
Step6: b. Create a variable D_inv with value the inverse of \(D\).
Step7: c. Using D_inv output the solution of the following system of equations
Step9: Question 4
Step10: b. Using 1,000,000 samples create a variable probability_of_catch which has value an estimate for the probability of the frisbee being caught.
Step11: c. Using the above, create a variable probability_of_forehand_given_drop which has value an estimate for the probability of the frisbee being thrown with a forehand given that it was not caught.
|
12,662
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("https://en.wikipedia.org/wiki/Python_(programming_language)")
bsObj = BeautifulSoup(html.read(), "html.parser")
for link in bsObj.findAll("a"):
if 'href' in link.attrs:
print(link.attrs['href'])
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen("https://en.wikipedia.org/wiki/Python_(programming_language)")
bsObj = BeautifulSoup(html.read(), "html.parser")
for link in bsObj.find("div", {"id":"bodyContent"}).findAll("a", href=re.compile("^(/wiki/)((?!:).)*$")):
if 'href' in link.attrs:
print(link.attrs['href'])
from urllib.request import urlopen
from bs4 import BeautifulSoup
import datetime
import random
import re
count = 0
random.seed(datetime.datetime.now())
def getLinks(articleUrl):
html = urlopen("http://en.wikipedia.org"+articleUrl)
bsObj = BeautifulSoup(html, "html.parser")
return bsObj.find("div", {"id":"bodyContent"}).findAll("a", href=re.compile("^(/wiki/)((?!:).)*$"))
links = getLinks("/wiki/Python_(programming_language)")
while len(links) > 0 and count < 10:
newArticle = links[random.randint(0, len(links)-1)].attrs["href"]
print(newArticle)
count = count + 1
links = getLinks(newArticle)
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
pages = set()
def getLinks(pageUrl):
global pages
html = urlopen("http://en.wikipedia.org"+pageUrl)
bsObj = BeautifulSoup(html, "html.parser")
try:
print(bsObj.h1.get_text())
print(bsObj.find(id ="mw-content-text").findAll("p")[0])
print(bsObj.find(id="ca-edit").find("span").find("a").attrs['href'])
except AttributeError:
print("This page is missing something! No worries though!")
for link in bsObj.findAll("a", href=re.compile("^(/wiki/)")):
if 'href' in link.attrs:
if link.attrs['href'] not in pages:
#We have encountered a new page
newPage = link.attrs['href']
print("----------------\n"+newPage)
pages.add(newPage)
getLinks(newPage)
getLinks("")
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import datetime
import random
pages = set()
random.seed(datetime.datetime.now())
#Retrieves a list of all Internal links found on a page
def getInternalLinks(bsObj, includeUrl):
internalLinks = []
#Finds all links that begin with a "/"
for link in bsObj.findAll("a", href=re.compile("^(/|.*"+includeUrl+")")):
if link.attrs['href'] is not None:
if link.attrs['href'] not in internalLinks:
internalLinks.append(link.attrs['href'])
return internalLinks
#Retrieves a list of all external links found on a page
def getExternalLinks(bsObj, excludeUrl):
externalLinks = []
#Finds all links that start with "http" or "www" that do
#not contain the current URL
for link in bsObj.findAll("a", href=re.compile("^(http|www)((?!"+excludeUrl+").)*$")):
if link.attrs['href'] is not None:
if link.attrs['href'] not in externalLinks:
externalLinks.append(link.attrs['href'])
return externalLinks
def splitAddress(address):
addressParts = address.replace("http://", "").split("/")
return addressParts
def getRandomExternalLink(startingPage):
html = urlopen(startingPage)
bsObj = BeautifulSoup(html, "html.parser")
externalLinks = getExternalLinks(bsObj, splitAddress(startingPage)[0])
if len(externalLinks) == 0:
        internalLinks = getInternalLinks(bsObj, splitAddress(startingPage)[0])
        return getRandomExternalLink(internalLinks[random.randint(0,
                                     len(internalLinks)-1)])
else:
return externalLinks[random.randint(0, len(externalLinks)-1)]
def followExternalOnly(startingSite):
externalLink = getRandomExternalLink(startingSite)
print("Random external link is: "+externalLink)
#followExternalOnly(externalLink)
followExternalOnly("http://oreilly.com")
followExternalOnly("http://oreilly.com")
followExternalOnly("http://oreilly.com")
allExtLinks = set()
allIntLinks = set()
def getAllExternalLinks(siteUrl):
html = urlopen(siteUrl)
bsObj = BeautifulSoup(html.read(), "html.parser")
internalLinks = getInternalLinks(bsObj, splitAddress(siteUrl)[0])
externalLinks = getExternalLinks(bsObj, splitAddress(siteUrl)[0])
for link in externalLinks:
if link not in allExtLinks:
allExtLinks.add(link)
print(link)
for link in internalLinks:
if link not in allIntLinks:
print(link)
allIntLinks.add(link)
getAllExternalLinks(link)
getAllExternalLinks("http://oreilly.com")
import scrapy
class BlogSpider(scrapy.Spider):
name = 'blogspider'
start_urls = ['https://blog.scrapinghub.com']
def parse(self, response):
for title in response.css('h2.entry-title'):
yield {'title': title.css('a ::text').extract_first()}
next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
if next_page:
yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
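# A minimal way to run this spider outside a full Scrapy project (assumes
# scrapy is installed; scraped items are logged as the crawl proceeds):
from scrapy.crawler import CrawlerProcess
process = CrawlerProcess()
process.crawl(BlogSpider)
process.start()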
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that all links pointing to Wikipedia articles begin with /wiki/, so we can use a regular expression to filter for those articles, like this
Step2: The function above is not quite ready for real crawling; with the small improvements below, it can serve as a first pass at collecting all the links on a page. Since we cannot crawl without limit, I set an upper bound of 10 links.
Step3: To avoid collecting the same page twice, deduplicating links is essential; the code below uses a Python set to store the links already collected. This code will run indefinitely unless the set runs out of new links, which is almost impossible.
Step4: Next we can write a small program that picks a random external link
Step5: If our goal is to collect all the external links on a page and record them, we can use the functions below
Step6: Using Scrapy
|
12,663
|
<ASSISTANT_TASK:>
Python Code:
# CHANGE the following settings
BASE_IMAGE='gcr.io/your-image-name'
MODEL_STORAGE = 'gs://your-bucket-name/folder-name' #Must include a folder in the bucket, otherwise, model export will fail
BQ_DATASET_NAME="hotel_recommendations" #This is the name of the target dataset where you model and predictions will be stored
PROJECT_ID="your-project-id" #This is your GCP project ID that can be found in the GCP console
KFPHOST="your-ai-platform-pipeline-url" # Kubeflow Pipelines URL, can be found from settings button in CAIP Pipelines
REGION='your-project-region' #For example, us-central1, note that Vertex AI endpoint deployment region must match MODEL_STORAGE bucket region
ENDPOINT_NAME='your-vertex-ai-endpoint-name'
DEPLOY_COMPUTE='your-endpoint-compute-size'#For example, n1-standard-4
DEPLOY_IMAGE='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.0-82:latest' #Do not change, BQML XGBoost is currently compatible with 0.82
from typing import NamedTuple
import json
import os
def run_bigquery_ddl(project_id: str, query_string: str, location: str) -> NamedTuple(
'DDLOutput', [('created_table', str), ('query', str)]):
    """Runs BigQuery query and returns a table/model name"""
print(query_string)
    from google.cloud import bigquery
    from google.api_core.future import polling
    from google.cloud.bigquery import retry as bq_retry
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)
job._retry = polling.DEFAULT_RETRY
while job.running():
from time import sleep
sleep(0.1)
print('Running ...')
tblname = job.ddl_target_table
tblname = '{}.{}'.format(tblname.dataset_id, tblname.table_id)
print('{} created in {}'.format(tblname, job.ended - job.started))
from collections import namedtuple
result_tuple = namedtuple('DDLOutput', ['created_table', 'query'])
return result_tuple(tblname, query_string)
def train_matrix_factorization_model(ddlop, project_id, dataset):
    query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.my_implicit_mf_model_quantiles_demo_binary_prod`
OPTIONS
(model_type='matrix_factorization',
feedback_type='implicit',
user_col='user_id',
item_col='hotel_cluster',
rating_col='rating',
l2_reg=30,
num_factors=15) AS
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
    """.format(project_id = project_id, dataset = dataset)
return ddlop(project_id, query, 'US')
def evaluate_matrix_factorization_model(project_id, mf_model, location='US')-> NamedTuple('MFMetrics', [('msqe', float)]):
    query = """
SELECT * FROM ML.EVALUATE(MODEL `{project_id}.{mf_model}`)
    """.format(project_id = project_id, mf_model = mf_model)
print(query)
from google.cloud import bigquery
import json
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('MFMetrics', ['msqe'])
return result_tuple(metrics_df.loc[0].to_dict()['mean_squared_error'])
def create_user_features(ddlop, project_id, dataset, mf_model):
    #Feature engineering for users
    query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.user_features_prod` AS
WITH u as
(
select
user_id,
count(*) as total_visits,
count(distinct user_location_city) as distinct_cities,
sum(distinct site_name) as distinct_sites,
sum(is_mobile) as total_mobile,
sum(is_booking) as total_bookings,
FROM `{project_id}.{dataset}.hotel_train`
GROUP BY 1
)
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM
u JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'user_id' AND feature = CAST(u.user_id AS STRING)
    """.format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
def create_hotel_features(ddlop, project_id, dataset, mf_model):
    #Feature engineering for hotels
    query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.hotel_features_prod` AS
WITH h as
(
select
hotel_cluster,
count(*) as total_cluster_searches,
count(distinct hotel_country) as distinct_hotel_countries,
sum(distinct hotel_market) as distinct_hotel_markets,
sum(is_mobile) as total_mobile_searches,
sum(is_booking) as total_cluster_bookings,
FROM `{project_id}.{dataset}.hotel_train`
group by 1
)
SELECT
h.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS hotel_factors
FROM
h JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'hotel_cluster' AND feature = CAST(h.hotel_cluster AS STRING)
    """.format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
def combine_features(ddlop, project_id, dataset, mf_model, hotel_features, user_features):
#Combine user and hotel embedding features with the rating associated with each combination
    query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.total_features_prod` AS
with ratings as(
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
)
select
h.* EXCEPT(hotel_cluster),
u.* EXCEPT(user_id),
IFNULL(rating,0) as rating
from `{hotel_features}` h, `{user_features}` u
LEFT OUTER JOIN ratings r
ON r.user_id = u.user_id AND r.hotel_cluster = h.hotel_cluster
    """.format(project_id = project_id, dataset = dataset, mf_model=mf_model, hotel_features=hotel_features, user_features=user_features)
return ddlop(project_id, query, 'US')
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_hotels`(h ARRAY<FLOAT64>)
RETURNS
STRUCT<
h1 FLOAT64,
h2 FLOAT64,
h3 FLOAT64,
h4 FLOAT64,
h5 FLOAT64,
h6 FLOAT64,
h7 FLOAT64,
h8 FLOAT64,
h9 FLOAT64,
h10 FLOAT64,
h11 FLOAT64,
h12 FLOAT64,
h13 FLOAT64,
h14 FLOAT64,
h15 FLOAT64
> AS (STRUCT(
h[OFFSET(0)],
h[OFFSET(1)],
h[OFFSET(2)],
h[OFFSET(3)],
h[OFFSET(4)],
h[OFFSET(5)],
h[OFFSET(6)],
h[OFFSET(7)],
h[OFFSET(8)],
h[OFFSET(9)],
h[OFFSET(10)],
h[OFFSET(11)],
h[OFFSET(12)],
h[OFFSET(13)],
h[OFFSET(14)]
));
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_users`(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)]
));
def train_xgboost_model(ddlop, project_id, dataset, total_features):
    #Train a boosted tree classifier on the combined user/hotel features to predict rating
    query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.recommender_hybrid_xgboost_prod`
OPTIONS(model_type='boosted_tree_classifier', input_label_cols=['rating'], AUTO_CLASS_WEIGHTS=True)
AS
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
    """.format(project_id = project_id, dataset = dataset, total_features=total_features)
return ddlop(project_id, query, 'US')
def evaluate_class(project_id, dataset, class_model, total_features, location='US')-> NamedTuple('ClassMetrics', [('roc_auc', float)]):
    query = """
SELECT
*
FROM ML.EVALUATE(MODEL `{class_model}`, (
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
))
    """.format(dataset = dataset, class_model = class_model, total_features = total_features)
print(query)
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('ClassMetrics', ['roc_auc'])
return result_tuple(metrics_df.loc[0].to_dict()['roc_auc'])
def export_bqml_model(project_id, model, destination) -> NamedTuple('ModelExport', [('destination', str)]):
import subprocess
#command='bq extract -destination_format=ML_XGBOOST_BOOSTER -m {}:{} {}'.format(project_id, model, destination)
model_name = '{}:{}'.format(project_id, model)
print (model_name)
subprocess.run(['bq', 'extract', '-destination_format=ML_XGBOOST_BOOSTER', '-m', model_name, destination], check=True)
from collections import namedtuple
result_tuple = namedtuple('ModelExport', ['destination'])
return result_tuple(destination)
def deploy_bqml_model_vertexai(project_id, region, model_name, endpoint_name, model_dir, deploy_image, deploy_compute):
from google.cloud import aiplatform
parent = "projects/" + project_id + "/locations/" + region
client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
clients = {}
#upload the model to Vertex AI
clients['model'] = aiplatform.gapic.ModelServiceClient(client_options=client_options)
model = {
"display_name": model_name,
"metadata_schema_uri": "",
"artifact_uri": model_dir,
"container_spec": {
"image_uri": deploy_image,
"command": [],
"args": [],
"env": [],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": ""
}
}
upload_model_response = clients['model'].upload_model(parent=parent, model=model)
print("Long running operation on uploading the model:", upload_model_response.operation.name)
model_info = clients['model'].get_model(name=upload_model_response.result(timeout=180).model)
#Create an endpoint on Vertex AI to host the model
clients['endpoint'] = aiplatform.gapic.EndpointServiceClient(client_options=client_options)
create_endpoint_response = clients['endpoint'].create_endpoint(parent=parent, endpoint={"display_name": endpoint_name})
print("Long running operation on creating endpoint:", create_endpoint_response.operation.name)
endpoint_info = clients['endpoint'].get_endpoint(name=create_endpoint_response.result(timeout=180).name)
#Deploy the model to the endpoint
dmodel = {
"model": model_info.name,
"display_name": 'deployed_'+model_name,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {
"machine_type": deploy_compute,
"accelerator_count": 0,
}
}
}
traffic = {
'0' : 100
}
deploy_model_response = clients['endpoint'].deploy_model(endpoint=endpoint_info.name, deployed_model=dmodel, traffic_split=traffic)
print("Long running operation on deploying the model:", deploy_model_response.operation.name)
deploy_model_result = deploy_model_response.result()
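#Example (illustrative sketch, not part of the pipeline): once the model is
#deployed, the endpoint can be queried with the PredictionServiceClient. The
#endpoint_id and instance payload are placeholders; the instance keys must
#match the exported XGBoost model's input columns.
def predict_sample(project_id, region, endpoint_id, instance):
    from google.cloud import aiplatform
    from google.protobuf import json_format
    from google.protobuf.struct_pb2 import Value
    client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    endpoint = client.endpoint_path(project=project_id, location=region, endpoint=endpoint_id)
    instance_pb = json_format.ParseDict(instance, Value())
    return client.predict(endpoint=endpoint, instances=[instance_pb])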
import kfp.dsl as dsl
import kfp.components as comp
import time
@dsl.pipeline(
name='Training pipeline for hotel recommendation prediction',
description='Training pipeline for hotel recommendation prediction'
)
def training_pipeline(project_id = PROJECT_ID):
import json
#Minimum threshold for model metric to determine if model will be deployed for prediction
mf_msqe_threshold = 0.5
class_auc_threshold = 0.8
#Defining function containers
ddlop = comp.func_to_container_op(run_bigquery_ddl, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
evaluate_class_op = comp.func_to_container_op(evaluate_class, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas'])
evaluate_mf_op = comp.func_to_container_op(evaluate_matrix_factorization_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas'])
export_bqml_model_op = comp.func_to_container_op(export_bqml_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
deploy_bqml_model_op = comp.func_to_container_op(deploy_bqml_model_vertexai, base_image=BASE_IMAGE, packages_to_install=['google-cloud-aiplatform'])
#############################
#Defining pipeline execution graph
dataset = BQ_DATASET_NAME
#Train matrix factorization model
mf_model_output = train_matrix_factorization_model(ddlop, PROJECT_ID, dataset).set_display_name('train matrix factorization model')
mf_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
mf_model = mf_model_output.outputs['created_table']
#Evaluate matrix factorization model
mf_eval_output = evaluate_mf_op(PROJECT_ID, mf_model).set_display_name('evaluate matrix factorization model')
mf_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(mf_eval_output.outputs['msqe'] < mf_msqe_threshold):
#Create features for classification model
user_features_output = create_user_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create user factors features')
user_features = user_features_output.outputs['created_table']
user_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
hotel_features_output = create_hotel_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create hotel factors features')
hotel_features = hotel_features_output.outputs['created_table']
hotel_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
total_features_output = combine_features(ddlop, PROJECT_ID, dataset, mf_model, hotel_features, user_features).set_display_name('combine all features')
total_features = total_features_output.outputs['created_table']
total_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Train XGBoost model
class_model_output = train_xgboost_model(ddlop, PROJECT_ID, dataset, total_features).set_display_name('train XGBoost model')
class_model = class_model_output.outputs['created_table']
class_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
class_eval_output = evaluate_class_op(project_id, dataset, class_model, total_features).set_display_name('evaluate XGBoost model')
class_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(class_eval_output.outputs['roc_auc'] > class_auc_threshold):
#Export model
export_destination_output = export_bqml_model_op(project_id, class_model, MODEL_STORAGE).set_display_name('export XGBoost model')
export_destination_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
export_destination = export_destination_output.outputs['destination']
deploy_model = deploy_bqml_model_op(PROJECT_ID, REGION, class_model, ENDPOINT_NAME, MODEL_STORAGE, DEPLOY_IMAGE, DEPLOY_COMPUTE).set_display_name('Deploy XGBoost model')
deploy_model.execution_options.caching_strategy.max_cache_staleness = 'P0D'
pipeline_func = training_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
import kfp
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
client = kfp.Client(KFPHOST)
experiment = client.create_experiment('hotel_recommender_experiment')
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Create BigQuery function
Step5: Creating the model
Step8: Creating embedding features for users and hotels
Step10: Function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note the target variable is rating, which is converted into a binary classification.
Step11: We will create a couple of BigQuery user-defined functions (UDFs) to convert arrays to structs whose fields are the array elements. <strong>Be sure to change the BigQuery dataset name to your dataset name.</strong>
Step14: Train XGBoost model and evaluate it
Step15: Export XGBoost model and host it as a model endpoint on Vertex AI
Step16: Defining the Kubeflow Pipelines (KFP)
Step17: Submitting pipeline runs
|
12,664
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
!wget http://www.cs.colostate.edu/~anderson/cs480/notebooks/oldfaithful.csv
data = np.loadtxt('oldfaithful.csv')
data.shape
plt.scatter(data[:,0],data[:,1]);
plt.xlabel('Duration');
plt.ylabel('Interval');
clusters = [d for d in data]
clusters[:5]
C1 = [np.array([[1,2]]), np.array([[2,3]])]
C2 = [np.array([[2,2]]), np.array([[6,7]])]
C1,C2
allC1 = np.vstack((C1))
allC2 = np.vstack((C2))
allC1,allC2
allC1[:,np.newaxis,:] - allC2
np.sum((allC1[:,np.newaxis,:] - allC2)**2,axis=2)
np.max(np.sum((allC1[:,np.newaxis,:] - allC2)**2,axis=2))
def clusterDistance(Ci,Cj):
allCi = np.vstack((Ci))
allCj = np.vstack((Cj))
return np.max(np.sum((allCi[:,np.newaxis,:] - allCj)**2, axis=2))
clusterDistance(C1,C2)
C3 = [np.array([[6,4]]), np.array([[8,9]])]
clusters = [C1, C2, C3]
clusters
for i in range(len(clusters)-1):
for j in range(i+1,len(clusters)):
print(i,j)
dists = []
for i in range(len(clusters)-1):
for j in range(i+1,len(clusters)):
dists.append([i,j,clusterDistance(clusters[i],clusters[j])])
dists
[[i,j,clusterDistance(clusters[i],clusters[j])] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))]
def clusterDistance(Ci,Cj):
'''Ci and Cj are two clusters, each being a dict with 'X' and 'label' keys'''
return np.mean(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.min(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.max(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
def mergeClusters(Ci,Cj, k):
return {'X': np.vstack((Ci['X'], Cj['X'])),
'label': k}
def agglomerative(X,clusterDistanceF, nClusters):
labels = np.zeros((X.shape[0]))
# clusters is list of pairs of sample and label
clusters = [ {'X':X[i:i+1,:], 'label':i} for i in range(X.shape[0]) ]
k = X.shape[0] - 1
while len(clusters) > nClusters:
dists = np.array( [[i,j,clusterDistance(clusters[i],clusters[j])] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))] )
whichClosest = np.argmin(dists[:,-1])
closest = dists[whichClosest,:2]
i,j = closest.astype(int)
# Merge them
k += 1
clusters[i] = {'X': np.vstack((clusters[i]['X'],clusters[j]['X'])),
'label': k}
clusters.pop(j)
print(len(clusters), end=' ')
return clusters
data.shape
clusters = agglomerative(data,clusterDistance, 2)
clusters
for i in range(len(clusters)):
cluster = clusters[i]['X']
plt.scatter(cluster[:,0], cluster[:,1])
plt.xlabel('Duration');
plt.ylabel('Interval');
dataDists = np.sum((data[:,np.newaxis,:] - data)**2, axis=2)
dataDists.shape
def clusterDistance(Ci, Cj, dataDists):
'''Ci and Cj are two clusters, each being a dict with 'X' and 'label' keys'''
return np.mean( np.array([dataDists[i,j] for i in Ci['X'] for j in Cj['X']]) )
# return np.min(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.max(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
def agglomerative(X,clusterDistanceF, nClusters):
dataDists = np.sum((X[:,np.newaxis,:] - X)**2, axis=2)
labels = np.zeros((X.shape[0]))
# clusters is list of pairs of sample and label
clusters = [ {'X':[i], 'label':i} for i in range(X.shape[0]) ]
k = X.shape[0] - 1
while len(clusters) > nClusters:
dists = np.array( [[i,j,clusterDistance(clusters[i],clusters[j], dataDists)] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))] )
whichClosest = np.argmin(dists[:,-1])
closest = dists[whichClosest,:2]
i,j = closest.astype(int)
# Merge them
k += 1
clusters[i] = {'X': clusters[i]['X'] + clusters[j]['X'],
'label': k}
clusters.pop(j)
print(len(clusters), end=' ')
return clusters
clusters = agglomerative(data,clusterDistance, 2)
for i in range(len(clusters)):
cluster = clusters[i]['X']
coords = np.array([data[c] for c in cluster])
plt.scatter(coords[:,0], coords[:,1])
plt.xlabel('Duration');
plt.ylabel('Interval');
data2 = np.loadtxt('userslocations.csv')
data2.shape
clusters = agglomerative(data2,clusterDistance, 4)
plt.figure(figsize=(20,8))
for i in range(len(clusters)):
cluster = clusters[i]['X']
coords = np.array([data[c] for c in cluster])
plt.scatter(coords[:,0], coords[:,1])
plt.xlabel('Interval (minutes)')
plt.ylabel('Duration (minutes)')
plt.subplot(1,3,2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's represent clusters as a list of sample matrices, each matrix containing samples from one cluster. Initially, all samples are in their own clusters. Let's use the Old Faithful data to develop our implementation.
Step2: Now we need the complete-linkage cluster distance function.
Step3: So, the maximum squared distance between $C_1$ and $C_2$ is 50.
Step4: All that is left is a way to identify to two clusters with the minimum distance.
Step5: or
Step6: So, clusters at indices 0 and 1 are closest. We can merge these two using np.vstack. Now we are ready to write the function.
Step7: Now for a simple, but very inefficient, implementation of agglomerative clustering.
Step8: How might we make this more efficient?
Step9: What else could you do to speed this up?
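One option, sketched below under the assumption that SciPy is available, is to hand the whole problem to scipy.cluster.hierarchy, whose optimized linkage routine replaces the Python-level double loop over cluster pairs (note it uses plain Euclidean rather than squared distances):

from scipy.cluster.hierarchy import linkage, fcluster
Z = linkage(data, method='complete')             # complete-linkage merge tree
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters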
|
12,665
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#Imports for solution
import numpy as np
import scipy.stats as sp
from matplotlib.pyplot import *
#Setting Distribution variables
##All rates are in per Minute.
#Everything will me modeled as a Poisson Process
SIM_TIME = 180
QUEUE_ARRIVAL_RATE = 15
N_SCANNERS =4
SCANNER_BAG_CHECKING_RATE = 3 #Takes 20 seconds to put your bag on Scanner
FRISK_MACHINES_PER_SCANNER = 3 #Number of people checking machine per scanner
N_FRISK_MACHINES = N_SCANNERS*FRISK_MACHINES_PER_SCANNER
FRISK_CHECKING_RATE = 2 #Half a minute per frisk
SCANNER_RATE = SCANNER_BAG_CHECKING_RATE*N_SCANNERS
FRISK_RATE = FRISK_CHECKING_RATE*N_FRISK_MACHINES
FRISK_ARRIVAL_RATE = SCANNER_RATE
#Queue Modeling
ARRIVAL_PATTERN = sp.poisson.rvs(QUEUE_ARRIVAL_RATE,size = SIM_TIME) #for an hour
ARRIVAL_LIST = []
for index, item in enumerate(ARRIVAL_PATTERN):
ARRIVAL_LIST += [index]*item
#print ARRIVAL_LIST
TIMEAXIS = np.linspace(1,SIM_TIME,SIM_TIME)
fig = figure()
arrivalplot = plot(TIMEAXIS,ARRIVAL_PATTERN,'go-')
ylabel('People arrived at time t')
xlabel("Time (minutes)")
show()
SCAN_PATTERN = sp.poisson.rvs(SCANNER_RATE,size=SIM_TIME)
SCAN_LIST = []
for index, item in enumerate(SCAN_PATTERN):
SCAN_LIST += [index]*item
arrivalfig = figure()
arrivalplot = plot(TIMEAXIS,SCAN_PATTERN,'o-')
ylabel('People arrived at time t for the scanner')
xlabel("Time (minutes)")
show()
FRISK_PATTERN = sp.poisson.rvs(FRISK_RATE,size=SIM_TIME)
FRISK_LIST = []
for index, item in enumerate(FRISK_PATTERN):
FRISK_LIST += [index]*item
arrivalfig = figure()
arrivalplot = plot(TIMEAXIS,FRISK_PATTERN,'ro-')
ylabel('People Leaving at time t from frisking counter')
xlabel("Time (minutes)")
show()
EXIT_PAIRS = zip(FRISK_PATTERN, SCAN_PATTERN)
EXIT_NUMBER = [min(k) for k in EXIT_PAIRS]
#plot(EXIT_NUMBER,'o')
#show()
EXIT_PATTERN = []
for index, item in enumerate(EXIT_NUMBER):
EXIT_PATTERN += [index]*item
RESIDUAL_ARRIVAL_PATTERN = ARRIVAL_LIST[0:len(EXIT_PATTERN)]
WAIT_TIMES = [m-n for m,n in zip(EXIT_PATTERN,RESIDUAL_ARRIVAL_PATTERN)]
#print EXIT_PATTERN
'''
for i,val in EXIT_PATTERN:
WAIT_TIMES += [ARRIVAL_PATTERN(i) - val]
'''
plot(WAIT_TIMES,'r-')
ylabel('Wait times for people entering the queue')
xlabel("Order of entering the queue")
ylim([0,40])
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this simulation, we'll be using numpy and scipy for their statistical and matrix math prowess, and matplotlib as our primary plotting tool.
Step2: Setting the arrival rates for each of the steps in the airport arrival process. First is the arrival to the queue, then to the scanning machines and then scanning to the frisking booth.
Step3: We're taking the arrivals at each of the time intervals, generated by a poisson function and storing the number of people who have arrived at each minute.
Step4: And this is the pattern for the scanner
Step5: Critically, this ignores queuing and assumes that the sampled number of people is processed at each time interval at the counter. This will be used in conjunction with the scanner output to choose the bottleneck at each point in time.
Step6: The minimum of the scanner and frisking throughputs is the bottleneck at any given time, and therefore determines the exit rate.
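The same element-wise minimum can be taken directly with NumPy; a minimal sketch assuming the FRISK_PATTERN and SCAN_PATTERN arrays from the cells above:

EXIT_NUMBER = np.minimum(FRISK_PATTERN, SCAN_PATTERN)  # per-minute exit throughput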
|
12,666
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import re, pickle, collections, bcolz, numpy as np, keras, sklearn, math, operator
from gensim.models import word2vec
import torch, torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
path='/data/datasets/fr-en-109-corpus/'
dpath = 'data/translate/'
fname=path+'giga-fren.release2.fixed'
en_fname = fname+'.en'
fr_fname = fname+'.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname), open(fr_fname)))
qs = [(e.group(), f.group()) for e,f in lines if e and f]; len(qs)
qs[:6]
pickle.dump(qs, open(dpath+'fr-en-qs.pkl', 'wb'))
qs = pickle.load(open(dpath+'fr-en-qs.pkl', 'rb'))
en_qs, fr_qs = zip(*qs)
re_apos = re.compile(r"(\w)'s\b") # make 's a separate word
re_mw_punc = re.compile(r"(\w[’'])(\w)") # other ' in a word creates 2 words
re_punc = re.compile("([\"().,;:/_?!—])") # add spaces around punctuation
re_mult_space = re.compile(r" +")    # replace one or more spaces with just one
def simple_toks(sent):
sent = re_apos.sub(r"\1 's", sent)
sent = re_mw_punc.sub(r"\1 \2", sent)
sent = re_punc.sub(r" \1 ", sent).replace('-', ' ')
sent = re_mult_space.sub(' ', sent)
return sent.lower().split()
fr_qtoks = list(map(simple_toks, fr_qs)); fr_qtoks[:4]
en_qtoks = list(map(simple_toks, en_qs)); en_qtoks[:4]
simple_toks("Rachel's baby is cuter than other's.")
PAD = 0; SOS = 1
def toks2ids(sents):
voc_cnt = collections.Counter(t for sent in sents for t in sent)
vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)
vocab.insert(PAD, "<PAD>")
vocab.insert(SOS, "<SOS>")
w2id = {w:i for i,w in enumerate(vocab)}
ids = [[w2id[t] for t in sent] for sent in sents]
return ids, vocab, w2id, voc_cnt
fr_ids, fr_vocab, fr_w2id, fr_counts = toks2ids(fr_qtoks)
en_ids, en_vocab, en_w2id, en_counts = toks2ids(en_qtoks)
def load_glove(loc):
return (bcolz.open(loc+'.dat')[:],
pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'),
pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1'))
en_vecs, en_wv_word, en_wv_idx = load_glove('/data/datasets/nlp/glove/results/6B.100d')
en_w2v = {w: en_vecs[en_wv_idx[w]] for w in en_wv_word}
n_en_vec, dim_en_vec = en_vecs.shape
en_w2v['king']
w2v_path='/data/datasets/nlp/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin'
fr_model = word2vec.Word2Vec.load_word2vec_format(w2v_path, binary=True)
fr_voc = fr_model.vocab
dim_fr_vec = 200
def create_emb(w2v, targ_vocab, dim_vec):
vocab_size = len(targ_vocab)
emb = np.zeros((vocab_size, dim_vec))
found=0
for i, word in enumerate(targ_vocab):
try: emb[i] = w2v[word]; found+=1
except KeyError: emb[i] = np.random.normal(scale=0.6, size=(dim_vec,))
return emb, found
en_embs, found = create_emb(en_w2v, en_vocab, dim_en_vec); en_embs.shape, found
fr_embs, found = create_emb(fr_model, fr_vocab, dim_fr_vec); fr_embs.shape, found
from keras.preprocessing.sequence import pad_sequences
maxlen = 30
en_padded = pad_sequences(en_ids, maxlen, 'int64', "post", "post")
fr_padded = pad_sequences(fr_ids, maxlen, 'int64', "post", "post")
en_padded.shape, fr_padded.shape, en_embs.shape
from sklearn import model_selection
fr_train, fr_test, en_train, en_test = model_selection.train_test_split(
fr_padded, en_padded, test_size=0.1)
[o.shape for o in (fr_train, fr_test, en_train, en_test)]
fr_train[0], en_train[0]
def long_t(arr): return Variable(torch.LongTensor(arr)).cuda()
fr_emb_t = torch.FloatTensor(fr_embs).cuda()
en_emb_t = torch.FloatTensor(en_embs).cuda()
def create_emb(emb_mat, non_trainable=False):
output_size, emb_size = emb_mat.size()
emb = nn.Embedding(output_size, emb_size)
emb.load_state_dict({'weight': emb_mat})
if non_trainable:
for param in emb.parameters():
param.requires_grad = False
return emb, emb_size, output_size
class EncoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2):
super(EncoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs, True)
self.n_layers = n_layers
self.hidden_size = hidden_size
self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)
# ,bidirectional=True)
def forward(self, input, hidden):
return self.gru(self.emb(input), hidden)
def initHidden(self, batch_size):
return Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size))
def encode(inp, encoder):
batch_size, input_length = inp.size()
hidden = encoder.initHidden(batch_size).cuda()
enc_outputs, hidden = encoder(inp, hidden)
return long_t([SOS]*batch_size), enc_outputs, hidden
class DecoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2):
super(DecoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs)
self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)
self.out = nn.Linear(hidden_size, output_size)
def forward(self, inp, hidden):
emb = self.emb(inp).unsqueeze(1)
res, hidden = self.gru(emb, hidden)
res = F.log_softmax(self.out(res[:,0]))
return res, hidden
v=np.array([1,2,3]); v, v.shape
m=np.array([v,v*2,v*3]); m, m.shape
m+v
v1=np.expand_dims(v,-1); v1, v1.shape
m+v1
def unit_prefix(x, n=1):
for i in range(n): x = x.unsqueeze(0)
return x
def align(x, y, start_dim=2):
xd, yd = x.dim(), y.dim()
if xd > yd: y = unit_prefix(y, xd - yd)
elif yd > xd: x = unit_prefix(x, yd - xd)
xs, ys = list(x.size()), list(y.size())
nd = len(ys)
for i in range(start_dim, nd):
td = nd-i-1
if ys[td]==1: ys[td] = xs[td]
elif xs[td]==1: xs[td] = ys[td]
return x.expand(*xs), y.expand(*ys)
def aligned_op(x,y,f): return f(*align(x,y,0))
def add(x, y): return aligned_op(x, y, operator.add)
def sub(x, y): return aligned_op(x, y, operator.sub)
def mul(x, y): return aligned_op(x, y, operator.mul)
def div(x, y): return aligned_op(x, y, operator.truediv)
def dot(x, y):
assert(1<y.dim()<5)
x, y = align(x, y)
if y.dim() == 2: return x.mm(y)
elif y.dim() == 3: return x.bmm(y)
else:
xs,ys = x.size(), y.size()
res = torch.zeros(*(xs[:-1] + (ys[-1],)))
for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i]))
return res
def Arr(*sz): return torch.randn(sz)/math.sqrt(sz[0])
m = Arr(3, 2); m2 = Arr(4, 3)
v = Arr(2)
b = Arr(4,3,2); t = Arr(5,4,3,2)
mt,bt,tt = m.transpose(0,1), b.transpose(1,2), t.transpose(2,3)
def check_eq(x,y): assert(torch.equal(x,y))
check_eq(dot(m,mt),m.mm(mt))
check_eq(dot(v,mt), v.unsqueeze(0).mm(mt))
check_eq(dot(b,bt),b.bmm(bt))
check_eq(dot(b,mt),b.bmm(unit_prefix(mt).expand_as(bt)))
exp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3)
check_eq(dot(t,tt),exp)
check_eq(add(m,v),m+unit_prefix(v).expand_as(m))
check_eq(add(v,m),m+unit_prefix(v).expand_as(m))
check_eq(add(m,t),t+unit_prefix(m,2).expand_as(t))
check_eq(sub(m,v),m-unit_prefix(v).expand_as(m))
check_eq(mul(m,v),m*unit_prefix(v).expand_as(m))
check_eq(div(m,v),m/unit_prefix(v).expand_as(m))
def Var(*sz): return nn.Parameter(Arr(*sz)).cuda()
class AttnDecoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2, p=0.1):
super(AttnDecoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs)
self.W1 = Var(hidden_size, hidden_size)
self.W2 = Var(hidden_size, hidden_size)
self.W3 = Var(emb_size+hidden_size, hidden_size)
self.b2 = Var(hidden_size)
self.b3 = Var(hidden_size)
self.V = Var(hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, num_layers=2)
self.out = nn.Linear(hidden_size, output_size)
def forward(self, inp, hidden, enc_outputs):
emb_inp = self.emb(inp)
w1e = dot(enc_outputs, self.W1)
w2h = add(dot(hidden[-1], self.W2), self.b2).unsqueeze(1)
u = F.tanh(add(w1e, w2h))
a = mul(self.V,u).sum(2).squeeze(2)
a = F.softmax(a).unsqueeze(2)
Xa = mul(a, enc_outputs).sum(1)
res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), self.W3)
res = add(res, self.b3).unsqueeze(0)
res, hidden = self.gru(res, hidden)
res = F.log_softmax(self.out(res.squeeze(0)))
return res, hidden
def get_batch(x, y, batch_size=16):
idxs = np.random.permutation(len(x))[:batch_size]
return x[idxs], y[idxs]
hidden_size = 128
fra, eng = get_batch(fr_train, en_train, 4)
inp = long_t(fra)
targ = long_t(eng)
emb, emb_size, output_size = create_emb(en_emb_t)
emb.cuda()
inp.size()
W1 = Var(hidden_size, hidden_size)
W2 = Var(hidden_size, hidden_size)
W3 = Var(emb_size+hidden_size, hidden_size)
b2 = Var(1,hidden_size)
b3 = Var(1,hidden_size)
V = Var(1,1,hidden_size)
gru = nn.GRU(hidden_size, hidden_size, num_layers=2).cuda()
out = nn.Linear(hidden_size, output_size).cuda()
dec_inputs, enc_outputs, hidden = encode(inp, encoder)
enc_outputs.size(), hidden.size()
emb_inp = emb(dec_inputs); emb_inp.size()
w1e = dot(enc_outputs, W1); w1e.size()
w2h = dot(hidden[-1], W2)
w2h = (w2h+b2.expand_as(w2h)).unsqueeze(1); w2h.size()
u = F.tanh(w1e + w2h.expand_as(w1e))
a = (V.expand_as(u)*u).sum(2).squeeze(2)
a = F.softmax(a).unsqueeze(2); a.size(),a.sum(1).squeeze(1)
Xa = (a.expand_as(enc_outputs) * enc_outputs).sum(1); Xa.size()
res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), W3)
res = (res+b3.expand_as(res)).unsqueeze(0); res.size()
res, hidden = gru(res, hidden); res.size(), hidden.size()
res = F.log_softmax(out(res.squeeze(0))); res.size()
def train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit):
decoder_input, encoder_outputs, hidden = encode(inp, encoder)
target_length = targ.size()[1]
enc_opt.zero_grad(); dec_opt.zero_grad()
loss = 0
for di in range(target_length):
decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)
decoder_input = targ[:, di]
loss += crit(decoder_output, decoder_input)
loss.backward()
enc_opt.step(); dec_opt.step()
return loss.data[0] / target_length
def req_grad_params(o):
return (p for p in o.parameters() if p.requires_grad)
def trainEpochs(encoder, decoder, n_epochs, print_every=1000, lr=0.01):
loss_total = 0 # Reset every print_every
enc_opt = optim.RMSprop(req_grad_params(encoder), lr=lr)
dec_opt = optim.RMSprop(decoder.parameters(), lr=lr)
crit = nn.NLLLoss().cuda()
for epoch in range(n_epochs):
fra, eng = get_batch(fr_train, en_train, 64)
inp = long_t(fra)
targ = long_t(eng)
loss = train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit)
loss_total += loss
if epoch % print_every == print_every-1:
print('%d %d%% %.4f' % (epoch, epoch / n_epochs * 100, loss_total / print_every))
loss_total = 0
hidden_size = 128
encoder = EncoderRNN(fr_emb_t, hidden_size).cuda()
decoder = AttnDecoderRNN(en_emb_t, hidden_size).cuda()
trainEpochs(encoder, decoder, 10000, print_every=500, lr=0.005)
def evaluate(inp):
decoder_input, encoder_outputs, hidden = encode(inp, encoder)
target_length = maxlen
decoded_words = []
for di in range(target_length):
decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni==PAD: break
decoded_words.append(en_vocab[ni])
decoder_input = long_t([ni])
return decoded_words
def sent2ids(sent):
ids = [fr_w2id[t] for t in simple_toks(sent)]
return pad_sequences([ids], maxlen, 'int64', "post", "post")
def fr2en(sent):
ids = long_t(sent2ids(sent))
trans = evaluate(ids)
return ' '.join(trans)
i=8
print(en_qs[i],fr_qs[i])
fr2en(fr_qs[i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare corpus
Step2: To make this problem a little simpler so we can train our model more quickly, we'll just learn to translate questions that begin with 'Wh' (e.g. what, why, where, which). Here are our regexps that filter the sentences we want.
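A quick illustrative check of the English-side pattern (the sentence is hypothetical):

import re
re_eq = re.compile(r'^(Wh[^?.!]+\?)')
print(re_eq.search('What is your name? I am Bob.').group())  # -> What is your name?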
Step3: Because it takes a while to load the data, we save the results to make it easier to load in later.
Step4: Because we are translating at word level, we need to tokenize the text first. (Note that it is also possible to translate at character level, which doesn't require tokenizing.) There are many tokenizers available, but we found we got best results using these simple heuristics.
Step5: Special tokens used to pad the end of sentences, and to mark the start of a sentence.
Step6: Enumerate the unique words (vocab) in the corpus, and also create the reverse map (word->index). Then use this mapping to encode every sentence as a list of int indices.
Step7: Word vectors
Step8: For French word vectors, we're using those from http
Step9: We need to map each word index in our vocabs to their word vector. Not every word in our vocabs will be in our word vectors, since our tokenization approach won't be identical to the word vector creators - in these cases we simply create a random vector.
Step10: Prep data
Step11: And of course we need to separate our training and test sets...
Step12: Here's an example of a French and English sentence, after encoding and padding.
Step13: Model
Step14: Turning a sequence into a representation can be done using an RNN (called the 'encoder'). This approach is useful because RNNs are able to keep track of state and memory, which is important for forming a complete understanding of a sentence.
Step15: Finally, we arrive at a vector representation of the sequence which captures everything we need to translate it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each word is in the output sequence.
Step16: This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.
Step17: But Pytorch doesn't support broadcasting. So let's add it to the basic operators, and to a general tensor dot product
Step18: Let's test!
Step19: Attentional model
Step20: Attention testing
Step21: Train
Step22: Run
Step23: Testing
|
12,667
|
<ASSISTANT_TASK:>
Python Code:
__AUTHORS__ = {'am': ("Andrea Marino",
"andrea.marino@unifi.it",),
'mn': ("Massimo Nocentini",
"massimo.nocentini@unifi.it",
"https://github.com/massimo-nocentini/",)}
__KEYWORDS__ = ['Python', 'Jupyter', 'language', 'keynote',]
def increment(a):
return a + 1
increment(0)
increment(1)
L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
L
LL = [increment(a) for a in L]
LL
LLL = [increment(a) for a in LL]
LLL
r = range(10)
r
list(r)
map(lambda i: i + 1, L)
(lambda i: i + 1)(0)
(lambda i: i + 1)(1)
list(map(lambda i: i + 1, L))
M = map(lambda i: i + 1, L)
M
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
next(M)
list(range(10))
list(i for i in range(10))
N = (i for i in range(10))
N
list(N)
next(N)
from random import random # import the random generator, to be used to sample from the uniform distribution
random() # a quick check that the random function works
int(True) # this is a very quick check to see if a Boolean can be used as integer
def Bernoulli(p):
'This is a generator for a Bernoulli random variable of parameter `p` for success.'
while True: # forever we loop
r = random() # get a sample
yield int(r <= p) # if that sample denotes a success or a failure we *yield* that outcome
yield # if we evaluate *yield* not in a context, Python raises an error because it is a construct
help(Bernoulli)
B = Bernoulli(p=0.6) # B is our random variable
B
next(B)
next(B)
next(B)
sample = [next(B) for _ in range(1000)]
sample[:20] # just for a quick evaluation, we print the first 20 elements
from collections import Counter
Counter(sample)
B_flip = map(lambda o: 1-o, B)
B_flip
sample = [next(B_flip) for _ in range(1000)]
sample[:20] # just for a quick evaluation, we print the first 20 elements
def Bernoulli(p):
'This is a generator for a Bernoulli random variable of parameter `p` for success.'
while True: # forever we loop
r = random() # get a sample
o = int(r <= p) # if that sample denotes a success or a failure we *yield* that outcome
print('B ' + str(o))
yield o
def flip(o):
print('flip')
return 1-o
B_flip = map(flip, Bernoulli(p=0.9))
B_flip
sample = [next(B_flip) for _ in range(20)]
Counter(sample)
class A(object):
def __init__(self, j):
self.j = j
def __add__(self, i):
return self.j + i
def __radd__(self, i):
return self.j + i
def __lt__(self, i):
return self.j < i
def B(b):
pass
B
B(3) is None
def B(b):
...
increment(4)
a = A()
increment(a)
a = A()
increment(a)
A(3) + 1
1 + A(3)
1 + A(3)
A(4) < 2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <center><img src="https
Step2: we want to build an object that denotes a Bernoulli random variable.
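Because Bernoulli(p) is an ordinary generator, the standard itertools machinery applies; a minimal sketch for drawing a fixed-size sample without a list comprehension:

from itertools import islice
sample = list(islice(Bernoulli(p=0.6), 10))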
|
12,668
|
<ASSISTANT_TASK:>
Python Code:
# Install the SDK
#!pip3 install 'kfp>=0.1.31.2' --quiet
import kfp
import kfp.components as comp
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
add_op = comp.func_to_container_op(add)
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Pip installs inside a component function.
#NOTE: installs should be placed right at the beginning to avoid upgrading a package
# after it has already been imported and cached by python
import sys, subprocess;
subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
my_divmod(100, 7)
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
import kfp.dsl as dsl
@dsl.pipeline(
name='Calculation pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
a='a',
b='7',
c='17',
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
#Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple function that just adds two numbers
Step2: Convert the function to a pipeline operation
Step3: A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
Step4: Test running the python function directly
Step5: Convert the function to a pipeline operation
Step6: Define the pipeline
Step7: Submit the pipeline for execution
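Instead of submitting directly, the pipeline can first be compiled to a package file for upload through the KFP UI; a sketch using the standard v1 compiler (the file name is arbitrary):

import kfp.compiler as compiler
compiler.Compiler().compile(calc_pipeline, 'calc_pipeline.zip')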
|
12,669
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (8,6), corners, ret)
#write_name = 'corners_found'+str(idx)+'.jpg'
#cv2.imwrite(write_name, img)
cv2.imshow('img', img)
cv2.waitKey(500)
cv2.destroyAllWindows()
import pickle
%matplotlib inline
# Test undistortion on an image
img = cv2.imread('calibration_wide/test_image.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('calibration_wide/test_undist.jpg',dst)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "calibration_wide/wide_dist_pickle.p", "wb" ) )
#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the above cell ran successfully, you should now have the objpoints and imgpoints needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image!
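Once saved, the pickled calibration can be reloaded in a later session and applied to any new frame; a minimal sketch assuming the pickle path used above and some image img:

import pickle, cv2
with open('calibration_wide/wide_dist_pickle.p', 'rb') as f:
    cal = pickle.load(f)
undistorted = cv2.undistort(img, cal['mtx'], cal['dist'], None, cal['mtx'])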
|
12,670
|
<ASSISTANT_TASK:>
Python Code:
from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten, Activation, add, Lambda
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import GlobalAveragePooling2D
from keras.optimizers import RMSprop
from keras.backend import tf as ktf
from keras.models import Model, Sequential, load_model
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.applications.resnet50 import ResNet50
from lib.data_utils import get_MNIST_data
data = get_MNIST_data(num_validation=0, fit=True)
# see if we get the data correctly
print('image size: ', data['X_train'].shape)
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
#x = Dense(units=1024, activation='relu')(x)
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
freezemodel = Model(inputs=inputs, outputs=predictions)
#freezemodel.summary()
# freeze all ResNet50 layers
for layer in resnet50.layers:
layer.trainable = False
# set the loss and optimizer
freezemodel.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
freezemodel.fit(data['X_train'], data['y_train'].reshape(-1,1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=1)
# test the model and see accuracy
score = freezemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32)
print(score)
# save the model: 0.96
freezemodel.save('ResNet50_freeze.h5')
# continue the model training
freezemodel = load_model('../models/ResNet50_freeze.h5', custom_objects={'ktf': ktf})
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
freezemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
freezemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=4)
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
#x = Dense(units=1024, activation='relu')(x)
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
tunemodel = Model(inputs=inputs, outputs=predictions)
#freezemodel.summary()
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/tuneResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=0)
# test the model and see accuracy
score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32)
print(score)
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
tunemodel = Model(inputs=inputs, outputs=predictions)
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/tuneResNet_early_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
earlystop = EarlyStopping(min_delta=0.0001, patience=1)
tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
batch_size=16, epochs=10, validation_data=(data['X_test'], data['y_test'].reshape(-1, 1)),
callbacks=[checkpoint, earlystop], initial_epoch=0)
# test the model and see accuracy
score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1),
batch_size=16)
print(score)
from lib.data_utils import create_submission
from keras.models import load_model
# for freeze ResNet50 model (3 epochs)
simple_CNN = load_model('../models/freezeResNet_03-0.09.h5', custom_objects={'ktf': ktf})
print('Load model successfully.')
create_submission(simple_CNN, '../data/test.csv', '../submission/submission_freezeResNet_03.csv', 16, fit=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the MNIST data. Notice that we assume that it's 'kaggle-DigitRecognizer/data/train.csv', and we use helper function to read into a dictionary.
Step2: Freeze-weights transfer
Step3: Fine-tune transfer
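A middle ground between freezing everything and fine-tuning everything is to unfreeze only the last residual stage; a sketch assuming the resnet50 object from the cells above (the 'res5'/'bn5' name prefixes are an assumption about this Keras ResNet50 build and should be verified with resnet50.summary()):

for layer in resnet50.layers:
    layer.trainable = layer.name.startswith(('res5', 'bn5'))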
Step4: Fine-tune transfer with early stopping
Step5: Create submissions
|
12,671
|
<ASSISTANT_TASK:>
Python Code:
import veneer
v = veneer.Veneer()
%matplotlib inline
v.network().plot()
set(v.model.catchment.runoff.get_models())
v.model.find_states('TIME.Models.RainfallRunoff.AWBM.AWBM')
v.model.catchment.runoff.create_modelled_variable?
# Save the result!
variables = v.model.catchment.runoff.create_modelled_variable('Baseflow store')
variables
# variables['created'] are the variable names that we want to insert into the functions
variables['created']
name_params = list(v.model.catchment.runoff.enumerate_names())
name_params
v.model.functions.create_functions?
# Again, save the result...
functions = v.model.functions.create_functions('$funky_%s_%s','1.1 * %s',variables['created'],name_params)
functions
functions['created']
# Applying functions in some nonsensical manner...
v.model.catchment.runoff.apply_function('A2',functions['created'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demonstration model
Step2: NOTE
Step3: The result of the function call is very important. It tells us what was created and the names.
Step4: Result of create_functions includes a list of created functions
Step5: Note You can see all these in Edit | Functions
|
12,672
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
data = pd.read_csv("train.csv", index_col="Loan_ID")
# test = pd.read_csv("test.csv", index_col="PassengerID")
print data.shape
data.columns
data.loc[(data["Gender"]=="Female") & (data["Education"]=="Not Graduate") & (data["Loan_Status"]=="Y"), ["Gender","Education","Loan_Status"]]
#Check current type:
data.dtypes
#Load the file:
colTypes = pd.read_csv('datatypes.csv')
print colTypes
#Iterate through each row and assign variable type.
# Note: astype is used to asign types
for i, row in colTypes.iterrows(): #i: dataframe index; row: each row in series format
if row['feature']=="categorical":
data[row['feature']]=data[row['feature']].astype(np.object)
elif row['feature']=="continuous":
data[row['feature']]=data[row['feature']].astype(np.float)
print data.dtypes
#Create a new function:
def num_missing(x):
return sum(x.isnull())
#Applying per column:
print "Missing values per column:"
print data.apply(num_missing, axis=0) #axis=0 defines that function is to be applied on each column
#Applying per row:
print "\nMissing values per row:"
print data.apply(num_missing, axis=1).head() #axis=1 defines that function is to be applied on each row
#First we import a function to determine the mode
from scipy.stats import mode
mode(data['Gender'])
mode(data['Gender']).mode[0]
#Impute the values:
data['Gender'].fillna(mode(data['Gender']).mode[0], inplace=True)
data['Married'].fillna(mode(data['Married']).mode[0], inplace=True)
data['Self_Employed'].fillna(mode(data['Self_Employed']).mode[0], inplace=True)
#Now check the #missing values again to confirm:
print data.apply(num_missing, axis=0)
#Determine pivot table
impute_grps = data.pivot_table(values=["LoanAmount"], index=["Gender","Married","Self_Employed"], aggfunc=np.mean)
print impute_grps
#iterate only through rows with missing LoanAmount
for i,row in data.loc[data['LoanAmount'].isnull(),:].iterrows():
ind = tuple([row['Gender'],row['Married'],row['Self_Employed']])
data.loc[i,'LoanAmount'] = impute_grps.loc[ind].values[0]
#Now check the #missing values again to confirm:
print data.apply(num_missing, axis=0)
pd.crosstab(data["Credit_History"],data["Loan_Status"],margins=True)
def percConvert(ser):
return ser/float(ser[-1])
pd.crosstab(data["Credit_History"],data["Loan_Status"],margins=True).apply(percConvert, axis=1)
prop_rates = pd.DataFrame([1000, 5000, 12000], index=['Rural','Semiurban','Urban'],columns=['rates'])
prop_rates
data_merged = data.merge(right=prop_rates, how='inner',left_on='Property_Area',right_index=True, sort=False)
data_merged.pivot_table(values='Credit_History',index=['Property_Area','rates'], aggfunc=len)
data_sorted = data.sort_values(['ApplicantIncome','CoapplicantIncome'], ascending=False)
data_sorted[['ApplicantIncome','CoapplicantIncome']].head(10)
import matplotlib.pyplot as plt
%matplotlib inline
data.boxplot(column="ApplicantIncome",by="Loan_Status")
data.hist(column="ApplicantIncome",by="Loan_Status",bins=30)
#Binning:
def binning(col, cut_points, labels=None):
#Define min and max values:
minval = col.min()
maxval = col.max()
#create list by adding min and max to cut_points
break_points = [minval] + cut_points + [maxval]
#if no labels provided, use default labels 0 ... (n-1)
if not labels:
labels = range(len(cut_points)+1)
#Binning using cut function of pandas
colBin = pd.cut(col,bins=break_points,labels=labels,include_lowest=True)
return colBin
#Binning age:
cut_points = [90,140,190]
labels = ["low","medium","high","very high"]
data["LoanAmount_Bin"] = binning(data["LoanAmount"], cut_points, labels)
print pd.value_counts(data["LoanAmount_Bin"], sort=False)
#Define a generic function using Pandas replace function
def coding(col, codeDict):
colCoded = pd.Series(col, copy=True)
for key, value in codeDict.items():
colCoded.replace(key, value, inplace=True)
return colCoded
#Coding LoanStatus as Y=1, N=0:
print 'Before Coding:'
print pd.value_counts(data["Loan_Status"])
data["Loan_Status_Coded"] = coding(data["Loan_Status"], {'N':0,'Y':1})
print '\nAfter Coding:'
print pd.value_counts(data["Loan_Status_Coded"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Boolean Indexing
Step2: More
Step3: Here we see that Credit_History is a nominal variable but appearing as float. A good way to tackle this issue is to create a csv file with column names and types. This way we can make a generic function to read the file and assign column data types. For instance, in this case I've defined a csv file datatypes.csv (download).
Step4: On loading this file, we can iterate through each row and assign the datatype from column 'type' to the variable name defined in 'feature' column.
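The per-row if/elif can also be collapsed into a dict lookup; a sketch assuming colTypes has 'feature' and 'type' columns as described:

dtype_map = {'categorical': np.object, 'continuous': np.float}
for _, row in colTypes.iterrows():
    data[row['feature']] = data[row['feature']].astype(dtype_map[row['type']])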
Step5: Now the credit history column is modified to 'object' type which is used for representing nominal variables in Pandas.
Step6: Thus we get the desired result. Note
Step7: This returns both mode and count. Remember that mode can be an array as there can be multiple values with high frequency. We will take the first one by default always using
Step8: Now we can fill the missing values and check using technique #3.
Step9: Hence confirmed the missing values are imputed. Note
Step10: More
Step11: Note
Step12: These are absolute numbers but percentages can be more intuitive in making some quick insights. We can do this using the apply function
Step13: Now it is clearly evident that people with a credit history have much higher chances of getting a loan, as 80% of people with a credit history got a loan compared to only 9% of those without one.
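This split is often easiest to see visually; a minimal sketch with pandas' built-in plotting (assumes matplotlib is loaded):

temp = pd.crosstab(data['Credit_History'], data['Loan_Status'])
temp.plot(kind='bar', stacked=True, grid=False)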
Step14: Now we can merge this information with the original dataframe as
Step15: The pivot table validates the successful merge operation. Note that the 'values' argument is irrelevant here because we are simply counting the values.
Step16: Note
Step17: This shows that income is not a big deciding factor on its own as there is no appreciable difference between the people who received and were denied the loan.
Step18: More
|
12,673
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
!pip install -q kaggle
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import zipfile
from sklearn import model_selection
import os
import pathlib
# Upload the API token.
def get_kaggle():
try:
import kaggle
return kaggle
except OSError:
pass
token_file = pathlib.Path("~/.kaggle/kaggle.json").expanduser()
token_file.parent.mkdir(exist_ok=True, parents=True)
try:
from google.colab import files
except ImportError:
raise ValueError("Could not find kaggle token.")
uploaded = files.upload()
token_content = uploaded.get('kaggle.json', None)
if token_content:
token_file.write_bytes(token_content)
token_file.chmod(0o600)
else:
raise ValueError('Need a file named "kaggle.json"')
import kaggle
return kaggle
kaggle = get_kaggle()
SENTIMENT_LABELS = [
"negative", "somewhat negative", "neutral", "somewhat positive", "positive"
]
# Add a column with readable values representing the sentiment.
def add_readable_labels_column(df, sentiment_value_column):
df["SentimentLabel"] = df[sentiment_value_column].replace(
range(5), SENTIMENT_LABELS)
# Download data from Kaggle and create a DataFrame.
def load_data_from_zip(path):
with zipfile.ZipFile(path, "r") as zip_ref:
name = zip_ref.namelist()[0]
with zip_ref.open(name) as zf:
return pd.read_csv(zf, sep="\t", index_col=0)
# The data does not come with a validation set so we'll create one from the
# training set.
def get_data(competition, train_file, test_file, validation_set_ratio=0.1):
data_path = pathlib.Path("data")
kaggle.api.competition_download_files(competition, data_path)
competition_path = (data_path/competition)
competition_path.mkdir(exist_ok=True, parents=True)
competition_zip_path = competition_path.with_suffix(".zip")
with zipfile.ZipFile(competition_zip_path, "r") as zip_ref:
zip_ref.extractall(competition_path)
train_df = load_data_from_zip(competition_path/train_file)
test_df = load_data_from_zip(competition_path/test_file)
# Add a human readable label.
add_readable_labels_column(train_df, "Sentiment")
# We split by sentence ids, because we don't want to have phrases belonging
# to the same sentence in both training and validation set.
train_indices, validation_indices = model_selection.train_test_split(
np.unique(train_df["SentenceId"]),
test_size=validation_set_ratio,
random_state=0)
validation_df = train_df[train_df["SentenceId"].isin(validation_indices)]
train_df = train_df[train_df["SentenceId"].isin(train_indices)]
print("Split the training data into %d training and %d validation examples." %
(len(train_df), len(validation_df)))
return train_df, validation_df, test_df
train_df, validation_df, test_df = get_data(
"sentiment-analysis-on-movie-reviews",
"train.tsv.zip", "test.tsv.zip")
train_df.head(20)
class MyModel(tf.keras.Model):
def __init__(self, hub_url):
super().__init__()
self.hub_url = hub_url
self.embed = hub.load(self.hub_url).signatures['default']
self.sequential = tf.keras.Sequential([
tf.keras.layers.Dense(500),
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(5),
])
def call(self, inputs):
phrases = inputs['Phrase'][:,0]
embedding = 5*self.embed(phrases)['default']
return self.sequential(embedding)
def get_config(self):
return {"hub_url":self.hub_url}
model = MyModel("https://tfhub.dev/google/nnlm-en-dim128/1")
model.compile(
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.optimizers.Adam(),
metrics = [tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy")])
history = model.fit(x=dict(train_df), y=train_df['Sentiment'],
validation_data=(dict(validation_df), validation_df['Sentiment']),
epochs = 25)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
train_eval_result = model.evaluate(dict(train_df), train_df['Sentiment'])
validation_eval_result = model.evaluate(dict(validation_df), validation_df['Sentiment'])
print(f"Training set accuracy: {train_eval_result[1]}")
print(f"Validation set accuracy: {validation_eval_result[1]}")
predictions = model.predict(dict(validation_df))
predictions = tf.argmax(predictions, axis=-1)
predictions
cm = tf.math.confusion_matrix(validation_df['Sentiment'], predictions)
cm = cm/cm.numpy().sum(axis=1)[:, tf.newaxis]
sns.heatmap(
cm, annot=True,
xticklabels=SENTIMENT_LABELS,
yticklabels=SENTIMENT_LABELS)
plt.xlabel("Predicted")
plt.ylabel("True")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How to solve a problem on Kaggle with TF-Hub
Step2: Since this tutorial will be using a dataset from Kaggle, it requires creating an API Token for your Kaggle account, and uploading it to the Colab environment.
Step3: Getting started
Step4: Note
Step5: Training an Model
Step6: Prediction
Step7: Confusion matrix
|
12,674
|
<ASSISTANT_TASK:>
Python Code:
%%writefile game_of_life_utils.py
import numpy as np
from scipy.signal import convolve2d
def life_step_1(X):
    '''Game of life step using generator expressions'''
nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
for i in (-1, 0, 1) for j in (-1, 0, 1)
if (i != 0 or j != 0))
return (nbrs_count == 3) | (X & (nbrs_count == 2))
def life_step_2(X):
    '''Game of life step using scipy tools'''
nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X
return (nbrs_count == 3) | (X & (nbrs_count == 2))
def set_ic(X,ic,offset=(0,0)):
Ni,Nj = X.shape
ni,nj = np.array(ic).shape
assert(offset[0]+ni<Ni)
assert(offset[1]+nj<Nj)
X[offset[0]:offset[0]+ni, offset[1]:offset[1]+nj] = ic
unbounded = [[1, 1, 1, 0, 1],
[1, 0, 0, 0, 0],
[0, 0, 0, 1, 1],
[0, 1, 1, 0, 1],
[1, 0, 1, 0, 1]]
diehard = [[0, 0, 0, 0, 0, 0, 1, 0],
[1, 1, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 1, 1, 1]]
boat = [[1, 1, 0],
[1, 0, 1],
[0, 1, 0]]
r_pentomino = [[0, 1, 1],
[1, 1, 0],
[0, 1, 0]]
beacon = [[0, 0, 1, 1],
[0, 0, 1, 1],
[1, 1, 0, 0],
[1, 1, 0, 0]]
acorn = [[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[1, 1, 0, 0, 1, 1, 1]]
spaceship = [[0, 0, 1, 1, 0],
[1, 1, 0, 1, 1],
[1, 1, 1, 1, 0],
[0, 1, 1, 0, 0]]
block_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 1, 1],
[0, 0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0]]
glider_gun =\
[[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[1,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0,1,1,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]
all_ics = [diehard,boat,r_pentomino,beacon,acorn,spaceship,block_switch_engine,glider_gun,unbounded]
from game_of_life_utils import *
import time
import matplotlib.pyplot as plt
from IPython.display import clear_output, display
from PIL import Image as Im
Ni,Nj = 60,40
X = np.zeros((Ni,Nj),dtype=np.bool)
set_ic(X,unbounded,offset = (25,28))
plt.imshow(X,interpolation='nearest',cmap='gray')
%%time
Xtime=[]
for s in range(225):
X = life_step_2(X)
Xtime.append(X.copy())
n = max(Ni,Nj)
if n<100:
scale = 500//n
else:
scale = 1
for i,X_ in enumerate(Xtime[::]):
clear_output(wait=True)
time.sleep(0.05)
display(Im.fromarray(240*X_.astype(np.uint8)).resize((scale*Nj,scale*Ni)))
print(s)
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
print(c.ids)
view = c[:]
view.activate()
import os
notebook_wd = os.getcwd()
import os
print(view.apply_sync(os.getcwd))
view.map(os.chdir, [notebook_wd]*len(c.ids))
print(view.apply_sync(os.getcwd))
%%px --block
from mpi4py import MPI
import numpy as np
from game_of_life_utils import *
#number of procesess:
comm = MPI.COMM_WORLD
No_processes = comm.Get_size()
rank = comm.Get_rank()
#constants
Ni,Nj = No_processes*10,40
Niter = 225
#area:
if rank==0:
X = np.zeros((Ni,Nj),dtype=np.bool)
set_ic(X,unbounded,offset = (25,28))
else:
X = None
# subdomains:
buf = np.empty((Ni//No_processes,Nj),dtype=np.bool)
X_local = np.empty((Ni//No_processes+2,Nj),dtype=np.bool)
comm.Scatter(X, buf )
X_local[1:-1,:] = buf
U_local_time = []
Xtime = []
L_bulk = (1,slice(None))
L_ghost = (0,slice(None))
R_bulk = (-2, slice(None))
R_ghost = (-1, slice(None))
for i in range(Niter):
if rank<(No_processes-1):
comm.Send(X_local[R_bulk], dest=rank+1)
comm.Recv(X_local[R_ghost],source=rank+1)
if rank>0:
comm.Recv(X_local[L_ghost],source=rank-1)
comm.Send(X_local[L_bulk], dest=rank-1)
X_local = life_step_2(X_local)
comm.Gather(X_local[1:-1],X )
if rank==0:
Xtime.append( X.copy() )
U_local_time.append( X_local[1:-1].copy() )
%%px --block
from mpi4py import MPI
import numpy as np
from game_of_life_utils import *
#number of procesess:
comm = MPI.COMM_WORLD
No_processes = comm.Get_size()
rank = comm.Get_rank()
#constants
Ni,Nj = No_processes*10,40
Niter = 225
#area:
if rank==0:
X = np.zeros((Ni,Nj),dtype=np.bool)
set_ic(X,unbounded,offset = (25,28))
else:
X = None
# subdomains:
buf = np.empty((Ni//No_processes,Nj),dtype=np.bool)
X_local = np.empty((Ni//No_processes+2,Nj),dtype=np.bool)
comm.Scatter(X, buf )
X_local[1:-1,:] = buf
U_local_time = []
Xtime = []
L_bulk = (1,slice(None))
L_ghost = (0,slice(None))
R_bulk = (-2, slice(None))
R_ghost = (-1, slice(None))
for i in range(Niter):
if rank%2 == 0:
comm.Send(X_local[R_bulk], dest=(rank+1)%No_processes)
comm.Recv(X_local[R_ghost],source=(rank+1)%No_processes)
comm.Recv(X_local[L_ghost],source=(rank-1)%No_processes)
comm.Send(X_local[L_bulk], dest=(rank-1)%No_processes)
if rank%2 == 1:
comm.Recv(X_local[L_ghost],source=(rank-1)%No_processes)
comm.Send(X_local[L_bulk], dest=(rank-1)%No_processes)
comm.Send(X_local[R_bulk], dest=(rank+1)%No_processes)
comm.Recv(X_local[R_ghost],source=(rank+1)%No_processes)
X_local = life_step_2(X_local)
comm.Gather(X_local[1:-1],X )
if rank==0:
Xtime.append( X.copy() )
U_local_time.append( X_local[1:-1].copy() )
%%px --block
from mpi4py import MPI
import numpy as np
from game_of_life_utils import *
#number of procesess:
comm = MPI.COMM_WORLD
No_processes = comm.Get_size()
rank = comm.Get_rank()
#constants
Ni,Nj = No_processes*10,40
Niter = 225
#area:
if rank==0:
X = np.zeros((Ni,Nj),dtype=np.bool)
set_ic(X,unbounded,offset = (25,28))
else:
X = None
# subdomains:
buf = np.empty((Ni//No_processes,Nj),dtype=np.bool)
X_local = np.empty((Ni//No_processes+2,Nj),dtype=np.bool)
comm.Scatter(X, buf )
X_local[1:-1,:] = buf
U_local_time = []
Xtime = []
L_bulk = (1,slice(None))
L_ghost = (0,slice(None))
R_bulk = (-2, slice(None))
R_ghost = (-1, slice(None))
for i in range(Niter):
if rank%2 == 0:
comm.Sendrecv(X_local[R_bulk], dest=(rank+1)%No_processes,sendtag=0,\
recvbuf=X_local[R_ghost],source=(rank+1)%No_processes)
comm.Sendrecv(X_local[L_bulk], dest=(rank-1)%No_processes,sendtag=0, \
recvbuf=X_local[L_ghost],source=(rank-1)%No_processes)
if rank%2 == 1:
comm.Sendrecv(X_local[L_bulk], dest=(rank-1)%No_processes,sendtag=0,\
recvbuf=X_local[L_ghost],source=(rank-1)%No_processes)
comm.Sendrecv(X_local[R_bulk], dest=(rank+1)%No_processes,sendtag=0,\
recvbuf=X_local[R_ghost],source=(rank+1)%No_processes)
X_local = life_step_2(X_local)
comm.Gather(X_local[1:-1],X )
if rank==0:
Xtime.append( X.copy() )
U_local_time.append( X_local[1:-1].copy() )
np.argsort(view['rank'])[0]
len( view['Xtime'][ np.argsort(view['rank'])[0] ] )
Xtime_parallel = view['Xtime'][ np.argsort(view['rank'])[0] ]
Ni, Nj = Xtime_parallel[0].shape
n = max(Ni,Nj)
if n<100:
scale = 500//n
else:
scale = 1
for i,X_ in enumerate(Xtime_parallel[::]):
clear_output(wait=True)
time.sleep(0.02)
display(Im.fromarray(240*X_.astype(np.uint8)).resize((scale*Nj,scale*Ni)))
print(i)
sum([np.all(X_ == Xp_) for X_,Xp_ in zip(Xtime,Xtime_parallel)]), len(Xtime)
for X_,Xp_ in zip(Xtime,Xtime_parallel):
print (np.all(X_ == Xp_))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Game of life - serial version
Step3: Initial conditions
Step4: Different example
Step5: Visualization
Step6: Parallel game of life
Step7: setting proper working directory
Step8: first version
Step9: version with glocal periodic boundaries
Step10: 3rd version - using Sendrecv
Step11: Comparison of parallel and single process versions
Step12: Xtime_parallel will be a copy (in this notebook) of the table of time snapshots of the global domain.
Step13: validation
|
12,675
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
rand_1kx = np.random.randint(0,100,1000)
x_mean = np.mean(rand_1kx)
x_sd = np.std(rand_1kx)
x_mean
pop_intercept = 30
pop_slope = 1.8
error_boost = 10
pop_error = np.random.standard_normal(size = rand_1kx.size) * error_boost
# I added an error booster since without it, the correlation was too high.
y = pop_intercept + pop_slope*rand_1kx + pop_error
y_mean = np.mean(y)
y_sd = np.std(y)
y_mean
sns.jointplot(rand_1kx, y)
sns.distplot(pop_error)
from sklearn.linear_model import LinearRegression
X_train_full = rand_1kx.reshape(-1,1)
y_train_full = y.reshape(-1,1)
y_train_full.shape
lm = LinearRegression()
lm.fit(X_train_full, y_train_full)  # fit on the full dataset (the "population")
#print the linear model built
predicted_pop_slope = lm.coef_[0][0]
predicted_pop_intercept = lm.intercept_[0]
print("y = " + str(predicted_pop_slope) + "*X" + " + " + str(predicted_pop_intercept))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(rand_1kx, y, test_size=0.33)
print(X_train.size)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X_train = X_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
y_train.shape
lm.fit(X_train, y_train)
#print the linear model built
predicted_subset_slope = lm.coef_[0][0]
predicted_subset_intercept = lm.intercept_[0]
print("y = " + str(predicted_subset_slope) + "*X"
+ " + " + str(predicted_subset_intercept))
y_predicted = lm.predict(X_test)
residuals = y_test - y_predicted
jax = sns.jointplot(y_test, y_predicted)
jax.set_axis_labels(xlabel='Y', ylabel='Predicted Y')
dax = sns.distplot(residuals)
dax.set_title('Distribution of residuals')
jax = sns.jointplot(y_predicted, residuals)
jax.set_axis_labels(xlabel='Predicted Y', ylabel='Residuals')
jax = sns.jointplot(y_test, residuals)
jax.set_axis_labels(xlabel='Y', ylabel='Residuals')
pop_df = pd.DataFrame(data={'x':rand_1kx, 'y':y})
pop_df.head()
pop_df.shape
sample_slopes = []
sample_intercepts = []
for i in range(0,50):
# perform a choice on dataframe index
sample_index = np.random.choice(pop_df.index, size=50)
# select the subset using that index
sample_df = pop_df.iloc[sample_index]
# convert to numpy and reshape the matrix for lm.fit
sample_x = np.array(sample_df['x']).reshape(-1,1)
sample_y = np.array(sample_df['y']).reshape(-1,1)
lm.fit(X=sample_x, y=sample_y)
sample_slopes.append(lm.coef_[0][0])
sample_intercepts.append(lm.intercept_[0])
mean_sample_slope = np.mean(sample_slopes)
mean_sample_intercept = np.mean(sample_intercepts)
fig, ax = plt.subplots(1,2, figsize=(15,6))
# plot sample slopes
sns.distplot(sample_slopes, ax=ax[0])
ax[0].set_title('Distribution of sample slopes. Mean: '
+ str(round(mean_sample_slope, 2)))
ax[0].axvline(mean_sample_slope, color='black')
# plot sample slopes
sns.distplot(sample_intercepts, ax=ax[1])
ax[1].set_title('Distribution of sample intercepts. Mean: '
+ str(round(mean_sample_intercept,2)))
ax[1].axvline(mean_sample_intercept, color='black')
print("Predicting using population")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_pop_intercept))
print("Error in slope: {}".format(pop_slope - predicted_pop_slope))
print("\n\nPredicting using subset")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_subset_intercept))
print("Error in slope: {}".format(pop_slope - predicted_subset_slope))
print("\n\nPredicting using a number of smaller samples")
print("------------------------------------------------")
print("Error in intercept: {}".format(pop_intercept - mean_sample_intercept))
print("Error in slope: {}".format(pop_slope - mean_sample_slope))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Synthesize the dataset
Step2: Make a scatter plot of X and y variables.
Step3: X and y follow a uniform distribution, but the error $\epsilon$ is generated from a standard normal distribution with a boosting factor. Let us plot its histogram to verify the distribution
Step4: Predict using population
Step5: Prediction with 67% of the data (a 33% test split)
Step6: Perform predictions and plot the charts
Step7: Fitted vs Actual scatter
Step8: Predict using multiple samples
Step9: Select 50 samples of size 50 and perform regression
Step10: Plot the distribution of sample slopes and intercepts
Step11: Conclusion
|
12,676
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('data/human_body_temperature.csv')
# Your work here.
# Load Matplotlib + Seaborn and SciPy libraries
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import stats
from scipy.stats import norm
from statsmodels.stats.weightstats import ztest
%matplotlib inline
df.head(5)
ax = sns.distplot(df[['temperature']], rug=True, axlabel='Temperature (o F)')
print("Yes. We have *" + str(df['temperature'].size) + "* records in the sample data file.")
print("There is no connection or dependence between the measured temperature values, in other words, the observations are independent.")
# Sample (dataset) size
df['temperature'].describe()
# Population mean temperature
POP_MEAN = 98.6
# Sample size, mean and standard deviation
sample_size = df['temperature'].count()
sample_mean = df['temperature'].mean()
sample_std = df['temperature'].std(axis=0)
print("Population mean temperature (given): POP_MEAN = " + str(POP_MEAN))
print("Sample size: sample_size = " + str(sample_size))
print("Sample mean: sample_mean = "+ str(sample_mean))
print("Sample standard deviation: sample_std = "+ str(sample_std))
print("* Ho or Null hypothesis: Average body temperature *is* " + str(POP_MEAN)+" degrees F.")
print("* Ha or Alternative hypothesis: Average body temperature *is not* " + str(POP_MEAN)+" degrees F.")
t = ((sample_mean - POP_MEAN)/sample_std)*np.sqrt(sample_size)
print("t = " + str(t))
degree = sample_size - 1
print("degrees of freedom =" + str(degree))
p = 1 - stats.t.cdf(abs(t),df=degree)
print("p-value = %.10f" % p)
p2 = 2*p
print("p-value = %.10f (2 * p-value)" % p2)
ALFA = 0.05
print(". alfa = " + str(ALFA))
print(". p-value = %.10f" % p2)
print("----")
print(". Sample mean: sample_mean = "+ str(sample_mean))
print(". Population mean temperature (given): POP_MEAN = " + str(POP_MEAN))
print(". Population standard deviation: sample_std = "+ str(sample_std))
print(". Sample size: sample_size = " + str(sample_size))
print("----")
z = ((sample_mean - POP_MEAN)/sample_std)*np.sqrt(sample_size)
print("Z value or z_score: z = " + str(z))
# Two-sided p-value: twice the upper-tail survival probability of |z|
p_value_z = norm.sf(abs(z)) * 2
print("P-Value = %.15f" % p_value_z)
ALFA = 0.05
print(". alfa = " + str(ALFA))
print(". p-value = %.15f" % p_value_z)
# A sample with randomly 10 records from original dataset
df_sample10 = df.sample(n=10)
df_sample10['temperature'].count()
ax = sns.distplot(df_sample10[['temperature']], rug=True, axlabel='Temperature (o F)')
sample10_size = df_sample10['temperature'].count()
sample10_mean = df_sample10['temperature'].mean()
sample10_std = df_sample10['temperature'].std(axis=0)
print("Population mean temperature (given): POP_MEAN = " + str(POP_MEAN))
print("Sample-10 size: sample_size = " + str(sample10_size))
print("Sample-10 mean: sample_mean = "+ str(sample10_mean))
print("Sample-10 standard deviation: sample_std = "+ str(sample10_std))
t = ((sample10_mean - POP_MEAN)/sample10_std)*np.sqrt(sample10_size)
print("t = " + str(t))
degree = sample10_size - 1
print("degrees of freedom =" + str(degree))
p_value = 1 - stats.t.cdf(abs(t),df=degree)
# p-value considering two-tails
p_value = 2*p_value
print("p-value =" + str(p_value))
ALFA = 0.05
print(". alfa = " + str(ALFA))
print(". p-value = %.15f" % p_value)
z = ((sample10_mean - POP_MEAN)/sample10_std)*np.sqrt(sample10_size)
print("Z value or z_score: z = " + str(z))
# Two-sided p-value: twice the upper-tail survival probability of |z|
p_value_z = norm.sf(abs(z)) * 2
print("P-Value = %.15f" % p_value_z)
ALFA = 0.05
print(". alfa = " + str(ALFA))
print(". p-value = %.15f" % p_value_z)
# Sample (dataset) size
df['temperature'].describe()
mean = df['temperature'].mean()
std = df['temperature'].std(axis=0)
print("One standard deviation (std) is %.3f degrees F." % std)
print("Three standard deviations (3*std) is %.3f degrees F." % (3*std))
lim_low = mean - (3*std)
lim_high = mean + (3*std)
print("A body temperature outside the range covering 99.7% of the population is: greater than " + str(lim_high) + " or less than " + str(lim_low) + " degrees F.")
# Female temperature (mean and standard deviation)
df_female = df.loc[df['gender'] == 'F']
ax = sns.distplot(df_female[['temperature']])
print("Female temperature: mean = %f | std = %f" % (df_female['temperature'].mean(), df_female['temperature'].std()))
# Male temperature (mean and standard deviation)
df_male = df.loc[df['gender'] == 'M']
ax = sns.distplot(df_male[['temperature']])
print("Male temperature: mean = %f | std = %f" % (df_male['temperature'].mean(), df_male['temperature'].std()))
# Plotting histogram based on gender (Female/Male)
grid = sns.FacetGrid(df, col="gender")
grid.map(plt.hist, "temperature", color="y")
# Plotting Female/Male temperatures using Seaborn Pairplot
sns.pairplot(df, hue='gender', size=2.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Questions and Answers
Step2: 2. Is the sample size large? Are the observations independent?
Step3: What we know about population and what we get from sample dataset<br>
Step4: 3. Is the true population mean really 98.6 degrees F?
Step5: t-test formula
Step6: degrees of freedom
Step7: p-value
Step8: 2 * p-value is the new p-value
Step9: We assume that
Step10: ----
Step11: Z test
Step12: p-value
Step13: We (also) assume that
Step14: ---------------------------------------------------------------------------------------------------------------------------------
Step15: The histogram
Step16: Sample size, mean and standard deviation
Step17: t-test formula
Step18: degrees of freedom
Step19: p-value
Step20: We (also) assume that
Step21: ----
Step22: We (also) assume that
Step23: ---------------------------------------------------------------------------------------------------------------------------------
Step24: So, a "abnormal" body temperature is between -3std and +3std
Step25: 6. Is there a significant difference between males and females in normal temperature?
|
12,677
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')
print(y)
x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')
model = tf.initialize_all_variables()
with tf.Session() as session:
session.run(model)
print(session.run(y))
import tensorflow as tf
x = tf.constant([35, 40, 45], name='x')
y = tf.Variable(x + 5, name='y')
model = tf.initialize_all_variables()
with tf.Session() as session:
session.run(model)
print(session.run(y))
import numpy as np
x=np.random.rand(10)
y=tf.Variable(5*x**2,name='y')
model = tf.initialize_all_variables()
with tf.Session() as session:
session.run(model)
print(session.run(y))
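# A hypothetical extra example (using the same TF 0.x-era API as the rest of
# this notebook): a placeholder lets us feed values in at run time instead of
# baking them into the graph as constants
a = tf.placeholder("int32")
b = a + 5
with tf.Session() as session:
    print(session.run(b, feed_dict={a: [35, 40, 45]}))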
import tensorflow as tf
x = tf.constant(35, name='x')
print(x)
y = tf.Variable(x + 5, name='y')
with tf.Session() as session:
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("", session.graph)
model = tf.initialize_all_variables()
session.run(model)
print(session.run(y))
import matplotlib.image as mpimg
# First, load the image
filename = "MarshOrchid.jpg"
image = mpimg.imread(filename)
# Print out its shape
print(image.shape)
import matplotlib.pyplot as plt
plt.imshow(image)
plt.show()
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# First, load the image again
filename = "MarshOrchid.jpg"
image = mpimg.imread(filename)
# Create a TensorFlow Variable
x = tf.Variable(image, name='x')
model = tf.initialize_all_variables()
with tf.Session() as session:
x = tf.transpose(x, perm=[1, 0, 2])
session.run(model)
result = session.run(x)
plt.imshow(result)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: 2.
Step3: tensorboard
|
12,678
|
<ASSISTANT_TASK:>
Python Code:
import wget
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/wine/winequality-red.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=";")
# Using a lambda function to bin quality scores
dataset['quality_is_high'] = dataset.quality.apply(lambda x: 1 if x >= 6 else 0)
# Convert the dataframe to a numpy array and split the
# data into an input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-2].astype(float)
y = npArray[:,-1]
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.linear_model import LogisticRegression
# Build rf model
best_n_estimators, best_max_features = 73, 5
rf = RandomForestClassifier(n_estimators=best_n_estimators, max_features=best_max_features)
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
# Build SVM model
best_C_svm, best_gamma = 1.07, 0.01
rbf_svm = svm.SVC(kernel='rbf', C=best_C_svm, gamma=best_gamma)
rbf_svm.fit(XTrain, yTrain)
svm_predictions = rbf_svm.predict(XTest)
# Build LR model
best_penalty, best_C_lr = "l2", 0.52
lr = LogisticRegression(penalty=best_penalty, C=best_C_lr)
lr.fit(XTrain, yTrain)
lr_predictions = lr.predict(XTest)
# Evaluate the tuned SVM on the test set
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(yTest, svm_predictions))
print("Overall Accuracy:", round(accuracy_score(yTest, svm_predictions), 4))
print(best_C_svm, best_gamma)
import collections
# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([rf_predictions, svm_predictions, lr_predictions])).T
predictions.columns = ['RF', 'SVM', 'LR']
# initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)
# majority vote and output final predictions
for test_point in range(predictions.shape[0]):
row = predictions.iloc[test_point,:]
counts = collections.Counter(row)
majority_vote = counts.most_common(1)[0][0]
# output votes
ensembled_predictions[test_point] = majority_vote.astype(int)
#print "The majority vote for test point", test_point, "is: ", majority_vote
print(ensembled_predictions)
# Get final accuracy of ensembled model
from sklearn.metrics import classification_report, accuracy_score
for individual_predictions in [rf_predictions, svm_predictions, lr_predictions]:
    print("Accuracy:", round(accuracy_score(yTest.astype(int), individual_predictions.astype(int)), 2))
print(classification_report(yTest.astype(int), ensembled_predictions.astype(int)))
print("Ensemble Accuracy:", round(accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)), 2))
from sklearn.ensemble import VotingClassifier

# Build and fit a majority vote classifier from the three tuned models above
ensemble_1 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm), ('lr', lr)], voting='hard')
ensemble_1.fit(XTrain, yTrain)
simple_ensemble_predictions = ensemble_1.predict(XTest)
print(classification_report(yTest, simple_ensemble_predictions))
print("Ensemble_1 Overall Accuracy:", round(accuracy_score(yTest, simple_ensemble_predictions), 2))

# Weighted majority vote: per-estimator weights scale each vote; equal weights
# reproduce the plain majority vote above
ensemble_2 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm), ('lr', lr)], weights=[1, 1, 1], voting='hard')
ensemble_2.fit(XTrain, yTrain)
simple_ensemble_predictions = ensemble_2.predict(XTest)
print(classification_report(yTest, simple_ensemble_predictions))
print("Ensemble_2 Overall Accuracy:", round(accuracy_score(yTest, simple_ensemble_predictions), 2))
# Soft voting averages predicted class probabilities, so the SVC must be
# refit with probability=True to expose predict_proba
rbf_svm_p = svm.SVC(kernel='rbf', C=best_C_svm, gamma=best_gamma, probability=True)
ensemble_3 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm_p), ('lr', lr)], weights=[1, 1, 1], voting='soft')
ensemble_3.fit(XTrain, yTrain)
simple_ensemble_predictions = ensemble_3.predict(XTest)
print(classification_report(yTest, simple_ensemble_predictions))
print("Ensemble_3 Overall Accuracy:", round(accuracy_score(yTest, simple_ensemble_predictions), 2))
## Model stacking
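# A minimal stacking sketch (added; not in the original notebook): the base
# learners' predictions become input features for a second-level meta-model.
# A proper implementation would use out-of-fold predictions to avoid leakage;
# this simplified version reuses the models fitted above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
stack_train = np.column_stack([m.predict(XTrain) for m in (rf, rbf_svm, lr)])
stack_test = np.column_stack([m.predict(XTest) for m in (rf, rbf_svm, lr)])
meta = LogisticRegression()
meta.fit(stack_train, yTrain)
print("Stacked model accuracy:", round(accuracy_score(yTest, meta.predict(stack_test)), 2))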
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build models
Step2: 4) Majority vote on classifications
Step3: And we could assess the performance of the majority voted predictions like so
Step4: Luckily, we do not have to do all of this manually, but can use scikit's VotingClassifier class
Step5: We can also do a weighted majority vote, where the different base learners are associated with a weight (often reflecting the accuracies of the models, i.e. more accurate models should have a higher weight). These weight the occurence of predicted class labels, which allows certain algorithms to have more of a say in the majority voting.
Step6: You may have noticed the voting='hard' argument we passed to the VotingClassifier. Setting voting='soft' would predict the class labels based on how certain each algorithm in the ensemble was about their individual predictions. This involves calculating the predicted probabilities p for the classifier. Note that scikit only recommends this approach if the classifiers are already tuned well, which should be the case here.
|
12,679
|
<ASSISTANT_TASK:>
Python Code:
# Imports used throughout this notebook (dropped from the original extract)
import math
import datetime


class PlanetaryObject():
    """A simple class used to store pertinent information about a planetary object."""
def __init__(self, date, L, e, SMA, i, peri, asc, r, v, anom, fp, mu):
self.date = date # Event Date
self.L = L # Longitude
self.e = e # Eccentricity
self.SMA = SMA # SMA
self.i = i # Inclination
self.peri = peri # Longitude of Perihelion
self.asc = asc # Longitude of Ascending Node
self.r = r # Radius
self.v = v # Velocity
self.anom = anom # True Anomaly
self.fp = fp # Flight Path Angle
self.mu = mu # Gravitation parameter
earth = PlanetaryObject(
datetime.date(2021, 3, 22),
181.44, # Longitude
0.0167, # Eccentricity
149598020, # SMA
0, # Inclination
102.958, # Longitude of Perihelion
0, # Longitude of Ascending Node
    149059097, # Radius
29.89, # Velocity
78.48, # True Anomaly
0.9348, # Flight Path Angle
398600.4 # Gravitation parameter
)
mars = PlanetaryObject(
datetime.date(2021, 10, 8),
333.22, # Longitude
0.0934, # Eccentricity
227939133, # SMA
1.849, # Inclination
336.093, # Longitude of Perihelion
49.572, # Longitude of Ascending Node
206671197, # Radius
26.94, # Velocity
357.128, # True Anomaly
-0.2452, # Flight Path Angle
42828.3 # Gravitation parameter
)
mu_sun = 132712439935.5
def eccentricity(r_1, r_2, theta_1, theta_2):
    r"""Calculates the eccentricity of the transfer ellipse. This is calculated through
    the following equation:

    .. math::
        \frac {r_2 - r_1} {r_1 \cos{\theta_1} - r_2 \cos{\theta_2}}

    :param r_1: radius of the departing planetary object
    :param r_2: radius of the arriving planetary object
    :param theta_1: true anomaly of the departing planetary object in degrees
    :param theta_2: true anomaly of the arriving planetary object in degrees
    """
return (r_2 - r_1) / ((r_1 * math.cos(math.radians(theta_1))) - (r_2 * math.cos(math.radians(theta_2))))
def periapsis_radius(r, e, theta):
    r"""Calculates the periapsis radius of the transfer ellipse. This is calculated
    using the following equation:

    .. math::
        \frac {r (1 + e \cos{\theta})} {1 + e}

    :param r: radius of the departing planetary object
    :param e: eccentricity of the transfer ellipse
    :param theta: true anomaly of the departing planetary object in degrees
    """
return (r * (1 + e * math.cos(math.radians(theta)))) / (1 + e)
def semimajor_axis(r=None, r_a=None, r_p=None, mu=None, V=None, e=None):
    r"""Calculates the semi-major axis of the transfer ellipse. This is calculated
    using one of the following equations:

    .. math::
        \frac {r_a + r_p} {2}

        \frac {\mu r} {2 \mu - V^2 r}

        \frac {r_p} {1 - e}

        \frac {r_a} {1 + e}

    :param r: general radius of the elliptical orbit
    :param r_a: radius of apoapsis
    :param r_p: radius of periapsis
    :param mu: gravitational parameter
    :param V: velocity of the orbiting object
    :param e: eccentricity of the elliptical orbit
    """
if r_a != None and r_p != None:
return (r_a + r_p) / 2
if mu != None and r !=None and V != None:
return (mu * r) / (2 * mu - V ** 2 * r)
if r_p != None and e != None:
return r_p / (1 - e)
if r_a != None and e != None:
return r_a / (1 + e)
# If we reach this point, then the passed in arguments doesn't match
# any equations we have defined. Raise an Error
raise TypeError("Invalid arguments!")
def time_since_periapsis(e, n, theta=None, E=None):
    r"""Calculates the time since periapsis. This is calculated using the
    following equation:

    .. math::
        \frac {E - e \sin{E}} {n}

    If E isn't defined, it is calculated from theta using:

    .. math::
        \cos {E} = \frac {e + \cos{\theta}} {1 + e \cos{\theta}}

    :param e: eccentricity of the transfer ellipse
    :param n: mean motion
    :param theta: true anomaly in degrees
    :param E: eccentric anomaly in radians
    """
if theta == None and E == None:
raise TypeError("theta or E MUST be defined")
if theta != None and E != None:
raise TypeError("theta OR E must be defined. Not both")
if E == None:
cos_E = (e + math.cos(math.radians(theta))) / (1 + e * math.cos(math.radians(theta)))
E = math.acos(cos_E)
return (E - e * math.sin(E)) / n
def mean_motion(mu, a):
    r"""Calculates the mean motion of an elliptical orbit. This is calculated
    using the following equation:

    .. math::
        \sqrt{\frac{\mu} {a^3}}

    :param mu: gravitational parameter (mass * gravitational constant)
    :param a: semimajor axis
    """
return math.sqrt(mu / a ** 3)
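# Quick sanity check (added): with mu_sun in km^3/s^2 and the SMA in km, the
# mean motion of Earth's orbit should imply a period of roughly 365 days
n_earth = mean_motion(mu_sun, earth.SMA)
print("Earth orbital period ~ {:.1f} days".format(2 * math.pi / n_earth / 86400))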
def velocity(mu, r, a):
    r"""Calculates the velocity (V) of an object on the elliptical orbit
    (the vis-viva equation):

    .. math::
        \sqrt{\frac{2 \mu} {r} - \frac{\mu} {a}}

    :param mu: gravitational parameter (mass * gravitational constant)
    :param r: current orbital radius
    :param a: semimajor axis
    """
return math.sqrt(2 * mu / r - mu / a)
def flight_path_angle(e, theta):
    r"""Calculates the flight path angle (gamma). This is calculated using
    the following equation:

    .. math::
        \tan{\gamma} = \frac{e \sin{\theta}}{1 + e \cos{\theta}}

    :param e: eccentricity of the elliptical orbit
    :param theta: true anomaly in degrees
    """
tan_y = (e * math.sin(math.radians(theta))) / (1 + e * math.cos(math.radians(theta)))
return math.atan(tan_y)
def inclination(Omega, L_s, L_t, i):
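    # Spherical-trigonometry solution (added comment) for the transfer plane's
    # inclination, given the target's node longitude Omega, the departure and
    # arrival longitudes L_s and L_t, and the target inclination i (in degrees)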
a = math.radians(Omega + 180 - L_s)
b = math.radians(L_t - (180 + Omega))
alpha = math.radians(180 - i)
cos_c = math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * math.cos(alpha)
c = math.acos(cos_c)
sin_i_t = (math.sin(alpha) * math.sin(b)) / math.sin(c)
return math.asin(sin_i_t)
def transfer_ellipse(start_planet, end_planet, return_trials=False):
time_of_flight = end_planet.date - start_planet.date
time_of_flight = time_of_flight.days
longs = []
tofs = []
line_of_apisides = 180 # trial start
tof = 9999999999 # large number to get us started
while tof / 3600 / 24 > time_of_flight:
true_anom = line_of_apisides + (end_planet.L - start_planet.L)
longs.append((line_of_apisides, true_anom))
e = eccentricity(start_planet.r, end_planet.r, line_of_apisides, true_anom)
r_p = periapsis_radius(start_planet.r, e, line_of_apisides)
a = semimajor_axis(r_p=r_p, e=e)
n = mean_motion(mu_sun, a)
peri_to_start = time_since_periapsis(e, n, theta=line_of_apisides)
end_to_peri = time_since_periapsis(e, n, theta=true_anom)
tof = peri_to_start - end_to_peri
tofs.append(tof / 3600 / 24)
line_of_apisides += 1
# Calculate the Relative Velocities
V_start = velocity(mu_sun, start_planet.r, a)
V_end = velocity(mu_sun, end_planet.r, a)
y_start = flight_path_angle(e, line_of_apisides)
y_end = flight_path_angle(e, true_anom)
r_dict = {
'line_of_apisides': line_of_apisides - 1, # subtract the 1 we added during the loop
'true_anom': true_anom,
'eccentricity': e,
'SMA': a,
'time_of_flight': tof,
'V_start': V_start,
'V_end': V_end,
'y_start': math.degrees(y_start),
'y_end': math.degrees(y_end)
}
if return_trials:
r_dict.update({'runs':{'longs': longs, 'tofs':tofs}})
return r_dict
tf = transfer_ellipse(earth, mars, return_trials=True)
tf
i_t = inclination(mars.asc, earth.L, mars.L, mars.i)
print("i_t = {:.2f}°".format(math.degrees(i_t)))
# V_ne and y_ne are not defined in the original extract; here they are taken as
# the transfer orbit's heliocentric velocity and flight path angle at Earth
# departure (an assumption based on the surrounding computation)
V_ne = tf['V_start']
y_ne = math.radians(tf['y_start'])
cos_alpha_2 = math.cos(i_t) * math.cos(math.radians(earth.fp) + abs(y_ne))
alpha_2 = math.acos(cos_alpha_2)
C3 = earth.v ** 2 + V_ne ** 2 - 2 * earth.v * V_ne * math.cos(alpha_2)
V_he = math.sqrt(C3)
print("C3 = {:.2f} km^2/s^2; V_he = {:.2f} km/s".format(C3, V_he))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SIE 552 HW #3
Step9: There are also a few fundamental equations we need to know. These are captured below as python functions.
Step10: We'll split this problem up into 3 different sections
Step11: The Departure Trajectory
Step12: Using $i_t$, we can now determine the $V_{HE}$ and $C3$ of the departing trajectory
|
12,680
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
import pandas as pd
b = 10 #aquifer thickness in m
Q = 172.8 #constant discharge rate in m^3/d
rw = 0.1 #well radius in m
rc = 0.1 #casing radius in m
r1 = 3.16
r2 = 31.6
data0 = np.loadtxt('data/moench_pumped.txt', skiprows=1)
t0 = data0[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h0 = -data0[:, 1]
data1 = np.loadtxt('data/moench_ps1.txt', skiprows=1)
t1 = data1[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h1 = -data1[:, 1]
data2 = np.loadtxt('data/moench_pd1.txt', skiprows=1)
t2 = data2[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h2 = -data2[:, 1]
data3 = np.loadtxt('data/moench_ps2.txt', skiprows=1)
t3 = data3[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h3 = -data3[:, 1]
data4 = np.loadtxt('data/moench_pd2.txt', skiprows=1)
t4 = data4[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h4 = -data4[:, 1]
#Set kaq, Saq, Sy and kzoverkh as given in Moench (1997)
kaq = 1e-4 * 60 * 60 * 24 #convert from m/s to m/d
Sy = 0.2
Saq = 2e-5
zh = 0.5 #kzoverkh
ml1 = Model3D(kaq=kaq, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[Sy, Saq, Saq, Saq], \
kzoverkh=zh, tmin=1e-5, tmax=3)
w1 = Well(ml1, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml1.solve()
hm1 = ml1.head(r1, 0, t1, layers=1)[0]
hm2 = ml1.head(r1, 0, t2, layers=3)[0]
hm3 = ml1.head(r2, 0, t3, layers=1)[0]
hm4 = ml1.head(r2, 0, t4, layers=3)[0]
hm0 = ml1.head(0, 0, t0, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.loglog(t0, -h0, '.', label='pumped well')
plt.loglog(t0, -hm0, label='ttim pumped well')
plt.loglog(t1, -h1, '.', label='PS1')
plt.loglog(t1, -hm1, label='ttim PS1')
plt.loglog(t2, -h2, '.', label='PD1')
plt.loglog(t2, -hm2, label='ttim PD1')
plt.loglog(t3, -h3, '.', label='PS2')
plt.loglog(t3, -hm3, label='ttim PS2')
plt.loglog(t4, -h4, '.', label='PD2')
plt.loglog(t4, -hm4, label='ttim PD2')
plt.legend();
# Root-mean-square error over all five observation series
sq_err = 0.0
n = 0
for h_obs, h_mod in [(h0, hm0), (h1, hm1), (h2, hm2), (h3, hm3), (h4, hm4)]:
    sq_err += np.sum((h_obs - h_mod) ** 2)
    n += len(h_obs)
rmse = np.sqrt(sq_err / n)
print('RMSE:', rmse)
ml2 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \
kzoverkh=1, tmin=1e-5, tmax=3)
w2 = Well(ml2, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml2.solve()
ca2 = Calibrate(ml2)
ca2.set_parameter(name='kaq0_3', initial=1)
ca2.set_parameter(name='Saq0', initial=0.2)
ca2.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)
ca2.set_parameter_by_reference(name='kzoverkh', parameter=ml2.aq.kzoverkh, \
initial=0.1, pmin=0)
ca2.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)
ca2.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)
ca2.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)
ca2.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)
ca2.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)
ca2.fit()
display(ca2.parameters)
print('RMSE:', ca2.rmse())
hm0_2 = ml2.head(0, 0, t0, layers=3)[0]
hm1_2 = ml2.head(r1, 0, t1, layers=1)[0]
hm2_2 = ml2.head(r1, 0, t2, layers=3)[0]
hm3_2 = ml2.head(r2, 0, t3, layers=1)[0]
hm4_2 = ml2.head(r2, 0, t4, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.semilogx(t0, h0, '.', label='pumped')
plt.semilogx(t0, hm0_2, label='ttim pumped')
plt.semilogx(t1, h1, '.', label='PS1')
plt.semilogx(t1, hm1_2, label='ttim PS1')
plt.semilogx(t2, h2, '.', label='PD1')
plt.semilogx(t2, hm2_2, label='ttim PD1')
plt.semilogx(t3, h3, ',', label='PS2')
plt.semilogx(t3, hm3_2, label='ttim PS2')
plt.semilogx(t4, h4, '.', label='PD2')
plt.semilogx(t4, hm4_2, label='ttim PD2')
plt.legend();
ml3 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \
kzoverkh=1, tmin=1e-5, tmax=3)
w3 = Well(ml3, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml3.solve()
ca3 = Calibrate(ml3)
ca3.set_parameter(name='kaq0', initial=1, pmin=0)
ca3.set_parameter(name='kaq1_3', initial=1)
ca3.set_parameter(name='Saq0', initial=0.2, pmin=0)
ca3.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)
ca3.set_parameter_by_reference(name='kzoverkh', parameter=ml3.aq.kzoverkh, \
initial=0.1, pmin=0)
ca3.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)
ca3.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)
ca3.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)
ca3.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)
ca3.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)
ca3.fit()
display(ca3.parameters)
print('RMSE:', ca3.rmse())
hm0_3 = ml3.head(0, 0, t0, layers=3)[0]
hm1_3 = ml3.head(r1, 0, t1, layers=1)[0]
hm2_3 = ml3.head(r1, 0, t2, layers=3)[0]
hm3_3 = ml3.head(r2, 0, t3, layers=1)[0]
hm4_3 = ml3.head(r2, 0, t4, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.semilogx(t0, h0, '.', label='pumped')
plt.semilogx(t0, hm0_3, label='ttim pumped')
plt.semilogx(t1, h1, '.', label='PS1')
plt.semilogx(t1, hm1_3, label='ttim PS1')
plt.semilogx(t2, h2, '.', label='PD1')
plt.semilogx(t2, hm2_3, label='ttim PD1')
plt.semilogx(t3, h3, ',', label='PS2')
plt.semilogx(t3, hm3_3, label='ttim PS2')
plt.semilogx(t4, h4, '.', label='PD2')
plt.semilogx(t4, hm4_3, label='ttim PD2');
ca3.parameters['optimal'].values
ta = pd.DataFrame(columns=['Moench', 'TTim', 'TTim-stratified'],\
index=['k0[m/d]', 'k[m/d]', 'Sy[-]', 'Ss[1/m]', 'kz/kh'])
ta.loc[:, 'TTim-stratified'] = ca3.parameters['optimal'].values
ta.loc[1:, 'TTim'] = ca2.parameters['optimal'].values
ta.loc[1:, 'Moench'] = [8.640, 0.2, 2e-5, 0.5]
ta.loc['RMSE'] = [0.061318, ca2.rmse(), ca3.rmse()]
ta
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set basic parameters
Step2: Load datasets of observation wells
Step3: Check how well TTim can simulate drawdowns in a vertically anisotropic water-table aquifer
Step4: Try calibrating model to find the parameters
Step5: Try calibrating model with stratified kaq
Step6: Summary of calibrated values
|
12,681
|
<ASSISTANT_TASK:>
Python Code:
# TODO: You must change the settings below
MYSQL = {
'user': 'root',
'passwd': '',
'db': 'coupon_purchase',
'host': '127.0.0.1',
'port': 3306,
'local_infile': True,
'charset': 'utf8',
}
DATA_DIR = '/home/nasuno/recruit_kaggle_datasets' # Do not use Japanese (multibyte) characters in directory names.
OUTPUTS_DIR = '/home/nasuno/recruit_kaggle/outputs' # Directory where predictions and other outputs are saved.
%matplotlib inline
import matplotlib.pyplot as plt
import MySQLdb
import numpy
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from sklearn.metrics import f1_score, accuracy_score
from sklearn.linear_model import LogisticRegression
from datetime import datetime, timedelta
from itertools import product
# Random Seed
rng = numpy.random.RandomState(1234)
dbcon = MySQLdb.connect(**MYSQL)
dbcur = dbcon.cursor()
validation_start = datetime.strptime('2012-06-17 00:00:00', '%Y-%m-%d %H:%M:%S')
validation_end = validation_start + timedelta(days=7)
dbcur.execute(''' DROP TABLE IF EXISTS coupon_visit_train_training;''') # dropped once here for the convenience of this tutorial
dbcur.execute(''' CREATE TABLE IF NOT EXISTS coupon_visit_train_training LIKE coupon_visit_train;''')
dbcur.execute('''
INSERT INTO coupon_visit_train_training
SELECT *
FROM coupon_visit_train
WHERE i_date >= "2011-07-01 00:00:00" AND i_date < %s
;
''', (validation_start, ))
dbcur.execute(''' DROP TABLE IF EXISTS coupon_visit_train_validation;''') # dropped once here for the convenience of this tutorial
dbcur.execute(''' CREATE TABLE IF NOT EXISTS coupon_visit_train_validation LIKE coupon_visit_train;''')
dbcur.execute('''
INSERT INTO coupon_visit_train_validation
SELECT *
FROM coupon_visit_train
WHERE i_date >= %s
;
''', (validation_start, ))
# Extract the coupons that could be purchased during the validation period
dbcur.execute('''
SELECT
coupon_id_hash
FROM coupon_list_train
WHERE
NOT (dispend <= %s OR dispfrom > %s)
;
''', (validation_start, validation_end))
coupon_ids = []
for row in dbcur.fetchall():
coupon_ids.append(row[0])
# Select the user_ids and randomly assign purchased coupons to each one.
dbcur.execute('''
SELECT
user_id_hash
FROM user_list
;
''')
user_pcoupon_pred = {}
for row in dbcur.fetchall():
user_pcoupon_pred[row[0]] =list(shuffle(coupon_ids, random_state=rng)[:10])
# Extract the list of coupons each user purchased during the validation period.
dbcur.execute('''
SELECT
user_id_hash, view_coupon_id_hash
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
;
''')
user_pcoupon_true = {}
for row in dbcur.fetchall():
if row[0] not in user_pcoupon_true:
user_pcoupon_true[row[0]] = []
user_pcoupon_true[row[0]].append(row[1])
# Define a function that computes AP@10.
def get_ap10(y_pred, y_true):
ap10 = 0.
y_true = set(y_true)
for i in range(len(y_pred)):
if y_pred[i] in y_true:
c = set(y_pred[:i + 1])
ap10 += len(y_true & c) / float(i + 1)
ap10 /= min(len(y_true), 10)
return ap10
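# Tiny worked example (added for illustration): with y_true = ['a', 'c'] and
# y_pred = ['a', 'b', 'c'], precision is 1/1 at rank 1 and 2/3 at rank 3,
# so ap@10 = (1 + 2/3) / 2 = 0.8333
print 'ap@10 sanity check: %.4f' % get_ap10(['a', 'b', 'c'], ['a', 'c'])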
map10 = 0.
n_purchased_user = 0.
for user_id in user_pcoupon_pred:
if user_id not in user_pcoupon_true:
        # if this user bought no coupons during the validation period,
        # their ap@10 is 0
continue
n_purchased_user += 1
y_true = user_pcoupon_true[user_id]
y_pred = user_pcoupon_pred[user_id]
map10 += get_ap10(y_pred, y_true)
max_map10 = n_purchased_user / len(user_pcoupon_pred)
map10 /= len(user_pcoupon_pred)
print 'max_map@10: %.5f, map@10: %.5f' % (max_map10, map10)
output = ['USER_ID_hash,PURCHASED_COUPONS']
for user_id in user_pcoupon_pred:
output.append(user_id + ',' + ' '.join(user_pcoupon_pred[user_id]))
output = '\n'.join(output)
with open(OUTPUTS_DIR + '/random_prediction_valid.csv', 'wb') as fid:
fid.write(output)
# Get the list of unique prefecture names
dbcur.execute(''' SELECT pref_name FROM prefecture_locations ORDER BY pref_name ; ''')
pref_data = []
for row in dbcur.fetchall():
pref_data.append(row[0])
# Build the user feature vectors (user features are shared across training, validation, and test).
# Note: pref_data holds plain strings, so we interpolate p itself (p[0] would take only the first character).
dbcur.execute('''
    SELECT
        t1.user_id_hash,
        IF(t1.sex_id = 'm', 1, 0),
        (t1.age-15)/65,
        ''' + ', '.join([u'IF(t1.pref_name = "' + p + u'", 1, 0)' for p in pref_data]) + '''
    FROM user_list AS t1
''')
user_feature = {}  # user feature vector per user_id
for row in dbcur.fetchall():
    user_feature[row[0]] = row[1:]
training_start = validation_start - timedelta(days=7) # compute the training period start datetime
# カテゴリリストの取得
dbcur.execute(''' SELECT DISTINCT(capsule_text) FROM coupon_list_train ORDER BY capsule_text;''')
capsule_data = []
for row in dbcur.fetchall():
capsule_data.append(row[0])
# ジャンルリストの取得
dbcur.execute(''' SELECT DISTINCT(genre_name) FROM coupon_list_train ORDER BY genre_name;''')
genre_data = []
for row in dbcur.fetchall():
genre_data.append(row[0])
# 大エリアリストの取得
dbcur.execute(''' SELECT DISTINCT(large_area_name) FROM coupon_list_train ORDER BY large_area_name;''')
larea_data = []
for row in dbcur.fetchall():
larea_data.append(row[0])
# 都道府県リストの取得
dbcur.execute(''' SELECT DISTINCT(ken_name) FROM coupon_list_train ORDER BY ken_name;''')
pref_data = []
for row in dbcur.fetchall():
pref_data.append(row[0])
# 小エリアリストの取得
dbcur.execute(''' SELECT DISTINCT(small_area_name) FROM coupon_list_train ORDER BY small_area_name;''')
sarea_data = []
for row in dbcur.fetchall():
sarea_data.append(row[0])
def get_item_feature(f_date, t_date):
    # Builds the coupon feature vectors.
    # @f_date: start datetime of the target period
    # @t_date: end datetime of the target period
    # Since this queries the training coupon table, it can only be used to build training and validation data.
dbcur.execute('''
SELECT
coupon_id_hash,
''' + ', '.join([u'IF(capsule_text = "' + p + u'", 1, 0)' for p in capsule_data]) + ''',
''' + ', '.join([u'IF(genre_name = "' + p + u'", 1, 0)' for p in genre_data]) + ''',
COALESCE(CAST(usable_date_mon AS SIGNED), 0),
COALESCE(CAST(usable_date_tue AS SIGNED), 0),
COALESCE(CAST(usable_date_wed AS SIGNED), 0),
COALESCE(CAST(usable_date_thu AS SIGNED), 0),
COALESCE(CAST(usable_date_fri AS SIGNED), 0),
COALESCE(CAST(usable_date_sat AS SIGNED), 0),
COALESCE(CAST(usable_date_sun AS SIGNED), 0),
COALESCE(CAST(usable_date_holiday AS SIGNED), 0),
COALESCE(CAST(usable_date_before_holiday AS SIGNED), 0),
''' + ', '.join([u'IF(large_area_name = "' + p + u'", 1, 0)' for p in larea_data]) + ''',
''' + ', '.join([u'IF(ken_name = "' + p + u'", 1, 0)' for p in pref_data]) + ''',
''' + ', '.join([u'IF(small_area_name = "' + p + u'", 1, 0)' for p in sarea_data]) + '''
FROM coupon_list_train
WHERE
NOT (dispend <= %s OR dispfrom > %s)
;
''', (f_date, t_date))
    item_feature = {}  # coupon feature vectors
for row in dbcur.fetchall():
item_feature[row[0]] = row[1:]
return item_feature
item_feature_train = get_item_feature(training_start, validation_start) # coupon features for the training period
item_feature_valid = get_item_feature(validation_start, validation_end) # coupon features for the validation period
print 'n_item_train: %d, n_item_valid: %d' % (len(item_feature_train), len(item_feature_valid))
def get_purchased_coupons(f_date, t_date):
    # Gets the coupons that were actually purchased.
    # @f_date: start datetime of the target period
    # @t_date: end datetime of the target period
dbcur.execute('''
SELECT user_id_hash, view_coupon_id_hash
FROM coupon_visit_train
WHERE i_date >= %s AND i_date < %s AND purchase_flg = 1
ORDER BY user_id_hash, view_coupon_id_hash
;
''', (f_date, t_date))
    purchased_items = {}  # dict mapping each user to the set of coupons they purchased
for row in dbcur.fetchall():
if row[0] not in purchased_items:
purchased_items[row[0]] = set([])
purchased_items[row[0]].add(row[1])
return purchased_items
user_pcoupon_train = get_purchased_coupons(training_start, validation_start) # coupons each user actually bought during the training period
user_pcoupon_valid = get_purchased_coupons(validation_start, validation_end) # coupons each user actually bought during the validation period
n_pairs_train = len(user_feature) * len(item_feature_train) # number of users x number of training coupons
n_pairs_valid = len(user_feature) * len(item_feature_valid) # number of users x number of validation coupons
# every key in user_pcoupon_train has at least one purchase, so its length is the purchaser count
print 'n_train_datasets: %d, n_validation_datasets: %d, n_puser: %d' % (n_pairs_train, n_pairs_valid, len(user_pcoupon_train))
# Restrict the training data to users who actually purchased a coupon during the training period, and emit every combination of those user IDs with the coupon IDs.
pairs_train = list(product([k for k in user_pcoupon_train if len(user_pcoupon_train[k]) > 0], item_feature_train.keys()))
print 'n_train_datasets: %d' %(len(pairs_train), )
features_train = []  # features used for training
labels_train = []  # labels used for training
for pair in pairs_train:  # for each (user, item) pair
    user_id, item_id = pair
    features_train.append(user_feature[user_id] + item_feature_train[item_id])  # simple concatenation
    if user_id in user_pcoupon_train and item_id in user_pcoupon_train[user_id]:
        # purchased
        labels_train.append(1)
    else:
        # not purchased
        labels_train.append(0)
model = LogisticRegression()  # instantiate the logistic regression model (hyperparameter tuning omitted)
model.fit(features_train, labels_train)  # learn from the features x and labels y
purchase_index = numpy.argmax(model.classes_)  # column index of the purchase label (=1)
item_index_to_item_id = sorted(item_feature_valid.keys())  # maps a coupon index back to its coupon ID
map10 = 0.
for user_id in user_feature:  # MAP@10 averages ap@10 over users
    if user_id not in user_pcoupon_valid:  # users with no purchases contribute ap@10 = 0, so skip them
        continue
    feature = []
    for item_id in item_index_to_item_id:
        feature.append(user_feature[user_id] + item_feature_valid[item_id])  # simply concatenate user and coupon features
    y_proba = model.predict_proba(feature)  # purchase probability for every coupon
    y_pred_indices = numpy.argsort(y_proba[:, purchase_index])[-10:][::-1]  # indices of the 10 coupons with the highest purchase probability
    y_pred_item_ids = [item_index_to_item_id[i] for i in y_pred_indices]  # convert coupon indices to coupon IDs
    map10 += get_ap10(y_pred_item_ids, user_pcoupon_valid[user_id])  # compute ap@10 and add it to map@10
map10 /= len(user_feature)  # MAP@10 is the per-user average, so divide by the number of users
print 'MAP@10: %.5f' % (map10, )
n = 10
#print ['sex', 'age'] + pref_data
label_names = (
['user_' + c for c in (['sex', 'age'] + pref_data)]
+ ['item_' + c for c in (
capsule_data
+ genre_data
+ ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun', 'holiday', 'before_holiday']
+ larea_data
+ pref_data
+ sarea_data
)]
)
print "合計の素性数:%d" % (model.coef_.shape[1])
print "降順に10個"
for i in numpy.argsort(abs(model.coef_[0]))[-n:][::-1]:
print 'index: %d, name: %s, %.4f' % (i, label_names[i], model.coef_[0][i])
print "昇順に10個"
for i in numpy.argsort(abs(model.coef_[0]))[:n]:
print 'index: %d, name: %s, %.4f' % (i, label_names[i], model.coef_[0][i])
N = 50
x = numpy.linspace(0, 10, N)
y1_train = x + rng.rand(N)*5 + 5
y2_train = x + rng.rand(N)*5
y_valid = x + rng.rand(N) * 5 + 1.5
plt.figure()
plt.plot(x, y1_train, 'o')
plt.plot(x, y1_train)
plt.plot(x, x + 7.5)
dbcur.execute('''
SELECT
COUNT(*),
SUM(purchase_flg),
COUNT(DISTINCT(view_coupon_id_hash))
FROM
coupon_visit_train
GROUP BY user_id_hash
;
''')
n_view = []
n_purchase = []
n_view_u = []
for row in dbcur.fetchall():
n_view.append(int(row[0]))
n_purchase.append(int(row[1]))
n_view_u.append(int(row[2]))
n_view = numpy.asarray(n_view)
n_purchase = numpy.asarray(n_purchase)
n_view_u = numpy.asarray(n_view_u)
### To see what the user cold-start situation looks like, inspect only the first 20 counts.
span = 20
fig = plt.figure(figsize=(18, 8))
ax = fig.add_subplot(2, 3, 1)
ax.hist(n_view, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('page view count distribution')
ax = fig.add_subplot(2, 3, 2)
ax.hist(n_purchase, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('purchase count distribution')
ax = fig.add_subplot(2, 3, 3)
ax.hist(n_view_u, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('unique page view count distribution')
ax = fig.add_subplot(2, 3, 4)
ax.plot(n_view, n_purchase, 'x')
ax.set_title('X=page view count, Y=purchase count')
ax = fig.add_subplot(2, 3, 5)
ax.plot(n_view_u, n_purchase, 'x')
ax.set_title('X=unique page view count, Y=purchase count')
ax = fig.add_subplot(2, 3, 6)
ax.plot(n_view, n_view_u, 'x')
ax.set_title('X=page view count, Y=unique page view count')
plt.show()
## 3D plots are often hard to read, so let's not rely on them.
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(n_view, n_view_u, n_purchase, marker='x')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
dbcur.execute('''
SELECT
t1.coupon_id_hash, COUNT(t2.view_coupon_id_hash), COALESCE(SUM(t2.purchase_flg), 0)
FROM coupon_list_test AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.coupon_id_hash = t2.view_coupon_id_hash
GROUP BY t1.coupon_id_hash
ORDER BY SUM(t2.purchase_flg)
;
''')
view_count = []
purchase_count = []
for row in dbcur.fetchall():
view_count.append(int(row[1]))
purchase_count.append(int(row[2]))
view_count = numpy.asarray(view_count)
purchase_count = numpy.asarray(purchase_count)
plt.figure()
plt.plot(purchase_count, view_count, '.')
plt.show()
dbcur.execute('''
SELECT
AVG(same_pref_purchase_cnt),
AVG(same_pref_view_cnt),
AVG(same_pref_purchase_cnt / same_pref_view_cnt),
AVG(diff_pref_purchase_cnt),
AVG(diff_pref_view_cnt),
AVG(diff_pref_purchase_cnt / diff_pref_view_cnt)
FROM (
SELECT
t1.user_id_hash,
SUM(t1.pref_name = t3.ken_name AND purchase_flg = 1) AS same_pref_purchase_cnt,
SUM(t1.pref_name = t3.ken_name) AS same_pref_view_cnt,
SUM(t1.pref_name != t3.ken_name AND purchase_flg = 1) AS diff_pref_purchase_cnt,
SUM(t1.pref_name != t3.ken_name) AS diff_pref_view_cnt
FROM user_list AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
WHERE t1.pref_name != ""
GROUP BY t1.user_id_hash
) AS t1
;
''')
data = None
for row in dbcur.fetchall():
data = row
print 'same_purchase: %.2f, same_view: %.2f, same_rate: %.2f, diff_purchase: %.2f, diff_view: %.2f, diff_rate: %.2f' % (data)
dbcur.execute('''
SELECT
t1.sex_id,
AVG(t1.discount_rate_view),
AVG(t1.discount_rate_purchase)
FROM (
SELECT
t1.user_id_hash,
t1.sex_id,
AVG(100 - t3.price_rate) AS discount_rate_view,
COALESCE(SUM(IF(t2.purchase_flg, 100 - t3.price_rate, 0)) / SUM(t2.purchase_flg), 0) AS discount_rate_purchase
FROM user_list AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
GROUP BY t1.user_id_hash
) AS t1
GROUP BY t1.sex_id
;
''')
data = []
for row in dbcur.fetchall():
row = list(row)
row[1] = float(row[1])
row[2] = float(row[2])
data.append(tuple(row))
for row in data:
print 'sex_id: %s, discount_rate_view: %.2f, discount_rate_purchase: %.2f' % (row)
dbcur.execute('''
SELECT
SUM(purchase_flg)
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
GROUP BY user_id_hash
;
''')
x = []
for row in dbcur.fetchall():
x.append(int(row[0]))
plt.figure()
plt.hist(x, bins=numpy.arange(1, 15))
plt.show()
dbcur.execute('''
SELECT
AVG(t1.same_purchase),
AVG(t1.same_view),
AVG(t1.same_purchase / t1.same_view) AS same_rate,
AVG(t1.diff_purchase),
AVG(t1.diff_view),
AVG(t1.diff_purchase / t1.diff_view) AS diff_rate
FROM (
SELECT
t1.user_id_hash,
SUM(t1.genre_name = t3.genre_name AND t2.purchase_flg = 1) AS same_purchase,
SUM(t1.genre_name = t3.genre_name) AS same_view,
SUM(t1.genre_name != t3.genre_name AND t2.purchase_flg = 1) AS diff_purchase,
SUM(t1.genre_name != t3.genre_name) AS diff_view
FROM (
SELECT
t1.user_id_hash, t1.view_coupon_id_hash, t3.genre_name
FROM coupon_visit_train_training AS t1
LEFT JOIN coupon_visit_train_training AS t2 ON t1.user_id_hash = t2.user_id_hash AND t1.i_date < t2.i_date
LEFT JOIN coupon_list_train AS t3 ON t1.view_coupon_id_hash = t3.coupon_id_hash
WHERE t1.purchase_flg = 1 AND t2.user_id_hash IS NULL
GROUP BY t1.user_id_hash
) AS t1
LEFT JOIN coupon_visit_train_validation AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
LEFT JOIN (
SELECT user_id_hash
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
GROUP BY user_id_hash
) AS t4 ON t1.user_id_hash = t4.user_id_hash
WHERE t4.user_id_hash IS NOT NULL
GROUP BY t1.user_id_hash
) AS t1
;
''')
data = None
for row in dbcur.fetchall():
data = row
print 'same_purchase: %.2f, same_view: %.2f, same_rate: %.2f, diff_purchase: %.2f, diff_view: %.2f, diff_rate: %.2f' % (data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Setting up the modeling target
Step2: Random prediction and MAP@10 evaluation
Step3: 2. From the extracted coupons, randomly pick 10 coupons per user as the prediction.
Step4: 3. Compare against the coupons each user actually purchased and compute MAP@10.
Step5: With random guessing, we get almost nothing right.
Step6: Exercise
Step7: Coupon feature vectors
Step8: Assigning user/coupon feature vectors and ground-truth labels
Step9: Considering every pair would give roughly 10 million rows, which would not fit in memory.
Step10: Building the prediction model and evaluating its accuracy
Step11: A considerable improvement over the random prediction above.
Step12: What the learned weights tell us about this prediction model
Step13: 5-3. Checking the amount of data and the kinds of labels
Step14: For each coupon in the test data used for the final evaluation, check how much view and purchase data exists.
Step15: Form hypotheses about relationships in the data
Step16: First, we see that purchases from a different region outnumber purchases from the same region.
Step17: Not much of a difference, it seems...
|
12,682
|
<ASSISTANT_TASK:>
Python Code:
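# Assumed setup (the original notebook's connection cell is not shown here).
# The database and collection names below are hypothetical placeholders.
from pymongo import MongoClient
from pprint import pprint
import IPython.display as disp
bayarea = MongoClient('localhost', 27017)['osm']['bayarea']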
bayarea.find().count()
bayarea.find({"type": "node"}).count()
bayarea.find({"type": "way"}).count()
pipeline = [{"$match": {"amenity": {"$ne": None}}},
{"$group": {"_id": "$amenity",
"count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
# Top 10 fast food chains
pipeline = [{"$match": {"amenity": "fast_food", "name": {"$ne": None}}},
{"$group": {"_id": "$name", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
pipeline = [{"$match": {"leisure": {"$exists": 1}}},
{"$group": {"_id": "$leisure", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
pipeline = [{"$match": {"leisure": {"$exists": 1}, "address.city": {"$exists": 1}}},
{"$group": {"_id": "$address.city", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
pipeline = [{"$match": {"amenity": {"$exists": 1}, "address.city": {"$exists": 1}}},
{"$group": {"_id": "$address.city", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
pipeline = [{"$match": {"building": {"$exists": 1}, "address.city": {"$exists": 1}}},
{"$group": {"_id": "$address.city", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
pipeline = [{"$match": {"address.city": {"$exists": 1}}},
{"$group": {"_id": "$address.city", "count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}]
result = bayarea.aggregate(pipeline)
pprint(result)
bayarea.find({"type": "node"}).count()
bayarea.find({"type": "node", "address.city": {"$exists": 0}}).count()
bayarea.find({"type": "node", "address.county": {"$exists": 0}}).count()
bayarea.find({"type": "node", "address.postcode": {"$exists": 0}}).count()
disp.Image("./images/leisure.png")
def css_styling():
styles = open("../css/custom.css", "r").read()
return disp.HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Number of nodes
Step2: Number of ways
Step3: Top 10 types of amenities
Step4: Top 10 fast food chains
Step5: Top 10 types of leisurely activities
Step6: Top cities with the most leisurely activities
Step7: Top cities with the most amenities
Step8: Top cities with the most buildings
Step9: Top cities with the most ways and nodes
Step10: 3. Other Ideas About the Datasets
Step11: For the city, county, and postcode there is less than 1% coverage across all the nodes. During the shaping of the data to JSON, these values could be programmatically filled in if I had a geographical database that maps latitude/longitude coordinates to city, county, and postcode. Once these values are completed, more inferences can be made for city-to-city, county-to-county, or postcode-to-postcode comparisons.
Step12: Source
|
12,683
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from sklearn.datasets import load_sample_image
china = load_sample_image("china.jpg")
fig = plt.figure(1)
ax = fig.add_subplot(1,1,1)
ax.imshow(china)
iso = china.reshape(-1,3)
print(iso.shape)
print(iso.nbytes)
fig = plt.figure(2)
rg = fig.add_subplot(2,2,1)
rb = fig.add_subplot(2,2,2)
gb = fig.add_subplot(2,2,3)
rg.plot(iso[::5,0], iso[::5,1], 'b.', markersize=1)
rg.set_title('Red-Green channel', fontsize=10)
rb.plot(iso[::5,0], iso[::5,2], 'b.', markersize=1)
rb.set_title('Red-Blue channel', fontsize=10)
gb.plot(iso[::5,1], iso[::5,2], 'b.', markersize=1)
gb.set_title('Green-Blue channel', fontsize=10)
fig.tight_layout()
from sklearn.cluster import KMeans
model = KMeans(32, n_jobs=-1)
labels = model.fit_predict(iso)
colors = model.cluster_centers_
fig = plt.figure(3)
rg = fig.add_subplot(2,2,1)
rb = fig.add_subplot(2,2,2)
gb = fig.add_subplot(2,2,3)
rg.plot(iso[::5,0], iso[::5,1], 'b.', markersize=1)
rg.set_title('Red-Green channel', fontsize=10)
rb.plot(iso[::5,0], iso[::5,2], 'b.', markersize=1)
rb.set_title('Red-Blue channel', fontsize=10)
gb.plot(iso[::5,1], iso[::5,2], 'b.', markersize=1)
gb.set_title('Green-Blue channel', fontsize=10)
rg.plot(colors[:,0], colors[:,1], 'r.')
rb.plot(colors[:,0], colors[:,2], 'r.')
gb.plot(colors[:,1], colors[:,2], 'r.')
fig.tight_layout()
new_image = colors[labels].reshape(china.shape).astype(np.uint8)
fig = plt.figure(4)
ax = fig.add_subplot(1,1,1)
ax.imshow(new_image)
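# Rough compression accounting (added): one byte per pixel for a 32-colour
# palette index, plus the 32x3 palette itself, versus three bytes per pixel
# in the original image
print('original bytes:', china.nbytes)
print('indexed bytes:', labels.astype(np.uint8).nbytes + colors.astype(np.uint8).nbytes)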
import pandas as pd
iris = pd.read_csv('data/iris.csv')
iris.head()
fig = plt.figure(5)
ax = fig.add_subplot(1,1,1)
for s, c in zip(iris.groupby('Name'), ['r', 'w', 'b']):
s[1].plot.scatter(x='SepalWidth',
y='SepalLength',
c=c,
s=50*s[1]['PetalLength'],
ax=ax,
label=s[0])
plt.xlabel('Sepal width')
plt.ylabel('Sepal length')
from sklearn.decomposition import PCA
data = np.vstack((iris.SepalLength.as_matrix(),
iris.SepalWidth.as_matrix(),
iris.PetalLength.as_matrix(),
iris.PetalWidth.as_matrix())).T
pca = PCA(n_components=2)
X_r = pca.fit(data).transform(data)
print('Components', pca.components_)
print('Explained variance', pca.explained_variance_ratio_)
fig = plt.figure(6)
ax = fig.add_subplot(1,1,1)
projected = pd.DataFrame(
{'Axis1': X_r[:,0],
'Axis2': X_r[:,1],
'Name': iris.Name.as_matrix()
}
)
for (group, data), c in zip(projected.groupby('Name'), 'rwb'):
plt.scatter(data.Axis1, data.Axis2, c=c, label=group)
ax.set_xlabel(r'$m_1$', fontsize=18)
ax.set_ylabel(r'$m_2$', fontsize=18)
plt.legend()
plt.title('PCA of IRIS dataset')
data = np.vstack((projected.Axis1.as_matrix(),
projected.Axis2.as_matrix())).T
model = KMeans(3, n_jobs=-1)
labels = model.fit_predict(data)
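# Note (added): KMeans cluster indices are arbitrary, so this cluster-to-species
# mapping was chosen by inspecting the clusters and may differ between runs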
label_name_map = {
1: 'Iris-setosa',
2: 'Iris-versicolor',
0: 'Iris-virginica'
}
projected['Label'] = [label_name_map[l] for l in labels]
fig = plt.figure(7)
ax = fig.add_subplot(1,1,1)
right = 0
wrong = 0
for i, (ax1, ax2, name, label) in projected.iterrows():
if name != label:
ax.scatter(ax1, ax2, color='r')
wrong += 1
elif name == label:
ax.scatter(ax1, ax2, color='b')
right += 1
print('Accuracy', right/(wrong+right))
plt.title('Clustering error')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to compress this image to reduce how much memory it takes to store. One compression strategy is to reduce the color palette
Step2: As noted above, some colors are more likely than others. Since there are three channels, we plot every pixel according to where it sits in color space, projecting onto the two-channel combinations red-green, red-blue, and green-blue.
Step3: As can be seen, most pixels follow a pattern from black to white through combinations tending toward gray (equal amounts of red, green, and blue). The least frequent colors are pure reds and pure greens.
Step4: Next we overlay the centroids on the previous figure as red points. As is clearly visible, centroid density is highest where colors are most likely.
Step5: Finally, we can reconstruct the image using the values fitted by the model; to do so we go from the two-dimensional representation used for the model back to the three-dimensional one the image requires.
Step6: Distinguishing Iris virginica from Iris versicolor
Step7: A classic prediction problem is telling Iris virginica and Iris versicolor apart. The measurements taken for each flower are the length and width of the sepal and the petal. Distinguishing setosa from virginica and versicolor is easy, since setosa has a clearly shorter and wider sepal than the other two varieties.
Step8: In contrast, there seems to be no obvious way to tell versicolor from virginica by these properties; their petals and sepals look alike.
Step9: What we obtain is that the two measures that separate virginica well from versicolor are
Step10: In these new derived measures, virginica's combination of $m_1$ and $m_2$ is proportionally larger than versicolor's. In this new subspace setosa is even easier to distinguish, especially along $m_1$.
|
12,684
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_boston
boston = load_boston()
print("Keys of boston: {}".format(boston.keys()))
# The value of the key DESCR is a short description of the dataset. Here we show the beinning of the description.
print(boston['DESCR'][:193] + "\n...")
# The value of feature_names is a list of strings, giving the abbreviated name of each feature
print("Feature names: {}".format(boston['feature_names']))
# The data itself is contained in the target and data fields.
# data contains the numeric measurements of features in a NumPy array
print("Type of data: {}".format(type(boston['data'])))
# The rows in the data array correspond to neighborhoods, while the columns represent the features
print("Shape of data: {}".format(boston['data'].shape))
# We see that the array contains measurements for 506 different neighborhoods. Here are values for the first 5.
print("First five columns of data:\n{}".format(boston['data'][:5]))
# The target array contains the Median value of owner-occupied homes in $1000's, also as a NumPy array
print("Type of target: {}".format(type(boston['target'])))
# target is a one-dimensional array, with one entry per sample
print("Shape of target: {}".format(boston['target'].shape))
# The target values are positive floating point numbers which represent a median house value in thousands of dollars.
print("Target:\n{}".format(boston['target']))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(boston['data'], boston['target'], random_state=0)
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))
print("X_test shape: {}".format(X_test.shape))
print("y_test shape: {}".format(y_test.shape))
# create dataframe from data in X_train
boston_df = pd.DataFrame(X_train, columns=boston.feature_names)
# Add in the target data
boston_df['MEDV'] = y_train
# Look at the first few rows
boston_df.head()
# create a scatter matrix from the dataframe
tmp = pd.scatter_matrix(boston_df, figsize=(15, 15))
# Get a high-level overview of the data
boston_df.describe()
# Find which features are most highly correlated with the housing prices
df = boston_df
df['MEDV'] = y_train
df.corr()['MEDV']
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train)
print("lr.coef_: {}".format(lr.coef_))
print("lr.intercept_: {}".format(lr.intercept_))
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
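# Illustrative aside (an addition, not in the original): R^2 alone can be hard to
# interpret, so here is the test error in the target's own units (thousands of $).
from sklearn.metrics import mean_squared_error
y_pred = lr.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print("Test RMSE: {:.2f} (thousands of dollars)".format(rmse))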
# Scale the boston dataset
from sklearn.preprocessing import MinMaxScaler
X = MinMaxScaler().fit_transform(boston.data)
X_train, X_test, y_train, y_test = train_test_split(X, boston['target'], random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
from sklearn.datasets import load_boston
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures, StandardScaler, RobustScaler
def load_extended_boston(scaler='minmax'):
boston = load_boston()
X = boston.data
if 'standard' == scaler:
X = StandardScaler().fit_transform(boston.data)
elif 'robust' == scaler:
X = RobustScaler().fit_transform(boston.data)
else:
X = MinMaxScaler().fit_transform(boston.data)
X = PolynomialFeatures(degree=2).fit_transform(X)
return X, boston.target
X, y = load_extended_boston()
X.shape
# What if we fit this new dataset with a vastly expanded set of features using OLS?
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge.score(X_test, y_test)))
ridge10 = Ridge(alpha=10).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge10.score(X_test, y_test)))
ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge01.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge01.score(X_test, y_test)))
plt.figure(figsize=(15, 10))
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.hlines(0, 0, len(lr.coef_))
plt.ylim(-25, 25)
plt.legend()
plt.show()
# Let's evaluate cross-validation on the iris dataset using logistic regression (which is actually classification)
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
iris = load_iris()
logreg = LogisticRegression()
scores = cross_val_score(logreg, iris.data, iris.target)
print("Cross-validation scores: {}".format(scores))
scores = cross_val_score(logreg, iris.data, iris.target, cv=5)
print("Cross-validation scores: {}".format(scores))
print("Average cross-validation score: {:.2f}".format(scores.mean()))
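# Hedged sketch (an addition): the integer cv above is shorthand for a default
# splitter; passing an explicit KFold makes the folding strategy visible.
from sklearn.model_selection import KFold
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(logreg, iris.data, iris.target, cv=kfold)
print("Shuffled 5-fold scores: {}".format(scores))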
lr = LinearRegression()
scores = cross_val_score(lr, boston.data, boston.target)
print("Cross-validation scores: {}".format(scores))
# Let's look at the boston housing dataset again using shuffle-split cross-validation to ensure random sampling
# The following code uses 20% of the data for training and 80% for testing, for 3 iterations:
from sklearn.model_selection import ShuffleSplit
shuffle_split = ShuffleSplit(test_size=.8, train_size=.2, n_splits=3)
scores = cross_val_score(lr, boston.data, boston.target, cv=shuffle_split)
print("Cross-validation scores:\n{}".format(scores))
X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
print("Size of training set: {} size of test set: {}".format(X_train.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each combination of parameters, train an SVC
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
# evaluate the SVC on the test set
score = ridge.score(X_test, y_test)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
print("Best score: {:.2f}".format(best_score))
print("Best parameters: {}".format(best_parameters))
X, y = load_extended_boston(scaler='standard')
# split data into train+validation set and test set
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)
# split train+validation set into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X_trainval, y_trainval, random_state=1)
print("Size of training set: {} size of validation set: {} size of test set:"
" {}\n".format(X_train.shape[0], X_valid.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each combination of parameters, train an SVC
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
# evaluate the Ridge on the test set
score = ridge.score(X_valid, y_valid)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
# for each combination of parameters, train an SVC
ridge = Ridge(alpha=alpha)
# perform cross-validation
scores = cross_val_score(ridge, X_trainval, y_trainval, cv=5)
# compute mean cross-validation accuracy
score = np.mean(scores)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score))
param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
print("Parameter grid:\n{}".format(param_grid))
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
grid_search.fit(X_train, y_train)
print("Test set score: {:.2f}".format(grid_search.score(X_test, y_test)))
print("Best parameters: {}".format(grid_search.best_params_))
print("Best cross-validation score: {:.2f}".format(grid_search.best_score_))
print("Best estimator:\n{}".format(grid_search.best_estimator_))
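# Optional inspection (an aside, not in the original): GridSearchCV also records
# per-candidate results in cv_results_, which loads neatly into a DataFrame.
results_df = pd.DataFrame(grid_search.cv_results_)
print(results_df[['param_alpha', 'mean_test_score', 'std_test_score']])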
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
X, y = load_extended_boston(scaler='standard')
for i in range(10):
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid_search.fit(X_train, y_train)
print("Run {} - Test set score: {:.2f} Best parameters: {}".format(i, grid_search.score(X_test, y_test),
grid_search.best_params_))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A First Application
Step2: Measuring Success
Step3: First things first
Step4: From the plots, we can see RM has a strong positive linear relationship with MEDV and LSTAT has a strong negative one. This makes sense - the housing price should go up as the number of rooms increases and the housing prices should go down as the percentage of lower class/income families in the neighborhood increases.
Step5: Building your model
Step6: The lr object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data.
Step7: The “slope” parameters (w), also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept (b) is stored in the intercept_ attribute
Step8: The intercept_ attribute is always a single float number, while the coef_ attribute is a NumPy array with one entry per input feature. As we only have 13 input features in this dataset, lr.coef_ has 13 entries.
Step9: An R^2 of around 0.64 on the test set is not very good, but we can see that the scores on the training and test sets are a decent distance apart. This means we are likely overfitting. With higher-dimensional datasets (meaning datasets with a large number of features), linear models become more powerful, and there is a higher chance of overfitting. More complicated linear models such as Ridge Regression and Lasso have been designed to help control this overfitting problem.
Step10: Ordinary Least Squares (OLS) regression is not sensitive to feature scaling, but all of the regularized linear methods which help reduce the overfitting present in OLS are sensitive to feature scaling.
Step11: Now the basic OLS model is doing a dramatically better job fitting the training set (R^2 of 0.95 vs 0.77).
Step12: As you can see, the training set score of Ridge is lower than for LinearRegression, while the test set score is higher. This is consistent with our expectation. With linear regression, we were overfitting our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model.
Step13: Decreasing alpha allows the coefficients to be less restricted. For very small values of alpha, coefficients are barely restricted at all, and we end up with a model that resembles LinearRegression
Step14: Here, alpha=0.1 seems to be working well. We could try decreasing alpha even more to improve generalization. For now, notice how the parameter alpha corresponds to the model complexity.
Step15: Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge. When using a more complex model like a random forest, the story can be a bit different, though. Adding features will benefit linear models the most. For very complex models, adding features may actually slightly decrease the performance.
Step16: By default, cross_val_score performs three-fold cross-validation, returning three accuracy values. We can change the number of folds used by changing the cv parameter
Step17: A common way to summarize the cross-validation accuracy is to compute the mean
Step18: Using the mean cross-validation we can conclude that we expect the model to be around 96% accurate on average. Looking at all five scores produced by the five-fold cross-validation, we can also conclude that there is a relatively high variance in the accuracy between folds, ranging from 100% accuracy to 90% accuracy. This could imply that the model is very dependent on the particular folds used for training, but it could also just be a consequence of the small size of the dataset.
Step19: As we can see, a default 3-fold cross-validation performed ok for the first two folds, but horribly bad for the third one.
Step20: Grid Search
Step21: The Danger of Overfitting the Parameters and the Validation Set
Step22: The best score on the validation set is 92%. However, the score on the test set (the score that actually tells us how well we generalize) is lower, at 78%. So we can claim to classify new data 78% correctly. This happens to be the same as before, but now we can make a stronger claim, since the final test set wasn't used in any way, shape, or form during hyper-parameter tuning.
Step23: To evaluate the accuracy of the Ridge Regression model using a particular setting of alpha using five-fold cross-validation, we need to train 11 * 5 = 55 models. As you can imagine, the main downside of the use of cross-validation is the time it takes to train all these models. However, as you can see here, it is a more reliable method which is less sensitive to how precisely the validation set is sampled from the overall training set, and thus more likely to generalize well.
Step24: We can now instantiate the GridSearchCV class with the model (Ridge), the parameter grid to search (param_grid), and the cross-validation strategy we want to use (say, five-fold stratified cross-validation)
Step25: GridSearchCV will use cross-validation in place of the split into a training and validation set that we used before. However, we still need to split the data into a training and a test set, to avoid overfitting the parameters
Step26: The grid_search object that we created behaves just like a classifier; we can call the standard methods fit, predict, and score on it. However, when we call fit, it will run cross-validation for each combination of parameters we specified in param_grid
Step27: Fitting the GridSearchCV object not only searches for the best parameters, but also automatically fits a new model on the whole training dataset with the parameters that yielded the best cross-validation performance. What happens in fit is therefore equivalent to the result of the code we saw at the beginning of this section. The GridSearchCV class provides a very convenient interface to access the retrained model using the predict and score methods. To evaluate how well the best found parameters generalize, we can call score on the test set
Step28: Choosing the parameters using cross-validation, we actually found a model that achieves 77% accuracy on the test set. The important thing here is that we did not use the test set to choose the parameters. The parameters that were found are scored in the best_params_ attribute, and the best cross-validation accuracy (the mean accuracy over the different splits for this parameter setting) is stored in best_score_
Step29: Sometimes it is helpful to have access to the actual model that was found—for example, to look at coefficients or feature importances. You can access the model with the best parameters trained on the whole training set using the best_estimator_ attribute
Step30: Because grid_search itself has predict and score methods, using best_estimator_ is not needed to make predictions or evaluate the model.
|
12,685
|
<ASSISTANT_TASK:>
Python Code:
# Python standard-library
from urllib.parse import urlencode
from urllib.request import urlretrieve
# Third-party dependencies
from astropy import units as u
from astropy.coordinates import SkyCoord
from IPython.display import Image
# initialize a SkyCoord object named hcg7_center at the location of HCG 7
hcg7_center = SkyCoord.from_name('HCG 7')
# uncomment and run this line if you don't have an internet connection
# hcg7_center = SkyCoord(9.81625*u.deg, 0.88806*u.deg, frame='icrs')
type(hcg7_center)
dir(hcg7_center)
print(hcg7_center.ra, hcg7_center.dec)
print(hcg7_center.ra.hour, hcg7_center.dec)
type(hcg7_center.ra), type(hcg7_center.dec)
hcg7_center.ra, hcg7_center.dec
hcg7_center
hcg7_center.ra.hour
SkyCoord('0h39m15.9s', '0d53m17.016s', frame='icrs')
hcg7_center.ra.hour
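# Small illustrative aside (an addition): SkyCoord objects can compute on-sky
# separations directly; `other` below is just a hypothetical nearby point.
other = SkyCoord(10.0*u.deg, 1.0*u.deg, frame='icrs')
print(hcg7_center.separation(other))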
# tell the SDSS service how big of a cutout we want
im_size = 12*u.arcmin # get a 12 arcmin square
im_pixels = 1024
cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'
query_string = urlencode(dict(ra=hcg7_center.ra.deg,
dec=hcg7_center.dec.deg,
width=im_pixels, height=im_pixels,
scale=im_size.to(u.arcsec).value/im_pixels))
url = cutoutbaseurl + '?' + query_string
# this downloads the image to your disk
urlretrieve(url, 'HCG7_SDSS_cutout.jpg')
Image('HCG7_SDSS_cutout.jpg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Describing on-sky locations with coordinates
Step2: (an informational note box appears here in the original notebook)
Step3: Show the available methods and attributes of the SkyCoord object we've created called hcg7_center
Step4: Show the RA and Dec.
Step5: We see that, according to SESAME, HCG 7 is located at ra = 9.849 deg and dec = 0.878 deg.
Step6: SkyCoord will also accept string-formatted coordinates either as separate strings for RA/Dec or a single string. You'll need to give units, though, if they aren't part of the string itself.
Step7: Download an image
|
12,686
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
# initial parameters can be learned on training data
# theory reference https://web.stanford.edu/~jurafsky/slp3/8.pdf
# code reference https://phvu.net/2013/12/06/sweet-implementation-of-viterbi-in-python/
class HMM(object):
def __init__(self, initial_prob, trans_prob, obs_prob):
self.N = np.size(initial_prob)
self.initial_prob = initial_prob
self.trans_prob = trans_prob
self.obs_prob = obs_prob
self.emission = tf.constant(obs_prob)
assert self.initial_prob.shape == (self.N, 1)
assert self.trans_prob.shape == (self.N, self.N)
assert self.obs_prob.shape[0] == self.N
self.obs = tf.placeholder(tf.int32)
self.fwd = tf.placeholder(tf.float64)
self.viterbi = tf.placeholder(tf.float64)
def get_emission(self, obs_idx):
slice_location = [0, obs_idx]
num_rows = tf.shape(self.emission)[0]
slice_shape = [num_rows, 1]
return tf.slice(self.emission, slice_location, slice_shape)
def forward_init_op(self):
obs_prob = self.get_emission(self.obs)
fwd = tf.multiply(self.initial_prob, obs_prob)
return fwd
def forward_op(self):
transitions = tf.matmul(self.fwd, tf.transpose(self.get_emission(self.obs)))
weighted_transitions = transitions * self.trans_prob
fwd = tf.reduce_sum(weighted_transitions, 0)
return tf.reshape(fwd, tf.shape(self.fwd))
def decode_op(self):
transitions = tf.matmul(self.viterbi, tf.transpose(self.get_emission(self.obs)))
weighted_transitions = transitions * self.trans_prob
viterbi = tf.reduce_max(weighted_transitions, 0)
return tf.reshape(viterbi, tf.shape(self.viterbi))
def backpt_op(self):
back_transitions = tf.matmul(self.viterbi, np.ones((1, self.N)))
weighted_back_transitions = back_transitions * self.trans_prob
return tf.argmax(weighted_back_transitions, 0)
def forward_algorithm(sess, hmm, observations):
fwd = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs: observations[0]})
for t in range(1, len(observations)):
fwd = sess.run(hmm.forward_op(), feed_dict={hmm.obs: observations[t], hmm.fwd: fwd})
prob = sess.run(tf.reduce_sum(fwd))
return prob
def viterbi_decode(sess, hmm, observations):
viterbi = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs: observations[0]})
backpts = np.ones((hmm.N, len(observations)), 'int32') * -1
for t in range(1, len(observations)):
viterbi, backpt = sess.run([hmm.decode_op(), hmm.backpt_op()],
feed_dict={hmm.obs: observations[t],
hmm.viterbi: viterbi})
backpts[:, t] = backpt
tokens = [viterbi[:, -1].argmax()]
for i in range(len(observations) - 1, 0, -1):
tokens.append(backpts[tokens[-1], i])
return tokens[::-1]
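# Illustrative helper (an addition, not in the original): for short sequences the
# forward probability can be verified by brute-force summation over every
# hidden-state path, which is what the forward algorithm computes efficiently.
import itertools
def brute_force_prob(initial_prob, trans_prob, obs_prob, observations):
    n_states = np.size(initial_prob)
    total = 0.0
    for path in itertools.product(range(n_states), repeat=len(observations)):
        p = initial_prob[path[0], 0] * obs_prob[path[0], observations[0]]
        for t in range(1, len(observations)):
            p *= trans_prob[path[t - 1], path[t]] * obs_prob[path[t], observations[t]]
        total += p
    return total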
if __name__ == '__main__':
states = ('Healthy', 'Fever')
# observations = ('normal', 'cold', 'dizzy')
# start_probability = {'Healthy': 0.6, 'Fever': 0.4}
# transition_probability = {
# 'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
# 'Fever': {'Healthy': 0.4, 'Fever': 0.6}
# }
# emission_probability = {
# 'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
# 'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}
# }
initial_prob = np.array([[0.6], [0.4]])
trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]])
obs_prob = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
hmm = HMM(initial_prob=initial_prob, trans_prob=trans_prob, obs_prob=obs_prob)
observations = [0, 1, 1, 2, 1]
with tf.Session() as sess:
prob = forward_algorithm(sess, hmm, observations)
print('Probability of observing {} is {}'.format(observations, prob))
seq = viterbi_decode(sess, hmm, observations)
print('Most likely hidden states are {}'.format(seq))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the same HMM model as before. This time, we'll include a couple of additional functions.
Step2: Define the forward algorithm from Concept01.
Step3: Now, let's compute the Viterbi likelihood of the observed sequence
Step4: Let's try it out on some example data
|
12,687
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install modsimpy
# https://pypi.org/project/modsimpy/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pint==0.9
!pip install modsimpy
!mkdir figs
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# import functions from the modsim.py module
from modsim import *
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(v_init = 0 * m / s,
g = 9.8 * m/s**2,
M = 75 * kg, # mass of jumper
m_cord = 75 * kg, # mass of cord
area = 1 * m**2, # frontal area of jumper
rho = 1.2 * kg/m**3, # density of air
v_term = 60 * m / s, # terminal velocity of jumper
L = 25 * m, # length of cord
k = 40 * N / m) # spring constant of cord
def make_system(params):
    '''Makes a System object for the given params.

    params: Params object
    returns: System object
    '''
M, m_cord = params.M, params.m_cord
g, rho, area = params.g, params.rho, params.area
v_init, v_term = params.v_init, params.v_term
# back out the coefficient of drag
C_d = 2 * M * g / (rho * area * v_term**2)
mu = m_cord / M
init = State(y=0*m, v=v_init)
t_end = 10 * s
return System(params, C_d=C_d, mu=mu,
init=init, t_end=t_end)
system = make_system(params)
system
def drag_force(v, system):
    '''Computes drag force in the opposite direction of `v`.

    v: velocity
    returns: drag force in N
    '''
rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
drag_force(20 * m/s, system)
def cord_acc(y, v, system):
    '''Computes the acceleration due to the bungee cord on the jumper.

    y: height of the jumper
    v: velocity of the jumper
    returns: acceleration in m/s**2
    '''
L, mu = system.L, system.mu
a_cord = -v**2 / 2 / (2*L/mu + (L+y))
return a_cord
y = -20 * m
v = -20 * m/s
cord_acc(y, v, system)
def slope_func1(state, t, system):
    '''Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing g, rho,
            C_d, area, and mass
    returns: derivatives of y and v
    '''
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_cord = cord_acc(y, v, system)
dvdt = -g + a_cord + a_drag
return v, dvdt
slope_func1(system.init, 0, system)
def event_func(state, t, system):
    '''Run until y=-L.

    state: position, velocity
    t: time
    system: System object containing g, rho,
            C_d, area, and mass
    returns: difference between y and -L
    '''
y, v = state
return y + system.L
event_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func1, events=event_func)
details.message
t_final = get_last_label(results)
t_final
def plot_position(results, **options):
plot(results.y, **options)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
min(results.y)
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
min(results.v)
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
max_acc_simulated = max(abs(a)) * m/s**2 / params.g  # simulated peak, in g's; renamed so the function below doesn't shadow it
def max_acceleration(system):
mu = system.mu
return 1 + mu * (4+mu) / 8
max_acceleration(system)
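# Hedged check (an addition, not from the original notebook): with drag switched
# off, the simulated peak acceleration should approach the theoretical value above.
system_no_drag = System(make_system(params), C_d=0)
results_nd, details_nd = run_ode_solver(system_no_drag, slope_func1, events=event_func)
a_nd = gradient(results_nd.v)
print(max(abs(a_nd)) * m/s**2 / params.g)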
def sweep_m_cord(m_cord_array, params):
sweep = SweepSeries()
for m_cord in m_cord_array:
system = make_system(Params(params, m_cord=m_cord))
results, details = run_ode_solver(system, slope_func1, events=event_func)
min_velocity = min(results.v) * m/s
sweep[m_cord.magnitude] = min_velocity
return sweep
m_cord_array = linspace(1, 201, 21) * kg
sweep = sweep_m_cord(m_cord_array, params)
plot(sweep)
decorate(xlabel='Mass of cord (kg)',
ylabel='Fastest downward velocity (m/s)')
def spring_force(y, system):
    '''Computes the spring force of the bungee cord on the jumper.

    y: height of the jumper
    Uses these variables from system:
    L: resting length of the cord
    k: spring constant of the cord
    returns: force in N
    '''
L, k = system.L, system.k
distance_fallen = -y
extension = distance_fallen - L
f_spring = k * extension
return f_spring
spring_force(-25*m, system)
spring_force(-26*m, system)
def slope_func2(state, t, system):
    '''Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing g, rho,
            C_d, area, and mass
    returns: derivatives of y and v
    '''
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_spring = spring_force(y, system) / M
dvdt = -g + a_drag + a_spring
return v, dvdt
system1 = make_system(params)
system1
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
details1.message
t_final = get_last_label(results1)
t_final
init2 = results1.row[t_final]
init2
system2 = System(system1, t_0=t_final, init=init2)
system2
event_func.direction=+1
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
details2.message
t_final = get_last_label(results2)
t_final
plot_position(results1, label='Phase 1')
plot_position(results2, label='Phase 2')
min(results2.y)
def simulate_system2(params):
system1 = make_system(params)
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
t_final = get_last_label(results1)
init2 = results1.row[t_final]
system2 = System(system1, t_0=t_final, init=init2)
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
t_final = get_last_label(results2)
return TimeFrame(pd.concat([results1, results2]))
results = simulate_system2(params);
plot_position(results)
params_no_cord = Params(params, m_cord=1*kg)
results_no_cord = simulate_system2(params_no_cord);
plot_position(results, label='m_cord = 75 kg')
plot_position(results_no_cord, label='m_cord = 1 kg')
savefig('figs/jump.png')
min(results_no_cord.y)
diff = min(results.y) - min(results_no_cord.y)
diff
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bungee jumping
Step3: Now here's a version of make_system that takes a Params object as a parameter.
Step4: Let's make a System
Step6: drag_force computes drag as a function of velocity
Step7: Here's drag force at 20 m/s.
Step9: The following function computes the acceleration of the jumper due to tension in the cord.
Step10: Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.
Step12: Now here's the slope function
Step13: As always, let's test the slope function with the initial params.
Step15: We'll need an event function to stop the simulation when we get to the end of the cord.
Step16: We can test it with the initial conditions.
Step17: And then run the simulation.
Step18: Here's how long it takes to drop 25 meters.
Step19: Here's the plot of position as a function of time.
Step20: We can use min to find the lowest point
Step21: Here's velocity as a function of time
Step22: Velocity when we reach the end of the cord.
Step23: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver.
Step24: The maximum downward acceleration, as a factor of g
Step25: Using Equation (1) from Heck, Uylings, and Kędzierska, we can compute the peak acceleration due to interaction with the cord, neglecting drag.
Step26: If you set C_d=0, the simulated acceleration approaches the theoretical result, although you might have to reduce max_step to get a good numerical estimate.
Step27: Here's what it looks like. As expected, a heavier cord gets the jumper going faster.
Step29: Phase 2
Step30: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
Step32: The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.
Step33: I'll run Phase 1 again so we can get the final state.
Step34: Now I need the final time, position, and velocity from Phase 1.
Step35: And that gives me the starting conditions for Phase 2.
Step36: Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
Step37: We can plot the results on the same axes.
Step38: And get the lowest position from Phase 2.
Step39: To see how big the effect of the cord is, I'll collect the previous code in a function.
Step40: Now we can run both phases and get the results in a single TimeFrame.
|
12,688
|
<ASSISTANT_TASK:>
Python Code:
# Assumed context (not shown in this extract): the notebook's imports and the
# Titanic CSV load, e.g. titanic_data = pd.read_csv('train.csv')  # hypothetical path
import numpy as np
import pandas as pd
import tensorflow as tf  # TensorFlow 1.x-style API is used below
import matplotlib.pyplot as plt
print("Missing values")
titanic_data.isnull().any(axis=1).sum()
titanic_data.isnull().sum()
treated_data = titanic_data.drop(['Cabin','Name', 'PassengerId', 'Ticket'], axis=1)
treated_data = treated_data.dropna()
treated_data.isnull().any(axis=1).sum()
treated_data['Age'].hist()
print(treated_data['Age'].min(), treated_data['Age'].max())
treated_data = treated_data.replace(['male', 'female'],[0,1]).replace(['S','C','Q'], [0,1,2])
treated_data=treated_data.astype(np.float32)
treated_data.head()
train = treated_data.sample(frac=0.7)
test = treated_data.drop(train.index)
train_Y = train['Survived'].apply(lambda x: [0.,1.] if x == 0 else [1.,0.]).as_matrix()
train.drop('Survived', axis=1, inplace=True)
test_Y = test['Survived'].apply(lambda x: [0.,1.] if x == 0 else [1.,0.]).as_matrix()
test.drop('Survived', axis=1, inplace=True)
train.head()
# Network architeture
# Hidden layers size, you can mess with these sizes to
# see if there's an gain in accuracy
n_hidden_1 = 15
n_hidden_2 = 15
# These are the inputs and outputs size
# You shouldn't have to touch these
n_inputs = 7
n_classes = 2
# Creating the input and output placeholders
x = tf.placeholder(tf.float32, [None, n_inputs], name="input")
y = tf.placeholder(tf.float32, [None, n_classes], name="survival")
# We want probabilities that the passenger belongs to a given class (survive/no-survive)
# so we will use the softmax activation function
def makeModel(x, w, biases):
# Hidden layer with Softmax activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'], name='layer_1')
layer_1 = tf.nn.softmax(layer_1)
# Hidden layer with Softmax activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'], name='layer_2')
layer_2 = tf.nn.softmax(layer_2)
# Output layer with softmax activation
out_layer = tf.nn.softmax(tf.add(tf.matmul(layer_2, weights['out']), biases['out'], name='out_layer'))
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_inputs, n_hidden_1]), name='h1'),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='h2'),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]), name='out')
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1]), name='b1'),
'b2': tf.Variable(tf.random_normal([n_hidden_2]), name='b2'),
'out': tf.Variable(tf.random_normal([n_classes]), name='out_bias')
}
# Construct model
pred = makeModel(x, weights, biases)
# Our cost function will be softmax cross entropy between classes
logits = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred, name="final_ouput")
cost = tf.reduce_mean(logits, name='Loss')
# You can change the LR to see how the training behaves
optimizer = tf.train.AdamOptimizer(learning_rate=0.05).minimize(cost)
# To calculate the correct class we verify if the greatest of the two scores
# is in the same index both in the predicition and the ground truth (GT)
correct = tf.equal(tf.argmax(y, 1, name='GT'), tf.argmax(pred,1, name='predicted'), name='correct')
accuracy = tf.reduce_mean(tf.cast(correct, "float"), name='Acc')
# TensorBoard reporting for accuracy and loss
tf.summary.scalar('LossSummary', cost)
tf.summary.scalar('AccSummary', accuracy)
r_acc = []
r_loss = []
r_t_loss = []
with tf.Session() as sess:
tf.global_variables_initializer().run()
writer = tf.summary.FileWriter('./logs', sess.graph)
# Training cycle
summary = tf.summary.merge_all()
# Changing the iteration number may also change the results
for epoch in range(5001):
# Run the summaries and the traing step
log, _ ,loss= sess.run([summary, optimizer, cost],
feed_dict={x:train.as_matrix(), y:[k for k in train_Y]})
writer.add_summary(log, epoch)
writer.flush()
# Every 100 iterations, log the acc and the loss on the test data
if epoch%50 == 0:
t_acc, t_loss = sess.run([accuracy, cost],feed_dict={x:test.as_matrix(), y: [k for k in test_Y]})
r_acc.append(t_acc)
r_t_loss.append(t_loss)
r_loss.append(loss)
# Pass all the test data by the classifier
p = sess.run(pred,{x:test.as_matrix(), y: [k for k in test_Y]})
sess.close()
writer.close()
print(p[:5])
print(test_Y[:5])
plt.figure(figsize=(8,10))
plt.subplot(211)
plt.title("Training Loss over iterations")
plt.xlabel("Iteration")
plt.ylabel("Softmax Loss")
plt.grid()
plt.plot(r_loss)
plt.subplot(212)
plt.title("Test Loss over iterations")
plt.xlabel("Iteration (x50)")
plt.ylabel("Softmax Loss")
plt.grid()
plt.plot(r_t_loss)
plt.title("Accuracy Over Iterations in the Test Data")
plt.xlabel("Iterations (x50)")
plt.ylabel("Accuracy")
plt.grid()
plt.plot(r_acc)
# First we will decode the survival of the passengers, note tha [1, 0] OHE means the passenger survived, so we
# can use the first index of the encoding
seriesTest = pd.Series(test_Y)
seriesTest = seriesTest.apply(lambda x: x[0])
seriesTest.head()
# Now we just have to sum all the values and divide by the total
seriesTest.sum() / len(seriesTest)
import sklearn.tree as skltr
train = treated_data.sample(frac=0.7)
test = treated_data.drop(train.index)
train_Y = train['Survived'].as_matrix()
train.drop('Survived', axis=1, inplace=True)
test_Y = test['Survived'].as_matrix()
test.drop('Survived', axis=1, inplace=True)
train.head()
# We will run the decision tree with the default parameters
decision_tree = skltr.DecisionTreeClassifier()
decision_tree.fit(train, train_Y)
# Now we will predict the results and see the accuracy
prediction = decision_tree.predict(test)
print("Accuracy: ", sum((prediction == test_Y))/len(test_Y))
# We must now see if this is better than random guessing
print(sum(test_Y)/len(test_Y))
# Now we can see which features are more important
plt.xticks(range(0, 70, 10), train.columns, rotation=90)
for x, h in zip(range(0,70,10), decision_tree.feature_importances_):
plt.bar(x, h, 5, color=(min(h*5, 1), 0, 1-min(h*5,1)))
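# Small aside (an addition, not from the original notebook): the same importances
# as a sorted table, which can be easier to read than the bar chart.
importances = pd.Series(decision_tree.feature_importances_, index=train.columns)
print(importances.sort_values(ascending=False))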
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That's a lot of missing values; let's see how they are distributed
Step2: We can just drop the Cabin column as it isn't important; we will also drop the Name, PassengerId and Ticket columns
Step3: Also, a NaN in the Embarked column indicates that a passenger did not embark, so we can just drop those passengers, along with the ones whose ages are NaN
Step4: We no longer have any null values; let's check for outliers in the Age column
Step5: Alright, that looks fine. Lastly, since the Ticket column has already been dropped as irrelevant, we convert male/female to 0/1 and the embarkation ports to 0/1/2
Step6: We are now ready to do the machine learning bit
Step7: We will now create our TensorFlow model for training
Step8: Now, let's take a look at the first 5 predictions and their ground truths
Step9: Let's visualize the loss and accuracy over time (the same data is on tensorboard)
Step10: We can see our accuracy on the test data is around 74%; let's see if this is better than random guessing
Step11: It looks like our model is better than random guessing, since only 39% of the passengers survived while we correctly predict the survival of about 74% of them.
Step12: First, we will split the data again, using the same splits as before (70/30)
|
12,689
|
<ASSISTANT_TASK:>
Python Code:
#basic imports and ipython setup
import matplotlib.pyplot as plt
import numpy as np
#import solver related modules
from MCEq.core import MCEqRun
#import primary model choices
import crflux.models as pm
mceq_run = MCEqRun(
#provide the string of the interaction model
interaction_model='SIBYLL2.3c',
#primary cosmic ray flux model
#support a tuple (primary model class (not instance!), arguments)
primary_model = (pm.HillasGaisser2012, "H3a"),
# Zenith angle in degrees. 0=vertical, 90=horizontal
theta_deg=0.0,
)
#Power of energy to scale the flux
mag = 3
#obtain energy grid (fixed) of the solution for the x-axis of the plots
e_grid = mceq_run.e_grid
#Dictionary for results
flux = {}
#Define equidistant grid in cos(theta)
angles = np.arccos(np.linspace(1,0,11))*180./np.pi
#Initialize empty grid
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = np.zeros_like(e_grid)
#Sum fluxes, calculated for different angles
for theta in angles:
mceq_run.set_theta_deg(theta)
mceq_run.solve()
#_conv means conventional (mostly pions and kaons)
flux['mu_conv'] += (mceq_run.get_solution('conv_mu+', mag)
+ mceq_run.get_solution('conv_mu-', mag))
# _pr means prompt (the mother of the muon had a critical energy
# higher than a D meson. Includes all charm and direct resonance
# contribution)
flux['mu_pr'] += (mceq_run.get_solution('pr_mu+', mag)
+ mceq_run.get_solution('pr_mu-', mag))
# total means conventional + prompt
flux['mu_total'] += (mceq_run.get_solution('total_mu+', mag)
+ mceq_run.get_solution('total_mu-', mag))
# same meaning of prefixes for muon neutrinos as for muons
flux['numu_conv'] += (mceq_run.get_solution('conv_numu', mag)
+ mceq_run.get_solution('conv_antinumu', mag))
flux['numu_pr'] += (mceq_run.get_solution('pr_numu', mag)
+ mceq_run.get_solution('pr_antinumu', mag))
flux['numu_total'] += (mceq_run.get_solution('total_numu', mag)
+ mceq_run.get_solution('total_antinumu', mag))
# same meaning of prefixes for electron neutrinos as for muons
flux['nue_conv'] += (mceq_run.get_solution('conv_nue', mag)
+ mceq_run.get_solution('conv_antinue', mag))
flux['nue_pr'] += (mceq_run.get_solution('pr_nue', mag)
+ mceq_run.get_solution('pr_antinue', mag))
flux['nue_total'] += (mceq_run.get_solution('total_nue', mag)
+ mceq_run.get_solution('total_antinue', mag))
# since there are no conventional tau neutrinos, prompt=total
flux['nutau_pr'] += (mceq_run.get_solution('total_nutau', mag)
+ mceq_run.get_solution('total_antinutau', mag))
#average the results
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = flux[frac]/float(len(angles))
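# Illustrative aside (an addition, not in the original): the prompt component's
# share of the total muon-neutrino flux, which should grow toward high energies.
prompt_fraction = flux['numu_pr'] / flux['numu_total']
print(prompt_fraction[::10])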
#get path of the home directory + Desktop
save_pdf = False
for pref, lab in [('numu_',r'\nu_\mu'), ('nue_',r'\nu_e')]:
plt.figure(figsize=(4.2, 3))
plt.loglog(e_grid, flux[pref + 'total'], color='k', ls='-', lw=1.5)
plt.loglog(e_grid, flux[pref + 'conv'], color='b', ls='-.', lw=1.5,
label=r'conventional ${0}$'.format(lab))
plt.loglog(e_grid, flux[pref + 'pr'], color='r',ls='--', lw=1.5,
label='prompt ${0}$'.format(lab))
plt.xlim(10,1e7)
plt.ylim(1e-5,10)
plt.xlabel(r"$E_{{{0}}}$ [GeV]".format(lab))
plt.ylabel(r"$\Phi_{" + lab + "}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
if save_pdf:
import os
plt.savefig(os.path.join(os.path.expanduser("~"),'Desktop', pref + 'flux.png'),dpi=300)
# Define the output location and stack the columns (desktop was previously
# undefined in this cell, and a zip() iterator can't be passed to savetxt in Python 3)
import os
desktop = os.path.join(os.path.expanduser("~"), 'Desktop')
np.savetxt(open(os.path.join(desktop, 'H3a_zenith_av.txt'), 'w'),
           np.column_stack((e_grid,
                            flux['mu_conv'], flux['mu_pr'], flux['mu_total'],
                            flux['numu_conv'], flux['numu_pr'], flux['numu_total'],
                            flux['nue_conv'], flux['nue_pr'], flux['nue_total'],
                            flux['nutau_pr'])),
           fmt='%6.5E',
           header=('lepton flux scaled with E**{0}. Order (E, mu_conv, mu_pr, mu_total, ' +
                   'numu_conv, numu_pr, numu_total, nue_conv, nue_pr, nue_total, ' +
                   'nutau_pr').format(mag)
           )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If everything succeeds, then the last message should be something like
Step2: Calculate average flux
Step3: Plot with matplotlib
Step4: Save as an ASCII file for other types of processing
|
12,690
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
from ipywidgets import widget
from IPython.display import display
from math import pi, sin
import numpy as np
from matplotlib import pyplot as plt
from sklearn.linear_model import Ridge
%matplotlib inline
def target(x):
'''
Function to generate target variables
'''
return sin(2 * pi * x) + np.random.normal(scale=0.3)
def example_data_generating_dist(size):
'''
Function to generate example data
size = size of data set to generate
'''
data = []
for i in range(size):
x = np.random.uniform()
y = target(x)
data.append([x,y])
arr = np.array(data)
x = np.array(arr[:,0])
y = np.array(arr[:,1])
return x, y
def polyfit(x, y, degree):
'''
Fit a polynomial to some data
'''
    _coef = np.polyfit(x, y, degree)
    _poly = np.poly1d(_coef)
    return _poly
def graph_polyfit(degree, size):
x, y = example_data_generating_dist(size)
model = polyfit(x, y, degree)
xp = np.linspace(-1, 1, 50)
plt.ylim(y.min()-.2, y.max()+.2)
plt.xlim(x.min()-.2, x.max()+.2)
plt.plot(x, y, '.', xp, model(xp), '--')
plt.show()
return model
graph_underfit = interactive(graph_polyfit, degree=1, size=10)
graph_underfit
graph_overfit = interactive(graph_polyfit, degree=9, size=10)
graph_overfit
graph_just_right = interactive(graph_polyfit, degree=3, size=10)
graph_just_right
graph_reg = interactive(graph_polyfit, degree=9, size=100)
graph_reg
# generate some random data and target values
rr_X, rr_y = example_data_generating_dist(100)
rr_y
clf = Ridge(alpha=1.0, solver='lsqr')
clf.fit(rr_X[:,np.newaxis], rr_y)
plt.scatter(rr_X, rr_y)
colors = ['teal', 'yellowgreen', 'gold']
lw = 2
# sort by x so the fitted line draws cleanly (plain Ridge on a single feature
# is a degree-1, straight-line fit)
order = np.argsort(rr_X)
for count, degree in enumerate([1]):
    model = Ridge()
    model.fit(rr_X[:, np.newaxis], rr_y)
    y_plot = model.predict(rr_X[:, np.newaxis])
    plt.plot(rr_X[order], y_plot[order], color=colors[count], linewidth=lw,
             label="degree %d" % degree)
plt.legend()
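# Hedged sketch (an addition, not in the original): to actually regularize a
# degree-9 polynomial fit, combine polynomial feature expansion with Ridge.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
poly_model = make_pipeline(PolynomialFeatures(degree=9), Ridge(alpha=1.0))
poly_model.fit(rr_X[:, np.newaxis], rr_y)
xs = np.linspace(0, 1, 100)
plt.plot(xs, poly_model.predict(xs[:, np.newaxis]), 'r--', label='ridge, degree 9')
plt.legend()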
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Underfit
Step2: Overfit
Step3: Just right
Step4: Regularization -- More Data
Step5: You can see above that, just by sampling 90 more data points from our mock function, the 9th-degree polynomial already starts to smooth out considerably. If you grab the size slider and move it to the right, generating more sample data, you'll see the recomputed polynomial that models the data smooth out even more, giving us this regularization effect.
|
12,691
|
<ASSISTANT_TASK:>
Python Code:
path = "./pydata-book/ch02/usagov_bitly_data2012-03-16-1331923249.txt"
open(path).readline()
print(path)
print(type(path))
import json
datach02= [json.loads(line) for line in open(path)]
import json
path = "./pydata-book/ch02/usagov_bitly_data2012-03-16-1331923249.txt"
records = [json.loads(line) for line in open(path)]
records[0]
records[0]['tz']
time_zones = [rec['tz'] for rec in records]
time_zones = [i['tz'] for i in records if 'tz' in i]
time_zones[:2]
time_zones = [rec['tz'] for rec in records if 'tz' in rec]
time_zones[:10]
# This approach keeps the counts in a dictionary while iterating over the time zones:
def get_counts(sequence):
counts = {}
for x in sequence:
if x in counts:
counts[x] += 1
else:
counts[x] = 1
return counts
# Looking back at this code today, I found it hard to follow, especially since in
# the cell below I reuse it and the result is a bit puzzling.
# 11th Jan. 2018
def get_counts(sequence):
counts = {}
for x in sequence:
if x in counts:
counts[x] += 1
else:
counts[x] = 1
return counts
sequence1 = {1, 23, 434, 53, 23, 24}  # note: a set literal, so the duplicate 23 is dropped before counting
a = get_counts(sequence1)
a[23]  # 1, not 2 -- which explains the puzzling result mentioned above
#11th Jan. 2018
from collections import defaultdict
def get_counts2(sequence):
counts = defaultdict(int)  # all values are initialized to 0
for x in sequence:
counts[x] += 1
return counts
def get_counts(sequence):
counts = {}
for x in sequence:
if x in counts:
counts[x] += 1
else:
counts[x] = 1
return counts
counts = get_counts(time_zones)
counts['America/New_York']
len(time_zones)
def top_counts(count_dict, n =10):
value_key_pairs = [(count, tz) for tz, count in count_dict.items()]
value_key_pairs.sort()
return value_key_pairs[-n:]
top_counts(counts)
from collections import Counter
counts = Counter(time_zones)
counts.most_common(10)
from pandas import DataFrame, Series
import pandas as pd; import numpy as np
frame = DataFrame(records)
frame
frame['tz'][:10]
tz_counts = frame['tz'].value_counts()
tz_counts[:10]
clean_tz = frame['tz'].fillna('Missing')
clean_tz[clean_tz == ''] = 'Unknown'
tz_counts = clean_tz.value_counts()
tz_counts[:10]
%matplotlib inline
tz_counts[:10].plot(kind='barh', rot=0)
frame['a'][1]
frame['a'][50]
frame['a'][51]
results = Series([x.split()[0] for x in frame.a.dropna()])
results[:5]
results.value_counts()[:8]
cframe = frame[frame.a.notnull()]
operating_system = np.where(cframe['a'].str.contains('Windows'), 'Windows','Not Windows')
operating_system[:5]  # note: the output of this line differs from the book's
by_tz_os = cframe.groupby(['tz', operating_system])
agg_counts = by_tz_os.size().unstack().fillna(0)
agg_counts[:10]
# used to sort in ascending order
indexer = agg_counts.sum(1).argsort()
indexer[:10]
count_subset = agg_counts.take(indexer)[-10:]
count_subset
%matplotlib inline
normed_subset = count_subset.div(count_subset.sum(1), axis=0)
normed_subset.plot(kind='barh', stacked = True)
import pandas as pd
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
users = pd.read_table('pydata-book/ch02/movielens/users.dat', sep='::',
header=None, names = unames)
rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table('pydata-book/ch02/movielens/ratings.dat', sep='::',
header=None, names = rnames)
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('pydata-book/ch02/movielens/movies.dat', sep='::',
header=None, names = mnames)
users[:5]
ratings[:5]
movies[:5]
ratings[:10]
data = pd.merge(pd.merge(ratings, users), movies)
data[:10]
# The original code in the book was the following; it raises a TypeError in
# current pandas, which no longer accepts the rows=/cols= keywords:
# mean_ratings = data.pivot_table('rating',
#                                 rows='title', cols='gender', aggfunc='mean')
mean_ratings = data.pivot_table('rating', index='title',
columns='gender', aggfunc='mean')
mean_ratings[:5]
ratings_by_title = data.groupby('title').size()
ratings_by_title[0:10]
active_titles = ratings_by_title.index[ratings_by_title >= 250]
active_titles
# The book used mean_ratings.ix, but .ix has since been deprecated and removed:
# mean_ratings = mean_ratings.ix[active_titles]
mean_ratings = mean_ratings.loc[active_titles]
mean_ratings
# sort_index(by=...) from the book is deprecated; use sort_values instead:
# top_female_ratings = mean_ratings.sort_index(by='F', ascending=False)
top_female_ratings = mean_ratings.sort_values(by='F', ascending=False)
top_female_ratings[:10]
mean_ratings['diff'] = mean_ratings['M'] - mean_ratings['F']
# sort_index(by=...) from the book is deprecated; use sort_values instead:
# sorted_by_diff = mean_ratings.sort_index(by='diff')
sorted_by_diff = mean_ratings.sort_values(by='diff')
sorted_by_diff[:15]
sorted_by_diff[::-1][:15]
# Standard deviation of the ratings grouped by movie title
rating_std_by_title = data.groupby('title')['rating'].std()
# Filter down to the titles in active_titles
rating_std_by_title = rating_std_by_title.loc[active_titles]
# Sort the Series by value in descending order
# The book's .order method has since been deprecated:
# rating_std_by_title.order(ascending=False)[:10]
# Recent pandas versions use sort_values instead
rating_std_by_title.sort_values(ascending=False)[:10]
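# Illustrative extra (an addition, not from the book): the same "most divisive
# movies" question answered with a single groupby aggregation.
stats = data.groupby('title')['rating'].agg(['mean', 'std', 'count'])
stats.loc[active_titles].sort_values('std', ascending=False)[:10]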
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python has many built-in and third-party modules for converting a JSON string into a Python dictionary object. Here, I will use the json module and its loads function to load the downloaded data file line by line:
Step2: The last expression above is called a "list comprehension". It is a concise way of applying the same operation (such as json.loads) to a collection of strings (or other objects). Iterating over an open file handle yields a sequence of its lines. Now the records object is a list of Python dictionaries.
Step3: Counting the time zones with pure Python code
Step4: However, we find that the line above raises a KeyError for 'tz'. That's because not every record contains a tz field; to let the program handle this, we need to add an if clause, i.e.
Step5: As we can see above, some of the time zone fields really are empty. Here we introduce two ways of counting the time zones.
Step6: If you know the Python standard library well, the code above can be written even more concisely:
Step7: In both versions above, the code is wrapped in a function. This makes the snippet more reusable and convenient for processing time zones. Here we only need to pass in time_zones:
Step8: To get the top 10 time zones and their counts, we need a few dictionary-handling tricks:
Step9: We can also find the collections.Counter class in the Python standard library, which makes this task even simpler:
Step10: The second approach: counting the time zones with pandas
Step11: Here frame is shown as a summary view, which is mainly used for larger DataFrame objects. The Series returned by frame['tz'] has a value_counts method that gives us the information we need:
Step12: Now we want to generate a plot of this data with matplotlib. To do so, we first fill in a substitute value for the unknown or missing time zones in the records. The fillna function can replace missing (NA) values, while unknown values (empty strings) can be replaced using boolean indexing:
Step13: Using the plot method of the tz_counts object, we can get a horizontal bar chart:
Step14: We can do a lot more with this data. For example, the 'a' field contains information about the browser, device, or application that performed the URL shortening:
Step15: Parsing all the information out of these "agent" strings (the browser's USER-AGENT) would be quite tedious. But once we master Python's built-in string functions and regular expressions, things become much easier.
Step16: Now suppose we want to break down the time zone statistics by Windows and non-Windows users. For simplicity, we assume a user is a Windows user whenever the agent string contains "Windows". Since some agents are missing, we first remove those records from the data:
Step17: Next, compute for each row whether it is Windows based on the 'a' value:
Step18: Now we can group the data by time zone and the newly derived operating-system list:
Step19: Then count the groups with size (similar to the value_counts function above) and reshape the counts with unstack:
Step20: Finally, we select the most frequently occurring time zones. To do this, we construct an indirect index array from the row totals in agg_counts:
Step21: Then we use take to slice out the last 10 rows in that order:
Step22: Here we can generate a bar chart. We will use stacked=True to produce a stacked bar chart:
Step23: All of the methods used here are explained in detail in later chapters of the book. (I think the author should have said this earlier; it kept me from daring to read on, when it turns out this was just one long illustrative example.)
Step24: Note that ages and occupations are given in coded form. See the dataset's README file for their exact meanings.
Step25: Now we can aggregate the ratings data by any combination of user or movie attributes. To compute the mean rating of each movie by gender, we can use the pivot_table method:
Step26: Clearly the code above fails to run; from the error message there is no 'rows' parameter at all. I almost gave up on this code, but kept an open mind and Googled it, finding that someone discussed this problem last year: see the Stack Overflow link.
Step27: The operation above produces another DataFrame containing the mean movie ratings, with movie titles as the rows and gender as the columns. Now we want to filter out movies that received fewer than 250 ratings. To do this, we first group by title and then use size() to get a Series containing the size of each movie's group:
Step28: The resulting index contains the names of movies with at least 250 ratings, and we can use it to select the desired rows from mean_ratings:
Step29: To see which movies female viewers liked most, we can sort the F column in descending order:
Step30: Measuring rating disagreement
Step31: Sorting by 'diff' gives the movies with the greatest rating difference that were preferred by women:
Step32: Reversing the sort order and taking the first 10 rows gives the movies men preferred:
Step33: If we just want the movies with the most disagreement regardless of gender, we can compute the variance or standard deviation of the ratings:
|
12,692
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
        # Sigmoid activation; in Python it can be defined with a lambda expression
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = error * self.weights_hidden_to_output
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error.T * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += np.dot(X[:, None], hidden_error_term)
# Weight step (hidden to output)
delta_weights_h_o += (output_error_term * hidden_outputs)[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * (delta_weights_h_o/n_records) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * (delta_weights_i_h/n_records) # update input-to-hidden weights with gradient descent step
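        # (Added note) delta_weights_* above accumulate gradients over the
        # whole batch, so dividing by n_records applies the mean gradient.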
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
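# Quick illustrative check of the helper with made-up values:
# MSE([1, 2], [1.5, 1.5]) = mean([0.25, 0.25]) = 0.25
print(MSE(np.array([1.0, 2.0]), np.array([1.5, 1.5])))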
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 3500
learning_rate = 0.4
hidden_nodes = 4
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
12,693
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from IPython.display import display, HTML, Image
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
CSS = """
.output {
    flex-direction: row;
}
"""
complete_data = pd.read_csv("../data/Exercises_Summary_Statistics_Data.csv")
complete_data = complete_data.set_index('Patient_id')
complete_data.shape
complete_data.iloc[:, 0:15].head()
male_patients = complete_data[complete_data.Sex == "male"]
female_patients = complete_data[complete_data.Sex == "female"]
# Mean
male_mean_age = male_patients.Age.mean()
female_mean_age = female_patients.Age.mean()
# Median
male_median_age = male_patients.Age.median()
female_median_age = female_patients.Age.median()
# Std
male_std_age = male_patients.Age.std()
female_std_age = female_patients.Age.std()
print("The male mean age is:", male_mean_age, "The median age is:", male_median_age, \
"and the standard dev is:", male_std_age)
print("The female mean age is:", female_mean_age, "The median age is:", female_median_age, \
"and the standard dev is:", female_std_age)
display(male_patients.Age.quantile(q=[0,1/4,1/2,3/4,1]))
display(female_patients.Age.quantile(q=[0,1/4,1/2,3/4,1]))
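# Optional visual check (illustrative addition): seaborn is already imported
# above, so the age distributions by sex can be compared at a glance.
sns.boxplot(x='Sex', y='Age', data=complete_data)
plt.show()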
# Let's first remove the control patients. Those patients don't have a result since they weren't injured.
patient_data = complete_data[~complete_data.Group.isin(["Control"])]
patient_data.Result.unique()
patient_data.Result.value_counts()
patients_death = patient_data[patient_data.Result == "09: Death"]
patients_alive = patient_data[patient_data.Result != "09: Death"]
gene_names = ["Gene1", "Gene2", "Gene3", "Gene4", "Gene5", "Gene6"]
display(patients_death[gene_names].describe())
display(patients_alive[gene_names].describe())
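# Hedged follow-up sketch (assumes scipy is installed): test whether the
# Gene4 difference between outcome groups is statistically meaningful.
from scipy import stats
t, p = stats.ttest_ind(patients_death['Gene4'].dropna(),
                       patients_alive['Gene4'].dropna(), equal_var=False)
print('Gene4 Welch t-test: t = {:.2f}, p = {:.4f}'.format(t, p))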
HTML('<style>{}</style>'.format(CSS))
display(Image('./profileGraph.png', width=2000, unconfined=True))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summary Statistics - Examples
Step2: The dimensions of the dataset are
Step3: Let's take a look
Step4: For those without biological background
Step5: Quantiles
Step6: There is almost no difference between the sexes! Really strange to see such close numbers...
Step7: Ok, we have 8 types of outcomes for the patients. One of them is control, ignore that, it's a problem with the dataset.
Step8: Ok, so, good news, most of our patients survived the injury!
Step9: Looking at the mean, Gene4 seems to be a good one to predict the death of the patient, since it is much higher in the dead patients than in the alive ones.
|
12,694
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Imports and Utility Functions
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from collections import Counter
import gin
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from meta_dataset.data import config
from meta_dataset.data import dataset_spec as dataset_spec_lib
from meta_dataset.data import learning_spec
from meta_dataset.data import pipeline
def plot_episode(support_images, support_class_ids, query_images,
query_class_ids, size_multiplier=1, max_imgs_per_col=10,
max_imgs_per_row=10):
for name, images, class_ids in zip(('Support', 'Query'),
(support_images, query_images),
(support_class_ids, query_class_ids)):
n_samples_per_class = Counter(class_ids)
n_samples_per_class = {k: min(v, max_imgs_per_col)
for k, v in n_samples_per_class.items()}
id_plot_index_map = {k: i for i, k
in enumerate(n_samples_per_class.keys())}
num_classes = min(max_imgs_per_row, len(n_samples_per_class.keys()))
max_n_sample = max(n_samples_per_class.values())
figwidth = max_n_sample
figheight = num_classes
if name == 'Support':
print('#Classes: %d' % len(n_samples_per_class.keys()))
figsize = (figheight * size_multiplier, figwidth * size_multiplier)
fig, axarr = plt.subplots(
figwidth, figheight, figsize=figsize)
fig.suptitle('%s Set' % name, size='20')
fig.tight_layout(pad=3, w_pad=0.1, h_pad=0.1)
reverse_id_map = {v: k for k, v in id_plot_index_map.items()}
for i, ax in enumerate(axarr.flat):
ax.patch.set_alpha(0)
# Print the class ids, this is needed since, we want to set the x axis
# even there is no picture.
ax.set(xlabel=reverse_id_map[i % figheight], xticks=[], yticks=[])
ax.label_outer()
for image, class_id in zip(images, class_ids):
# First decrement by one to find last spot for the class id.
n_samples_per_class[class_id] -= 1
# If class column is filled or not represented: pass.
if (n_samples_per_class[class_id] < 0 or
id_plot_index_map[class_id] >= max_imgs_per_row):
continue
# If width or height is 1, then axarr is a vector.
if axarr.ndim == 1:
ax = axarr[n_samples_per_class[class_id]
if figheight == 1 else id_plot_index_map[class_id]]
else:
ax = axarr[n_samples_per_class[class_id], id_plot_index_map[class_id]]
ax.imshow(image / 2 + 0.5)
plt.show()
def plot_batch(images, labels, size_multiplier=1):
num_examples = len(labels)
figwidth = np.ceil(np.sqrt(num_examples)).astype('int32')
figheight = num_examples // figwidth
figsize = (figwidth * size_multiplier, (figheight + 1.5) * size_multiplier)
_, axarr = plt.subplots(figwidth, figheight, dpi=300, figsize=figsize)
for i, ax in enumerate(axarr.transpose().ravel()):
# Images are between -1 and 1.
ax.imshow(images[i] / 2 + 0.5)
ax.set(xlabel=labels[i], xticks=[], yticks=[])
plt.show()
# 1
BASE_PATH = '/path/to/records'
GIN_FILE_PATH = 'meta_dataset/learn/gin/setups/data_config.gin'
# 2
gin.parse_config_file(GIN_FILE_PATH)
# 3
# Comment out to disable eager execution.
tf.enable_eager_execution()
# 4
def iterate_dataset(dataset, n):
if not tf.executing_eagerly():
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
with tf.Session() as sess:
for idx in range(n):
yield idx, sess.run(next_element)
else:
for idx, episode in enumerate(dataset):
if idx == n:
break
yield idx, episode
# 5
SPLIT = learning_spec.Split.TRAIN
ALL_DATASETS = ['aircraft', 'cu_birds', 'dtd', 'fungi', 'ilsvrc_2012',
'omniglot', 'quickdraw', 'vgg_flower']
all_dataset_specs = []
for dataset_name in ALL_DATASETS:
dataset_records_path = os.path.join(BASE_PATH, dataset_name)
dataset_spec = dataset_spec_lib.load_dataset_spec(dataset_records_path)
all_dataset_specs.append(dataset_spec)
use_bilevel_ontology_list = [False]*len(ALL_DATASETS)
use_dag_ontology_list = [False]*len(ALL_DATASETS)
# Enable ontology aware sampling for Omniglot and ImageNet.
use_bilevel_ontology_list[5] = True
use_dag_ontology_list[4] = True
variable_ways_shots = config.EpisodeDescriptionConfig(
num_query=None, num_support=None, num_ways=None)
dataset_episodic = pipeline.make_multisource_episode_pipeline(
dataset_spec_list=all_dataset_specs,
use_dag_ontology_list=use_dag_ontology_list,
use_bilevel_ontology_list=use_bilevel_ontology_list,
episode_descr_config=variable_ways_shots,
split=SPLIT,
image_size=84,
shuffle_buffer_size=300)
# 1
idx, (episode, source_id) = next(iterate_dataset(dataset_episodic, 1))
print('Got an episode from dataset:', all_dataset_specs[source_id].name)
# 2
for t, name in zip(episode,
['support_images', 'support_labels', 'support_class_ids',
'query_images', 'query_labels', 'query_class_ids']):
print(name, t.shape)
# 3
episode = [a.numpy() for a in episode]
# 4
support_class_ids, query_class_ids = episode[2], episode[5]
print(Counter(support_class_ids))
print(Counter(query_class_ids))
# 1
N_EPISODES=2
# 2, 3
for idx, (episode, source_id) in iterate_dataset(dataset_episodic, N_EPISODES):
print('Episode id: %d from source %s' % (idx, all_dataset_specs[source_id].name))
episode = [a.numpy() for a in episode]
plot_episode(support_images=episode[0], support_class_ids=episode[2],
query_images=episode[3], query_class_ids=episode[5])
BATCH_SIZE = 16
ADD_DATASET_OFFSET = True
dataset_batch = pipeline.make_multisource_batch_pipeline(
dataset_spec_list=all_dataset_specs, batch_size=BATCH_SIZE, split=SPLIT,
image_size=84, add_dataset_offset=ADD_DATASET_OFFSET,
shuffle_buffer_size=1000)
for idx, ((images, labels), source_id) in iterate_dataset(dataset_batch, 1):
print(images.shape, labels.shape)
N_BATCH = 2
for idx, (batch, source_id) in iterate_dataset(dataset_batch, N_BATCH):
print('Batch-%d from source %s' % (idx, all_dataset_specs[source_id].name))
plot_batch(*map(lambda a: a.numpy(), batch), size_multiplier=0.5)
#1
NUM_WAYS = 8
NUM_SUPPORT = 3
NUM_QUERY = 5
fixed_ways_shots = config.EpisodeDescriptionConfig(
num_ways=NUM_WAYS, num_support=NUM_SUPPORT, num_query=NUM_QUERY)
#2
use_bilevel_ontology_list = [False]*len(ALL_DATASETS)
use_dag_ontology_list = [False]*len(ALL_DATASETS)
quickdraw_spec = [all_dataset_specs[6]]
#3
dataset_fixed = pipeline.make_multisource_episode_pipeline(
dataset_spec_list=quickdraw_spec, use_dag_ontology_list=[False],
use_bilevel_ontology_list=use_bilevel_ontology_list, split=SPLIT,
image_size=84, episode_descr_config=fixed_ways_shots)
N_EPISODES = 2
for idx, (episode, source_id) in iterate_dataset(dataset_fixed, N_EPISODES):
print('Episode id: %d from source %s' % (idx, quickdraw_spec[source_id].name))
episode = [a.numpy() for a in episode]
plot_episode(support_images=episode[0], support_class_ids=episode[2],
query_images=episode[3], query_class_ids=episode[5])
import torch
# 1
to_torch_labels = lambda a: torch.from_numpy(a.numpy()).long()
to_torch_imgs = lambda a: torch.from_numpy(np.transpose(a.numpy(), (0, 3, 1, 2)))
# 2
def data_loader(n_batches):
for i, (e, _) in enumerate(dataset_episodic):
if i == n_batches:
break
yield (to_torch_imgs(e[0]), to_torch_labels(e[1]),
to_torch_imgs(e[3]), to_torch_labels(e[4]))
for i, batch in enumerate(data_loader(n_batches=2)):
#3
data_support, labels_support, data_query, labels_query = [x.cuda() for x in batch]
print(data_support.shape, labels_support.shape, data_query.shape, labels_query.shape)
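    # Hedged illustration (not part of the original): a nearest-centroid
    # classifier on raw pixels for this episode. A real few-shot model would
    # embed the images first; this only shows how the episode tensors fit.
    flat_s = data_support.view(data_support.size(0), -1)
    flat_q = data_query.view(data_query.size(0), -1)
    n_way = int(labels_support.max().item()) + 1
    protos = torch.stack([flat_s[labels_support == c].mean(0) for c in range(n_way)])
    pred = torch.cdist(flat_q, protos).argmin(dim=1)
    print('episode accuracy:', (pred == labels_query).float().mean().item())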
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Meta-Dataset Data Pipeline
Step2: Primers
Step3: Reading datasets
Step4: (1) Episodic Mode
Step5: Using Dataset
Step6: Visualizing Episodes
Step7: (2) Batch Mode
Step8: (3) Fixing Ways and Shots
Step9: (4) Using Meta-dataset with PyTorch
|
12,695
|
<ASSISTANT_TASK:>
Python Code:
# !pip install cloudmlmagic
%load_ext cloudmlmagic
%%ml_init -projectId PROJECTID -bucket BUCKET -scaleTier BASIC_GPU -region asia-east1 -runtimeVersion 1.2
{'install_requires': ['keras', 'h5py', 'Pillow']}
%%ml_code
from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(weights='imagenet')
%%ml_code
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input, decode_predictions
from io import BytesIO
import numpy as np
import pandas as pd
import requests
url = 'https://github.com/hayatoy/deep-learning-datasets/releases/download/v0.1/tl_opera_capitol.npz'
response = requests.get(url)
dataset = np.load(BytesIO(response.content))
X_dataset = dataset['features']
y_dataset = dataset['labels']
%%ml_code
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
X_dataset = preprocess_input(X_dataset)
y_dataset = np_utils.to_categorical(y_dataset)
X_train, X_test, y_train, y_test = train_test_split(
X_dataset, y_dataset, test_size=0.2, random_state=42)
x = X_dataset[0]
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
print('Predicted:')
for p in decode_predictions(preds, top=5)[0]:
print("Score {}, Label {}".format(p[2], p[1]))
pd.DataFrame(model.layers).tail()
%%ml_code
from keras.models import Model
# Intermediate layer
intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[311].output)
x = np.expand_dims(X_dataset[0], axis=0)
feature = intermediate_layer_model.predict(x)
pd.DataFrame(feature.reshape(-1,1)).plot(figsize=(12, 3))
%%ml_code
from keras.layers import Dense
# Append dense layer
x = intermediate_layer_model.output
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)
# Transfer learning model, all layers are trainable at this moment
transfer_model = Model(inputs=intermediate_layer_model.input, outputs=predictions)
print(pd.DataFrame(transfer_model.layers).tail())
# Freeze all layers
for layer in transfer_model.layers:
layer.trainable = False
# Unfreeze the last layers, so that only these layers are trainable.
transfer_model.layers[312].trainable = True
transfer_model.layers[313].trainable = True
transfer_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
%%ml_run cloud
import tensorflow as tf
from keras import backend as K
transfer_model.fit(X_train, y_train, epochs=20,
verbose=2,
validation_data=(X_test, y_test))
loss, acc = transfer_model.evaluate(X_test, y_test)
print('Loss {}, Accuracy {}'.format(loss, acc))
K.set_learning_phase(0) # test
sess = K.get_session()
from tensorflow.python.framework import graph_util
# Make GraphDef of Transfer Model
g_trans = sess.graph
g_trans_def = graph_util.convert_variables_to_constants(sess,
g_trans.as_graph_def(),
[transfer_model.output.name.replace(':0','')])
# Image Converter Model
with tf.Graph().as_default() as g_input:
input_b64 = tf.placeholder(shape=(1,), dtype=tf.string, name='input')
input_bytes = tf.decode_base64(input_b64[0])
image = tf.image.decode_image(input_bytes)
image_f = tf.image.convert_image_dtype(image, dtype=tf.float32)
input_image = tf.expand_dims(image_f, 0)
output = tf.identity(input_image, name='input_image')
g_input_def = g_input.as_graph_def()
with tf.Graph().as_default() as g_combined:
x = tf.placeholder(tf.string, name="input_b64")
im, = tf.import_graph_def(g_input_def,
input_map={'input:0': x},
return_elements=["input_image:0"])
pred, = tf.import_graph_def(g_trans_def,
input_map={transfer_model.input.name: im,
'batch_normalization_1/keras_learning_phase:0': False},
return_elements=[transfer_model.output.name])
with tf.Session() as sess2:
inputs = {"inputs": tf.saved_model.utils.build_tensor_info(x)}
outputs = {"outputs": tf.saved_model.utils.build_tensor_info(pred)}
signature = tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)
# save as SavedModel
b = tf.saved_model.builder.SavedModelBuilder('gs://BUCKET/keras-mlengine/savedmodel')
b.add_meta_graph_and_variables(sess2,
[tf.saved_model.tag_constants.SERVING],
signature_def_map={'serving_default': signature})
b.save()
# This cell is to prevent "runAll".
# you must wait until ML Engine job finishes
raise Exception('wait until ml engine job finishes..')
# !gcloud ml-engine models create OperaCapitol
!gcloud ml-engine versions create v1 --model OperaCapitol --runtime-version 1.2 --origin gs://BUCKET/keras-mlengine/savedmodel
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
PROJECTID = 'PROJECTID'
projectID = 'projects/{}'.format(PROJECTID)
modelName = 'OperaCapitol'
modelID = '{}/models/{}'.format(projectID, modelName)
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)
with open('opera.jpg', 'rb') as f:
b64_x = f.read()
import base64
import json
b64_x = base64.urlsafe_b64encode(b64_x)
input_instance = dict(inputs=b64_x)
input_instance = json.loads(json.dumps(input_instance))
request_body = {"instances": [input_instance]}
request = ml.projects().predict(name=modelID, body=request_body)
try:
response = request.execute()
except errors.HttpError as err:
# Something went wrong with the HTTP transaction.
# To use logging, you need to 'import logging'.
print('There was an HTTP error during the request:')
print(err._get_reason())
response
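# Illustrative parsing of the reply (the exact structure is assumed from the
# 'outputs' tensor named in the SavedModel signature above):
scores = response['predictions'][0]['outputs']
print('Class 0 score: {:.3f}, Class 1 score: {:.3f}'.format(scores[0], scores[1]))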
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load cloudmlmagic extension
Step2: Initialize and setup ML Engine parameters.
Step3: Load InceptionV3 model
Step4: Load dataset
Step5: Split dataset for train and test
Step6: The code cell above won't be included in the package being deployed on ML Engine.
Step7: Visualize last layers of InceptionV3
Step8: Extract intermediate features
Step9: Append dense layer at the last
Step10: Create Model and Version for Online Prediction
Step11: Let's classify this image! This must be class 0..
|
12,696
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(0)
data = np.random.randint(40, 100, size=(5, 5))
data
data.mean()
data.std()
# X - mean
dev_arr = data - data.mean()
dev_arr
# ( X - mean )^2
dev_arr ** 2
# sum( ( X - mean )^2 ) / N
a = (dev_arr ** 2 ).sum() / 25
a
np.sqrt(a)
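# Sanity check: the manual computation should match NumPy's built-in std
print(np.isclose(np.sqrt(a), data.std()))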
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mean
Step2: Standard deviation
|
12,697
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
symbols = [np.exp(1j * np.pi * (2*k+1) / 4) for k in range(4)]
sigma = 1/3
size = 10000  # number of symbols in the simulation
# Transmit signal
s = np.random.choice(symbols, size)
# Noise
n = np.random.normal(0, sigma, size) + 1j * np.random.normal(0, sigma, size)
# Received signal
r = s + n
# Indices of the correctly received bits
bit_1_correct = np.sign(r.real) == np.sign(s.real)
bit_2_correct = np.sign(r.imag) == np.sign(s.imag)
# Indices of the correctly received symbols
correct = np.logical_and(bit_1_correct, bit_2_correct)
def plot_complex(data, *args, subplot=111, **kwargs):
ax = fig.add_subplot(subplot)
    ax.set_xlabel('In-phase component')
    ax.set_ylabel('Quadrature component')
    ax.axis('equal'); ax.axis((-2, 2, -2, 2))  # ax.hold was removed from matplotlib; overplotting is now the default
ax.plot(data.real, data.imag, *args, **kwargs)
return ax
fig = plt.figure(figsize=(14, 6), facecolor='w')
plot_complex(s, 'k.', markersize=10, subplot=121)
plot_complex(n, '.', subplot=122, alpha=0.5);
fig = plt.figure(figsize=(14, 6), facecolor='w')
ax = plot_complex(s, 'k.', subplot=121, markersize=10, zorder=10)
for symbol in symbols:
ax.add_artist(plt.Circle(
(symbol.real, symbol.imag), radius=3 * sigma, color='k', fill=None))
plot_complex(r, 'b.', subplot=122, alpha=0.5);
fig = plt.figure(figsize=(6, 6), facecolor='w')
ax = plot_complex(s, 'k.', markersize=10, zorder=10)
ax.plot((-2, 2), (0, 0), 'g', alpha=0.5) # decision boundaries
ax.plot((0, 0), (-2, 2), 'g', alpha=0.5)
ax.plot(r[ correct].real, r[ correct].imag, 'b.', alpha=0.5)
ax.plot(r[~correct].real, r[~correct].imag, 'r.');
symbol_errors = np.sum(~correct)
bit_errors = np.sum(~bit_1_correct) + np.sum(~bit_2_correct)
print("{} Fehler in {} Symbolen\n{} Fehler in {} Bits".format(
symbol_errors, size, bit_errors, 2 * size))
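# Hedged comparison with theory (assumes scipy is available): each QPSK
# coordinate is +-1/sqrt(2), so a bit flips when the matching noise component
# exceeds 1/sqrt(2), i.e. P_bit = 0.5 * erfc(1 / (2 * sigma)).
from scipy.special import erfc
p_bit = 0.5 * erfc(1 / (2 * sigma))
print("theoretical BER ~ {:.4f}, measured ~ {:.4f}".format(p_bit, bit_errors / (2 * size)))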
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transmit symbols and noise
Step2: Received signal
Step3: Results
Step4: Transmission errors
|
12,698
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.options.display.max_rows = 1000
pd.options.display.max_columns = 60
#utils.py is where all our custom functions live is we set an autoreload on it.
%aimport utils
from utils import *
%aimport viz_utils
from viz_utils import *
df_all = pd.read_excel('all data v3.xlsx', 'iLab data.txt', index_col=None, na_values=['NA'])
df_test = prepare_session(df_all,'L-2567b17a:120eda25685:-8000')
df_gaps = prepare_session(df_all,'L-10f11766:120ecd4f63a:-8000')
%aimport viz_utils
plot(df_gaps,to_plot,colors, column_to_use, function_to_use)
%aimport viz_utils
plot(df_test,to_plot,colors, column_to_use, function_to_use)
df_test2 = pd.read_excel('all_method_tester.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
%autoreload
REGEX_SINGLE_VALUE_FIRST = "st\d \d(?:$|(?:\sst)|(?:\s[\-\+x/]\s[A-Z]))"
REGEX_SINGLE_VALUE_SECOND = "st\d [A-Z][\sa-z]+ [\-\+x/] \d(?:$|(?:\s?st))"
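# Illustrative check of the first pattern on a made-up string (not real data):
import re
print(re.findall(REGEX_SINGLE_VALUE_FIRST, "st1 3 st2 Foo + 4"))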
def single_value_usage(df):
    usage = []
    # Bug fix (assumed intent): the first call's result was assigned to an
    # unused variable, dropping method-1 matches for the first regex.
    usage.extend(action_usage(df, 'Cleaned method 1', REGEX_SINGLE_VALUE_FIRST))
usage.extend(action_usage(df,'Cleaned method 2',REGEX_SINGLE_VALUE_FIRST))
usage.extend(action_usage(df,'Cleaned method 1',REGEX_SINGLE_VALUE_SECOND))
usage.extend(action_usage(df,'Cleaned method 2',REGEX_SINGLE_VALUE_SECOND))
return clean_coords(usage)
single_value_usage(df_test2)
%aimport viz_utils
plot(df_test2,to_plot,colors, column_to_use, function_to_use)
# #Using the example used for sketch.
# def export_df(df,name):
# select_df = df[["Session Id","Selection","Feedback Text","Cleaned method 1","Cleaned method 2","cases","Time_seconds","Timeshifted","Duration"]]
# writer = pd.ExcelWriter(name+'.xlsx')
# select_df.to_excel(writer,'Sheet1')
# writer.save()
#
# export_df(df_gaps,'gaps')
# export_df(df_test,'test')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: Preparing a test sample
Step3: Plotting the data
Step4: Session with range and extrapolated range
Step5: Testing
Step6: TO DO
|
12,699
|
<ASSISTANT_TASK:>
Python Code:
import em1ds as zpic
electrons = zpic.Species( "electrons", -1.0, ppc = 64, uth=[0.005,0.005,0.005])
sim = zpic.Simulation( nx = 1000, box = 100.0, dt = 0.05, species = electrons )
#Bz0 = 0.5
Bz0 = 1.0
#Bz0 = 4.0
sim.emf.set_ext_fld('uniform', B0= [0.0, 0.0, Bz0])
import numpy as np
niter = 1000
Ey_t = np.zeros((niter,sim.nx))
Ez_t = np.zeros((niter,sim.nx))
print("\nRunning simulation up to t = {:g} ...".format(niter * sim.dt))
while sim.n < niter:
print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r')
Ey_t[sim.n,:] = sim.emf.Ey
Ez_t[sim.n,:] = sim.emf.Ez
sim.iter()
print("\nDone.")
import matplotlib.pyplot as plt
iter = sim.n//2
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ez_t[iter,:], label = "$E_z$")
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ey_t[iter,:], label = "$E_y$")
plt.grid(True)
plt.xlabel("$x_1$ [$c/\omega_n$]")
plt.ylabel("$E$ field []")
plt.title("$E_z$, $E_y$, t = {:g}".format( iter * sim.dt))
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ez_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ez_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-4, vmax = 0.1),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
plt.colorbar().set_label('$|FFT(E_z)|^2$')
# Theoretical prediction
k = np.linspace(-k_max, k_max, num = 512)
plt.plot( k, np.sqrt( 1 + k**2), label = "theoretical", ls = "--" )
plt.ylim(0,12)
plt.xlim(0,12)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("O-Wave dispersion relation")
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import matplotlib.colors as colors
win = np.hanning(niter)
for i in range(sim.nx):
Ey_t[:,i] *= win
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
sp = np.abs( np.fft.fft2(Ey_t))**2
sp = np.fft.fftshift( sp )
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-4, vmax = 0.1),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
plt.colorbar().set_label('$|FFT(E_y)|^2$')
k = np.linspace(-k_max, k_max, num = 512)
wa=np.sqrt((k**2+Bz0**2+2-np.sqrt(k**4-2*k**2*Bz0**2+Bz0**4+4*Bz0**2))/2)
wb=np.sqrt((k**2+Bz0**2+2+np.sqrt(k**4-2*k**2*Bz0**2+Bz0**4+4*Bz0**2))/2)
plt.plot( k,wb, label = 'theoretical $\omega_+$', color = 'r', ls = "--" )
plt.plot( k,wa, label = 'theoretical $\omega_-$', color = 'b', ls = "--" )
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("X-wave dispersion relation")
plt.legend()
plt.ylim(0,12)
plt.xlim(0,12)
plt.show()
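# Hedged cross-check: at k = 0 the two X-wave branches should start at the
# R/L cutoff frequencies, omega_{R,L} = (sqrt(Bz0**2 + 4) +/- Bz0) / 2 in
# these normalized units (omega_p = 1).
w_R = (np.sqrt(Bz0**2 + 4) + Bz0) / 2
w_L = (np.sqrt(Bz0**2 + 4) - Bz0) / 2
print("omega_R = {:.3f}, omega_L = {:.3f}".format(w_R, w_L))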
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM fields $E_y$ (X-wave) and $E_z$ (O-wave) at every timestep so we can analyze them later
Step2: EM Waves
Step3: O-Wave
Step4: X-wave
|