<ASSISTANT_TASK:>
Python Code:
import quantecon as qe
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(suppress=True, precision=4)
# Possible parameters
# alpha, beta, rho1, rho2, sigma
params = [[10.0, 0.95, 1.2, -0.3, 1.0],
[10.0, 0.95, 0.9, 0.0, 1.0],
[10.0, 0.95, 0.0, -0.0, 10.0]]
# Set parameters
alpha, beta, rho1, rho2, sigma = params[1]
# Note: LinearStateSpace object runs into iteration limit in computing stationary variance when we set
# sigma = .5 -- replace with doublej2 to fix this. Do some more testing
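The comment above mentions swapping in a doubling routine (doublej2) that is not shown here. As a hedge, here is a minimal sketch of the classic doubling iteration for the discrete Lyapunov equation Sigma = A Sigma A' + V; the function name and interface are illustrative, not QuantEcon's actual API:

```python
import numpy as np

def doublej(A, V, nit=60):
    """Solve Sigma = A Sigma A' + V (discrete Lyapunov) by doubling.

    Accumulates Sigma = sum_k A^k V (A^k)' in log2-many passes;
    requires the spectral radius of A to be below one.
    """
    a, g = A.copy(), V.copy()
    for _ in range(nit):
        g = g + a @ g @ a.T
        a = a @ a
    return g
```

Because the sum is accumulated in doubling steps, this converges in far fewer iterations than the naive fixed-point recursion that can hit the iteration limit noted above.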
R = 1/beta
A = np.array([[1., 0., 0.],
[alpha, rho1, rho2],
[0., 1., 0.]])
C = np.array([[0.], [sigma], [0.]])
G = np.array([[0., 1., 0.]])
# for later use, form LinearStateSpace system and pull off steady state moments
mu_z0 = np.array([[1.0], [0.0], [0.0]])
sig_z0 = np.zeros((3, 3))
Lz = qe.LinearStateSpace(A, C, G, mu_0=mu_z0, Sigma_0=sig_z0)
muz, muy, Sigz, Sigy = Lz.stationary_distributions()
# mean vector of state for the savings problem
mxo = np.vstack([muz, 0.0])
# create stationary covariance matrix of x -- start everyone off at b=0
a1 = np.zeros((3, 1))
aa = np.hstack([Sigz, a1])
bb = np.zeros((1, 4))
sxo = np.vstack([aa, bb])
# These choices will initialize the state vector of an individual at zero debt
# and the ergodic distribution of the endowment process. Use these to create
# the Bewley economy.
mxbewley = mxo
sxbewley = sxo
#
# Here we create the matrices for our system
#
A12 = np.zeros((3,1))
ALQ_l = np.hstack([A, A12])
ALQ_r = np.array([[0, -R, 0, R]])
ALQ = np.vstack([ALQ_l, ALQ_r])
RLQ = np.array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1e-9]])
QLQ = np.array([1.0])
BLQ = np.array([0., 0., 0., R]).reshape(4,1)
CLQ = np.array([0., sigma, 0., 0.]).reshape(4,1)
betaLQ = beta
print("We can inspect the matrices that describe our system below")
print("A = \n", ALQ)
print("B = \n", BLQ)
print("R = \n", RLQ)
print("Q = \n", QLQ)
LQPI = qe.LQ(QLQ, RLQ, ALQ, BLQ, C=CLQ, beta=betaLQ)
P, F, d = LQPI.stationary_values() # Compute optimal value function and decision rule
ABF = ALQ - np.dot(BLQ,F) # Form closed loop system
# Use the above formulas to create the optimal policies for $b_{t+1}$ and $c_t$
b_pol = np.dot(G, la.inv(np.eye(3, 3) - beta*A)).dot(A - np.eye(3, 3))
c_pol = (1 - beta)*np.dot(G, la.inv(np.eye(3, 3) - beta*A))
#Create the A matrix for a LinearStateSpace instance
A_LSS1 = np.vstack([A, b_pol])
A_LSS2 = np.eye(4, 1, -3)
A_LSS = np.hstack([A_LSS1, A_LSS2])
# Create the C matrix for LSS methods
C_LSS = np.vstack([C, np.zeros(1)])
# Create the G matrix for LSS methods
G_LSS1 = np.vstack([G, c_pol])
G_LSS2 = np.vstack([np.zeros(1), -(1 - beta)])
G_LSS = np.hstack([G_LSS1, G_LSS2])
# use the following values to start everyone off at b=0, initial incomes zero
# Initial Conditions
mu_0 = np.array([1., 0., 0., 0.])
sigma_0 = np.zeros((4, 4))
ABF - A_LSS
print(c_pol, "\n", -F)
LSS = qe.LinearStateSpace(A_LSS, C_LSS, G_LSS, mu_0=mu_0, Sigma_0=sigma_0)
def income_consumption_debt_series(A, C, G, m0, s0, T=150, npaths=25):
    """
    This function takes initial conditions (m0, s0) and uses the Linear State Space
    class from QuantEcon to simulate an economy `npaths` times for `T` periods.
    It then uses that information to generate some graphs related to the discussion
    below.
    """
    LSS = qe.LinearStateSpace(A, C, G, mu_0=m0, Sigma_0=s0)
    # Simulation/Moment Parameters
    moment_generator = LSS.moment_sequence()
    # Simulate various paths
    bsim = np.empty((npaths, T))
    csim = np.empty((npaths, T))
    ysim = np.empty((npaths, T))
    for i in range(npaths):
        sims = LSS.simulate(T)
        bsim[i, :] = sims[0][-1, :]
        csim[i, :] = sims[1][1, :]
        ysim[i, :] = sims[1][0, :]
    # Get the moments
    cons_mean = np.empty(T)
    cons_var = np.empty(T)
    debt_mean = np.empty(T)
    debt_var = np.empty(T)
    for t in range(T):
        mu_x, mu_y, sig_x, sig_y = next(moment_generator)
        cons_mean[t], cons_var[t] = mu_y[1], sig_y[1, 1]
        debt_mean[t], debt_var[t] = mu_x[3], sig_x[3, 3]
    return bsim, csim, ysim, cons_mean, cons_var, debt_mean, debt_var
def consumption_income_debt_figure(bsim, csim, ysim):
    # Get T
    T = bsim.shape[1]
    # Create first figure
    fig, ax = plt.subplots(2, 1, figsize=(10, 8))
    xvals = np.arange(T)
    # Plot consumption and income
    ax[0].plot(csim[0, :], label="c", color="b")
    ax[0].plot(ysim[0, :], label="y", color="g")
    ax[0].plot(csim.T, alpha=.1, color="b")
    ax[0].plot(ysim.T, alpha=.1, color="g")
    ax[0].legend(loc=4)
    ax[0].set_xlabel("t")
    ax[0].set_ylabel("y and c")
    # Plot debt
    ax[1].plot(bsim[0, :], label="b", color="r")
    ax[1].plot(bsim.T, alpha=.1, color="r")
    ax[1].legend(loc=4)
    ax[1].set_xlabel("t")
    ax[1].set_ylabel("debt")
    fig.suptitle("Nonfinancial Income, Consumption, and Debt")
    return fig
def consumption_debt_fanchart(csim, cons_mean, cons_var,
                              bsim, debt_mean, debt_var):
    # Get T
    T = bsim.shape[1]
    # Create Percentiles of cross-section distributions
    cmean = np.mean(cons_mean)
    c90 = 1.65*np.sqrt(cons_var)
    c95 = 1.96*np.sqrt(cons_var)
    c_perc_95p, c_perc_95m = cons_mean + c95, cons_mean - c95
    c_perc_90p, c_perc_90m = cons_mean + c90, cons_mean - c90
    # Create Percentiles of cross-section distributions
    dmean = np.mean(debt_mean)
    d90 = 1.65*np.sqrt(debt_var)
    d95 = 1.96*np.sqrt(debt_var)
    d_perc_95p, d_perc_95m = debt_mean + d95, debt_mean - d95
    d_perc_90p, d_perc_90m = debt_mean + d90, debt_mean - d90
    # Create second figure
    fig2, ax2 = plt.subplots(2, 1, figsize=(10, 8))
    xvals = np.arange(T)
    # Consumption fan
    ax2[0].plot(xvals, cons_mean, color="k")
    ax2[0].plot(csim.T, color="k", alpha=.25)
    ax2[0].fill_between(xvals, c_perc_95m, c_perc_95p, alpha=.25, color="b")
    ax2[0].fill_between(xvals, c_perc_90m, c_perc_90p, alpha=.25, color="r")
    ax2[0].set_ylim((cmean-15, cmean+15))
    ax2[0].set_ylabel("consumption")
    # Debt fan
    ax2[1].plot(xvals, debt_mean, color="k")
    ax2[1].plot(bsim.T, color="k", alpha=.25)
    ax2[1].fill_between(xvals, d_perc_95m, d_perc_95p, alpha=.25, color="b")
    ax2[1].fill_between(xvals, d_perc_90m, d_perc_90p, alpha=.25, color="r")
    # ax2[1].set_ylim()
    ax2[1].set_ylabel("debt")
    fig2.suptitle("Consumption/Debt over time")
    ax2[1].set_xlabel("t")
    return fig2
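The 1.65 and 1.96 multipliers in the fan chart above are the standard normal quantiles behind two-sided 90% and 95% intervals; a quick stdlib check:

```python
from statistics import NormalDist

# Two-sided interval multipliers used in the fan chart:
# a 90% band uses the 0.95 quantile, a 95% band the 0.975 quantile.
z90 = NormalDist().inv_cdf(0.95)   # ~1.645, rounded to 1.65 above
z95 = NormalDist().inv_cdf(0.975)  # ~1.960
```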
# Creates pictures with initial conditions of 0.0 for y and b
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, mu_0, sigma_0)
bsim0, csim0, ysim0 = out[:3]
cons_mean0, cons_var0, debt_mean0, debt_var0 = out[3:]
fig_0 = consumption_income_debt_figure(bsim0, csim0, ysim0)
fig_02 = consumption_debt_fanchart(csim0, cons_mean0, cons_var0,
bsim0, debt_mean0, debt_var0)
fig_0.show()
fig_02.show()
def cointegration_figure(bsim, csim):
    """
    Plots the cointegration
    """
    # Create figure
    fig, ax = plt.subplots(figsize=(10, 8))
    ax.plot((1-beta)*bsim[0, :] + csim[0, :], color="k")
    ax.plot((1-beta)*bsim.T + csim.T, color="k", alpha=.1)
    fig.suptitle("Cointegration of Assets and Consumption")
    ax.set_xlabel("t")
    ax.set_ylabel("")
    return fig
fig = cointegration_figure(bsim0, csim0)
fig.show()
# Creates pictures with b = 0 and y drawn from the invariant distribution
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, mxbewley, sxbewley)
bsimb, csimb, ysimb = out[:3]
cons_meanb, cons_varb, debt_meanb, debt_varb = out[3:]
fig_0 = consumption_income_debt_figure(bsimb, csimb, ysimb)
fig_02 = consumption_debt_fanchart(csimb, cons_meanb, cons_varb,
bsimb, debt_meanb, debt_varb)
fig = cointegration_figure(bsimb, csimb)
fig.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plan of the notebook
Step2: It turns out that the bliss level of consumption $\gamma$ in the utility function $-.5 (c_t -\gamma)^2$
Step3: Now create the appropriate instance of an LQ model
Step4: Now create the optimal policies using the analytic formulas.
Step5: Solution via a system of expectational difference equations
Step6: A_LSS calculated as we have here should equal ABF calculated above using the LQ model.
Step7: Now compare pertinent elements of c_pol and -F
Step8: We have verified that the two methods give the same solution.
Step10: Population and sample panels
Step12: First example
Step13: A "borrowers and lenders" closed economy
<ASSISTANT_TASK:>
Python Code:
!pip install pyschedule
# Load pyschedule and create a scenario with ten steps planning horizon
from pyschedule import Scenario, solvers, plotters
S = Scenario('hello_pyschedule',horizon=10)
# Create two resources
Alice, Bob = S.Resource('Alice'), S.Resource('Bob')
# Create three tasks with lengths 1,2 and 3
cook, wash, clean = S.Task('cook',1), S.Task('wash',2), S.Task('clean',3)
# Assign tasks to resources, either Alice or Bob;
# the |-operator expresses the alternatives and += makes the assignment
cook += Alice|Bob
wash += Alice|Bob
clean += Alice|Bob
# Solve and print solution
S.use_makespan_objective()
solvers.mip.solve(S,msg=1)
# Print the solution
print(S.solution())
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(10,5))
solvers.mip.solve(S,kind='SCIP')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is a hello world example, you can also find this document as a <a href="https
Step2: In this example we use a makespan objective which means that we want to minimize the completion time of the last task. Hence, Bob should do the cooking from 0 to 1 and then do the washing from 1 to 3, whereas Alice will only do the cleaning from 0 to 3. This will ensure that both are done after three hours. This table representation is a little hard to read, we can visualize the plan using matplotlib
Step3: pyschedule supports different solvers, classical <a href="https
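The makespan reasoning in Step 2 can be checked with plain Python: given the task lengths and an assignment, the makespan is the largest per-resource workload. The assignment below mirrors the Alice/Bob schedule described in the text:

```python
# Task lengths from the example above; the assignment mirrors the
# schedule described in the discussion (Bob: cook + wash, Alice: clean).
lengths = {'cook': 1, 'wash': 2, 'clean': 3}
assignment = {'cook': 'Bob', 'wash': 'Bob', 'clean': 'Alice'}

# Sum up each resource's total work, then take the maximum.
loads = {}
for task, resource in assignment.items():
    loads[resource] = loads.get(resource, 0) + lengths[task]

makespan = max(loads.values())  # completion time of the last task
```

Both resources finish after three hours, matching the solver's plan.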
<ASSISTANT_TASK:>
Python Code:
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
!bq mk movielens
%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw ml-20m/movies.csv
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
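The `REPLACE(SPLIT(genres, "|"))` above turns the pipe-delimited genres string into an array. A local sketch of the same transform in plain Python, using made-up sample rows:

```python
# Local illustration of what SPLIT(genres, "|") does to each row
# (the sample rows below are made up).
rows = [
    {'movieId': 1, 'title': 'Toy Story', 'genres': 'Animation|Comedy'},
    {'movieId': 2, 'title': 'Heat', 'genres': 'Action|Crime|Thriller'},
]
# Replace the genres string with a list, keeping the other fields.
movies = [{**r, 'genres': r['genres'].split('|')} for r in rows]
```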
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender \
movielens.recommender
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `movielens.recommender`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
%%bigquery --project $PROJECT
SELECT
processed_input,
feature,
TO_JSON_STRING(factor_weights) AS factor_weights,
intercept
FROM ML.WEIGHTS(MODEL `movielens.recommender`)
WHERE
(processed_input = 'movieId' AND feature = '96481')
OR (processed_input = 'userId' AND feature = '54192')
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
userId,
RAND() * COUNT(rating) AS loyalty,
CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
movielens.ratings
GROUP BY userId
%%bigquery --project $PROJECT
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatures
LIMIT 5
%%bigquery --project $PROJECT
WITH productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT * FROM productFeatures
LIMIT 5
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
),
productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT
p.* EXCEPT(movieId),
u.* EXCEPT(userId),
rating
FROM productFeatures p, userFeatures u
JOIN movielens.ratings r
ON r.movieId = p.movieId AND r.userId = u.userId
%%bigquery --project $PROJECT
SELECT *
FROM movielens.hybrid_dataset
LIMIT 1
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64,
u16 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)],
u[OFFSET(15)]
));
%%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT
[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u)
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS
STRUCT<
p1 FLOAT64,
p2 FLOAT64,
p3 FLOAT64,
p4 FLOAT64,
p5 FLOAT64,
p6 FLOAT64,
p7 FLOAT64,
p8 FLOAT64,
p9 FLOAT64,
p10 FLOAT64,
p11 FLOAT64,
p12 FLOAT64,
p13 FLOAT64,
p14 FLOAT64,
p15 FLOAT64,
p16 FLOAT64
> AS (STRUCT(
p[OFFSET(0)],
p[OFFSET(1)],
p[OFFSET(2)],
p[OFFSET(3)],
p[OFFSET(4)],
p[OFFSET(5)],
p[OFFSET(6)],
p[OFFSET(7)],
p[OFFSET(8)],
p[OFFSET(9)],
p[OFFSET(10)],
p[OFFSET(11)],
p[OFFSET(12)],
p[OFFSET(13)],
p[OFFSET(14)],
p[OFFSET(15)]
));
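Writing these 16-field STRUCT UDFs by hand is repetitive and error-prone. A small sketch of generating the same SQL programmatically; the helper name is illustrative:

```python
# Sketch: build the CREATE FUNCTION SQL for an n-factor STRUCT UDF
# instead of writing the 16 fields by hand.
def struct_udf_sql(name, prefix, n):
    # One named FLOAT64 field and one OFFSET() accessor per factor.
    fields = ",\n".join(f"  {prefix}{i+1} FLOAT64" for i in range(n))
    offsets = ",\n".join(f"  {prefix}[OFFSET({i})]" for i in range(n))
    return (f"CREATE OR REPLACE FUNCTION {name}({prefix} ARRAY<FLOAT64>)\n"
            f"RETURNS STRUCT<\n{fields}\n> AS (STRUCT(\n{offsets}\n));")

sql = struct_udf_sql("movielens.arr_to_input_16_products", "p", 16)
```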
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
* EXCEPT(user_factors, product_factors),
movielens.arr_to_input_16_users(user_factors).*,
movielens.arr_to_input_16_products(product_factors).*
FROM
movielens.hybrid_dataset
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the dataset and trained model
Step2: And create a cleaned movielens.movies table.
Step3: Next, copy over the trained recommendation model. Note that if your project is in the EU you will need to change the location from US to EU below. Note that as of the time of writing you cannot copy models across regions with bq cp.
Step4: Next, ensure the model still works by invoking predictions for movie recommendations
Step5: Incorporating user and movie information
Step6: Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
Step7: Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
Step8: Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features as follows.
Step9: Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
Step10: One of the rows of this table looks like this
Step11: Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Step12: which gives
Step13: We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
Step14: Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating
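Step 6 above notes that the predicted rating comes from multiplying the factor weights and adding the intercepts. A small numpy sketch of one plausible reading of that combination; all numbers are made up for illustration:

```python
import numpy as np

# Hypothetical factor weights pulled from ML.WEIGHTS for one user and
# one movie (values are invented; real models use 16 factors).
user_factors = np.array([0.1, -0.2, 0.3])
movie_factors = np.array([0.4, 0.0, -0.1])
user_intercept, movie_intercept = 3.0, 0.5

# Dot product of the factor vectors plus both intercepts.
predicted_rating = user_factors @ movie_factors + user_intercept + movie_intercept
```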
<ASSISTANT_TASK:>
Python Code:
data_reduced_genus = pd.read_csv(MAIN_DIR + "results/reduced_data--genus_only.csv")
abundance_utils.filter_by_abundance(data=data_reduced_genus,
abundance_column='fraction of reads', high=1, low=0.5).head()
data_reduced_genus.head()
# Change 'other' to 'other & unknown'
data_reduced_genus.loc[data_reduced_genus['Genus'] == 'other', 'Genus'] = r'other \& unknown'
data_reduced_genus.head()
plot_path = PLOT_DIR + "/genus_only"
abundance_plot_utils.plot_heatmap_genus(dataframe=data_reduced_genus,
high=1, low = 0.5,
oxy='High', rep='all',
plot_dir=plot_path)
# Loop through the ranges we want.
oxy_conditions = ['Low', 'High']
max_abundance = 1
range_pairs = [(0.01, max_abundance),
(0.05, max_abundance),
(0.1, max_abundance),
(0.5, max_abundance)]
for o in oxy_conditions:
    for r in range_pairs:
        #print "oxygen:", o
        #print "range:", r
        #print "range (low):", r[0]
        abundance_plot_utils.plot_heatmap_genus(dataframe=data_reduced_genus,
                                                low=r[0], high=r[1],
                                                oxy=o, rep='all',
                                                plot_dir=PLOT_DIR + '/genus_only')
data_reduced = pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv")
plot_1_spec_dict = {'Phylum':['Bacteroidetes'],
'Order':['Burkholderiales','Methylophilales',
'Methylococcales']}
plot_1_data = abundance_plot_utils.aggregate_mixed_taxonomy(dataframe =
pd.read_csv(
MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
taxa_dict = plot_1_spec_dict,
main_dir = MAIN_DIR)
plot_1_data.head()
t_dicts = [
{'Phylum':['Bacteroidetes'],
'Order':['Burkholderiales', 'Methylophilales', 'Methylococcales']},
{'Genus':['Methylobacter', 'Methylovulum', 'Methylomonas', 'Methylomicrobium',
'Methyloglobulus', 'Methylococcus', 'Methylocaldum', 'Methylosarcina']},
{'Genus':['Methylotenera', 'Methylovorus', 'Methylophilus', 'Methylobacillus']}
]
for t in t_dicts:
    print(t)
    print(abundance_plot_utils.taxa_dict_to_descriptive_string(t))
print(MAIN_DIR)
print(PLOT_DIR)
# plot without annotations
for t_dict in t_dicts:
    abundance_plot_utils.heatmap_from_taxa_dict(
        dataframe=pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
        taxa_dict=t_dict,
        annotate=False,
        main_dir=MAIN_DIR,
        plot_dir=PLOT_DIR + '/mixed_taxonomy/')
# plot with annotations
for t_dict in t_dicts:
    abundance_plot_utils.heatmap_from_taxa_dict(
        dataframe=pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
        taxa_dict=t_dict,
        facet='rep',
        annotate=False,
        main_dir=MAIN_DIR,
        plot_dir=PLOT_DIR + '/mixed_taxonomy/')
plot_name = os.path.join(PLOT_DIR, '170420_methanotroph_methylotroph_taxa' + '--portrait.pdf')
p = abundance_plot_utils.plot_dominant_methylotrophs(data_reduced_genus, plot_name)
plot_name = os.path.join(PLOT_DIR, '170420_methanotroph_methylotroph_taxa' + '--landscape.pdf')
p = abundance_plot_utils.plot_dominant_methylotrophs(data_reduced_genus, plot_name, portrait=False)
# make a png copy for defense 170420
plot_name = os.path.join(PLOT_DIR, '170420_methanotroph_methylotroph_taxa' + '--landscape.png')
p = abundance_plot_utils.plot_dominant_methylotrophs(data_reduced_genus, plot_name, portrait=False)
pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv").head()
abundance_plot_utils.plot_bars_for_taxa_dict(
dataframe = pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
taxa_dict = plot_1_spec_dict,
order_list = ['Methylococcales', 'Methylophilales',
'Bacteroidetes', 'Burkholderiales', 'unknown \& other'],
summarize_other = True,
colors = ['#2c7fb8', '#feb24c', '#525252', '#969696', '#d9d9d9'],
main_dir = MAIN_DIR,
filename = os.path.join(PLOT_DIR + '/mixed_taxonomy/', '170413_4_main_groups.pdf'))
p = abundance_plot_utils.plot_bars_for_taxa_dict(
dataframe = pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
taxa_dict = plot_1_spec_dict,
order_list = ['Methylococcales', 'Methylophilales',
'Bacteroidetes', 'Burkholderiales', 'unknown \& other'],
summarize_other = True,
colors = ['#2c7fb8', '#feb24c', '#525252', '#969696', '#d9d9d9'],
main_dir = MAIN_DIR, portrait=False,
filename = os.path.join(PLOT_DIR + '/mixed_taxonomy/', '170413_4_main_groups--landscape.pdf'))
p = abundance_plot_utils.plot_bars_for_taxa_dict(
dataframe = pd.read_csv(MAIN_DIR + "/results/reduced_data--all_taxonomy_remains.csv"),
taxa_dict = plot_1_spec_dict,
order_list = ['Methylococcales', 'Methylophilales',
'Bacteroidetes', 'Burkholderiales', 'unknown \& other'],
summarize_other = True,
colors = ['#2c7fb8', '#feb24c', '#525252', '#969696', '#d9d9d9'],
main_dir = MAIN_DIR, portrait=False,
filename = os.path.join(PLOT_DIR + '/mixed_taxonomy/', '170413_4_main_groups--landscape.png'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demo plot_heatmap()
Step2: Run plot_heatmap_genus() across broad conditions
Step3: heatmap_from_taxa_dict()
Step4: 170313 add bar plots of the key methanotrophs/methylotrophs
Step5: 170313 add bar plots of the four groups.
<ASSISTANT_TASK:>
Python Code:
from time import sleep
def V_idg(N, p, kT, cost=0):
    sleep(cost)
    return N * kT / p
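A quick sanity check of the ideal-gas relation V = N*kT/p used by V_idg, restated here without the artificial sleep:

```python
def ideal_gas_volume(N, p, kT):
    # Same ideal-gas formula as V_idg above, minus the sleep(cost) delay.
    return N * kT / p

# With N=1000, p=10, kT=1 the volume is exactly 100.
assert ideal_gas_volume(1000, 10.0, 1.0) == 100.0
```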
def compute_volume(job):
print('compute volume', job)
V = V_idg(cost=1, **job.statepoint())
job.document['V'] = V
with open(job.fn('V.txt'), 'w') as file:
file.write(str(V) + '\n')
import signac
project = signac.get_project('projects/tutorial')
for job in project:
    compute_volume(job)
import signac
import numpy as np
project = signac.get_project(root='projects/tutorial')
def init_statepoints(n):
    for p in np.linspace(0.1, 10.0, n):
        sp = {'p': p, 'kT': 1.0, 'N': 1000}
        job = project.open_job(sp)
        job.init()
        print('initialize', job)
init_statepoints(5)
for job in project:
    if 'V' not in job.document:
        compute_volume(job)
init_statepoints(10)
def classify(job):
    yield 'init'
    if 'V' in job.document and job.isfile('V.txt'):
        yield 'volume-computed'
yield 'volume-computed'
print('Status: {}'.format(project))
for job in project:
    labels = ', '.join(classify(job))
    p = round(job.sp.p, 1)
    print(job, p, labels)
for job in project:
    labels = classify(job)
    if 'volume-computed' not in labels:
        compute_volume(job)
list(map(compute_volume, project))
print('Done.')
from multiprocessing import Pool
with Pool() as pool:
    pool.map(compute_volume, project)
from multiprocessing.pool import ThreadPool
with ThreadPool() as pool:
    pool.map(compute_volume, project)
# % rm -r projects/tutorial/workspace
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It is useful to think of each modification of the workspace, that includes addition, modification, and removal of data, in terms of an operation.
Step2: This operation computes the volume solely based on the state point parameters and stores the results such that they are clearly associated with the job, i.e., in the job document and in a file within the job's workspace.
Step3: Data Space Initialization
Step4: We see that initializing more jobs and even reinitializing old jobs is no problem.
Step5: Classification
Step6: Next, we implement a classify() generator function, which labels a job based on certain conditions
Step7: Our classifier will always yield the init label, but the volume-computed label is only yielded if the result has been computed and stored both in the job document and as a text file.
Step8: Using only simple classification functions, we already get a very good grasp on our project's overall status.
Step9: Parallelization
Step10: Using the map() expression makes it trivial to implement parallelization patterns, for example, using a process Pool
Step11: Or a ThreadPool
Step12: Uncomment and execute the following line if you want to remove all data and start over.
<ASSISTANT_TASK:>
Python Code:
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
sys.path.append(vp_path)
import vampyre as vp
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Parameters
nz0 = 1000 # number of components of z0
nz1 = 500 # number of measurements z1
# Compute the shapes
zshape0 = (nz0,) # Shape of z0 matrix
zshape1 = (nz1,) # Shape of z1 matrix = shape of y matrix
Ashape = (nz1,nz0) # Shape of A matrix
prob_on = 0.1 # fraction of components that are *on*
z0_mean_on = 0 # mean for the on components
z0_var_on = 1 # variance for the on components
snr = 30 # SNR in dB
# Generate the random input
z0_on = np.random.normal(z0_mean_on, np.sqrt(z0_var_on), zshape0)
u = np.random.uniform(0, 1, zshape0) < prob_on
z0 = z0_on*u
ind = np.arange(nz0)
plt.plot(ind,z0);
A = np.random.normal(0, 1/np.sqrt(nz0), Ashape)
z1 = A.dot(z0)
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
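The noise variance above is derived from the SNR in dB via wvar = zvar1 * 10**(-snr/10). A quick check of that conversion for the 30 dB setting:

```python
import numpy as np

# Convert a 30 dB SNR to a noise-to-signal power ratio.
snr = 30
ratio = np.power(10, -0.1*snr)  # multiplies the signal power to get wvar
```

Converting back, 10*log10(1/ratio) recovers the 30 dB target.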
est0_off = vp.estim.DiscreteEst(0,1,zshape0)
est0_on = vp.estim.GaussEst(z0_mean_on, z0_var_on,zshape0)
est_list = [est0_off, est0_on]
pz0 = np.array([1-prob_on, prob_on])
est0 = vp.estim.MixEst(est_list, w=pz0, name='Input')
Aop = vp.trans.MatrixLT(A,zshape0)
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
nit = 20 # number of iterations
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
solver.summary()
solver.solve()
zhat0 = solver.z0
ind = np.array(range(nz0))
plt.plot(ind,z0)
plt.plot(ind,zhat0)
plt.legend(['True', 'Estimate']);
zerr0_act = np.mean(np.abs(zhat0-z0)**2)
zerr0_pred = solver.zvar0
zpow0 = np.mean(np.abs(z0)**2)
mse_act = 10*np.log10(zerr0_act/zpow0)
mse_pred = 10*np.log10(zerr0_pred/zpow0)
print("Normalized MSE (dB): actual {0:f} pred {1:f}".format(mse_act, mse_pred))
def plot_z0_est(solver, z0):
    """
    Plots the true and predicted MSE for the estimates of z0
    """
    # Compute the MSE as a function of the iteration
    zhat0_hist = solver.hist_dict['z0']
    zvar0_hist = solver.hist_dict['zvar0']
    nit = len(zhat0_hist)
    mse_act = np.zeros(nit)
    mse_pred = np.zeros(nit)
    for it in range(nit):
        zerr0_act = np.mean(np.abs(zhat0_hist[it]-z0)**2)
        zerr0_pred = zvar0_hist[it]
        mse_act[it] = 10*np.log10(zerr0_act/zpow0)
        mse_pred[it] = 10*np.log10(zerr0_pred/zpow0)
    plt.plot(range(nit), mse_act, 'o-', linewidth=2)
    plt.plot(range(nit), mse_pred, 's', linewidth=1)
    plt.xlabel('Iteration')
    plt.ylabel('Normalized MSE (dB)')
    plt.legend(['Actual', 'Predicted'])
    plt.grid()
plot_z0_est(solver,z0)
# Generate a random transform
A = vp.trans.rand_rot_invariant_mat(nz1,nz0,cond_num=10)
Aop = vp.trans.MatrixLT(A,zshape0)
z1 = A.dot(z0)
# Add noise
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
# Create the estimator
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
# Run GAMP
nit = 20
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
solver.solve()
plot_z0_est(solver,z0)
# Run GAMP with damping
nit = 200
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit,step=0.3)
solver.solve()
# Plot the results
plot_z0_est(solver,z0)
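The step=0.3 used above is consistent with the rule of thumb mentioned in the discussion, step <= 1/sqrt(cond_num). A quick check:

```python
import numpy as np

# Rule of thumb for damping: step should not exceed 1/sqrt(cond_num).
cond_num = 10
recommended_step = 1/np.sqrt(cond_num)  # ~0.316, so step=0.3 is safe
```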
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will also load the other packages we will use in this demo. This could be done before the above import.
Step2: Generating Synthetic Data
Step3: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input z0, we will use a Bernoulli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components z0[j] are i.i.d. where each component can be on or off.
Step4: Using these parameters, we can generate random sparse z0 following this distribution with the following simple code.
Step5: To illustrate the sparsity, we plot the vector z0. We can see from this plot that the majority of the components of z0 are zero.
Step6: Now, we create a random transform A and output z1 = A.dot(z0)
Step7: Finally, we add noise at the desired SNR
Step8: Setting up the AMP / GAMP Solver
Step9: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
Step10: We next define the operator A. In this case the operator is defined by a matrix so we use the MatrixLT class.
Step11: Finally, we describe the likelihood function, p(y|z1). Since y=z1+w, we can describe this as a Gaussian estimator.
Step12: Running the GAMP Solver
Step13: We can print a summary of the model which indicates the dimensions and the estimators.
Step14: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
Step15: The GAMP solver estimate is the field z0. We plot it and compare it to the true vector z0. You should see a very good match.
Step16: We can measure the normalized mean squared error as follows. The GAMP solver also produces an estimate of the MSE in the variable zvar0. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from GAMP.
Step18: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When solver was constructed, we passed an argument hist_list=['z0', 'zvar0']. This indicated to store the value of the estimate z0 and predicted error variance zvar0 with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteration. Since we are going to plot several times in this demo, we wrap the plotting routine in a function, plot_z0_est().
Step19: Damping and Stability
Step20: Now, we create synthetic data based on the matrix and re-run GAMP.
Step21: We plot the results and we can see that the algorithm diverges.
Step22: To fix the problem, one can apply damping. In damping, the GAMP algorithm is adjusted to take a partial step as controlled by a parameter step between 0 and 1. In general, the theory is that step <= 1/sqrt(cond_num). In practice, you can try different step sizes until you get reasonable results. A warning though
<ASSISTANT_TASK:>
Python Code:
import os
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All other words will be used to train an "unknown" category.
os.environ["WANTED_WORDS"] = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=15000,3000 and LEARNING_RATE=0.001,0.0001
# will run 18,000 training loops in total, with a rate of 0.001 for the first
# 15,000, and 0.0001 for the final 3,000.
os.environ["TRAINING_STEPS"]="15000,3000"
os.environ["LEARNING_RATE"]="0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
total_steps = sum(map(lambda string: int(string),
os.environ["TRAINING_STEPS"].split(",")))
os.environ["TOTAL_STEPS"] = str(total_steps)
# Print the configuration to confirm it
!echo "Training these words: ${WANTED_WORDS}"
!echo "Training steps in each stage: ${TRAINING_STEPS}"
!echo "Learning rate in each stage: ${LEARNING_RATE}"
!echo "Total number of training steps: ${TOTAL_STEPS}"
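The comma-separated TRAINING_STEPS / LEARNING_RATE convention used above pairs the n-th step count with the n-th learning rate. A small sketch of that pairing (parse_schedule is a hypothetical helper, not part of the training script):

```python
def parse_schedule(steps_csv, rates_csv):
    """Pair up comma-separated step counts and learning rates.

    "15000,3000" / "0.001,0.0001" -> [(15000, 0.001), (3000, 0.0001)]
    """
    steps = [int(s) for s in steps_csv.split(",")]
    rates = [float(r) for r in rates_csv.split(",")]
    if len(steps) != len(rates):
        raise ValueError("need one learning rate per training stage")
    return list(zip(steps, rates))

stages = parse_schedule("15000,3000", "0.001,0.0001")
total = sum(n for n, _ in stages)   # 18,000 training loops in total
```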
# Replace Colab's default TensorFlow install with a more recent
# build that contains the operations that are needed for training
!pip uninstall -y tensorflow tensorflow_estimator
!pip install -q tf-estimator-nightly==1.14.0.dev2019072901 tf-nightly-gpu==1.15.0.dev20190729
# Clone the repository from GitHub
!git clone -q https://github.com/tensorflow/tensorflow
# Check out a commit that has been tested to work
# with the build of TensorFlow we're using
!git -c advice.detachedHead=false -C tensorflow checkout 17ce384df70
# Delete any old logs from previous runs
!rm -rf /content/retrain_logs
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir /content/retrain_logs
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --silence_percentage=25 --unknown_percentage=25 \
--quantize=1 --verbosity=WARN --how_many_training_steps=${TRAINING_STEPS} \
--learning_rate=${LEARNING_RATE} --summaries_dir=/content/retrain_logs \
--data_dir=/content/speech_dataset --train_dir=/content/speech_commands_train \
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --quantize=1 --output_file=/content/tiny_conv.pb \
--start_checkpoint=/content/speech_commands_train/tiny_conv.ckpt-${TOTAL_STEPS}
!toco \
--graph_def_file=/content/tiny_conv.pb --output_file=/content/tiny_conv.tflite \
--input_shapes=1,49,40,1 --input_arrays=Reshape_2 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=0 --std_dev_values=9.8077
import os
model_size = os.path.getsize("/content/tiny_conv.tflite")
print("Model is %d bytes" % model_size)
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i /content/tiny_conv.tflite > /content/tiny_conv.cc
# Print the source file
!cat /content/tiny_conv.cc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install dependencies
Step2: We'll also clone the TensorFlow repository, which contains the scripts that train and freeze the model.
Step3: Load TensorBoard
Step4: Begin training
Step5: Freeze the graph
Step6: Convert the model
Step7: The following cell will print the model size, which will be under 20 kilobytes.
Step8: Finally, we use xxd to transform the model into a source file that can be included in a C++ project and loaded by TensorFlow Lite for Microcontrollers.
|
6,407
|
<ASSISTANT_TASK:>
Python Code:
nodes = pd.read_pickle("cachenodes.pkl")
edges = pd.read_pickle("edges.pkl")
comp_nodes = pd.read_pickle("comp_nodes.pkl")
def build_topology(nodes, edges):
topology = nx.Graph()
# add all nodes
for index, row in nodes.iterrows():
node_name = row["name"]
node_attributes = row.drop(["name"]).to_dict()
topology.add_node(node_name, attr_dict=node_attributes)
# add all edges
for index, row in edges.iterrows():
node1_name = row["node1"]
node2_name = row["node2"]
edge_attributes = row.drop(["node1", "node2"]).to_dict()
topology.add_edge(node1_name, node2_name, attr_dict=edge_attributes)
return topology
topology = build_topology(nodes, edges)
from libcrap.core import calcsave_or_load
from functools import partial
pd_diskcache = partial(calcsave_or_load, load_func=pd.read_pickle, save_func=pd.to_pickle)
import itertools
@pd_diskcache("paths.pkl")
def find_comp_to_comp_shortest_paths(topology, comp_nodes):
paths_ugly = nx.all_pairs_shortest_path(topology)
# calculates shortest paths and stores them in a dict of dicts
# build a table with all computational node pairs
# they are not duplicated
# if there is ("n48001", "n49419") then there is no ("n49419", "n48001") pair
comp_node_pairs = pd.DataFrame.from_records(
itertools.chain.from_iterable(
[(node1, node2) for node2 in comp_nodes.iloc[index:]]
for (index, node1) in comp_nodes.iteritems()
),
columns=["node1", "node2"]
)
# write shortest paths to this table
comp_node_pairs["shortest_path"] = comp_node_pairs.apply(
lambda row: paths_ugly[row.loc["node1"]][row.loc["node2"]],
axis=1
)
return comp_node_pairs
# shortest paths between all computational nodes
paths = find_comp_to_comp_shortest_paths(topology, comp_nodes)
def interleave(it1, it2):
    """
    >>> list(interleave([1, 2, 3, 4], ["a", "b", "c"]))
    [1, 'a', 2, 'b', 3, 'c', 4]
    """
return (
item for item
in itertools.chain.from_iterable(itertools.zip_longest(it1, it2))
if item is not None)
def get_node_features(topology, node):
    """
    Returns node features as a tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("kek", attr_dict={"a": 1, "b": "lol"})
    >>> get_node_features(topology, "kek")
    (('a', 1), ('b', 'lol'))
    """
return tuple(topology.node[node].items())
def get_edge_features(topology, node1, node2):
    """
    Returns features of an edge as a tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("a1")
    >>> topology.add_node("b1")
    >>> topology.add_edge("a1", "b1", attr_dict={"foo": "bar", "shim": "sham"})
    >>> get_edge_features(topology, "a1", "b1")
    (('foo', 'bar'), ('shim', 'sham'))
    """
return tuple(topology.edge[node1][node2].items())
def maybe_reverse(l):
    """
    Takes a list or tuple and reverses it, or not.

    Using maybe_reverse on some list and on its reversed version will
    yield the same result.

    >>> maybe_reverse([1, 2, 3])
    [1, 2, 3]
    >>> maybe_reverse([3, 2, 1])
    [1, 2, 3]
    >>> maybe_reverse(('a', 'b', 'c'))
    ('a', 'b', 'c')
    >>> maybe_reverse(('c', 'b', 'a'))
    ('a', 'b', 'c')
    """
if type(l) == list:
constructor = list
elif type(l) == tuple:
constructor = tuple
else:
raise TypeError("can only take list or tuple arguments")
reversed_l = constructor(reversed(l))
if str(l) <= str(reversed_l):
return l
return reversed_l
def get_features_of_path(topology, path):
    """
    Returns features of a path as a tuple of tuples of tuples.

    The list of features is normalized, so that this function returns
    the same features in the same order for path (A, B, C, D) and for
    path (D, C, B, A).
    """
nodes_features = (get_node_features(topology, node) for node in path)
edges_features = (get_edge_features(topology, node1, node2)
for (node1, node2) in zip(path[:-1], path[1:]))
return maybe_reverse(tuple(interleave(nodes_features, edges_features)))
def df_loc_by_sequence(df, sequence):
    """
    Use this instead of `df.loc[sequence]`.

    Pandas DataFrames get confused by tuples and possibly by other
    sequences. If you do `df.loc[(1, 2)]`, it will look for 1
    or 2 in df's index instead of looking for the tuple itself.
    You can use df.xs to overcome this problem, or use this
    function, which hides the ugliness.

    Also see this
    [stackoverflow question](https://goo.gl/emtjB8)
    for a better description of the problem.
    """
return df.xs(sequence)
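The tuple-lookup gotcha that df_loc_by_sequence works around can be shown in miniature. This toy mirrors the classes_reverse_lookup table built later: a column of tuples becomes an object-dtype index, where .loc with a bare tuple is ambiguous but .xs looks the tuple up whole.

```python
import pandas as pd

# A column of tuples used as the index, as in classes_reverse_lookup.
df = pd.DataFrame({"features": [("a", "b"), ("c", "d")],
                   "value": [1, 2]})
lookup = df.set_index("features")

# lookup.loc[("a", "b")] would try to treat the tuple as multiple
# labels; lookup.xs(("a", "b")) fetches the row for the tuple itself.
row = lookup.xs(("a", "b"))
```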
import doctest
def test_get_node_features():
doctest.run_docstring_examples(get_node_features, globals())
assert get_node_features(topology, "КГК.48.0.3") == (("type_", "switch"),)
def test_get_edge_features():
doctest.run_docstring_examples(get_edge_features, globals())
correct_result = (("connection_type", "backplane"),)
result1 = get_edge_features(topology, "КГК.48.0.3", "n48022")
result2 = get_edge_features(topology, "n48022", "КГК.48.0.3")
assert result1 == correct_result == result2
doctest.run_docstring_examples(interleave, globals())
test_get_node_features()
test_get_edge_features()
doctest.run_docstring_examples(maybe_reverse, globals())
@pd_diskcache("classes.pkl")
def list_path_classes(topology, paths):
unique_features_classes = frozenset(
get_features_of_path(topology, path)
for path in paths["shortest_path"]
)
return pd.DataFrame.from_records(
([features] for features in sorted(unique_features_classes)),
columns=["features"]
)
@pd_diskcache("paths_with_classes.pkl")
def add_class_id_col(paths, classes):
    """Adds a class_id column to the paths table."""
# create pandas table for quick getting index by value of features list
classes_reverse_lookup = classes.reset_index().set_index("features", verify_integrity=True)
def get_class_id_by_path(path):
return df_loc_by_sequence(classes_reverse_lookup, get_features_of_path(topology, path))["index"]
return paths.assign(class_=paths["shortest_path"].apply(get_class_id_by_path))
classes = list_path_classes(topology, paths)
paths_with_classes = add_class_id_col(paths, classes)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build topology
Step2: Calculate shortest path for every pair of computational nodes
Step3: Actually do the work
Step10: Calculate feature lists of these paths
Step11: Test helper functions
Step13: Do the work
|
6,408
|
<ASSISTANT_TASK:>
Python Code:
w_412 = 0.56
w_443 = 0.73
w_490 = 0.71
w_510 = 0.36
w_560 = 0.01
run_id = '0000000-150630000034908-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-c/0000000-150630000034908-oozie-oozi-W/results.metalink?'
participant = 'participant-c'
import glob
import pandas as pd
from scipy.stats.stats import pearsonr
import numpy
import math
!curl $run_meta | aria2c -d $participant -M -
path = participant # use your path
allFiles = glob.glob(path + "/*.txt")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_)
len(frame.index)
insitu_path = './insitu/AAOT.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "AAOT"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_aaot_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_aaot_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_aaot_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
r_aaot_510 = 0
print("0 observations for band @510")
frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_aaot_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/BOUSS.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "BOUS"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_bous_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_bous_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_bous_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
frame_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_bous_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @510")
frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_bous_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/MOBY.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "MOBY"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_moby_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_moby_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_moby_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
frame_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_moby_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @510")
frame_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_moby_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]
[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]
[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]
r_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \
+ numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \
+ numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \
+ numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \
+ numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \
/ (w_412 + w_443 + w_490 + w_510 + w_560)
r_final
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run
Step2: Define all imports in a single cell
Step3: Manage run results
Step4: Number of points extracted from MERIS level 2 products
Step5: Calculate Pearson
|
6,409
|
<ASSISTANT_TASK:>
Python Code:
import os
import urllib.request
import zipfile
if not os.path.exists("char_lstm.zip"):
    urllib.request.urlretrieve("http://data.mxnet.io/data/char_lstm.zip", "char_lstm.zip")
with zipfile.ZipFile("char_lstm.zip","r") as f:
f.extractall("./")
with open('obama.txt', 'r') as f:
    print(f.read()[0:1000])
def read_content(path):
with open(path) as ins:
return ins.read()
# Return a dict which maps each char into an unique int id
def build_vocab(path):
content = list(read_content(path))
idx = 1 # 0 is left for zero-padding
the_vocab = {}
for word in content:
if len(word) == 0:
continue
if not word in the_vocab:
the_vocab[word] = idx
idx += 1
return the_vocab
# Encode a sentence with int ids
def text2id(sentence, the_vocab):
words = list(sentence)
return [the_vocab[w] for w in words if len(w) > 0]
# build char vocabluary from input
vocab = build_vocab("./obama.txt")
print('vocab size = %d' %(len(vocab)))
import lstm
# Each line contains at most 129 chars.
seq_len = 129
# embedding dimension, which maps a character to a 256-dimension vector
num_embed = 256
# number of lstm layers
num_lstm_layer = 3
# hidden unit in LSTM cell
num_hidden = 512
symbol = lstm.lstm_unroll(
num_lstm_layer,
seq_len,
len(vocab) + 1,
num_hidden=num_hidden,
num_embed=num_embed,
num_label=len(vocab) + 1,
dropout=0.2)
# check that no line in the corpus exceeds seq_len
data_file = open("./obama.txt")
for line in data_file:
assert len(line) <= seq_len + 1, "seq_len is smaller than maximum line length. Current line length is %d. Line content is: %s" % (len(line), line)
data_file.close()
import bucket_io
# The batch size for training
batch_size = 32
# initalize states for LSTM
init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]
init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]
init_states = init_c + init_h
# Even though BucketSentenceIter supports various length examples,
# we simply use the fixed length version here
data_train = bucket_io.BucketSentenceIter(
"./obama.txt",
vocab,
[seq_len],
batch_size,
init_states,
seperate_char='\n',
text2id=text2id,
read_content=read_content)
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
import mxnet as mx
import numpy as np
import logging
logging.getLogger().setLevel(logging.DEBUG)
# We will show a quick demo with only 1 epoch. In practice, we can set it to be 100
num_epoch = 1
# learning rate
learning_rate = 0.01
# Evaluation metric
def Perplexity(label, pred):
loss = 0.
for i in range(pred.shape[0]):
loss += -np.log(max(1e-10, pred[i][int(label[i])]))
return np.exp(loss / label.size)
model = mx.mod.Module(symbol=symbol,
data_names=[x[0] for x in data_train.provide_data],
label_names=[y[0] for y in data_train.provide_label],
context=[mx.gpu(0)])
model.fit(train_data=data_train,
num_epoch=num_epoch,
optimizer='sgd',
optimizer_params={'learning_rate':learning_rate, 'momentum':0, 'wd':0.0001},
initializer=mx.init.Xavier(factor_type="in", magnitude=2.34),
eval_metric=mx.metric.np(Perplexity),
batch_end_callback=mx.callback.Speedometer(batch_size, 20),
epoch_end_callback=mx.callback.do_checkpoint("obama"))
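The Perplexity metric defined above has a simple sanity check: uniform predictions over V classes give perplexity exactly V, while confident correct predictions drive it toward 1. A small standalone demonstration (the function body repeats the metric's computation so it can run on its own):

```python
import numpy as np

def perplexity(label, pred):
    # Same computation as the Perplexity metric above.
    loss = 0.0
    for i in range(pred.shape[0]):
        loss += -np.log(max(1e-10, pred[i][int(label[i])]))
    return np.exp(loss / label.size)

V = 4
labels = np.zeros(8)

# Uniform predictions over V classes -> perplexity V.
uniform = np.full((8, V), 1.0 / V)
pp_uniform = perplexity(labels, uniform)   # -> 4.0

# Confident, correct predictions -> perplexity close to 1.
confident = np.full((8, V), 1e-6)
confident[:, 0] = 1.0 - 3e-6
pp_confident = perplexity(labels, confident)
```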
from rnn_model import LSTMInferenceModel
# helper strcuture for prediction
def MakeRevertVocab(vocab):
dic = {}
for k, v in vocab.items():
dic[v] = k
return dic
# make input from char
def MakeInput(char, vocab, arr):
idx = vocab[char]
tmp = np.zeros((1,))
tmp[0] = idx
arr[:] = tmp
import random
import bisect

# helper function for random sample
def _cdf(weights):
total = sum(weights)
result = []
cumsum = 0
for w in weights:
cumsum += w
result.append(cumsum / total)
return result
def _choice(population, weights):
assert len(population) == len(weights)
cdf_vals = _cdf(weights)
x = random.random()
idx = bisect.bisect(cdf_vals, x)
return population[idx]
# we can use random output or fixed output by choosing largest probability
def MakeOutput(prob, vocab, sample=False, temperature=1.):
if sample == False:
idx = np.argmax(prob, axis=1)[0]
else:
fix_dict = [""] + [vocab[i] for i in range(1, len(vocab) + 1)]
scale_prob = np.clip(prob, 1e-6, 1 - 1e-6)
rescale = np.exp(np.log(scale_prob) / temperature)
rescale[:] /= rescale.sum()
return _choice(fix_dict, rescale[0, :])
try:
char = vocab[idx]
except:
char = ''
return char
import rnn_model
# load from check-point
_, arg_params, __ = mx.model.load_checkpoint("obama", 75)
# build an inference model
model = rnn_model.LSTMInferenceModel(
num_lstm_layer,
len(vocab) + 1,
num_hidden=num_hidden,
num_embed=num_embed,
num_label=len(vocab) + 1,
arg_params=arg_params,
ctx=mx.gpu(),
dropout=0.2)
seq_length = 600
input_ndarray = mx.nd.zeros((1,))
revert_vocab = MakeRevertVocab(vocab)
# Feel free to change the starter sentence
output ='The United States'
random_sample = False
new_sentence = True
ignore_length = len(output)
for i in range(seq_length):
if i <= ignore_length - 1:
MakeInput(output[i], vocab, input_ndarray)
else:
MakeInput(output[-1], vocab, input_ndarray)
prob = model.forward(input_ndarray, new_sentence)
new_sentence = False
next_char = MakeOutput(prob, revert_vocab, random_sample)
if next_char == '':
new_sentence = True
if i >= ignore_length - 1:
output += next_char
print(output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we define a few utility functions to pre-process the dataset.
Step3: Create LSTM Model
Step4: Train
Step5: Then we can train with the standard model.fit approach.
Step6: Inference
Step7: Then we create the inference model
Step8: Now we can generate a sequence of 600 characters starting with "The United States"
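The temperature rescaling used in MakeOutput above is worth seeing in isolation: probabilities are raised to the power 1/temperature and renormalized, so a low temperature sharpens the distribution toward the argmax and a high temperature flattens it toward uniform. A small sketch (apply_temperature is an illustrative helper, not part of the tutorial code):

```python
import numpy as np

def apply_temperature(prob, temperature):
    """Rescale a probability vector as in MakeOutput:
    exp(log(p) / T), renormalized."""
    scaled = np.clip(prob, 1e-6, 1 - 1e-6)
    rescaled = np.exp(np.log(scaled) / temperature)
    return rescaled / rescaled.sum()

p = np.array([0.7, 0.2, 0.1])
sharp = apply_temperature(p, 0.5)   # more mass on the argmax
flat = apply_temperature(p, 5.0)    # closer to uniform
```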
|
6,410
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
ecom = pd.read_csv('Ecommerce Purchases')
ecom.head()
ecom.info()
ecom['Purchase Price'].mean()
ecom['Purchase Price'].max()
ecom['Purchase Price'].min()
ecom[ecom['Language'] == 'en'].count()
ecom[ecom['Job'] =='Lawyer'].info()
ecom['AM or PM'].value_counts()
ecom['Job'].value_counts().head(5)
ecom[ecom['Lot']=='90 WT']['Purchase Price']
ecom[ecom['Credit Card']==4926535242672853]['Email']
ecom[(ecom['Purchase Price']>95) & (ecom['CC Provider']=='American Express')].count()
def cc_split(card_year):
splited = card_year.split('/')
if(splited[1] == '25'):
return True
else:
return False
sum(ecom['CC Exp Date'].apply(cc_split))
def email_split(email):
email_arr = email.split('@')
return email_arr[1]
ecom['Email'].apply(email_split).value_counts().head(5)
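The apply-based email_split above can also be written with pandas' vectorized string accessor, which avoids the Python-level function call per row. A sketch on hypothetical toy data (the real notebook runs this on ecom['Email']):

```python
import pandas as pd

# Toy stand-in for ecom['Email'].
emails = pd.Series(["alice@hotmail.com", "bob@gmail.com", "carol@gmail.com"])

# Equivalent to emails.apply(email_split): split on '@', keep the domain.
providers = emails.str.split("@").str[1]
top = providers.value_counts().head(5)
```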
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check the head of the DataFrame.
Step2: How many rows and columns are there?
Step3: What is the average Purchase Price?
Step4: What were the highest and lowest purchase prices?
Step5: How many people have English 'en' as their Language of choice on the website?
Step6: How many people have the job title of "Lawyer" ?
Step7: How many people made the purchase during the AM and how many people made the purchase during PM ?
Step8: What are the 5 most common Job Titles?
Step9: Someone made a purchase that came from Lot
Step10: What is the email of the person with the following Credit Card Number
Step11: How many people have American Express as their Credit Card Provider and made a purchase above $95 ?
Step12: Hard
Step13: Hard
|
6,411
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Import TensorFlow and tf.keras
import tensorflow.compat.v1 as tf
from tensorflow import keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.gca().grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('Test accuracy:', test_acc)
predictions = model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
test_labels[0]
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
# Display the first X test images with their predicted and true labels.
# Correct predictions are shown in blue, incorrect ones in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
# Add the image to a batch where it is the only member
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
plt.show()
prediction = predictions[0]
np.argmax(prediction)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your first neural network: the basics of classification
Step2: Load the Fashion MNIST dataset
Step3: The loaded dataset comes as NumPy arrays.
Step4: Explore the data
Step5: Likewise, the training set contains 60,000 labels.
Step6: Each label is an integer between 0 and 9.
Step7: The test set contains 10,000 images, each made up of 28×28 pixels.
Step8: The test set contains 10,000 labels.
Step9: Preprocess the data
Step10: Scale these values to a range of 0 to 1 before feeding them to the neural network. To do so, divide the pixel values by 255.
Step11: Display the first 25 images from the training set with their class names, to verify that the data is in the correct format before building and training the network.
Step12: Build the model
Step13: The first layer of this network, tf.keras.layers.Flatten, transforms each image from a 2D array of 28×28 pixels to a 1D array of 28×28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. It has no parameters to learn; it only reformats the data.
Step14: Train the model
Step15: As the model trains, the loss and accuracy are displayed. This model reaches an accuracy of about 0.88 (i.e. 88%) on the training data.
Step16: As you can see, the accuracy on the test set is a little lower than on the training set. This gap between training accuracy and test accuracy is an example of overfitting, where a machine learning model performs worse on new data than on its training data.
Step17: Here, the model has predicted the label for each image in the test set. Let's look at the first prediction.
Step18: A prediction is an array of 10 numbers describing the model's "confidence" that the image corresponds to each of the 10 classes of clothing. Let's see which label has the highest confidence.
Step19: So the model is most confident that this image is an ankle boot, class_names[9]. Let's check the test label to see whether this is correct.
Step20: We can graph all 10 channels.
Step21: Let's look at the 0th image, its prediction, and the prediction array.
Step22: Let's plot several images with their predictions. Correct predictions are labeled in blue and incorrect ones in red; the number gives the percentage for the predicted label. Note that the model can be confident and still be wrong.
Step23: Finally, use the trained model to make a prediction about a single image.
Step24: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once, so even though we're using a single image, we need to add it to a list.
Step25: Now make the prediction.
Step26: model.predict returns a list of lists, one for each image in the batch of data. Grab the prediction for our (only) image in the batch.
|
6,412
|
<ASSISTANT_TASK:>
Python Code:
import os
import csv
import codecs
import string
TRAINSET_PATH = '../data/train/'
TESTSET_PATH = '../data/test/'
LINGSPAM_TRAIN_CSV_PATH = TRAINSET_PATH + 'lingspam_train.csv'
LINGSPAM_TEST_CSV_PATH = TESTSET_PATH + 'lingspam_test.csv'
def generate_trainset(input_dir, output_path):
l = []
for root, dirs, files in os.walk(input_dir):
path = root.split(os.sep)
part_name = os.path.basename(root)
for file in files:
if not file.endswith('.txt'):
continue
d = {}
file_name = file.replace('.txt', '')
file_path = os.path.join(root, file)
with codecs.open(file_path, mode='r', encoding='utf8', errors='ignore') as f:
line_counter = 0
for line in f.readlines():
line = line.strip()
if line_counter == 0: # subject
subject = line.replace('Subject:', '').strip()
if line_counter == 2:
email = line
line_counter += 1
d['email_subject'] = subject
d['email_body'] = email
d['part_name'] = part_name
d['file_name'] = file_name
d['is_spam'] = 1 if file_name.startswith('spmsg') else 0
l.append(d)
with codecs.open(output_path, mode='w', encoding='utf8', errors='ignore') as out_file:
writer = csv.DictWriter(out_file, l[0].keys())
writer.writeheader()
for row in l:
writer.writerow(row)
# generate_trainset(TRAINSET_PATH, LINGSPAM_TRAIN_CSV_PATH)
# generate_trainset(TESTSET_PATH, LINGSPAM_TEST_CSV_PATH)
import csv
import pandas as pd
lingspam_train_path = './lingspam_train.csv'
lingspam_test_path = './lingspam_test.csv'
dtype={
'email_subject': str,
'email_body': str,
'part_name': str,
'file_name':str,
'is_spam': int
}
lingspam_train_df = pd.read_csv(lingspam_train_path, dtype=dtype)
lingspam_test_df = pd.read_csv(lingspam_test_path, dtype=dtype)
print("Dataset column names:")
for col in lingspam_train_df.columns:
print(col)
print('\nlingspam trainset:')
print(lingspam_train_df)
print('\nlingspam testset:')
print(lingspam_test_df)
trainset_size = lingspam_train_df.shape[0]
testset_size = lingspam_test_df.shape[0]
print("\nTrainset size: " + str(trainset_size))
print("Testset size: " + str(testset_size))
y_train = lingspam_train_df['is_spam'].to_numpy()
y_test = lingspam_test_df['is_spam'].to_numpy()
import string
from collections import Counter
words = []
dictionary = {}
for index, row in lingspam_train_df.iterrows():
email = row['email_body'].split(' ')
email = [word for word in email if word not in string.punctuation]
email = [word for word in email if len(word) > 1]
email = [word for word in email if word.isalpha() == True]
words += email
dictionary = Counter(words)
unique_num = len(dictionary)
total_num = sum(dictionary.values())
print("The number of unique words in lingspam trainset: " + str(unique_num))
print("The total times they appeared: " + str(total_num))
print("The 20 most common words in trainset:")
print(*dictionary.most_common(20), sep='\n')
print('\nThe length of current dictionary: ' + str(len(dictionary)))
import math
total_legit_emails = 0.0
total_spam_emails = 0.0
for index, row in lingspam_train_df.iterrows():
is_spam = row['is_spam']
if is_spam == 1:
total_spam_emails += 1
else:
total_legit_emails += 1
print("Total legit email number = {}".format(total_legit_emails))
print("Total spam email number = {}".format(total_spam_emails))
p = total_legit_emails / (total_spam_emails + total_legit_emails)
print("p = {}".format(p))
h_c = -1 * p * math.log(p, 2) - (1 - p) * math.log(1 - p, 2)
print("H(C) = {}".format(h_c))
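The entropy computation above follows the binary-entropy formula H(C) = -p·log2(p) - (1-p)·log2(1-p). A small standalone check of its two key properties: a 50/50 split carries exactly one bit of uncertainty, and the entropy shrinks as the classes become unbalanced (binary_entropy is an illustrative helper repeating the same formula):

```python
import math

def binary_entropy(p):
    """H(C) for a two-class distribution, as computed above."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p, 2) - (1 - p) * math.log(1 - p, 2)

h_half = binary_entropy(0.5)    # 50/50 split -> exactly 1 bit
h_skew = binary_entropy(0.9)    # unbalanced split -> less than 1 bit
```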
def count_legit_emails_with_word(word):
num_legit_emails_with_word = 0
for index, row in lingspam_train_df.iterrows():
if row['is_spam'] == 0 and word in row['email_body'].split(' '):
num_legit_emails_with_word += 1
return num_legit_emails_with_word
def count_spam_emails_with_word(word):
num_spam_emails_with_word = 0
for index, row in lingspam_train_df.iterrows():
if row['is_spam'] == 1 and word in row['email_body'].split(' '):
num_spam_emails_with_word += 1
return num_spam_emails_with_word
def h_legit_word_not_present(word):
    # P(legit, word absent) * log2 P(legit | word absent)
    legit_with = count_legit_emails_with_word(word)
    spam_with = count_spam_emails_with_word(word)
    n_total = total_spam_emails + total_legit_emails
    legit_without = total_legit_emails - legit_with
    n_without = n_total - legit_with - spam_with
    return legit_without / n_total * math.log(legit_without / n_without, 2)
def h_spam_word_not_present(word):
    # P(spam, word absent) * log2 P(spam | word absent)
    legit_with = count_legit_emails_with_word(word)
    spam_with = count_spam_emails_with_word(word)
    n_total = total_spam_emails + total_legit_emails
    spam_without = total_spam_emails - spam_with
    n_without = n_total - legit_with - spam_with
    return spam_without / n_total * math.log(spam_without / n_without, 2)
def h_legit_word_is_present(word):
    # P(legit, word present) * log2 P(legit | word present)
    legit_with = count_legit_emails_with_word(word)
    spam_with = count_spam_emails_with_word(word)
    n_total = total_spam_emails + total_legit_emails
    return legit_with / n_total * math.log(legit_with / (legit_with + spam_with), 2)
def h_spam_word_is_present(word):
    # P(spam, word present) * log2 P(spam | word present)
    legit_with = count_legit_emails_with_word(word)
    spam_with = count_spam_emails_with_word(word)
    n_total = total_spam_emails + total_legit_emails
    return spam_with / n_total * math.log(spam_with / (legit_with + spam_with), 2)
def info_gain(word):
h_c_x = -1 * (h_legit_word_not_present(word) + h_spam_word_not_present(word) + h_legit_word_is_present(word) + h_spam_word_is_present(word))
ig = h_c - h_c_x
return ig
word = "language"
print("word: {}, info_gain: {}".format(word, info_gain(word)))
ig_filepath = './ig.csv'
dtype={
'word': str,
'freq': int,
'ig': float,
}
ig_df = pd.read_csv(ig_filepath, dtype=dtype)
print('\nInformation Gain dictionary:')
print(ig_df)
sorted_ig_df = ig_df.sort_values(by=['ig'], ascending=False)
print('\nDictionary sorted by IG in descending order:')
print(sorted_ig_df)
top_10_features = sorted_ig_df.head(10)
top_100_features = sorted_ig_df.head(100)
top_1000_features = sorted_ig_df.head(1000)
print("\nTop-10 features:")
print(top_10_features)
print("\nTop-100 features:")
print(top_100_features)
print("\nTop-1000 features:")
print(top_1000_features)
top_10_features_list = top_10_features['word'].tolist()
top_100_features_list = top_100_features['word'].tolist()
top_1000_features_list = top_1000_features['word'].tolist()
print("\nTop-10 words:")
print(top_10_features_list)
print("\nTop-100 words:")
print(top_100_features_list)
print("\nTop-1000 words:")
print(top_1000_features_list)
import numpy as np
from pandas import DataFrame
def extract_binary_features(df: DataFrame, N: int):
if N == 10:
top_n_features_list = top_10_features_list
elif N == 100:
top_n_features_list = top_100_features_list
elif N == 1000:
top_n_features_list = top_1000_features_list
else:
print('Please choose a right value for N (10, 100 or 1000)!')
return
    assert N == len(top_n_features_list), "The length of top_n_features_list should be equal to N!"
features_matrix = np.zeros((df.shape[0], N))
for email_idx, row in df.iterrows():
email_body = row['email_body'].split(' ')
for word_idx in range(len(top_n_features_list)):
word = top_n_features_list[word_idx]
if word in email_body:
features_matrix[email_idx, word_idx] = 1
return features_matrix
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import precision_score, recall_score
print("Bernoulli NB classifier with Binary Features")
n = 10
x_train_10 = extract_binary_features(lingspam_train_df, n)
x_test_10 = extract_binary_features(lingspam_test_df, n)
bernoulli_nb_binary_10 = BernoulliNB()
bernoulli_nb_binary_10.fit(x_train_10, y_train)
y_pred = bernoulli_nb_binary_10.predict(x_test_10)
bernoulli_nb_binary_10_precision = precision_score(y_test, y_pred)
bernoulli_nb_binary_10_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(bernoulli_nb_binary_10_precision))
print("recall: {}\n".format(bernoulli_nb_binary_10_recall))
n = 100
x_train_100 = extract_binary_features(lingspam_train_df, n)
x_test_100 = extract_binary_features(lingspam_test_df, n)
bernoulli_nb_binary_100 = BernoulliNB()
bernoulli_nb_binary_100.fit(x_train_100, y_train)
y_pred = bernoulli_nb_binary_100.predict(x_test_100)
bernoulli_nb_binary_100_precision = precision_score(y_test, y_pred)
bernoulli_nb_binary_100_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(bernoulli_nb_binary_100_precision))
print("recall: {}\n".format(bernoulli_nb_binary_100_recall))
n = 1000
x_train_1000 = extract_binary_features(lingspam_train_df, n)
x_test_1000 = extract_binary_features(lingspam_test_df, n)
bernoulli_nb_binary_1000 = BernoulliNB()
bernoulli_nb_binary_1000.fit(x_train_1000, y_train)
y_pred = bernoulli_nb_binary_1000.predict(x_test_1000)
bernoulli_nb_binary_1000_precision = precision_score(y_test, y_pred)
bernoulli_nb_binary_1000_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(bernoulli_nb_binary_1000_precision))
print("recall: {}\n".format(bernoulli_nb_binary_1000_recall))
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_score, recall_score
print("Multinomial NB classifier with Binary Features")
n = 10
multinomial_nb_binary_10 = MultinomialNB()
multinomial_nb_binary_10.fit(x_train_10, y_train)
y_pred = multinomial_nb_binary_10.predict(x_test_10)
multinomial_nb_binary_10_precision = precision_score(y_test, y_pred)
multinomial_nb_binary_10_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_binary_10_precision))
print("recall: {}\n".format(multinomial_nb_binary_10_recall))
n = 100
multinomial_nb_binary_100 = MultinomialNB()
multinomial_nb_binary_100.fit(x_train_100, y_train)
y_pred = multinomial_nb_binary_100.predict(x_test_100)
multinomial_nb_binary_100_precision = precision_score(y_test, y_pred)
multinomial_nb_binary_100_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_binary_100_precision))
print("recall: {}\n".format(multinomial_nb_binary_100_recall))
n = 1000
multinomial_nb_binary_1000 = MultinomialNB()
multinomial_nb_binary_1000.fit(x_train_1000, y_train)
y_pred = multinomial_nb_binary_1000.predict(x_test_1000)
multinomial_nb_binary_1000_precision = precision_score(y_test, y_pred)
multinomial_nb_binary_1000_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_binary_1000_precision))
print("recall: {}\n".format(multinomial_nb_binary_1000_recall))
import numpy as np
from pandas import DataFrame
def extract_tf_features(df: DataFrame, N: int):
if N == 10:
top_n_features_list = top_10_features_list
elif N == 100:
top_n_features_list = top_100_features_list
elif N == 1000:
top_n_features_list = top_1000_features_list
else:
print('Please choose a right value for N (10, 100 or 1000)!')
return
    assert N == len(top_n_features_list), "The length of top_n_features_list should be equal to N!"
features_matrix = np.zeros((df.shape[0], N))
for email_idx, row in df.iterrows():
email_body = row['email_body'].split(' ')
for word_idx in range(len(top_n_features_list)):
word = top_n_features_list[word_idx]
if word in email_body:
features_matrix[email_idx, word_idx] = email_body.count(word)
return features_matrix
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_score, recall_score
print("Multinomial NB classifier with TF Features")
n = 10
x_train_10 = extract_tf_features(lingspam_train_df, n)
x_test_10 = extract_tf_features(lingspam_test_df, n)
multinomial_nb_tf_10 = MultinomialNB()
multinomial_nb_tf_10.fit(x_train_10, y_train)
y_pred = multinomial_nb_tf_10.predict(x_test_10)
multinomial_nb_tf_10_precision = precision_score(y_test, y_pred)
multinomial_nb_tf_10_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_tf_10_precision))
print("recall: {}\n".format(multinomial_nb_tf_10_recall))
n = 100
x_train_100 = extract_tf_features(lingspam_train_df, n)
x_test_100 = extract_tf_features(lingspam_test_df, n)
multinomial_nb_tf_100 = MultinomialNB()
multinomial_nb_tf_100.fit(x_train_100, y_train)
y_pred = multinomial_nb_tf_100.predict(x_test_100)
multinomial_nb_tf_100_precision = precision_score(y_test, y_pred)
multinomial_nb_tf_100_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_tf_100_precision))
print("recall: {}\n".format(multinomial_nb_tf_100_recall))
n = 1000
x_train_1000 = extract_tf_features(lingspam_train_df, n)
x_test_1000 = extract_tf_features(lingspam_test_df, n)
multinomial_nb_tf_1000 = MultinomialNB()
multinomial_nb_tf_1000.fit(x_train_1000, y_train)
y_pred = multinomial_nb_tf_1000.predict(x_test_1000)
multinomial_nb_tf_1000_precision = precision_score(y_test, y_pred)
multinomial_nb_tf_1000_recall = recall_score(y_test, y_pred)
print("N = {}".format(n))
print("precision: {}".format(multinomial_nb_tf_1000_precision))
print("recall: {}\n".format(multinomial_nb_tf_1000_recall))
from sklearn import svm
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import cross_validate
print("SVM classifier with TF Features")
n = 1000
kernel = ['linear', 'poly', 'rbf', 'sigmoid']
c = [1, 2]
x_train = extract_tf_features(lingspam_train_df, n)
x_test = extract_tf_features(lingspam_test_df, n)
# train svm models
for k in kernel:
for reg in c:
svm_model = svm.SVC(kernel=k, C=reg).fit(x_train, y_train)
y_pred = svm_model.predict(x_test)
svm_precision = precision_score(y_test, y_pred)
svm_recall = recall_score(y_test, y_pred)
print("Kernel = {}, C = {}".format(k, reg))
print("precision: {}".format(svm_precision))
print("recall: {}\n".format(svm_recall))
print("Perform 5-fold Cross Validation on training set:")
scores = cross_validate(
svm_model,
x_train,
y_train,
cv=5,
scoring=('precision', 'recall')
)
print(scores)
cv_best_precision = max(scores['test_precision'])
        cv_best_recall = max(scores['test_recall'])
        print("best cross validation precision: {}".format(cv_best_precision))
        print("best cross validation recall: {}\n".format(cv_best_recall))
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
n = 10
x_train = extract_binary_features(lingspam_train_df, n)
x_test = extract_binary_features(lingspam_test_df, n)
multinomial_nb_binary_baseline = MultinomialNB()
multinomial_nb_binary_baseline.fit(x_train, y_train)
y_pred = multinomial_nb_binary_baseline.predict(x_test)
acc_score = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
conf_mat = confusion_matrix(y_test, y_pred, labels = [0, 1])
tn = conf_mat[0][0]
fn = conf_mat[1][0]
tp = conf_mat[1][1]
fp = conf_mat[0][1]
# False positive rate
fpr = fp / (fp + tn)
# False negative rate
before_fnr = fn / (tp + fn)
print("baseline nb classifier accuracy rate: {}, precision: {}, recall: {}".format(acc_score, precision, recall))
print("confusion matrix: \n{}".format(conf_mat))
print("tn: {}, fp: {}, fn: {}, tp: {}".format(tn, fp, fn, tp))
print("fpr: {}, fnr: {}".format(fpr, before_fnr))
spam_email_list = []
for email_idx, row in lingspam_test_df.iterrows():
if row['is_spam'] == 1:
spam_email_list.append(email_idx)
spam_email_size = len(spam_email_list)
print("spam email size: {}".format(spam_email_size))
print("spam email list: {}".format(spam_email_list))
print('performing ADD-WORDS strategy on testset...')
all_cost = 0
modified_x_test = x_test.copy()
for email_idx, row in lingspam_test_df.iterrows():
if email_idx in spam_email_list:
cost = 0
print('email index: {}'.format(email_idx))
features_matrix = np.zeros((1, 10))
email_body = row['email_body'].split(' ')
for word_idx in range(len(top_10_features_list)):
word = top_10_features_list[word_idx]
if word in email_body:
features_matrix[0, word_idx] = 1
print('original feature matrix: {}'.format(features_matrix))
while multinomial_nb_binary_baseline.predict(features_matrix) != 0:
idx = next((i for i, x in enumerate(features_matrix[0]) if x == 0), None)
            if idx is None:
break
features_matrix[0, idx] = 1
cost += 1
all_cost += cost
modified_x_test[email_idx] = features_matrix
        print('modified feature matrix: {}'.format(features_matrix))
print('cost: {}'.format(cost))
avg_cost = all_cost / spam_email_size
print('all cost on testset: {}'.format(all_cost))
print('average cost: {}'.format(avg_cost))
after_y_pred = multinomial_nb_binary_baseline.predict(modified_x_test)
acc_score = accuracy_score(y_test, after_y_pred)
precision = precision_score(y_test, after_y_pred)
recall = recall_score(y_test, after_y_pred)
conf_mat = confusion_matrix(y_test, after_y_pred, labels = [0, 1])
tn = conf_mat[0][0]
fn = conf_mat[1][0]
tp = conf_mat[1][1]
fp = conf_mat[0][1]
# False positive rate
fpr = fp / (fp + tn)
# False negative rate
after_fnr = fn / (tp + fn)
print("After the attacker's modifications to test emails")
print("baseline nb classifier accuracy rate: {}, precision: {}, recall: {}".format(acc_score, precision, recall))
print("confusion matrix: \n{}".format(conf_mat))
print("tn: {}, fp: {}, fn: {}, tp: {}".format(tn, fp, fn, tp))
print("fpr: {}, fnr: {}".format(fpr, after_fnr))
train_spam_email_list = []
for email_idx, row in lingspam_train_df.iterrows():
if row['is_spam'] == 1:
train_spam_email_list.append(email_idx)
train_spam_email_size = len(train_spam_email_list)
print("spam email size: {}".format(train_spam_email_size))
print("spam email list: {}".format(train_spam_email_list))
print('performing ADD-WORDS strategy on trainset...')
all_cost_train = 0
modified_x_train = x_train.copy()
for email_idx, row in lingspam_train_df.iterrows():
if email_idx in train_spam_email_list:
cost = 0
# print('email index: {}'.format(email_idx))
features_matrix = np.zeros((1, 10))
email_body = row['email_body'].split(' ')
for word_idx in range(len(top_10_features_list)):
word = top_10_features_list[word_idx]
if word in email_body:
features_matrix[0, word_idx] = 1
# print('original feature matrix: {}'.format(features_matrix))
while multinomial_nb_binary_baseline.predict(features_matrix) != 0:
idx = next((i for i, x in enumerate(features_matrix[0]) if x == 0), None)
            if idx is None:
break
features_matrix[0, idx] = 1
cost += 1
all_cost_train += cost
modified_x_train[email_idx] = features_matrix
        # print('modified feature matrix: {}'.format(features_matrix))
# print('cost: {}'.format(cost))
avg_cost_train = all_cost_train / train_spam_email_size
print('all cost on trainset: {}'.format(all_cost_train))
print('average cost: {}'.format(avg_cost_train))
print('Updating baseline classifier...')
multinomial_nb_binary_updated = MultinomialNB()
multinomial_nb_binary_updated.fit(modified_x_train, y_train)
updated_y_pred = multinomial_nb_binary_updated.predict(modified_x_test)
acc_score = accuracy_score(y_test, updated_y_pred)
precision = precision_score(y_test, updated_y_pred)
recall = recall_score(y_test, updated_y_pred)
conf_mat = confusion_matrix(y_test, updated_y_pred, labels = [0, 1])
tn = conf_mat[0][0]
fn = conf_mat[1][0]
tp = conf_mat[1][1]
fp = conf_mat[0][1]
# False positive rate
updated_fpr = fp / (fp + tn)
# False negative rate
updated_fnr = fn / (tp + fn)
print("baseline nb classifier accuracy rate: {}, precision: {}, recall: {}".format(acc_score, precision, recall))
print("confusion matrix: \n{}".format(conf_mat))
print("tn: {}, fp: {}, fn: {}, tp: {}".format(tn, fp, fn, tp))
print("fpr: {}, fnr: {}".format(updated_fpr, updated_fnr))
print('False Negative Rate of the baseline NB classifier before attacker\'s modification: {}'.format(before_fnr))
print('False Negative Rate of the baseline NB classifier after attacker\'s modification: {}'.format(after_fnr))
print('Average cost of attacker\'s modifications: {}'.format(avg_cost))
print('False Negative Rate of updated NB classifier: {}'.format(updated_fnr))
print('False Positive Rate of updated NB classifier: {}'.format(updated_fpr))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: We can't really run the above code in this notebook since it's hard to upload the dataset folders to colab. So I ran this script offline and generated 2 expected csv files. I put the link here
Step2: From the shape of csv dataframes, the trainset has 2602 samples, the testset has 291 test cases.
Step3: Experiments
    Step4: Since the computation time for all words in the dataset (45k words!) is quite long, I did the computation offline and saved the dictionary with information gain values to a csv file. You can download this file from this link. After you download it, upload it to this colab notebook and we will need this in the following experiments.
    Step5: Now we have Information Gain values for each word in the dataset, so we can perform feature selection by ranking words in descending order of IG. From the training data, select the top-N features ($N = {10, 100, 1000}$) in terms of the highest Information Gain (IG) scores.
    Step6: Now we have 3 feature dictionaries: top-10, top-100 and top-1000. Next we can extract binary feature matrices or term frequency feature matrices from the datasets based on them. Let's start!
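As a compact cross-check of the per-word IG formula implemented in the code above, the same quantity can be written with a binary-entropy helper. This is an illustrative sketch (`entropy` and `word_info_gain` are not names from the notebook) and it omits the +0.1 smoothing the notebook's functions add, so values will differ slightly:

```python
import math

def entropy(p):
    """Binary entropy H(p) in bits (0 by convention at p = 0 or 1)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def word_info_gain(n_legit, n_spam, legit_with_word, spam_with_word):
    """IG(C; X) = H(C) - H(C|X) for a binary word-presence feature X."""
    n = n_legit + n_spam
    h_c = entropy(n_legit / n)
    n_with = legit_with_word + spam_with_word
    n_without = n - n_with
    h_cond = 0.0
    if n_with:
        h_cond += (n_with / n) * entropy(legit_with_word / n_with)
    if n_without:
        h_cond += (n_without / n) * entropy((n_legit - legit_with_word) / n_without)
    return h_c - h_cond

# A word present in 45/50 spam but only 5/50 legit emails is informative:
print(round(word_info_gain(50, 50, 5, 45), 3))  # -> 0.531
```

A word distributed evenly across both classes yields an IG of exactly zero, which is why IG ranking is a reasonable filter for the vocabulary.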
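For reference, scikit-learn's `CountVectorizer` can produce equivalent matrices when restricted to a fixed vocabulary. Note that its default tokenizer lowercases and splits differently from the plain whitespace split used above, and `docs`/`vocab` here are toy stand-ins, not the notebook's data:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["free money now", "meeting about language research", "free language course"]
vocab = ["free", "language"]  # stand-in for a top-N IG word list

binary_vec = CountVectorizer(vocabulary=vocab, binary=True).fit(docs)  # presence
tf_vec = CountVectorizer(vocabulary=vocab).fit(docs)                   # counts
print(binary_vec.transform(docs).toarray())  # -> [[1 0], [0 1], [1 1]]
print(tf_vec.transform(docs).toarray())
```

Passing `vocabulary=` pins the column order, so train and test matrices stay aligned the same way the hand-rolled extractors do.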
Step7: Train a Bernoulli Naive Bayes classifier with binary features and get the precision score, recall score on the testing dataset.
Step8: Multinomial NB with Binary Features
Step9: Multinomial NB with Term Frequency (TF) Features
Step10: Train a Multinomial Naive Bayes classifier with term frequency features and get the precision score, recall score on the testing dataset.
Step11: SVM based Spam Filter
Step12: Adversarial Classification based Spam Filter
Step13: First, let's get all spam emails in the testset.
    Step14: Next perform the ADD-WORDS strategy for the adversary on these spam emails. The attacker seeks the minimum-cost solution such that each spam email in the test set gets classified as legitimate by the baseline NB classifier. I will add words to spam emails in the testset so that they can fool the classifier. The words are chosen from the top-10 terms ranked by the IG metric. The ADD-WORDS strategy acts as a greedy algorithm here: once the prediction result for a spam email flips, we stop adding words.
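The greedy loop described here can be isolated into a small helper; `add_words_attack` and the toy `pred` classifier below are illustrative names, not from the notebook:

```python
import numpy as np

def add_words_attack(features, predict_fn):
    """Greedily flip zero features to 1 until predict_fn returns 0
    (legitimate) or no zero features remain; returns (vector, cost)."""
    x = features.copy()
    cost = 0
    while predict_fn(x) != 0:
        zeros = np.flatnonzero(x == 0)
        if zeros.size == 0:
            break
        x[zeros[0]] = 1.0
        cost += 1
    return x, cost

# Toy classifier: says "spam" (1) until at least 3 features are on.
pred = lambda x: 1 if x.sum() < 3 else 0
x, cost = add_words_attack(np.zeros(5), pred)
print(x.tolist(), cost)  # -> [1.0, 1.0, 1.0, 0.0, 0.0] 3
```

The cost counter mirrors the per-email cost tallied in the loops below.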
Step15: Evaluate baseline classifier on modified testset.
Step16: Apply the same ADD-WORDS strategy on training set, then retrain the baseline classifier and evaluate the updated baseline classifier.
Step17: Result and Conclusion
|
6,413
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import autokeras as ak
from tensorflow.keras.datasets import mnist

input_node = ak.ImageInput()
output_node = ak.Normalization()(input_node)
output_node1 = ak.ConvBlock()(output_node)
output_node2 = ak.ResNetBlock(version="v2")(output_node)
output_node = ak.Merge()([output_node1, output_node2])
output_node = ak.ClassificationHead()(output_node)
auto_model = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=1
)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape) # (60000, 28, 28)
print(y_train.shape) # (60000,)
print(y_train[:3])  # array([5, 0, 4], dtype=uint8)
# Feed the AutoModel with training data.
auto_model.fit(x_train[:100], y_train[:100], epochs=1)
# Predict with the best model.
predicted_y = auto_model.predict(x_test)
# Evaluate the best model with testing data.
print(auto_model.evaluate(x_test, y_test))
class SingleDenseLayerBlock(ak.Block):
def build(self, hp, inputs=None):
# Get the input_node from inputs.
input_node = tf.nest.flatten(inputs)[0]
layer = tf.keras.layers.Dense(
hp.Int("num_units", min_value=32, max_value=512, step=32)
)
output_node = layer(input_node)
return output_node
# Build the AutoModel
input_node = ak.Input()
output_node = SingleDenseLayerBlock()(input_node)
output_node = ak.RegressionHead()(output_node)
auto_model = ak.AutoModel(input_node, output_node, overwrite=True, max_trials=1)
# Prepare Data
num_instances = 100
x_train = np.random.rand(num_instances, 20).astype(np.float32)
y_train = np.random.rand(num_instances, 1).astype(np.float32)
x_test = np.random.rand(num_instances, 20).astype(np.float32)
y_test = np.random.rand(num_instances, 1).astype(np.float32)
# Train the model
auto_model.fit(x_train, y_train, epochs=1)
print(auto_model.evaluate(x_test, y_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: While building the model, the blocks used need to follow this topology
Step2: For multiple input nodes and multiple heads search space, you can refer to
Step3: You can connect it with other blocks and build it into an
|
6,414
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_enable_rdp.tar.gz"
registerMordorSQLTable(spark, sd_file, "sdTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, TargetObject
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND (TargetObject LIKE "%fDenyTSConnections"
OR TargetObject LIKE "%UserAuthentication")
AND Details = "DWORD (0x00000000)"
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
|
6,415
|
<ASSISTANT_TASK:>
Python Code:
import pypsa
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
%matplotlib inline
plt.rc("figure", figsize=(8, 8))
network = pypsa.examples.ac_dc_meshed(from_master=True)
# get current type (AC or DC) of the lines from the buses
lines_current_type = network.lines.bus0.map(network.buses.carrier)
lines_current_type
network.plot(
line_colors=lines_current_type.map(lambda ct: "r" if ct == "DC" else "b"),
    title="Mixed AC (blue) - DC (red) network",
color_geomap=True,
jitter=0.3,
)
plt.tight_layout()
network.links.loc["Norwich Converter", "p_nom_extendable"] = False
network.determine_network_topology()
network.sub_networks["n_branches"] = [
len(sn.branches()) for sn in network.sub_networks.obj
]
network.sub_networks["n_buses"] = [len(sn.buses()) for sn in network.sub_networks.obj]
network.sub_networks
network.snapshots
network.generators
network.generators_t.p_max_pu.plot.area(subplots=True)
plt.tight_layout()
network.lopf();
network.objective
network.objective + network.objective_constant
network.generators.p_nom_opt.div(1e3).plot.bar(ylabel="GW", figsize=(8, 3))
plt.tight_layout()
network.generators_t.p.div(1e3).plot.area(subplots=True, ylabel="GW")
plt.tight_layout()
network.buses_t.marginal_price.mean(1).plot.area(figsize=(8, 3), ylabel="Euro per MWh")
plt.tight_layout()
network.links_t.p0
network.lines_t.p0
network.buses_t.p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We inspect the topology of the network. Therefore use the function determine_network_topology and inspect the subnetworks in network.sub_networks.
Step2: The network covers 10 time steps. These are given by the snapshots attribute.
Step3: There are 6 generators in the network, 3 wind and 3 gas. All are attached to buses
Step4: We see that the generators have different capital and marginal costs. All of them have a p_nom_extendable set to True, meaning that capacities can be extended in the optimization.
Step5: Alright now we know how the network looks like, where the generators and lines are. Now, let's perform a optimization of the operation and capacities.
Step6: The objective is given by
Step7: Why is this number negative? It considers the starting point of the optimization, thus the existent capacities given by network.generators.p_nom are taken into account.
Step8: The optimal capacities are given by p_nom_opt for generators, links and storages and s_nom_opt for lines.
Step9: Their production is again given as a time-series in network.generators_t.
Step10: What are the Locational Marginal Prices in the network. From the optimization these are given for each bus and snapshot.
Step11: We can inspect futher quantities as the active power of AC-DC converters and HVDC link.
Step12: ...or the active power injection per bus.
|
6,416
|
<ASSISTANT_TASK:>
Python Code:
from os.path import join
adaptor_cleanup_dir = '/path/to/output/cleanup_dir/'
closed_ref_output = '/path/to/closed_ref_output/'
gg_ref_fna = '/path/to/greengenes/97_otus.fasta'
gg_ref_tax = '/path/to/greengenes/97_otu_taxonomy.txt'
silva_ref_fna = '/path/to/silva/Silva_123_rep_set97.fna'
silva_ref_tax = '/path/to/silva/taxonomy_97_7_levels.txt'
gg_cr_dp = join(closed_ref_output, '01-closed-ref-greengenes')
gg_cr_ps_dp = join(gg_cr_dp, 'per_study')
%%bash
mkdir -p $gg_cr_dp
mkdir -p $gg_cr_ps_dp
# Index the Reference sequence file to reduce 20 min of processing per study
indexdb_rna --ref $gg_ref_fna,$gg_cr_ps_dp/97_otus.idx --max_pos 10000
echo -e "pick_otus:otu_picking_method sortmerna\npick_otus:threads 31\npick_otus:sortmerna_db $gg_cr_ps_dp/97_otus.idx" > $gg_cr_ps_dp/cr_params.txt
for i in `ls $adaptor_cleanup_dir`
do
pick_closed_reference_otus.py -i $adaptor_cleanup_dir/$i \
-o $gg_cr_ps_dp/$i \
-r $gg_ref_fna \
-t $gg_ref_tax \
-p $gg_cr_ps_dp/cr_params.txt
done
# Merge all the individual tables
tables=''
for i in `ls $gg_cr_ps_dp/*/otu_table.biom`
do
tables=$tables','$i
done
tables=${tables:1}
parallel_merge_otu_tables.py -i $tables -o $gg_cr_dp/merged
cp $gg_cr_dp/merged/merged.biom $gg_cr_dp/emp_cr_gg_13_8.biom
silva_cr_dp = join(closed_ref_output, '01-closed-ref-silva-16S')
silva_cr_ps_dp = join(silva_cr_dp, 'per_study')
%%bash
mkdir -p $silva_cr_dp
mkdir -p $silva_cr_ps_dp
# Index the reference sequence file to reduce 20 min of processing per study
indexdb_rna --ref $silva_ref_fna,$silva_cr_ps_dp/97_otus.idx --max_pos 10000
echo -e "pick_otus:otu_picking_method sortmerna\npick_otus:threads 31\npick_otus:sortmerna_db $silva_cr_ps_dp/97_otus.idx" > $silva_cr_ps_dp/cr_params.txt
for i in `ls $adaptor_cleanup_dir`
do
pick_closed_reference_otus.py -i $adaptor_cleanup_dir/$i \
-o $silva_cr_ps_dp/$i \
-r $silva_ref_fna \
-t $silva_ref_tax \
-p $silva_cr_ps_dp/cr_params.txt
done
# Merge all the individual tables
tables=''
for i in `ls $silva_cr_ps_dp/*/otu_table.biom`
do
tables=$tables','$i
done
tables=${tables:1}
parallel_merge_otu_tables.py -i $tables -o $silva_cr_dp/merged
cp $silva_cr_dp/merged/merged.biom $silva_cr_dp/emp_cr_silva_16S_123.biom
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Greengenes 13.8
Step2: Silva123
|
6,417
|
<ASSISTANT_TASK:>
Python Code:
x = 15
def func():
print(x)
func()
y = 15
def modify():
y = 20
print(y)
modify()
print(y)
z = 15
def modify_global():
global z
z = 20
print(z)
modify_global()
print(z)
def modify_define_global():
global var
var = 34
print(var)
modify_define_global()
print(var)
def outer():
x, y = 1, 2
def inner():
x, y = 3, 4
print(x, y)
inner()
print(x, y)
outer()
def out_1(x, y):
def inner_1():
print(x, y)
print('hello world')
return inner_1
f = out_1(1, 2)
f()
f2 = out_1(3, 4)
f2()
def pow_n(n):
def power(num):
return num ** n
return power
square = pow_n(2)
cube = pow_n(3)
print(square(5))
print(cube(5))
def out2():
x, y = 1, 2
def inner():
global x, y
x, y = 3, 4
print(x, y)
inner()
print(x, y)
out2()
print(x, y)
def out3():
i, j = 1, 2
def inner():
nonlocal i, j
i, j = 3, 4
inner()
print(i, j)
out3()
def out4():
i, j = 1, 2
def middle():
i, j = 3, 4
def inner():
nonlocal i, j
i, j = 5, 6
inner()
print(i, j)
middle()
print(i, j)
out4()
abc = 123
def modify_nonlocal():
    nonlocal abc  # SyntaxError: no binding for nonlocal 'abc' found
abc = 456
modify_nonlocal()
print(abc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Inside a function's local scope we can access the global variable x, but note that this access only reads x's value. Next, let's try out what happens if we write to a global variable inside a function.
    Step2: We can see that the modify function assigns to the variable y, but when we print y's value in the global scope, the global variable y was not modified by modify.
    Step3: The statement "global z" tells the interpreter not to look for z among the local variables: we want the global one, so resolve that name directly in the global scope and do nothing extra. From then on, using the variable z inside modify_global means using the global variable z directly.
    Step4: After the global keyword is used, the interpreter looks up the variable var in the global scope by default; if it is not found, the interpreter allocates memory for var and binds the name var to that memory in the global scope.
    Step5: Here is an example of nested functions.
    Step6: The outer function returns the inner function as an object; binding a reference to that object in the global scope gives the inner function its own name there, and the inner function can then be called through that new name.
    Step7: Since functions are objects, different functions occupy different memory. Here out_1 is executed a second time; it defines a new function object and returns it to the global scope.
    Step8: With closures it is easy to customize a new function through an outer parameter, instead of implementing a square function once and then reimplementing a cube function from scratch.
    Step9: The conflict between closure scope and assignment statements has not been resolved here.
    Step10: What if there are multiple levels of nesting?
    Step11: Notice that nonlocal only makes the interpreter look up the name in the nearest enclosing function scope; it does not affect scopes further out.
|
6,418
|
<ASSISTANT_TASK:>
Python Code:
class BernoulliBandit:
def __init__(self, n_actions=5):
self._probs = np.random.random(n_actions)
np.random.seed(1234)
@property
def action_count(self):
return len(self._probs)
def pull(self, action):
        if np.random.random() > self._probs[action]:
            return 0.0
        return 1.0
    def optimal_reward(self):
        """Used for regret calculation"""
        return np.max(self._probs)
    def step(self):
        """Used in nonstationary version"""
        pass
    def reset(self):
        """Used in nonstationary version"""
class AbstractAgent(metaclass=ABCMeta):
def init_actions(self, n_actions):
self._successes = np.zeros(n_actions)
self._failures = np.zeros(n_actions)
self._total_pulls = 0
@abstractmethod
    def get_action(self):
        """
        Get current best action
        :rtype: int
        """
        pass
    def update(self, action, reward):
        """
        Observe reward from action and update agent's internal parameters
        :type action: int
        :type reward: int
        """
        self._total_pulls += 1
if reward == 1:
self._successes[action] += 1
else:
self._failures[action] += 1
@property
def name(self):
return self.__class__.__name__
class RandomAgent(AbstractAgent):
def get_action(self):
return np.random.randint(0, len(self._successes))
class EpsilonGreedyAgent(AbstractAgent):
def __init__(self, epsilon = 0.01):
self._epsilon = epsilon
def get_action(self):
# YOUR CODE HERE
p_random = np.random.sample()
        if p_random < self._epsilon:
            # explore: choose an arm uniformly at random (upper bound exclusive)
            xt = np.random.randint(0, len(self._successes))
else:
n_actions = len(self._successes) #K
theta = np.zeros(n_actions)
for k in range(n_actions):
theta[k] = self._successes[k] / (self._successes[k] + self._failures[k] + 0.1)
xt = np.argmax(theta)
return xt
@property
def name(self):
return self.__class__.__name__ + "(epsilon={})".format(self._epsilon)
class UCBAgent(AbstractAgent):
def get_action(self):
# YOUR CODE HERE
n_actions = len(self._successes) #K
w = np.zeros(n_actions)
for k in range(n_actions):
denom = (self._successes[k] + self._failures[k] + 0.1)
factor1 = self._successes[k] / denom
factor2 = np.sqrt(2 * np.log(self._total_pulls + 0.1) / denom)
w[k] = factor1 + factor2
xt = np.argmax(w)
return xt
@property
def name(self):
return self.__class__.__name__
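For reference, the classic UCB1 index the agent above approximates can be written standalone. This sketch (`ucb1_index` is an illustrative name) assumes every arm has been pulled at least once, whereas the agent uses a +0.1 smoothing term to avoid division by zero for unpulled arms:

```python
import numpy as np

def ucb1_index(successes, failures, total_pulls):
    """UCB1 score per arm: empirical mean plus exploration bonus
    sqrt(2 ln t / n_k), where n_k is the number of pulls of arm k."""
    s = np.asarray(successes, dtype=float)
    n = s + np.asarray(failures, dtype=float)
    return s / n + np.sqrt(2.0 * np.log(total_pulls) / n)

# A rarely pulled arm gets a large bonus and is explored first:
scores = ucb1_index([5, 1], [5, 0], total_pulls=11)
print(int(np.argmax(scores)))  # -> 1
```

The bonus shrinks as an arm accumulates pulls, which is what drives the logarithmic regret guarantee of UCB-style algorithms.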
#from numpy.random import beta
from scipy.stats import beta
class ThompsonSamplingAgent(AbstractAgent):
def get_action(self):
# YOUR CODE HERE
n_actions = len(self._successes) #K
theta = np.zeros(n_actions)
for k in range(n_actions):
if (self._successes[k] == 0) or (self._failures[k] == 0):
theta[k] = np.random.random() # uniform draw (Beta(1, 1) prior) for unseen arms
else:
#theta[k] = beta.rvs(self._successes[k], self._failures[k], size=1)
theta[k] = beta.median(self._successes[k], self._failures[k])
xt = np.argmax(theta)
return xt
@property
def name(self):
return self.__class__.__name__
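For reference, textbook Thompson sampling draws one sample from each arm's Beta posterior rather than taking the posterior median; a minimal standalone sketch under an assumed Beta(1, 1) prior (the function name `thompson_action` is illustrative, not part of the assignment):

```python
import numpy as np

def thompson_action(successes, failures, rng=None):
    # Draw one sample from each arm's Beta posterior and play the best draw.
    # Beta(1, 1) uniform priors are assumed (hence the +1 pseudo-counts).
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.beta(np.asarray(successes) + 1.0, np.asarray(failures) + 1.0)
    return int(np.argmax(samples))
```

Sampling (instead of using a point estimate like the median) is what gives Thompson sampling its exploration behavior.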
from collections import OrderedDict
def get_regret(env, agents, n_steps=5000, n_trials=50):
scores = OrderedDict({
agent.name: [0.0 for step in range(n_steps)] for agent in agents
})
for trial in range(n_trials):
env.reset()
for a in agents:
a.init_actions(env.action_count)
for i in range(n_steps):
optimal_reward = env.optimal_reward()
for agent in agents:
action = agent.get_action()
reward = env.pull(action)
agent.update(action, reward)
scores[agent.name][i] += optimal_reward - reward
env.step() # change bandit's state if it is nonstationary
for agent in agents:
scores[agent.name] = np.cumsum(scores[agent.name]) / n_trials
return scores
def plot_regret(agents, scores):
for agent in agents:
plt.plot(scores[agent.name])
plt.legend([agent.name for agent in agents])
plt.ylabel("regret")
plt.xlabel("steps")
plt.show()
# Uncomment agents
agents = [
EpsilonGreedyAgent(),
UCBAgent(),
ThompsonSamplingAgent()
]
regret = get_regret(BernoulliBandit(), agents, n_steps=10000, n_trials=10)
plot_regret(agents, regret)
from submit import submit_bandits
submit_bandits(agents, regret, "tonatiuh_rangel@hotmail.com", "mPIwUddSXNqZ1NsH")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Bernoulli Bandit
Step6: Epsilon-greedy agent
Step7: UCB Agent
Step8: Thompson sampling
Step9: Submit to coursera
|
6,419
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
import math
import string
products = graphlab.SFrame('amazon_baby.gl/')
products
products[269]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
products[269]['word_count']
products = products[products['rating'] != 3]
len(products)
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
weights = sentiment_model.coefficients
weights.column_names()
num_positive_weights = len([x for x in weights if x['value'] >= 0])
num_negative_weights = len([x for x in weights if x['value'] < 0 ])
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
sample_test_data[0]['review']
sample_test_data[1]['review']
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
pred = []
for score in scores:
if score >= 0.0:
pred.append(1)
else:
pred.append(-1)
print pred
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
prob_pred = []
for score in scores:
import math
pp = 1 / (1 + math.exp(- score))
prob_pred.append(pp)
print prob_pred
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
test_data['prob'] = sentiment_model.predict(test_data, output_type='probability')
for k in test_data.topk('prob', k=20):
print k['name']
for k in test_data.topk('prob', k=20, reverse=True):
print k['name']
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
## YOUR CODE HERE
data['sentiment_predict_1'] = model.predict(data)
# Compute the number of correctly classified examples
## YOUR CODE HERE
count = 0
for d in data:
if d['sentiment'] == d['sentiment_predict_1']:
count = count + 1
# Then compute accuracy by dividing num_correct by total number of examples
## YOUR CODE HERE
accuracy = count/len(data)
return accuracy
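The counting loop in `get_classification_accuracy` can be expressed as one vectorized comparison; a minimal NumPy sketch, assuming the predictions and labels are plain arrays (independent of GraphLab Create):

```python
import numpy as np

def accuracy(predictions, true_labels):
    # Fraction of positions where the prediction matches the true label.
    predictions = np.asarray(predictions)
    true_labels = np.asarray(true_labels)
    return float(np.mean(predictions == true_labels))
```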
acc = get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
print acc
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
train_data[0]['review']
print train_data[0]['word_count']
print train_data[0]['word_count_subset']
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
simple_model.coefficients
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
pv_words_simple_model = [x['index'] for x in simple_model.coefficients if x['value'] >= 0]
print pv_words_simple_model
different_values = []
for x in sentiment_model.coefficients:
index = x['index']
if index is not None and index in pv_words_simple_model and x['value'] < 0:
print '' + index + ' --> ' + str(x['value'])
different_values.append(index)
print different_values
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
print num_positive/len(train_data)
num_positive_test_data = (test_data['sentiment'] == +1).sum()
num_negative_test_data = (test_data['sentiment'] == -1).sum()
print num_positive_test_data
print num_negative_test_data
print 'accuracy of majority class classifier on test data ' + str(num_positive_test_data/len(test_data))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preparation
Step2: Now, let us see a preview of what the dataset looks like.
Step3: Build the word count vector for each review
Step4: Now, we will perform 2 simple data transformations
Step5: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
Step6: Extract sentiments
Step7: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Step8: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Step9: Train a sentiment classifier with logistic regression
Step10: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Step11: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Step12: Quiz Question
Step13: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step14: That review seems pretty positive.
Step15: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as
Step16: Predicting sentiment
Step17: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
Step18: Checkpoint
Step19: Checkpoint
Step20: Quiz Question
Step21: Quiz Question
Step22: Quiz Question
Step23: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step24: Quiz Question
Step25: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
Step26: Let's see what the first example of the dataset looks like
Step27: The word_count column had been working with before looks like the following
Step28: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
Step29: Train a logistic regression model on a subset of data
Step30: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
Step31: Now, we will inspect the weights (coefficients) of the simple_model
Step32: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
Step33: Quiz Question
Step34: Quiz Question
Step35: Comparing models
Step36: Now, compute the classification accuracy of the simple_model on the train_data
Step37: Quiz Question
Step38: Next, we will compute the classification accuracy of the simple_model on the test_data
Step39: Quiz Question
Step40: Now compute the accuracy of the majority class classifier on test_data.
|
6,420
|
<ASSISTANT_TASK:>
Python Code:
import eex
import os
import pandas as pd
import numpy as np
# Create empty data layer
dl = eex.datalayer.DataLayer("butane", backend="Memory")
dl.summary()
First, we add atoms to the system. Atoms have associated metadata. The possible atom metadata is listed here.
dl.list_valid_atom_properties()
TOPOLOGY:
Information can be added to the datalayer in the form of pandas dataframes. Here, we add atom metadata.
The name of the column corresponds to the atom property.
Populate empty dataframe with relevant information and add to EEX datalayer
# Create empty dataframe
atom_df = pd.DataFrame()
# Create atomic system using pandas dataframe
atom_df["atom_index"] = np.arange(0,4)
atom_df["molecule_index"] = [int(x) for x in np.zeros(4)]
atom_df["residue_index"] = [int(x) for x in np.zeros(4)]
atom_df["atom_name"] = ["C1", "C2", "C3", "C4"]
atom_df["charge"] = np.zeros(4)
atom_df["atom_type"] = [1, 2, 2, 1]
atom_df["X"] = [0, 0, 0, -1.474]
atom_df["Y"] = [-0.4597, 0, 1.598, 1.573]
atom_df["Z"] = [-1.5302, 0, 0, -0.6167]
atom_df["mass"] = [15.0452, 14.02658, 14.02658, 15.0452]
# Add atoms to datalayer
dl.add_atoms(atom_df, by_value=True)
# Print datalayer information
dl.summary()
# Print stored atom properties
dl.get_atoms(properties=None, by_value=True)
TOPOLOGY:
The EEX datalayer now contains four nonbonded atoms. To create butane, atoms must be bonded
to one another.
Add bonds to system
# Create empty dataframes for bonds
bond_df = pd.DataFrame()
# Create column names. Here, "term_index" refers to the bond type index.
# i.e. - if all bonds are the same type, they will have the same term index
bond_column_names = ["atom1", "atom2", "term_index"]
# Create corresponding data. The first row specifies that atom0 is bonded
# to atom 1 and has bond_type id 0
bond_data = np.array([[0, 1, 0,],
[1, 2, 0],
[2, 3, 0]])
for num, name in enumerate(bond_column_names):
bond_df[name] = bond_data[:,num]
dl.add_bonds(bond_df)
dl.summary()
TOPOLOGY:
Add angles and dihedrals to system.
# Follow similar procedure as for bonds
angle_df = pd.DataFrame()
dihedral_df = pd.DataFrame()
angle_column_names = ["atom1", "atom2", "atom3", "term_index"]
dihedral_column_names = ["atom1", "atom2", "atom3", "atom4", "term_index"]
angle_data = np.array([[0, 1, 2, 0,],
[1, 2, 3, 0],])
dihedral_data = np.array([[0, 1, 2, 3, 0,]])
for num, name in enumerate(angle_column_names):
angle_df[name] = angle_data[:,num]
dl.add_angles(angle_df)
for num, name in enumerate(dihedral_column_names):
dihedral_df[name] = dihedral_data[:,num]
dl.add_dihedrals(dihedral_df)
dl.summary()
EEX FORCE FIELD PARAMETERS
A main component of EEX is internally stored metadata which defines the details of functional forms, including form, constants,
unit types, and default units (if the user does not override this option).
This metadata is stored as human-readable dictionaries which can easily be added or manipulated.
# Here, we examine the metadata present in the bond metadata for a harmonic bond
bond_metadata = eex.metadata.two_body_terms.two_body_metadata
for k, v in bond_metadata["forms"]["harmonic"].items():
print(k, v)
FORCE FIELD PARAMETERS
To add bonds (or other parameters) using this metadata, the user specifies the form using a keyword ("harmonic") that
matches EEX's metadata.
Values for the constants are passed using a dictionary with the 'parameters' defined in the metadata as keys.
Each bond type is given a uid, and default dimensions may be overwritten by the user using a dictionary
and the 'utype' argument
# Here, in add_term_parameter, the first argument is the term order. '2'
# corresponds to bonded atoms.
dl.add_term_parameter(2, "harmonic", {'K': 300.9, 'R0': 1.540}, uid=0, utype={'K':"kcal * mol **-1 * angstrom ** -2",
'R0': "angstrom"})
# If units or parameters are not compatible with the metadata, the datalayer will not allow storage of the parameter.
# Here, we have changed 'K' to simply "kcal". This will fail (uncomment to test)
#dl.add_term_parameter(2, "harmonic", {'K': 300.9, 'R0': 1.540}, uid=0, utype={'K':"kcal",'R0': "angstrom"})
## Add harmonic angle parameters
dl.add_term_parameter(3, "harmonic", {'K': 62.100, 'theta0': 114}, uid=0, utype={'K':'kcal * mol ** -1 * radian ** -2',
'theta0': 'degree'})
# Add OPLS dihedral parameter
dl.add_term_parameter(4, "opls", {'K_1': 1.41103414, 'K_2': -0.27101489,
'K_3': 3.14502869, 'K_4': 0}, uid=0, utype={'K_1': 'kcal * mol ** -1',
'K_2': 'kcal * mol ** -1',
'K_3': 'kcal * mol ** -1',
'K_4': 'kcal * mol ** -1'})
NONBOND PARAMETERS
For nonbond parameters, we currently provide support for Lennard Jones and Buckingham potentials.
Most programs use pair-wise Lennard Jones potentials for nonbond interactions. Our internal metadata stores these as A
and B parameters. However, users may specify other forms such as epsilon/sigma, epsilon, Rmin, etc.
Lennard Jones parameters can be added as a pair (atom_type1, atom_type2) or for a single atom type with a mixing rule.
dl.add_nb_parameter(atom_type=1, nb_name="LJ",
nb_model="epsilon/sigma", nb_parameters={'sigma': 3.75, 'epsilon': 0.1947460018},
utype={'sigma': 'angstrom', 'epsilon': 'kcal * mol ** -1'})
dl.add_nb_parameter(atom_type=2, nb_name="LJ",
nb_model="epsilon/sigma", nb_parameters={'sigma': 3.95, 'epsilon': 0.0914112887},
utype={'sigma': 'angstrom', 'epsilon': 'kcal * mol ** -1'})
dl.set_mixing_rule('lorentz-berthelot')
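For reference, the Lorentz-Berthelot combining rule applied above takes the arithmetic mean of the sigma parameters and the geometric mean of the epsilon parameters; a minimal standalone sketch (not EEX's internal implementation):

```python
import math

def lorentz_berthelot(sigma_i, epsilon_i, sigma_j, epsilon_j):
    # Arithmetic mean for the size parameter, geometric mean for the well depth.
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    epsilon_ij = math.sqrt(epsilon_i * epsilon_j)
    return sigma_ij, epsilon_ij
```

Applied to the two atom types above (sigma 3.75/3.95 angstrom), this yields the 1-2 pair parameters that `build_LJ_mixing_table` fills in.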
# Retrieve stored parameters
print("All stored parameters\n", dl.list_nb_parameters("LJ"), "\n\n")
# To apply the mixing rule:
dl.build_LJ_mixing_table()
print("All stored parameters\n", dl.list_nb_parameters("LJ"), "\n\n")
# These can also be retrieved for only single atoms, or for atom pairs by using itype='single' or itype='pairs'
pair_interactions = dl.list_nb_parameters("LJ", itype="pair")
print("Pair parameters\n", pair_interactions)
dl.summary()
# Evaluate system energy
energy_system1 = dl.evaluate(utype="kcal * mol ** -1")
print(energy_system1)
# Preview an amber prmtop (parameter-topology file) for Amber.
butane_file = os.path.join("..", "examples", "amber","alkanes", "trappe_butane_single_molecule.prmtop")
f = open(butane_file)
print(f.read())
f.close()
# Create new datalayer and populate using amber reader
dl_amber = eex.datalayer.DataLayer("butane_amber")
eex.translators.amber.read_amber_file(dl_amber, butane_file)
dl_amber.summary()
energy_system2 = dl_amber.evaluate(utype="kcal * mol ** -1")
for k in energy_system1:
energy_difference = energy_system1[k] - energy_system2[k]
print(k," difference:\t %.3f" % energy_difference)
# Compare stored NB parameters
eex.testing.dict_compare(dl_amber.list_nb_parameters("LJ"), dl.list_nb_parameters("LJ", itype="pair"))
# We can now write the amber file we read for lammps.
eex.translators.lammps.write_lammps_file(dl_amber, "output_lammps.data", "output_lammps.in")
# Write a local copy of the amber datalayer for amber.
eex.translators.amber.write_amber_file(dl_amber, "amber_output.prmtop")
## Read the written file into a datalayer ##
dl_lammps = eex.datalayer.DataLayer("butane_lammps")
eex.translators.lammps.read_lammps_input_file(dl_lammps, "output_lammps.in")
f = open("output_lammps.data")
print(f.read())
f.close()
lammps_energy = dl_lammps.evaluate(utype="kcal * mol ** -1")
# Compare energies
for k in energy_system1:
energy_difference = lammps_energy[k] - energy_system2[k]
print(k," difference:\t %.3f" % energy_difference)
dl_dna = eex.datalayer.DataLayer("DNA_amber")
DNA_file = os.path.join("..", "examples", "amber","peptides", "alanine_dipeptide.prmtop")
eex.translators.amber.read_amber_file(dl_dna, DNA_file)
dl_dna.summary()
eex.translators.lammps.write_lammps_file(dl_dna,"lammps_ala.data", "lammps_ala.in")
f = open("lammps_ala.data")
print(f.read())
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Demo - Storing information in EEX
Step8: Storing force field information
Step9: Alternatively, these could have been set directly as pairs without a mixing rule.
Step10: Reading from MD input files
Step11: Comparing the two datalayers
Step12: Writing output files
Step13: Translating small peptide structure
|
6,421
|
<ASSISTANT_TASK:>
Python Code:
# source1: web
df_breed = pd.read_csv("breed_nick_names.txt",names=['breed_info'])
df_breed.head()
df_breed.shape
breeds_info = df_breed['breed_info'].values
breed_dict = {}
for breed in breeds_info:
temp = breed.lower()
temp = re.findall('\d.\s+(\D*)', temp)[0]
temp = temp.strip().split('=')
breed_dict[temp[0].strip()] = temp[1].strip()
# 1. different nick names are separated with 'or'
for k, v in breed_dict.iteritems():
breed_dict[k] = map(lambda x:x.strip(), v.split(' or '))
# 2. get n-gram and stemmed words breed_dict
for k, v in breed_dict.iteritems():
breed_dict[k] = set(v)
breed_dict[k].add(k)
temp_set = set([snowball.stem(x) for x in breed_dict[k]])
breed_dict[k] = breed_dict[k]|temp_set
for word in word_tokenize(k):
breed_dict[k].add(word)
breed_dict[k].add(snowball.stem(word))
breed_dict[k] = breed_dict[k] - {'dog', 'dogs'} - stopword_set
print breed_dict['chow chows']
breed_lookup = defaultdict(set)
for k, v in breed_dict.iteritems():
for word in v:
breed_lookup[word].add(k)
breed_lookup.keys()
del_list = ['toy','blue','great','duck','coat','wire','st.','white','grey',
'black','old','smooth','west','soft']
for w in del_list:
breed_lookup.pop(w, None)
print len(breed_lookup)
# polish the look up tables based on 52 base classes
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",names=['breed'])
base_breeds = breed_classes['breed'].values
not_found_breed = []
for breed in base_breeds:
if breed not in breed_dict:
if breed in breed_lookup:
if len(breed_lookup[breed])==1:
breed_in_dict = list(breed_lookup[breed])[0]
breed_dict[breed] = breed_dict[breed_in_dict]
breed_dict[breed].add(breed_in_dict)
breed_dict.pop(breed_in_dict, None)
print "replace the key {} with {}".format(breed_in_dict, breed)
else:
print breed, breed_lookup[breed]
elif snowball.stem(breed) in breed_lookup:
breed_stem = snowball.stem(breed)
if len(breed_lookup[breed_stem])==1:
breed_in_dict = list(breed_lookup[breed_stem])[0]
breed_dict[breed] = breed_dict[breed_in_dict]
breed_dict[breed].add(breed_in_dict)
breed_dict.pop(breed_in_dict, None)
else:
print breed,breed_stem, breed_lookup[breed_stem]
else:
not_found_breed.append(breed)
print "not found these breeds:"
print not_found_breed
# poodles:
for breed in not_found_breed:
if breed.endswith('poodle') or breed=='wheaten terrier':
breed_dict[breed] = set(breed.split())|set([snowball.stem(w) for w in breed.split()])
breed_dict.pop('poodle', None)
# bullmastiff
if 'bull mastiff' in not_found_breed:
breed_dict['bull mastiff'] = breed_dict['bullmastiffs']
breed_dict.pop('bullmastiffs', None)
# english springer
if 'english springer' in not_found_breed:
breed_dict['english springer'] = breed_dict['english springer spaniels']
breed_dict.pop('english springer spaniels', None)
# german short haired, german shepherd and 'american bulldog'
name = 'american bulldog'
if name in not_found_breed:
breed_dict[name] = breed_dict['bulldog'] | set(name.split()) | set([snowball.stem(w) for w in name.split()])
breed_dict.pop('bulldog', None)
name = 'german shorthaired'
if name in not_found_breed:
breed_dict[name] = breed_dict['german shorthaired pointers']
breed_dict.pop('german shorthaired pointers', None)
name = 'german shepherd'
if name in not_found_breed:
breed_dict[name] = breed_dict['german shepherd dog']
breed_dict.pop('german shepherd dog', None)
# basset dog
breed_dict['basset'] = breed_dict['basset hound']|breed_dict['petits bassets griffons vendeens']
'basset' in base_breeds
sorted(breed_dict.keys())
ind = np.random.randint(df_reviews.shape[0])
text_review = df_reviews['review_content'][ind].lower()
print text_review
puncs = string.punctuation
reduced_set = set([snowball.stem(x) for x in (set(filter(lambda x: x not in puncs, word_tokenize(text_review)))
- stopword_set)])
po_breeds = []
for w in reduced_set:
if w in breed_lookup:
po_breeds.extend(breed_lookup[w])
print po_breeds
df_reviews.columns
def getReviewBreed(text):
ntext = text.decode('utf-8')
reduced_set = set([snowball.stem(x) for x in
(set(filter(lambda x: x not in string.punctuation,
word_tokenize(ntext.lower()))) - stopword_set)])
po_breeds = []
for w in reduced_set:
if w in breed_lookup:
po_breeds.extend(breed_lookup[w])
return po_breeds
def getBreedTable(df):
N = df.shape[0]
breed = []
review_id = []
toy_id = []
for ind, row in df.iterrows():
breed.append(getReviewBreed(row['review_content']))
review_id.append(row['review_id'])
toy_id.append(row['toy_id'])
return pd.DataFrame({'review_id':review_id, 'toy_id':toy_id, 'breed_extract':breed})
test_df = df_reviews.copy()
start_time = time.time()
new_df = getBreedTable(test_df)
print time.time() - start_time
new_df.head()
df_reviews['review_content'][1]
new_df.shape
df_extract = pd.merge(df_reviews, new_df, on=['review_id', 'toy_id'])
df_extract.pop('review_content')
print df_extract.shape
df_extract.head()
#ind = np.random.randint(df_extract.shape[0])
ind = 4
print df_reviews['review_content'][ind]
print df_extract['breed_extract'][ind]
df_extract['breed_extract'] = df_extract['breed_extract'].apply(lambda row:','.join(row))
df_extract.head()
np.sum(df_extract['breed_extract'].isnull())
breed_lookup['poodle']
save_data = df_extract.to_csv(index=False)
s3_res = boto3.resource('s3')
s3_res.Bucket('dogfaces').put_object(Key='reviews/extract_breed_review.csv', Body=save_data)
# save breed_lookup
# save breed_dict
with open('breed_lookup.pickle', 'wb') as handle:
pickle.dump(breed_lookup, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('breed_dict.pickle', 'wb') as handle:
pickle.dump(breed_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# source 2: classified dog names
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",names=['breed'])
breed_classes.head()
# generate a data frame, review_id, toy_id, breed
len(df_extract['review_id'].unique())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: for poodles
Step2: Save intermediate import dictionaries and results
Step3: Get breed scores
|
6,422
|
<ASSISTANT_TASK:>
Python Code:
# "pip install ml_insights" in terminal if needed
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ml_insights as mli
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, brier_score_loss, roc_auc_score
mli.__version__
# Load dataset derived from the MIMIC database
lab_aug_df = pd.read_csv("data/lab_vital_icu_table.csv")
lab_aug_df.head(10)
# Choose a subset of variables
X = lab_aug_df.loc[:,['aniongap_min', 'aniongap_max',
'albumin_min', 'albumin_max', 'bicarbonate_min', 'bicarbonate_max',
'bilirubin_min', 'bilirubin_max', 'creatinine_min', 'creatinine_max',
'chloride_min', 'chloride_max',
'hematocrit_min', 'hematocrit_max', 'hemoglobin_min', 'hemoglobin_max',
'lactate_min', 'lactate_max', 'platelet_min', 'platelet_max',
'potassium_min', 'potassium_max', 'ptt_min', 'ptt_max', 'inr_min',
'inr_max', 'pt_min', 'pt_max', 'sodium_min', 'sodium_max', 'bun_min',
'bun_max', 'wbc_min', 'wbc_max','sysbp_max', 'sysbp_mean', 'diasbp_min', 'diasbp_max', 'diasbp_mean',
'meanbp_min', 'meanbp_max', 'meanbp_mean', 'resprate_min',
'resprate_max', 'resprate_mean', 'tempc_min', 'tempc_max', 'tempc_mean',
'spo2_min', 'spo2_max', 'spo2_mean']]
y = lab_aug_df['hospital_expire_flag']
# Impute the median for in each column to replace NA's
median_vec = [X.iloc[:,i].median() for i in range(len(X.columns))]
for i in range(len(X.columns)):
X.iloc[:,i].fillna(median_vec[i],inplace=True)
X_train_calib, X_test, y_train_calib, y_test = train_test_split(X, y, test_size=0.2, random_state=942)
X_train, X_calib, y_train, y_calib = train_test_split(X_train_calib, y_train_calib, test_size=0.25, random_state=942)
X_train.shape, X_calib.shape, X_test.shape
rfmodel1 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfmodel1.fit(X_train,y_train)
preds_test_uncalib = rfmodel1.predict_proba(X_test)[:,1]
preds_test_uncalib[:10]
roc_auc_score(y_test, preds_test_uncalib), roc_auc_score(y_test, .1*preds_test_uncalib)
mli.plot_reliability_diagram(y_test, preds_test_uncalib, marker='.')
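A reliability diagram bins the predicted probabilities and compares each bin's mean prediction with the observed event rate; a minimal sketch of that computation (not the ml_insights implementation):

```python
import numpy as np

def reliability_table(y_true, y_prob, n_bins=10):
    # For each occupied probability bin, return
    # (mean predicted probability, observed event rate, count).
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i == n_bins - 1:
            mask = (y_prob >= lo) & (y_prob <= hi)  # include 1.0 in the last bin
        else:
            mask = (y_prob >= lo) & (y_prob < hi)
        if mask.any():
            rows.append((y_prob[mask].mean(), y_true[mask].mean(), int(mask.sum())))
    return rows
```

A well-calibrated model produces rows where the first two entries are close to each other in every bin.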
# Define SplineCalib object
calib1 = mli.SplineCalib()
# Use the model to make predictions on the calibration set
preds_cset = rfmodel1.predict_proba(X_calib)[:,1]
# Fit the calibration object on the calibration set
calib1.fit(preds_cset, y_calib)
# Visually inspect the quality of the calibration on the calibration set
mli.plot_reliability_diagram(y_calib, preds_cset);
calib1.show_calibration_curve()
# Visually inspect the quality of the calibration on the test set
calib1.show_calibration_curve()
mli.plot_reliability_diagram(y_test, preds_test_uncalib);
calib1.show_spline_reg_plot()
# Calibrate the previously generated predictions from the model on the test set
preds_test_calib1 = calib1.calibrate(preds_test_uncalib)
# Visually inspect the calibration of the newly calibrated predictions
mli.plot_reliability_diagram(y_test, preds_test_calib1);
## Compare the log_loss values
log_loss(y_test, preds_test_uncalib),log_loss(y_test, preds_test_calib1)
# Get the cross validated predictions given a model and training data.
cv_preds_train = mli.cv_predictions(rfmodel1, X_train, y_train, clone_model=True)
calib2 = mli.SplineCalib()
calib2.fit(cv_preds_train, y_train)
# Show the reliability diagram for the cross-validated predictions, and the calibration curve
calib2.show_calibration_curve()
mli.plot_reliability_diagram(y_train, cv_preds_train[:,1]);
mli.plot_reliability_diagram(y_test, calib2.calibrate(preds_test_uncalib));
preds_test_calib2 = calib2.calibrate(preds_test_uncalib)
log_loss(y_test, preds_test_uncalib), log_loss(y_test, preds_test_calib2)
rfmodel2 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfmodel2.fit(X_train_calib,y_train_calib)
preds_test_2_uncalib = rfmodel2.predict_proba(X_test)[:,1]
# Get the cross validated predictions given a model and training data.
cv_preds_train_calib = mli.cv_predictions(rfmodel2, X_train_calib, y_train_calib, stratified=True, clone_model=True)
calib3 = mli.SplineCalib()
calib3.fit(cv_preds_train_calib, y_train_calib)
# Show the reliability diagram for the cross-validated predictions, and the calibration curve
calib3.show_calibration_curve()
mli.plot_reliability_diagram(y_train_calib, cv_preds_train_calib[:,1]);
preds_test_calib3 = calib3.calibrate(preds_test_2_uncalib)
log_loss(y_test, preds_test_2_uncalib), log_loss(y_test, preds_test_calib3)
roc_auc_score(y_test, preds_test_2_uncalib), roc_auc_score(y_test, preds_test_calib3)
import joblib
joblib.dump(calib3, 'calib3.pkl')
calib3_reloaded=joblib.load('calib3.pkl')
mli.plot_reliability_diagram(y_test, calib3_reloaded.calibrate(preds_test_2_uncalib));
calib3_reloaded.show_calibration_curve()
log_loss(y_test, calib3_reloaded.calibrate(preds_test_2_uncalib))
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from betacal import BetaCalibration
# Fit three-parameter beta calibration
bc = BetaCalibration(parameters="abm")
bc.fit(cv_preds_train_calib[:,1], y_train_calib)
# Fit Isotonic Regression
iso = IsotonicRegression()
iso.fit(cv_preds_train_calib[:,1], y_train_calib)
# Fit Platt scaling (logistic calibration)
lr = LogisticRegression(C=99999999999)
lr.fit(cv_preds_train_calib[:,1].reshape(-1,1), y_train_calib)
tvec = np.linspace(0,1,1001)
bc_probs = bc.predict(tvec)
iso_probs = iso.predict(tvec)
platt_probs = lr.predict_proba(tvec.reshape(-1,1))[:,1]
splinecalib_probs = calib3.calibrate(tvec)
#calib3.show_calibration_curve()
mli.plot_reliability_diagram(y_train_calib, cv_preds_train_calib[:,1], error_bars=False);
plt.plot(tvec, splinecalib_probs, label='SplineCalib')
plt.plot(tvec, bc_probs, label='Beta')
plt.plot(tvec, iso_probs, label='Isotonic')
plt.plot(tvec, platt_probs, label='Platt')
plt.legend()
plt.title('Calibration Curves for different methods');
preds_test_bc = bc.predict(preds_test_2_uncalib)
preds_test_iso = iso.predict(preds_test_2_uncalib)
preds_test_platt = lr.predict_proba(preds_test_2_uncalib.reshape(-1,1))[:,1]
preds_test_splinecalib = calib3.calibrate(preds_test_2_uncalib)
bc_loss = log_loss(y_test, preds_test_bc)
iso_loss = log_loss(y_test, preds_test_iso)
platt_loss = log_loss(y_test, preds_test_platt)
splinecalib_loss = log_loss(y_test, preds_test_splinecalib)
print('Platt loss = {}'.format(np.round(platt_loss,5)))
print('Beta Calib loss = {}'.format(np.round(bc_loss,5)))
print('Isotonic loss = {}'.format(np.round(iso_loss,5)))
print('SplineCalib loss = {}'.format(np.round(splinecalib_loss,5)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
Step2: Now we divide the data into training, calibration, and test sets. The training set will be used to fit the model, the calibration set will be used to calibrate the probabilities, and the test set will be used to evaluate the performance. We use a 60-20-20 split (achieved by first doing an 80/20 split and then splitting the 80 as 75/25)
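The 60/20/20 split can be sketched with plain index bookkeeping (the names below are illustrative, not the notebook's actual arrays):

```python
import numpy as np

n = 50                                # number of rows in a toy dataset
rng = np.random.default_rng(0)
idx = rng.permutation(n)

# 80/20 first, then 75/25 of the remaining 80% -> 60/20/20 overall
n_test = n // 5                       # 20% for the test set
n_calib = (n - n_test) // 4           # 25% of the remaining 80% for calibration

test_idx = idx[:n_test]
calib_idx = idx[n_test:n_test + n_calib]
train_idx = idx[n_test + n_calib:]
```

In practice this is what two chained `train_test_split` calls with `test_size=0.2` and then `test_size=0.25` accomplish.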
Step3: Next, we fit a Random Forest model to our training data. Then we use that model to predict "probabilities" on our validation and test sets.
Step4: Model Evaluation
Step5: Above, we see that the model is largely under-predicting the probability of mortality in the range .35 to .85. For example, when the model predicts a probability of between .6 and .65, more than 80% of those patients died. And the error bars indicate that this is not likely due to random error. In other words, our model is poorly calibrated.
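The binning behind such a reliability diagram can be sketched as follows (a minimal stand-in, not the `mli.plot_reliability_diagram` implementation used above): each bin's mean predicted probability is compared to its empirical event rate.

```python
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Mean predicted probability vs. observed event rate, per probability bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    mean_pred, obs_rate = [], []
    for b in range(n_bins):
        mask = ids == b
        if mask.any():                      # skip empty bins
            mean_pred.append(y_prob[mask].mean())
            obs_rate.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(obs_rate)
```

For a well-calibrated model the two returned arrays lie close to the diagonal; large gaps like the .35-.85 range described above signal miscalibration.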
Step6: From the above, we see that not only do our reliability diagrams look better, but our log_loss values have substantially improved. Log_loss measures not only the discriminative power of the model but also how well-calibrated it is.
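That sensitivity of log loss to calibration can be illustrated directly: pushing probabilities toward the extremes preserves the ranking (and hence AUC) but inflates the penalty for any prediction that is wrong. The numbers below are made up for illustration:

```python
import numpy as np

def log_loss_simple(y, p, eps=1e-15):
    """Binary cross-entropy, clipped to avoid log(0)."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y      = np.array([0, 1, 1, 0])
p_ok   = np.array([0.20, 0.80, 0.70, 0.60])  # one wrong call, moderate confidence
p_over = np.array([0.02, 0.98, 0.97, 0.96])  # same ranking, pushed to the extremes

print(log_loss_simple(y, p_ok))    # ~0.43
print(log_loss_simple(y, p_over))  # ~0.82, overconfidence is punished hard
```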
Step7: We see above that the cross-validated approach gives similar performance (slightly better in this case). Additionally, we did not use the 20% of data set aside for calibration at all in the second approach. We could use approach 2 on the entire training and calibration data and (presumably) get an even better model.
Step8: Indeed, we get a slightly better AUC and log_loss both before and after calibration, due to having a larger training set for our model to learn from.
Step9: Comparison to Other Calibration Approaches
|
6,423
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a_1d = np.array ([0, 1, 2, 3]) # a vector
print a_1d
b_1d = np.array ([4, 5, 6, 7]) # another vector
print b_1d
print a_1d + b_1d
print 5*a_1d
print a_1d**2
# Append '?' to get help on a specific routine
np.array?
# Search for key text
np.lookfor ("creating array")
# Wildcard search
np.con*?
n = 1000000
%timeit L = range (n)
L = range (n)
%timeit [i**2 for i in L]
np.arange (10) # Moral equivalent to `range`
%timeit A = np.arange (n)
A = np.arange (n)
%timeit A**2
from random import gauss # Generates random numbers from a Gaussian
from math import sqrt # Computes the square root of a number
n = 1000000
X_py = [gauss (0, 1) for i in range (n)]
X_np = np.array (X_py)
print ("==> Native Python lists:")
%timeit ...
print ("\n==> Numpy:")
%timeit ...
# Create a two-dimensional array of size 3 rows x 4 columns:
B = np.array([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11]])
print B.ndim # What does this do?
print B.shape # What does this do?
print len (B) # What does this do?
C1 = [[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11]]
C2 = [[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]
C = np.array ([C1, C2])
print C.ndim
print C.shape
print len (C)
print np.linspace (0, 1, 10)
print np.linspace (0, 1, 10, endpoint=False)
print np.ones ((3, 4))
print np.zeros ((3, 4))
print np.eye (3)
print np.diag ([1, 2, 3])
n = 3
I = np.eye (n)
print ("I = eye(n):")
print (I)
u = [1] * n
I_u = np.diag (u)
print ("u:")
print u
print ("\ndiag (u):")
print I_u
assert np.all (I_u == I)
A = np.empty ((3, 4)) # An empty 3 x 4 matrix
print A
# Recall: C
print C
print C[0, 2, :]
print C[1, 0, ::-1]
Z = ... # INSERT YOUR CODE HERE
print ("\n==> Orange slice:")
print Z[...] # Edit to print the orange-ish subset
print ("\n==> Red slice:")
print Z[...] # Edit to print the red-ish subset
print ("\n==> Blue slice:")
print Z[...] # Edit to print the blue-ish subset
print ("\n==> Green slice:")
print Z[...] # Edit to print the green-ish subset
# Rich will demo this in class
print ("==> Recall C: %s" % str (C.shape))
print C
C_view = C[1, 0:3:2, 1:4:2] # Question: What does this produce?
print ("==> C_view: %s" % str (C_view.shape))
C_view[:, :] = -C_view[::-1, ::-1] # Question: What does this do?
print C
C_copy = C[1, 0:3:2, 1:4:2].copy ()
C_copy[:, :] = -C_copy[::-1, ::-1]
print ("==> C_view:")
print (C_view)
print ("\n==> C_copy:")
print (C_copy)
print ("C and C_view share memory: %s" % np.may_share_memory (C, C_view))
print ("C and C_copy share memory: %s" % np.may_share_memory (C, C_copy))
from math import sqrt
n = 20
is_prime = np.ones (n+1, dtype=bool) # the "sieve"
# Initial values
is_prime[0:2] = False
is_prime[2:] = True
# Sieving loop
for i in range (2, int (sqrt (n)) + 1):
# Fill in your code here
is_prime[2*i::i] = False
# Prints your primes
print ("==> Primes through %d:" % n)
print np.nonzero (is_prime)[0]
np.random.seed(3)
x = np.random.random_integers(0, 20, 15) # 15 random ints in [0, 20], inclusive
print x
# Find all positive multiples of 3
mask_mult_3 = (x % 3 == 0) & (x > 0)
print mask_mult_3
print x[mask_mult_3]
# Pull out an arbitrary subset of elements
inds = np.array ([3, 7, 8, 12])
print x[inds]
n = 5000
A_rowmaj = np.ones ((n, n), order='C') # row-major (C/C++ convention)
A_colmaj = np.ones ((n, n), order='F') # column-major (Fortran convention)
def scale_colwise (A):
    """Given a matrix `A`, visits each column `A[:, j]` and scales it by `j`."""
# Fill in this code
pass
# Measure time to scale a row-major input column-wise
%timeit scale_colwise (A_rowmaj)
# Measure time to scale a column-major input column-wise
%timeit scale_colwise (A_colmaj)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numpy provides some natural types and operations on arrays. For instance
Step2: Getting help. By the way, if you need help getting documentation from within this notebook, here are some handy shortcuts.
Step3: Why bother with Numpy? A motivating example
Step4: Exercise. Recall the definition of the 2-norm of a vector (or Euclidean length of a vector) from Da Kuang's notes on linear algebra. Compare its speed when using native Python lists versus Numpy arrays.
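One possible solution sketch for the 2-norm exercise (variable names are illustrative); wrapping each of the two norm lines in `%timeit` reproduces the speed comparison:

```python
import numpy as np
from math import sqrt

x_list = [float(i) for i in range(1000)]   # native Python list
x_np = np.array(x_list)                    # its numpy counterpart

norm_py = sqrt(sum(xi * xi for xi in x_list))  # pure-Python 2-norm
norm_np = np.sqrt(np.dot(x_np, x_np))          # vectorized equivalent
```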
Step5: Creating multidimensional arrays
Step6: Besides arange(), you can also define an interval and a number of points. What does the following code do?
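A quick check of one possible answer to the question, assuming the two `np.linspace` calls shown in the code:

```python
import numpy as np

a = np.linspace(0, 1, 10)                  # 10 points, endpoint 1.0 included
b = np.linspace(0, 1, 10, endpoint=False)  # 10 points with step 0.1, 1.0 excluded

print(a[-1], b[-1])  # the last elements differ: 1.0 vs 0.9
```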
Step7: There are routines for creating various kinds of structured matrices as well, which are similar to those found in MATLAB and Octave.
Step8: Exercise. The following code creates an identity matrix in two different ways, which are found to be equal according to the assertion. But in fact there is a subtle difference; can you spot it?
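The subtle difference is most likely the dtype: `np.eye` produces floating-point entries while `np.diag` of a Python integer list produces integer entries, even though the values compare equal. A quick check:

```python
import numpy as np

n = 3
I = np.eye(n)            # floating-point entries
I_u = np.diag([1] * n)   # integer entries (dtype inferred from the Python ints)

assert np.all(I == I_u)    # the values compare equal...
print(I.dtype, I_u.dtype)  # ...but the dtypes differ
```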
Step9: You can also create empty (uninitialized) arrays. What does the following produce?
Step10: Indexing and slicing
Step11: Exercise. What part of C will the following slice extract? Run the code to check your answer.
Step12: Exercise. What will the following slice return? Run the code to check your answer.
Step13: Exercise. Consider the following $6 \times 6$ matrix, which has 4 different subsets highlighted.
Step14: Incidentally, there is a very cute way to create the above matrix. I would have never guessed this method, so I've put some print statements so you can see what it is doing.
Step15: Slices are views
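A minimal, self-contained illustration of the view semantics (independent of the `C` array used in the code):

```python
import numpy as np

a = np.arange(6)
v = a[::2]       # basic slicing returns a *view* onto a's memory, not a copy
v[0] = 99        # writing through the view...
print(a)         # ...mutates the original: [99  1  2  3  4  5]
```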
Step16: You can force a copy using the .copy() method
Step17: And to check whether two Numpy array variables point to the same object, you can use the numpy.may_share_memory() function
Step18: Exercise. Complete the prime number sieve algorithm, which is illustrated below.
Step19: Indirect addressing
Step20: Matrix storage
Step22: Exercise. Write a function that iterates over the columns of a matrix and scales each column by its index. That is, column $A(
|
6,424
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = np.arange(-5,5.01,0.5)
y1 = 1*x + 1.5 +np.random.normal(0, 1, len(x))
y2 = 2*x +np.random.normal(0, 1, len(x))
plt.figure()
plt.plot(x, y1)
plt.show()
#plt.savefig("path/to/plot.png")
plt.close()
plt.figure()
plt.plot(x, y1, "b")
plt.plot(x, y2, "g--")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1, marker="x", color="g")
plt.plot(x, y2, linestyle="--", color="m")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1, marker="x", color="g", markersize=10)
plt.plot(x, y2, linestyle="--", color="m", linewidth=4)
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1, "ro", label="y1")
plt.plot(x, y2, "g--", label="y2")
plt.legend(loc="best")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1)
plt.xlabel("x-label")
plt.ylabel("y-label")
plt.title("My plot title")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1)
plt.xlabel("x-label", fontsize =24)
plt.ylabel("y-label", fontsize =18)
plt.title("My BASTA plot title", fontsize =40)
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1)
plt.xlabel("distance in [$\AA$]")
plt.ylabel("$a*x+b$")
plt.title("My function $\sqrt{f_a(x)}$")
plt.show()
plt.close()
plt.figure(figsize = (8, 4))
plt.plot(x, y1, color = 'r', label="line")
plt.plot(x, y2, 'x', markersize = 10,label="markers")
plt.legend(loc="best")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1)
plt.xlim((1, 5))
plt.ylim((-10, 10))
plt.xscale("log")
plt.plot(x, y2)
plt.xscale("linear")
plt.show()
plt.close()
y1_err = np.random.normal(0, 1, len(x))
x_err = np.random.normal(0, 1, len(x))
plt.figure()
plt.errorbar(x, y1, yerr=y1_err, marker = "o", capsize = 5)
plt.errorbar(x, y2, xerr=x_err, marker = "x", capsize = 6)
plt.show()
plt.close()
norm_dist = np.random.normal(0, 1, 500)
plt.hist(norm_dist)
plt.show()
plt.close()
plt.hist(norm_dist, density=True, bins=20)
plt.show()
plt.close()
norm_dist1 = np.random.normal(0, 1, 1000)
norm_dist2 = np.random.normal(1, 0.5, 1000)
plt.hist([norm_dist1,norm_dist2], bins=20, density=True)
plt.show()
plt.close()
hist_y, hist_x, p = plt.hist(norm_dist1, bins=20, density=True)
hist_x = 0.5 * (hist_x[1:] + hist_x[:-1])
plt.close()
plt.figure()
plt.plot(hist_x, hist_y, "rx-")
plt.show()
plt.close()
plt.hist([norm_dist1,norm_dist2], bins=20, density=True)
plt.hlines(0.5, plt.xlim()[0], plt.xlim()[1], linestyle=":", color="r")
plt.vlines([0,1.0],*plt.ylim(),linestyle = "--")
plt.show()
plt.close()
plt.figure()
plt.plot(x, y1,"x")
plt.text(0,0, "text at position 0,0", fontsize=16)
plt.annotate("point to 1,1",xy=(1, 1), xytext=(3, 2),arrowprops=dict(facecolor='black'))
plt.show()
plt.close()
f, axes = plt.subplots(2,2,sharey = "row", figsize = (10,8))
f.suptitle("overall title")
axes[0,0].plot(x,y1)
axes[0,1].errorbar(x, y2, y1_err, capsize = 5)
axes[1,0].hist(norm_dist1, density=True, bins=20)
axes[1,1].hist(norm_dist1, density=True, bins=20, cumulative=True)
for i,axl in enumerate(axes):
for j,ax in enumerate(axl):
ax.set_title("subtitle ({},{})".format(i,j))
ax.set_xlabel("x-label ({},{})".format(i,j))
axl[0].set_ylabel("y-label")
plt.subplots_adjust(right = 0.98, wspace = 0.02, hspace = 0.25)
plt.show()
import matplotlib as mpl
mpl.rcParams['font.size'] = 18
mpl.rcParams['legend.fontsize'] = 16
mpl.rcParams['xtick.labelsize'] = 20
mpl.rcParams['xtick.top'] = True
mpl.rcParams['xtick.major.size'] = 12
plt.hist([norm_dist1,norm_dist2], bins=20, density=True)
plt.show()
plt.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This next line is only needed to display plots inline in Jupyter notebooks
Step2: First we generate data for the plots
Step3: Plotting data
Step4: Line and marker formatting
Step5: Instead of the format string, everything can also be controlled individually
Step6: Much more can be controlled with the keyword arguments. For example
Step7: Data labels
Step8: Title and axis labels
Step9: The size of the text elements is controlled with fontsize
Step10: In all text fields you can also use LaTeX syntax to represent mathematical symbols. Simply put the LaTeX between $ signs
Step11: Figure size
Step12: Controlling the axes
Step13: Plotting error bars
Step14: Histograms
Step15: The bins argument controls how many bins to use
Step16: The histogram can also be computed for several datasets at once
Step17: The plt.hist function returns the bin values, the bin edges, and the patches. This can be used to generate a regular line plot, which is easier to read
Step18: Lines and text
Step19: plt.text(xpos, ypos, text) adds text at the position (xpos, ypos)
Step20: Subplots
Step21: Controlling all the defaults
|
6,425
|
<ASSISTANT_TASK:>
Python Code:
!pip install smt
%matplotlib inline
from math import exp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import norm
from scipy.optimize import minimize
import scipy
import six
from smt.applications import EGO
from smt.surrogate_models import KRG
from smt.sampling_methods import FullFactorial
from smt.sampling_methods import LHS
from sklearn import gaussian_process
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel
import matplotlib.font_manager
from smt.applications.mixed_integer import MixedIntegerSurrogateModel
import warnings
warnings.filterwarnings("ignore")
from smt.applications.mixed_integer import (
FLOAT,
ORD,
ENUM,
MixedIntegerSamplingMethod,
cast_to_mixed_integer, unfold_with_enum_mask
)
def PlotEgo(criterion, xdoe, bounds,npt,n_iter=12,xtypes=None, sm=KRG(print_global=False)) :
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe,xtypes=xtypes, xlimits=bounds,n_start=20,n_max_optim=35,enable_tunneling=False, surrogate=sm)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=f)
print("Minimum in x={:.0f} with f(x)={:.10f}".format(int(x_opt), float(y_opt)))
x_plot = np.atleast_2d(np.linspace(bounds[0][0], bounds[0][1], 9*(npt-1)+1)).T
fig = plt.figure(figsize=[15, 15])
for i in range(n_iter):
k = n_doe + i
x_data_k = x_data[0:k]
y_data_k = y_data[0:k]
        # evaluate the true function at the newly proposed point
y_data[k]=f(x_data[k][:, np.newaxis])
ego.gpr.set_training_values(x_data_k, y_data_k)
ego.gpr.train()
y_gp_plot = ego.gpr.predict_values(x_plot)
y_gp_plot_var = ego.gpr.predict_variances(x_plot)
y_ei_plot = ego.EI(x_plot,False)
ax = fig.add_subplot((n_iter + 1) // 2, 2, i + 1)
ax1 = ax.twinx()
ei, = ax1.plot(x_plot, y_ei_plot, color="red")
true_fun = ax.scatter(Xsol, Ysol,color='k',marker='d')
data, = ax.plot(
x_data_k, y_data_k, linestyle="", marker="o", color="orange"
)
if i < n_iter - 1:
opt, = ax.plot(
x_data[k], y_data[k], linestyle="", marker="*", color="r"
)
print(x_data[k], y_data[k])
gp, = ax.plot(x_plot, y_gp_plot, linestyle="--", color="g")
sig_plus = y_gp_plot + 3 * np.sqrt(y_gp_plot_var)
sig_moins = y_gp_plot - 3 * np.sqrt(y_gp_plot_var)
un_gp = ax.fill_between(
x_plot.T[0], sig_plus.T[0], sig_moins.T[0], alpha=0.3, color="g"
)
lines = [true_fun, data, gp, un_gp, opt, ei]
fig.suptitle("EGO optimization of a set of points")
fig.subplots_adjust(hspace=0.4, wspace=0.4, top=0.8)
ax.set_title("iteration {}".format(i + 1))
fig.legend(
lines,
[
"set of points",
"Given data points",
"Kriging prediction",
"Kriging 99% confidence interval",
"Next point to evaluate",
"Expected improvment function",
],
)
plt.show()
#definition of the 1D function
def f(X) :
x= X[:, 0]
if (np.abs(np.linalg.norm(np.floor(x))-np.linalg.norm(x))< 0.000001):
y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))
else :
print("error")
return y
#to plot the function
bounds = np.array([[0, 25]])
npt=26
Xsol = np.linspace(bounds[0][0],bounds[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("Min of the DOE: ",np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 10
xdoe = np.atleast_2d([0,10]).T
n_doe = xdoe.size
xtypes=[ORD]
criterion = "EI" #'EI' or 'SBO' or 'LCB'
PlotEgo(criterion,xdoe,bounds,npt,n_iter,xtypes=xtypes)
def f(X) :
x= X[:, 0]
y = -np.square(x-25)/220+0.25*(np.sin((x - 3.5) * np.sin((x - 3.5) / (np.pi)))+np.cos(x**2))
np.random.seed(10)
y2 = y+3*np.random.uniform(size=y.shape)
return -y2
#to plot the function
xlimits = np.array([[0, 60]])
npt=61
Xsol = np.linspace(xlimits[0][0],xlimits[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("min of the DOE: ", np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 10
n_doe=2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
xtypes=[ORD]
criterion = "EI" #'EI' or 'SBO' or 'LCB'
sm=KRG(print_global=False,eval_noise= True)
PlotEgo(criterion,xdoe,xlimits,npt,n_iter,xtypes,sm=sm)
#definition of the 2D function
#the first variable is a integer one and the second one is a continuous one
import math
def f(X) :
x1 = X[:,0]
x2 = X[:,1]
PI = math.pi #3.14159265358979323846
a = 1
b = 5.1/(4*np.power(PI,2))
c = 5/PI
r = 6
s = 10
t = 1/(8*PI)
y= a*(x2 - b*x1**2 + c*x1 -r)**2 + s*(1-t)*np.cos(x1) + s
return y
#to define and compute the doe
xtypes = [ORD, FLOAT]
xlimits = np.array([[-5.0, 10.0],[0.0,15.0]])
n_doe=20
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xt = sampling(n_doe)
yt = f(xt)
#to build the mixed surrogate model
sm = MixedIntegerSurrogateModel(xtypes=xtypes, xlimits=xlimits, surrogate=KRG())
sm.set_training_values(xt, yt)
sm.train()
num = 100
x = np.linspace(-5.0,10., 100)
y = np.linspace(0,15., 100)
xv, yv = np.meshgrid(x, y)
x_plot= np.array([np.ravel(xv), np.ravel(yv)]).T
y_plot = f(np.floor(x_plot))
fig = plt.figure(figsize=[14, 7])
y_gp_plot = sm.predict_values(x_plot)
y_gp_plot_sd = np.sqrt(sm.predict_variances(x_plot))
l=y_gp_plot-3*y_gp_plot_sd
h=y_gp_plot+3*y_gp_plot_sd
ax = fig.add_subplot(1, 3, 1, projection='3d')
ax1 = fig.add_subplot(1, 3, 2, projection='3d')
ax2 = fig.add_subplot(1, 3,3)
ii=-100
ax.view_init(elev=15., azim=ii)
ax1.view_init(elev=15., azim=ii)
true_fun = ax.plot_surface(xv, yv, y_plot.reshape((100, 100)), label ='true_function',color='g')
data3 = ax2.scatter(xt.T[0],xt.T[1],s=60,marker="o",color="orange")
gp1 = ax1.plot_surface(xv, yv, l.reshape((100, 100)), color="b")
gp2 = ax1.plot_surface(xv, yv, h.reshape((100, 100)), color="r")
gp3 = ax2.contour(xv, yv, y_gp_plot.reshape((100, 100)), color="k", levels=[0,1,2,5,10,20,30,40,50,60])
fig.suptitle("Mixed Branin function surrogate")
ax.set_title("True model")
ax1.set_title("surrogate model, DOE of size {}".format(n_doe))
ax2.set_title("surrogate mean response")
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 0]
# enum 1
c1 = X[:, 1]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 2]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 3]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [FLOAT, (ENUM, 3), (ENUM, 2), ORD]
xlimits = np.array([[-5, 5], ["blue", "red", "green"], ["large", "small"], [0, 2]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 3
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
#to give the initial doe in the initial space
print('Initial DOE in the initial space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (cast_to_mixed_integer(xtypes, xlimits, xdoe[i]))),'\n')
#to give the initial doe in the relaxed space
print('Initial DOE in the unfold space (or relaxed space): ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (unfold_with_enum_mask(xtypes, xdoe[i]))),'\n')
#to print the used DOE
print('Initial DOE in the fold space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), xdoe[i]),'\n')
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
from smt.applications.mixed_integer import MixedIntegerSurrogateModel, ENUM,ORD,FLOAT,GOWER, HOMO_GAUSSIAN
xt1 = np.array([[0,0.0],
[0,1.0],
[0,4.0]])
xt2 = np.array([[1,0.0],
[1,1.0],
[1,2.0],
[1,3.0]])
xt = np.concatenate((xt1, xt2), axis=0)
xt[:,1] = xt[:,1].astype(float)
yt1 = np.array([0.0, 9.0, 16.0])
yt2 = np.array([ 0.0, 1.0,8.0,27.0])
yt = np.concatenate((yt1, yt2), axis=0)
xlimits = [["Blue","Red"],[0.0,4.0]]
xtypes=[(ENUM, 2),FLOAT]
# Surrogate
sm = MixedIntegerSurrogateModel(categorical_kernel = HOMO_GAUSSIAN, xtypes=xtypes, xlimits=xlimits, surrogate=KRG(theta0=[1e-2]))
sm.set_training_values(xt, yt)
sm.train()
# DOE for validation
n = 100
x_cat1 = []
x_cat2 = []
for i in range(n):
x_cat1.append(0)
x_cat2.append(1)
x_cont = np.linspace(0.0, 4.0, n)
x1 = np.concatenate((np.asarray(x_cat1).reshape(-1,1), x_cont.reshape(-1,1)), axis=1)
x2 = np.concatenate((np.asarray(x_cat2).reshape(-1,1), x_cont.reshape(-1,1)), axis=1)
y1 = sm.predict_values(x1)
y2 = sm.predict_values(x2)
# estimated variance
s2_1 = sm.predict_variances(x1)
s2_2 = sm.predict_variances(x2)
fig, axs = plt.subplots(2)
axs[0].plot(xt1[:,1].astype(float), yt1,'o',linestyle="None")
axs[0].plot(x_cont, y1,color ='Blue')
axs[0].fill_between(
np.ravel(x_cont),
np.ravel(y1 - 3 * np.sqrt(s2_1)),
np.ravel(y1 + 3 * np.sqrt(s2_1)),
color="lightgrey",
)
axs[0].set_xlabel("x")
axs[0].set_ylabel("y")
axs[0].legend(
["Training data", "Prediction", "Confidence Interval 99%"],
loc="upper left",
)
axs[1].plot(xt2[:,1].astype(float), yt2, marker='o', color='r',linestyle="None")
axs[1].plot(x_cont, y2,color ='Red')
axs[1].fill_between(
np.ravel(x_cont),
np.ravel(y2 - 3 * np.sqrt(s2_2)),
np.ravel(y2 + 3 * np.sqrt(s2_2)),
color="lightgrey",
)
axs[1].set_xlabel("x")
axs[1].set_ylabel("y")
axs[1].legend(
["Training data", "Prediction", "Confidence Interval 99%"],
loc="upper left",
)
plt.show()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
from smt.applications.mixed_integer import MixedIntegerSurrogateModel, ENUM,ORD,FLOAT,GOWER, HOMO_GAUSSIAN
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from scipy.optimize import curve_fit
from sklearn.preprocessing import StandardScaler
plt.rcParams.update({'legend.labelspacing':1.0})
try :
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
json_import = drive.CreateFile({'id':'1fcB39mktJ2npTNqrF8dgZa7PWh3wU55X'})
json_import.GetContentFile('VTF_properties.json')
df = pd.read_csv(open('VTF_properties.json'))
except :
df = pd.read_csv(open('VTF_properties.csv'))
df = df.dropna()
df.head()
data = df.sample(n=1000,random_state = 10)
data.Carbon_fibre = pd.Categorical(data.Carbon_fibre)
data['Carbon_fibre'] = data.Carbon_fibre.cat.codes
data.Glass_fibre = pd.Categorical(data.Glass_fibre)
data['Glass_fibre'] = data.Glass_fibre.cat.codes
Xd = data.drop(['SmAvg','G','GiicmAvg','Initial_stiffness','Ultimate_strain','Pseudo_ductile_strain','Ultimate_strength',
'Yield_strength'],axis = 1)
yd = data.Initial_stiffness
X_train, X_test, y_train, y_test = train_test_split(Xd, yd, test_size = 0.25, random_state = 42)
X = np.asarray(X_train)
y = np.asarray(y_train).astype(float)
xtypes=[(ENUM, 16),(ENUM, 15),FLOAT,FLOAT]
xlimits = [['XN-90', 'P120J', 'T1000GB', 'C124', 'T800H', 'M60JB', 'C320',
'M40B', 'P75S', 'K13D', 'T300', 'XN-05', 'FliteStrand_S_ZT',
'HTA5131', 'GF', 'C100'],['GF', 'XN-05', 'FliteStrand_S_ZT', 'C124', 'T300', 'T800H', 'C320',
'P75S', 'C100', 'XN-90', 'HTA5131', 'T1000GB', 'P120J', 'M40B',
'M60JB'],[501.5425023,11999.96175],[2.849e-05,1.0]]
sm = MixedIntegerSurrogateModel(categorical_kernel = GOWER, xtypes=xtypes, xlimits=xlimits, surrogate=KRG(theta0=[1e-2]))
sm.set_training_values(X, y)
sm.train()
y_pred2 =sm.predict_values(np.asarray(X_test))
y_pred2[y_pred2<0]=0
print("r2_score = ", r2_score(y_test, y_pred2))
plt.scatter(y_test, y_pred2, s=30, facecolors='none', edgecolors='r')
plt.xlabel('True Values')
plt.ylabel('Predictions')
plt.title('Initial_stiffness')
plt.axis('equal')
plt.axis('square')
plt.grid()
_ = plt.plot([-500000, 500000], [-500000, 500000],'b')
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 3]
# enum 1
c1 = X[:, 0]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 1]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 2]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [(ENUM, 3), (ENUM, 2), ORD,FLOAT]
xlimits = np.array([["blue", "red", "green"], ["large", "small"], ["0","1","2"],[-5, 5]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
categorical_kernel= GOWER,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 3]
# enum 1
c1 = X[:, 0]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 1]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 2]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [(ENUM, 3), (ENUM, 2), ORD,FLOAT]
xlimits = np.array([["blue", "red", "green"], ["large", "small"], ["0","1","2"],[-5, 5]])
criterion = "EI" #'EI' or 'SBO' or 'LCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
categorical_kernel= HOMO_GAUSSIAN,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Definition of the plot function
Step2: Local minimum trap
Step3: On this 1D test case, 4 iterations are required to find the global minimum, evaluated at iteration 5.
Step4: On this noisy case, it toook 7 iterations to understand the shape of the curve but then, it took time to explore the random noise aroudn the minimum.
Step5: On the left, we have the real model in green.
Step6: Manipulate the DOE
Step7: Gower mixed based surrogate model 2D function
Step8: Gower mixed based surrogate model 4D function
Step9: Gower mixed based optimization 4D function
Step10: Group kernel mixed based optimization 4D function
|
6,426
|
<ASSISTANT_TASK:>
Python Code:
# Define the Markov transition matrix for serially correlated unemployment
unemp_length = 5 # Averange length of unemployment spell
urate_good = 0.05 # Unemployment rate when economy is in good state
urate_bad = 0.12 # Unemployment rate when economy is in bad state
bust_prob = 0.01 # Probability of economy switching from good to bad
recession_length = 20 # Averange length of bad state
p_reemploy = 1.0 / unemp_length
p_unemploy_good = p_reemploy * urate_good / (1 - urate_good)
p_unemploy_bad = p_reemploy * urate_bad / (1 - urate_bad)
boom_prob = 1.0 / recession_length
MrkvArray = np.array(
[
[
(1 - p_unemploy_good) * (1 - bust_prob),
p_unemploy_good * (1 - bust_prob),
(1 - p_unemploy_good) * bust_prob,
p_unemploy_good * bust_prob,
],
[
p_reemploy * (1 - bust_prob),
(1 - p_reemploy) * (1 - bust_prob),
p_reemploy * bust_prob,
(1 - p_reemploy) * bust_prob,
],
[
(1 - p_unemploy_bad) * boom_prob,
p_unemploy_bad * boom_prob,
(1 - p_unemploy_bad) * (1 - boom_prob),
p_unemploy_bad * (1 - boom_prob),
],
[
p_reemploy * boom_prob,
(1 - p_reemploy) * boom_prob,
p_reemploy * (1 - boom_prob),
(1 - p_reemploy) * (1 - boom_prob),
],
]
)
# Make a consumer with serially correlated unemployment, subject to boom and bust cycles
init_serial_unemployment = copy(init_idiosyncratic_shocks)
init_serial_unemployment["MrkvArray"] = [MrkvArray]
init_serial_unemployment["UnempPrb"] = 0.0 # to make income distribution when employed
init_serial_unemployment["global_markov"] = False
SerialUnemploymentExample = MarkovConsumerType(**init_serial_unemployment)
SerialUnemploymentExample.cycles = 0
SerialUnemploymentExample.vFuncBool = False # for easy toggling here
# Replace the default (lognormal) income distribution with a custom one
employed_income_dist = DiscreteDistribution(np.ones(1), [np.ones(1), np.ones(1)]) # Definitely get income
unemployed_income_dist = DiscreteDistribution(np.ones(1), [np.ones(1), np.zeros(1)]) # Definitely don't
SerialUnemploymentExample.IncShkDstn = [
[
employed_income_dist,
unemployed_income_dist,
employed_income_dist,
unemployed_income_dist,
]
]
# Interest factor, permanent growth rates, and survival probabilities are constant arrays
SerialUnemploymentExample.assign_parameters(Rfree = np.array(4 * [SerialUnemploymentExample.Rfree]))
SerialUnemploymentExample.PermGroFac = [
np.array(4 * SerialUnemploymentExample.PermGroFac)
]
SerialUnemploymentExample.LivPrb = [SerialUnemploymentExample.LivPrb * np.ones(4)]
# Solve the serial unemployment consumer's problem and display solution
start_time = process_time()
SerialUnemploymentExample.solve()
end_time = process_time()
print(
"Solving a Markov consumer with serially correlated unemployment took "
+ mystr(end_time - start_time)
+ " seconds."
)
print("Consumption functions for each discrete state:")
plot_funcs(SerialUnemploymentExample.solution[0].cFunc, 0, 50)
if SerialUnemploymentExample.vFuncBool:
print("Value functions for each discrete state:")
plot_funcs(SerialUnemploymentExample.solution[0].vFunc, 5, 50)
# Simulate some data; results stored in cHist, mNrm_hist, cNrm_hist, and Mrkv_hist
if do_simulation:
SerialUnemploymentExample.T_sim = 120
SerialUnemploymentExample.MrkvPrbsInit = [0.25, 0.25, 0.25, 0.25]
SerialUnemploymentExample.track_vars = ["mNrm", "cNrm"]
SerialUnemploymentExample.make_shock_history() # This is optional
SerialUnemploymentExample.initialize_sim()
SerialUnemploymentExample.simulate()
# Make a consumer who occasionally gets "unemployment immunity" for a fixed period
UnempPrb = 0.05 # Probability of becoming unemployed each period
ImmunityPrb = 0.01 # Probability of becoming "immune" to unemployment
ImmunityT = 6 # Number of periods of immunity
StateCount = ImmunityT + 1 # Total number of Markov states
IncomeDstnReg = DiscreteDistribution(
np.array([1 - UnempPrb, UnempPrb]),
[np.array([1.0, 1.0]),
np.array([1.0 / (1.0 - UnempPrb), 0.0])]
) # Ordinary income distribution
IncomeDstnImm = DiscreteDistribution(
np.array([1.0]),
[np.array([1.0]),
np.array([1.0])]
)
IncomeDstn = [IncomeDstnReg] + ImmunityT * [
IncomeDstnImm
] # Income distribution for each Markov state, in a list
# Make the Markov transition array. MrkvArray[i,j] is the probability of transitioning
# to state j in period t+1 from state i in period t.
MrkvArray = np.zeros((StateCount, StateCount))
MrkvArray[0, 0] = (
1.0 - ImmunityPrb
) # Probability of not becoming immune in ordinary state: stay in ordinary state
MrkvArray[
0, ImmunityT
] = (
ImmunityPrb
) # Probability of becoming immune in ordinary state: begin immunity periods
for j in range(ImmunityT):
MrkvArray[
j + 1, j
] = (
1.0
) # When immune, have 100% chance of transition to state with one fewer immunity periods remaining
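For intuition about this chain, a small self-contained sketch (with a hypothetical, shorter immunity horizon) computes its stationary distribution — the long-run share of time spent in each state — as the left eigenvector of the transition matrix with eigenvalue one:

```python
import numpy as np

ImmunityPrb_demo, ImmunityT_demo = 0.01, 2  # hypothetical, smaller horizon
n = ImmunityT_demo + 1
M = np.zeros((n, n))
M[0, 0] = 1.0 - ImmunityPrb_demo            # stay in the ordinary state
M[0, ImmunityT_demo] = ImmunityPrb_demo     # become immune
for j in range(ImmunityT_demo):
    M[j + 1, j] = 1.0                       # count down remaining immunity
evals, evecs = np.linalg.eig(M.T)           # left eigenvectors of M
stat = np.real(evecs[:, np.argmax(np.real(evals))])
stat = stat / stat.sum()                    # normalize to a distribution
print(stat)
```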
init_unemployment_immunity = copy(init_idiosyncratic_shocks)
init_unemployment_immunity["MrkvArray"] = [MrkvArray]
ImmunityExample = MarkovConsumerType(**init_unemployment_immunity)
ImmunityExample.assign_parameters(
    Rfree=np.array(StateCount * [1.03]),  # Interest factor same in all states
PermGroFac=[
np.array(StateCount * [1.01])
], # Permanent growth factor same in all states
LivPrb=[np.array(StateCount * [0.98])], # Same survival probability in all states
BoroCnstArt=None, # No artificial borrowing constraint
cycles=0,
) # Infinite horizon
ImmunityExample.IncShkDstn = [IncomeDstn]
# Solve the unemployment immunity problem and display the consumption functions
start_time = process_time()
ImmunityExample.solve()
end_time = process_time()
print(
'Solving an "unemployment immunity" consumer took '
+ mystr(end_time - start_time)
+ " seconds."
)
print("Consumption functions for each discrete state:")
mNrmMin = np.min([ImmunityExample.solution[0].mNrmMin[j] for j in range(StateCount)])
plot_funcs(ImmunityExample.solution[0].cFunc, mNrmMin, 10)
# Make a consumer with serially correlated permanent income growth
UnempPrb = 0.05 # Unemployment probability
StateCount = 5 # Number of permanent income growth rates
Persistence = (
0.5
) # Probability of getting the same permanent income growth rate next period
IncomeDstnReg = DiscreteDistribution(
np.array([1 - UnempPrb, UnempPrb]),
[np.array([1.0, 1.0]),
np.array([1.0, 0.0])]
)
IncomeDstn = StateCount * [
IncomeDstnReg
] # Same simple income distribution in each state
# Make the state transition array for this type: Persistence probability of remaining in the same state, equiprobable otherwise
MrkvArray = Persistence * np.eye(StateCount) + (1.0 / StateCount) * (
1.0 - Persistence
) * np.ones((StateCount, StateCount))
init_serial_growth = copy(init_idiosyncratic_shocks)
init_serial_growth["MrkvArray"] = [MrkvArray]
SerialGroExample = MarkovConsumerType(**init_serial_growth)
SerialGroExample.assign_parameters(
    Rfree=np.array(StateCount * [1.03]),  # Same interest factor in each Markov state
PermGroFac=[
np.array([0.97, 0.99, 1.01, 1.03, 1.05])
], # Different permanent growth factor in each Markov state
LivPrb=[np.array(StateCount * [0.98])], # Same survival probability in all states
cycles=0,
)
SerialGroExample.IncShkDstn = [IncomeDstn]
# Solve the serially correlated permanent growth shock problem and display the consumption functions
start_time = process_time()
SerialGroExample.solve()
end_time = process_time()
print(
"Solving a serially correlated growth consumer took "
+ mystr(end_time - start_time)
+ " seconds."
)
print("Consumption functions for each discrete state:")
plot_funcs(SerialGroExample.solution[0].cFunc, 0, 10)
# Make a consumer with serially correlated interest factors
SerialRExample = deepcopy(SerialGroExample) # Same as the last problem...
SerialRExample.assign_parameters(
PermGroFac=[
np.array(StateCount * [1.01])
], # ...but now the permanent growth factor is constant...
Rfree=np.array([1.01, 1.02, 1.03, 1.04, 1.05]),
) # ...and the interest factor is what varies across states
# Solve the serially correlated interest rate problem and display the consumption functions
start_time = process_time()
SerialRExample.solve()
end_time = process_time()
print(
"Solving a serially correlated interest consumer took "
+ mystr(end_time - start_time)
+ " seconds."
)
print("Consumption functions for each discrete state:")
plot_funcs(SerialRExample.solution[0].cFunc, 0, 10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Several variant examples of the model will be illustrated below such that
Step2: Note that $\texttt{MarkovConsumerType}$ currently has no method to automatically construct a valid IncomeDstn - $\texttt{IncomeDstn}$ is manually constructed in each case. Writing a method to supersede $\texttt{IndShockConsumerType.update_income_process}$ for the “Markov model” would be a welcome contribution!
Step3: 2. Unemployment immunity for a fixed period
Step4: 3. Serial permanent income growth
Step5: 4. Serial Interest factor
|
6,427
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='entity_extraction_process.png')
# Note: this image is taken from NLTK Book and requires citation
# Importing NLTK Dependencies
import nltk, re
from nltk import word_tokenize, pos_tag, ne_chunk
from nltk.tokenize.punkt import PunktSentenceTokenizer
# using docx2txt libarary to avoid the encoding errors while reading the text in given word file
import docx2txt
raw_data = docx2txt.process('Contract_Template.docx')
paragraphs = [p for p in raw_data.split('\n') if p]
tokenizer = PunktSentenceTokenizer()
sentences = [tokenizer.tokenize(paragraph) for paragraph in paragraphs]
sentences[:5]
# using python's regular expressions to extract the required address data
street_address = re.compile(u'\d{1,4} [\w\s]{1,20}(?:street|st|avenue|ave|road|rd|highway|hwy|square|sq|trail|trl|drive|dr|court|ct|park|parkway|pkwy|circle|cir|boulevard|blvd)\W?(?=\D|$)', re.IGNORECASE)
zip_code = re.compile(r'\b\d{5}(?:[-\s]\d{4})?\b')
proper_address = "[0-9]{1,5} .+, .+, [A-Z]{2} [0-9]{5}"
print ("Below are the street addresses found in the given document \n")
print (re.findall(street_address, raw_data))
print ("\nBelow are the zip codes found in the given document \n")
print (re.findall(zip_code, raw_data))
print ("\nBelow is the address found in the given document with PROPER format \n")
print (re.findall(proper_address, raw_data))
for i in range (len(sentences)):
if sentences[i][0] == 'Address:':
print ("\nBelow is the address found in the given document with IMPROPER format is: \n")
print (sentences[i+1][0])
tokens = [nltk.word_tokenize(sent[0]) for sent in sentences]
tokens[:6] # View the first few tokens
tokens[0]
# Viewing the parts of speech tag for each token in the given text
flat_tokens = [token for token_list in tokens for token in token_list]
pos_tags = nltk.pos_tag(flat_tokens)
pos_tags[:20] # View top few pos tags
nltk.help.upenn_tagset()
chunks = ne_chunk(pos_tags)
print (chunks.__repr__())
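A hedged sketch of how named entities can be pulled out of such a chunk tree; the small hand-built `Tree` below is an illustrative stand-in for the `chunks` object (no NLTK corpora are required):

```python
from nltk.tree import Tree

# Hand-built stand-in for the output of ne_chunk (illustrative only)
demo_chunks = Tree('S', [Tree('PERSON', [('John', 'NNP')]),
                         ('works', 'VBZ'), ('at', 'IN'),
                         Tree('ORGANIZATION', [('Acme', 'NNP')])])
# Entity subtrees carry a label; plain (word, tag) tuples do not
entities = [(st.label(), ' '.join(w for w, t in st.leaves()))
            for st in demo_chunks if isinstance(st, Tree)]
print(entities)  # [('PERSON', 'John'), ('ORGANIZATION', 'Acme')]
```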
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Figure 1
Step2: Loading the data
Step3: Converting the rawtext into sentences
Step4: Task 1
Step5: Task 2
Step6: Entity Extraction
Step7: NNP is the Penn Treebank tag for a singular proper noun (NNPS is the plural form). The command below prints a description of each POS tag used by the nltk library
Step8: Name Entity Chunks
|
6,428
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from matplotlib.pylab import *
from pymc3 import *
import numpy as np
d = np.random.normal(size=(3, 30))
d1 = d[0] + 4
d2 = d[1] + 4
yd = .2*d1 +.3*d2 + d[2]
lam = 3
with Model() as model:
s = Exponential('s', 1)
tau = Uniform('tau', 0, 1000)
b = lam * tau
m1 = Laplace('m1', 0, b)
m2 = Laplace('m2', 0, b)
p = d1*m1 + d2*m2
y = Normal('y', mu=p, sd=s, observed=yd)
with model:
start = find_MAP()
step1 = Metropolis([m1, m2])
step2 = Slice([s, tau])
trace = sample(10000, [step1, step2], start=start)
traceplot(trace);
hexbin(trace[m1],trace[m2], gridsize = 50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then define the random variables.
Step2: For most samplers, including Metropolis and HamiltonianMC, simply pass a list of variables to sample as a block. This works with both scalar and array parameters.
|
6,429
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.figsize"] = (14, 5)
from pyke import LightCurve
lc = LightCurve(time=[1, 2, 3], flux=[78.4, 79.6, 76.5])
from pyke import KeplerLightCurveFile
lcfile = KeplerLightCurveFile("https://archive.stsci.edu/missions/kepler/lightcurves/0119/011904151/kplr011904151-2010009091648_llc.fits")
lc = lcfile.SAP_FLUX
lc.plot()
lc.keplerid
lc.channel
lc.quarter
lc.time[:10]
lc.flux[:10]
detrended_lc, _ = lc.flatten(polyorder=1)
detrended_lc.plot()
folded_lc = detrended_lc.fold(period=0.837495, phase=0.92)
folded_lc.plot();
lc.cdpp()
from pyke import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz')
from pyke.utils import KeplerQualityFlags
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
aperture_mask='kepler-pipeline',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.aperture_mask
tpf.header(ext=0)
tpf.flux.shape
tpf.plot(frame=1)
lc = tpf.to_lightcurve()
import matplotlib.pyplot as plt
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
from pyke.lightcurve import KeplerLightCurveFile
klc = KeplerLightCurveFile('https://archive.stsci.edu/missions/k2/lightcurves/'
'c14/200100000/82000/ktwo200182949-c14_llc.fits',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
sap_lc = klc.SAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
plt.plot(sap_lc.time, sap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
klc_corrected = klc.compute_cotrended_lightcurve(cbvs=range(1, 17))
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
pdcsap_lc = klc.PDCSAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.plot(pdcsap_lc.time, pdcsap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
%matplotlib inline
import matplotlib.pyplot as plt
from pyke.utils import module_output_to_channel, channel_to_module_output
module_output_to_channel(module=19, output=3)
channel_to_module_output(67)
from pyke.utils import KeplerQualityFlags
KeplerQualityFlags.decode(1)
KeplerQualityFlags.decode(1 + 1024 + 1048576)
KeplerQualityFlags.decode(KeplerQualityFlags.DEFAULT_BITMASK)
KeplerQualityFlags.decode(KeplerQualityFlags.CONSERVATIVE_BITMASK)
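The `decode` calls above work because QUALITY is a bitmask. A minimal library-free sketch — the bit values match the `decode(1 + 1024 + 1048576)` example, but the flag names here are hypothetical:

```python
# Hypothetical flag names; the bit values match the decode example above
FLAG_A, FLAG_B, FLAG_C = 1, 1024, 1048576
quality = FLAG_A | FLAG_B | FLAG_C   # flags combine with bitwise OR
print(quality)                       # 1049601, i.e. 1 + 1024 + 1048576
print(bool(quality & FLAG_B))        # membership test with bitwise AND: True
```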
from pyke.prf import PRFPhotometry, SceneModel, SimpleKeplerPRF
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'201500000/43000/ktwo201543306-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.plot(frame=100)
scene = SceneModel(prfs=[SimpleKeplerPRF(channel=tpf.channel, shape=tpf.shape[1:],
column=tpf.column, row=tpf.row)])
from oktopus.prior import UniformPrior
unif_prior = UniformPrior(lb=[0, 1090., 706., 0.],
ub=[1e5, 1096., 712., 1e5])
scene.plot(*unif_prior.mean)
prf_phot = PRFPhotometry(scene_model=scene, prior=unif_prior)
results = prf_phot.fit(tpf.flux + tpf.flux_bkg)
plt.imshow(prf_phot.residuals[1], origin='lower')
plt.colorbar()
flux = results[:, 0]
xcenter = results[:, 1]
ycenter = results[:, 2]
bkg_density = results[:, 3]
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, xcenter)
plt.ylabel('Column position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, ycenter)
plt.ylabel('Row position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, bkg_density)
plt.ylabel('Background density')
plt.xlabel('Time (BJD - 2454833)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introducing a generic LightCurve class
Step2: A LightCurve object provides easy access to a range of common operations, such as fold(), flatten(), remove_outliers(), cdpp(), plot(), and more. To demonstrate these operations, let's create a LightCurve object from a KeplerLightCurveFile we obtain from the data archive at MAST
Step3: Now lc is a LightCurve object on which you can run operations. For example, we can plot it
Step4: We can access several of the metadata properties
Step5: We can access the time and flux as arrays
Step6: We don't particularly care about the long-term trends, so let's use a Savitzky-Golay filter to flatten the lightcurve
Step7: We can also compute the CDPP noise metric
Step8: Target Pixel File (TPF)
Step9: A KeplerTargetPixelFile can be instantiated either from a local file or a url
Step10: Additionally, we can mask out cadences that are flagged using the quality_bitmask argument in the constructor
Step11: Furthermore, we can mask out pixel values using the aperture_mask argument. The default behaviour is to use
Step12: The TPF objects stores both data and a few metadata information, e.g., channel number, EPIC number, reference column and row, module, and shape. The whole header is also available
Step13: The pixel fluxes time series can be accessed using the flux property
Step14: This shows that our TPF is a 35 x 35 image recorded over 3209 cadences.
Step15: We can perform aperture photometry using the method to_lightcurve
Step16: Let's see how the previous light curve compares against the 'SAP_FLUX' produced by Kepler's pipeline. For that, we are going to explore the KeplerLightCurveFile class
Step17: Now, let's correct this light curve by fitting cotrending basis vectors. That can be achieved either with the KeplerCBVCorrector class or the compute_cotrended_lightcurve method in KeplerLightCurveFile. Let's try the latter
Step18: Utility functions
Step19: PyKE 3.1 includes KeplerQualityFlags class which encodes the meaning of the Kepler QUALITY bitmask flags as documented in the Kepler Archive Manual (Table 2.3)
Step20: It also can handle multiple flags
Step21: A few quality flags are already computed
Step22: Pixel Response Function (PRF) Photometry
Step23: For that, let's create a SceneModel which will be fitted to the object of the following TPF
Step24: We also need to define prior distributions on the parameters of our SceneModel model. Those parameters are
|
6,430
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('home_data.gl/')
sales.head(5)
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
train_data,test_data = sales.random_split(.8,seed=123)
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'],validation_set=None)
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
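The `rmse` reported by `evaluate()` is the root of the mean squared prediction error. A library-agnostic numpy sketch of the same metric, with made-up house prices:

```python
import numpy as np

# Hypothetical actual and predicted prices, for illustration only
actual = np.array([510000.0, 450000.0, 1350000.0])
predicted = np.array([495000.0, 470000.0, 1200000.0])
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
print(rmse)
```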
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
sqft_model.get('coefficients')
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
#sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)
print my_features
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
house1 = sales[sales['id']=='5309101200']
house1
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
house2 = sales[sales['id']=='1925069082']
house2
print sqft_model.predict(house2)
print my_features_model.predict(house2)
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
print my_features_model.predict(graphlab.SFrame(bill_gates))
advanced_features = ['bedrooms', 'bathrooms', 'sqft_living',
'sqft_lot', 'floors', 'zipcode',
'condition','grade', 'waterfront',
'view','sqft_above','sqft_basement',
'yr_built','yr_renovated', 'lat', 'long',
'sqft_living15','sqft_lot15'
]
advanced_features_model = graphlab.linear_regression.create(train_data,target='price',features=advanced_features,validation_set=None)
print advanced_features
print advanced_features_model.evaluate(test_data)
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
print my_features_model.predict(house2)
print advanced_features_model.predict(house2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and explore the data
Step2: Exploring the data: let's visualize a few rows of data with head()
Step3: Exploring the data for housing sales
Step4: Create a simple regression model of sqft_living to price
Step5: Build the regression model using only sqft_living as a feature and
Step6: Evaluate the simple model
Step7: RMSE of about \$255,170!
Step8: plot a graph between the price and sqrt_living
Step9: Above
Step10: Explore other features in the data
Step11: Pull the bar at the bottom to view more of the data.
Step12: Comparing the results of the simple model with adding more features
Step13: The RMSE goes down from \$255,170 to \$179,508 with more features.
Step14: <img src="http
Step15: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Step16: <img src="https
Step17: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Step18: <img src="https
Step19: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
Step20: Here you can see that there is no difference between the my_features_model and the advanced_features_model
|
6,431
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import warnings
from inet import DataLoader, __version__
from inet.motifs import iicounter
from inet.utils import II_slice
print('Inet version {}'.format(__version__))
# use filenames in the dataset to read list of distances to be read
mydataset = DataLoader('../data/PV')
pvfiles = [ i for i in range(len(mydataset)) if int(mydataset.filename(i)[0])>1 ]
print('{} experiments with 2 or more PV-cells'.format(len(pvfiles)))
# read distances from between inhibitory neurons
def read_dist(fname):
    """Get distances between inhibitory pairs of neurons
    from a matrix of intersomatic distances.

    Argument:
    fname: string
        the matrix name that contains the connected synapses (*.syn)
    """
mypath = '../data/PV/' + fname[:-3] + 'dist'
try:
D = np.loadtxt(mypath)
D = II_slice(D, int(fname[0]))
idx = np.where(~np.eye(D.shape[0], dtype = bool))
mydist = np.abs(D[idx]).tolist()
return(mydist)
except IOError:
warnings.warn(mypath + ' not found!')
return([])
dist_tested = list()
for i in pvfiles:
dist_tested += read_dist(mydataset.filename(i))
print('{} total distances read'.format(len(dist_tested)))
mybins = arange(0,600, 50)
plt.hist(dist_tested, bins = mybins, facecolor='white', lw=2);
plt.ylim(ymax=50);
plt.ylabel('Inhbitory chemical synapses');
plt.xlabel('Intersomatic distance ($\mu$m)');
def read_rec_dist(fname):
    """Get distances between bidirectionally connected interneurons
    from a matrix of intersomatic distances.

    Argument:
    fname: string
        the matrix name that contains the connected synapses (*.syn)
    """
# take all non-diagonal elements
mydistpath = '../data/PV/' + fname[:-3] + 'dist'
try:
D = II_slice(np.loadtxt(mydistpath), int(fname[0]))
except IOError:
warnings.warn(mydistpath + ' not found!')
return([])
try:
S = np.loadtxt('../data/PV/'+fname)
except IOError:
warnings.warn(fname + ' not found!')
return([])
S = II_slice(S, int(fname[0]))
S[S==2] = 0 # remove gaps
S[S==3] = 1 # remove gaps in chemical
x,y = np.nonzero(S)
    ids = list(zip(x, y))
    mydist = list()
    if len(ids) > 0:
for i,j in ids:
if S[j,i] == 1:
mydist.append(D[i,j])
print(np.unique(np.abs(mydist)))
return( np.unique(np.abs(mydist)).tolist() )
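The loop above detects reciprocal pairs by checking both S[i, j] and S[j, i]. An equivalent vectorized numpy sketch on a toy connectivity matrix:

```python
import numpy as np

# Toy connectivity matrix: 0 -> 1 and 1 -> 0 are reciprocal, 1 -> 2 is not
S_demo = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 0, 0]])
recip = np.transpose(np.nonzero(np.logical_and(S_demo == 1, S_demo.T == 1)))
print(recip)  # each reciprocal pair appears once per direction
```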
# Number of bidirectionally connected interneurons
mydataset.motif['ii_c2']
# select experiments with bidirectional motifs
mybidirec = [i for i in range(len(mydataset)) if mydataset.motifs(i)['ii_c2']['found']>0 ]
for i in mybidirec:
print('Experiment {:3d}, filename: {}'.format(i, mydataset.filename(i)))
dist_found = list()
for i in mybidirec:
dist_found += read_rec_dist(mydataset.filename(i))
dist_found
mybins = arange(0,550, 50)
bid_tested = np.unique(dist_tested)
plt.hist(bid_tested, bins = mybins, facecolor='white', lw=2);
plt.ylim(ymax=20);
plt.ylabel('Inhbitory chemical synapses');
plt.xlabel('Intersomatic distance ($\mu$m)');
plt.hist(dist_found, bins = mybins, facecolor='gray', lw=2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <H2> Load all distances from connected PV cells</H2>
Step4: <H2>Distances in recurrently connected inhibitory neurons</H2>
|
6,432
|
<ASSISTANT_TASK:>
Python Code:
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option("display.precision", 2)  # 2 decimal places
pd.set_option("display.max_rows", 20)
pd.set_option("display.max_columns", 30)
pd.set_option("display.width", 100) # wide windows
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
iris = datasets.load_iris()
# use 2 features and all 3 classes
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
# softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", penalty="none")
softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=1000, random_state=42)
softmax_reg.fit(X, y)
# Get predictive distribution for a single example
X = [[2.5, 3.0]] # (1,2) array
y_probs = softmax_reg.predict_proba(X)
print(np.round(y_probs, 2))
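For reference, a hedged numpy sketch of the softmax that multinomial logistic regression applies to its logits; the rows of `predict_proba` likewise sum to one:

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(np.array([[2.0, 1.0, 0.1]]))  # hypothetical logits
print(np.round(probs, 2))
assert np.allclose(probs.sum(axis=1), 1.0)    # a valid distribution per row
```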
# Fit model and evaluate on separate test set
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features to make problem harder
# X = iris.data # use all data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# compute MLE (penalty=None means do not use regularization)
logreg = LogisticRegression(solver="lbfgs", multi_class="multinomial", penalty="none")
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test) # categorical labels
errs = y_pred != y_test
nerrs = np.sum(errs)
print("Made {} errors out of {}, on instances {}".format(nerrs, len(y_pred), np.where(errs)))
# With ndims=2: Made 10 errors out of 50, on instances
# (array([ 4, 15, 21, 32, 35, 36, 40, 41, 42, 48]),)
from sklearn.metrics import zero_one_loss
err_rate_test = zero_one_loss(y_test, y_pred)
assert np.isclose(err_rate_test, nerrs / len(y_pred))
err_rate_train = zero_one_loss(y_train, logreg.predict(X_train))
print("Error rates on train {:0.3f} and test {:0.3f}".format(err_rate_train, err_rate_test))
# Error rates on train 0.180 and test 0.200
import sklearn.datasets
import sklearn.linear_model as lm
from sklearn.model_selection import train_test_split
boston = sklearn.datasets.load_boston()
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
scaler = sklearn.preprocessing.StandardScaler()
scaler = scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
X_scaled = scaler.transform(X) # entire dataset
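A minimal toy-data sketch of why the scaler is fit on the training split only: the test split is transformed with the *training* statistics, so only the training columns are guaranteed zero mean and unit variance:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])  # toy training data
X_te = np.array([[2.0, 20.0]])                            # toy test data
sc = StandardScaler().fit(X_tr)        # statistics come from X_tr only
print(sc.transform(X_tr).mean(axis=0)) # ~[0. 0.]
print(sc.transform(X_te))              # scaled with the training mean/std
```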
# scatter plot of response vs each feature.
# The shape of the data looks the same as the unscaled case, but the x-axis of each feature is changed.
nrows = 3
ncols = 4
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, sharey=True, figsize=[15, 10])
for i in range(0, 12):
plt.subplot(nrows, ncols, i + 1)
plt.scatter(X_scaled[:, i], y)
plt.xlabel(boston.feature_names[i])
plt.ylabel("house price")
plt.grid()
# save_fig("boston-housing-scatter-scaled.pdf")
plt.show()
# Get data
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
column_names = ["MPG", "Cylinders", "Displacement", "Horsepower", "Weight", "Acceleration", "Year", "Origin", "Name"]
df = pd.read_csv(url, names=column_names, sep=r"\s+", na_values="?")
# The last column (name) is a unique id for the car, so we drop it
df = df.drop(columns=["Name"])
# Ensure same number of rows for all features.
df = df.dropna()
# Convert origin integer to categorical factor
df["Origin"] = df.Origin.replace([1, 2, 3], ["USA", "Europe", "Japan"])
df["Origin"] = df["Origin"].astype("category")
df.info()
df.tail()
# Convert origin factor to integer
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
origin_cat = df["Origin"]
print("before transform")
print(origin_cat)
origin_int = encoder.fit_transform(origin_cat)
print("after transform")
print(origin_int)
# Make sure we can decode back to strings
print("class names are {}".format(encoder.classes_))
origin_cat2 = encoder.inverse_transform(origin_int)
print(origin_cat2)
# Convert integer encoding to one-hot vectors
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
origin_onehot_sparse = encoder.fit_transform(origin_int.reshape(-1, 1)) # Sparse array
origin_onehot_dense = origin_onehot_sparse.toarray()
print(origin_onehot_dense[-5:, :])
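As a quick sketch of the inverse direction, a one-hot row decodes back to its integer category code via argmax (toy array below):

```python
import numpy as np

onehot_demo = np.array([[0.0, 0.0, 1.0],
                        [1.0, 0.0, 0.0]])  # toy one-hot rows
print(onehot_demo.argmax(axis=1))          # [2 0] -> back to integer codes
```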
# We should be able to combine LabelEncoder and OneHotEncoder together
# using a Pipeline. However this fails due to known bug: https://github.com/scikit-learn/scikit-learn/issues/3956
# TypeError: fit_transform() takes 2 positional arguments but 3 were given
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('str2int', LabelEncoder()),
('int2onehot', OneHotEncoder())
])
origin_onehot2 = pipeline.fit_transform(df['Origin'])
# However, as of sckit v0.20, we can now convert Categorical to OneHot directly.
# https://jorisvandenbossche.github.io/blog/2017/11/20/categorical-encoder/
# https://medium.com/bigdatarepublic/integrating-pandas-and-scikit-learn-with-pipelines-f70eb6183696
from sklearn.preprocessing import CategoricalEncoder # not available :(
encoder = CategoricalEncoder()
origin_onehot2 = encoder.fit_transform(df['Origin'])
print(origin_onehot2)
# Function to add one-hot encoding as extra columns to a dataframe
# See also sklearn-pandas library
# https://github.com/scikit-learn-contrib/sklearn-pandas#transformation-mapping
def one_hot_encode_dataframe_col(df, colname):
encoder = OneHotEncoder(sparse=False)
data = df[[colname]] # Extract column as (N,1) matrix
data_onehot = encoder.fit_transform(data)
df = df.drop(columns=[colname])
ncats = np.size(encoder.categories_)
for c in range(ncats):
colname_c = "{}:{}".format(colname, c)
df[colname_c] = data_onehot[:, c]
return df, encoder
df_onehot, encoder_origin = one_hot_encode_dataframe_col(df, "Origin")
df_onehot.tail()
# Simple example of feature cross
import patsy
cylinders = pd.Series([4, 2, 3, 2, 4], dtype="int")
colors = pd.Series(["R", "R", "G", "B", "R"], dtype="category")
origin = pd.Series(["U", "J", "J", "U", "U"], dtype="category")
data = {"Cyl": cylinders, "C": colors, "O": origin}
df0 = pd.DataFrame(data=data)
print(df0)
df_cross0 = patsy.dmatrix("Cyl + C + O + C:O", df0, return_type="dataframe")
print(df_cross0.tail())
# Create feature crosses for AutoMPG
# For demo purposes, replace integer year with binary decade (70s and 80s)
year = df.pop("Year")
decade = [70 if (y >= 70 and y <= 79) else 80 for y in year]
df["Decade"] = pd.Series(decade, dtype="category")
# Make feature cross between #decades and origin (2*3 values)
y = df.pop("MPG") # Remove target column from dataframe and store
df.columns = ["Cyl", "Dsp", "HP", "Wgt", "Acc", "O", "D"] # Shorten names
df["O"] = df["O"].replace(["USA", "Europe", "Japan"], ["U", "E", "J"])
df_cross = patsy.dmatrix("D:O + Cyl + Dsp + HP + Wgt + Acc", df, return_type="dataframe")
print(df_cross.tail())
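The D:O interaction that patsy builds is, up to column naming, the one-hot encoding of the concatenated category pair. A hand-rolled pandas sketch with toy data:

```python
import pandas as pd

d_demo = pd.Series(["70s", "80s", "70s"])           # toy decade column
o_demo = pd.Series(["U", "J", "U"])                 # toy origin column
cross_demo = pd.get_dummies(d_demo + ":" + o_demo)  # one column per observed pair
print(cross_demo)
```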
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Estimators
Step2: Data preprocessing <a class="anchor" id="preprocess"></a>
Step5: One-hot encoding for Autompg <a class="anchor" id="preprocess-onehot"></a>
Step6: Feature crosses for Autompg <a class="anchor" id="preprocess-feature-cross"></a>
|
6,433
|
<ASSISTANT_TASK:>
Python Code:
!conda list |grep -i torch
from torchvision.models.resnet import resnet50
model = resnet50(pretrained=True)
from clipper_admin import ClipperConnection, DockerContainerManager
clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper(cache_size=1) # Disable PredictionCache
!docker ps -a
import io
from PIL import Image
from torch.autograd import Variable
import torchvision.transforms as transforms
# Imported from https://gist.github.com/jkarimi91/d393688c4d4cdb9251e3f939f138876e.
min_img_size = 224
transform_pipeline = transforms.Compose([transforms.Resize(min_img_size),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
def predict(model, inputs):
def _predict_one(one_input_arr):
try:
img = Image.open(io.BytesIO(one_input_arr))
if img.mode != "RGB":
img = img.convert("RGB")
img = transform_pipeline(img)
img = img.unsqueeze(0)
img = Variable(img)
return [model(img).data.numpy().argmax()]
except Exception as e:
print(e)
return []
return [_predict_one(i) for i in inputs]
import clipper_admin.deployers.pytorch as pytorch_deployer
app_name = 'pytorch-test-app'
model_name = 'pytorch-test-model'
pytorch_deployer.deploy_pytorch_model(clipper_conn=clipper_conn,
name=model_name,
version='1',
input_type='bytes',
func=predict,
pytorch_model=model,
num_replicas=1,
batch_size=1, # Disable adaptive batching policy
pkgs_to_install=['pillow'])
clipper_conn.register_application(name=app_name,
input_type="bytes",
default_output="-1.0",
slo_micros=10000000) # 10s
clipper_conn.link_model_to_app(app_name=app_name,
model_name=model_name)
import time
time.sleep(30)
!wget https://harishnarayanan.org/images/writing/artistic-style-transfer/output_13_0.png -O elephant.jpg
!wget http://kikei.github.io/images/plots/2018-08-05-rabbit2.jpg -O rabbit.jpg
from IPython.display import display
from PIL import Image
display(Image.open('elephant.jpg'))
display(Image.open('rabbit.jpg'))
import json
import base64
import requests
from datetime import datetime
headers = {'Content-type': 'application/json'}
url = "http://{addr}/{app_name}/predict".format(
addr=clipper_conn.get_query_addr(),
app_name=app_name)
start = datetime.now()
req_json = json.dumps({ "input": base64.b64encode(open('elephant.jpg', "rb").read()).decode() })
r = requests.post(url, headers=headers, data=req_json)
end = datetime.now()
latency = (end - start).total_seconds() * 1000.0
print("'%s', %f ms" % (r.text, latency))
start = datetime.now()
req_json = json.dumps({ "input": base64.b64encode(open('rabbit.jpg', "rb").read()).decode() })
r = requests.post(url, headers=headers, data=req_json)
end = datetime.now()
latency = (end - start).total_seconds() * 1000.0
print("'%s', %f ms" % (r.text, latency))
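The base64-in-JSON round trip used for the requests above can be checked in isolation, without a running Clipper cluster. The byte string below is a hypothetical stand-in for the image file contents:

```python
import base64
import json

# Hypothetical raw bytes standing in for an image read with open(..., "rb")
raw = b"\x89PNG\r\n\x1a\nfake-image-bytes"
# Client side: encode the bytes so they survive JSON serialization
payload = json.dumps({"input": base64.b64encode(raw).decode()})
# Server side: reverse the encoding before handing bytes to predict()
decoded = base64.b64decode(json.loads(payload)["input"])
print(decoded == raw)  # True
```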
clipper_conn.stop_all()
!docker rm -f $(docker ps -a -q) && docker image prune -f
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a pretrained PyTorch model(ResNet50)
Step2: Initialize the Clipper cluster
Step3: Define 'predict' function
Step4: Deploy PyTorch model and 'predict' function to the Clipper cluster
Step5: Download sample images
Step6: Send some requests to the Clipper cluster
Step7: First request, which is very slow because a file is downloaded over HTTPS
Step8: Second request, which is much faster!
Step9: Clean-up
|
6,434
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import seaborn as sns
import random
%matplotlib inline
AlleDaten = "AlleDaten.csv"
with open(AlleDaten, "r") as infile:
    AlleDaten = pd.read_csv(infile, sep=",", index_col=0)  # DataFrame.from_csv was removed in modern pandas
print(AlleDaten.iloc[0:5,0:5])
Suchbegriffe = AlleDaten.columns.values[1:]
print(Suchbegriffe)
GrpDaten = AlleDaten.groupby("gender")
DatenW = GrpDaten.get_group("w")
DatenM = GrpDaten.get_group("m")
print("\nDatenW\n", DatenW.iloc[0:5,0:5])
print("\nDatenM\n", DatenM.iloc[0:5,0:5])
# Enter the search term for which you have a hypothesis.
Suchbegriff = ""
# Briefly write down your hypothesis here.
Hypothese = ""
print("\nMeine Hypothese zu "+Suchbegriff+": "+Hypothese)
WerteW = DatenW.loc[:,Suchbegriff]
WerteM = DatenM.loc[:,Suchbegriff]
print("\nWerteW\n", sorted(list(WerteW)))
print("\nWerteM\n", sorted(list(WerteM)))
plt.figure(figsize=(10,5))
plt.boxplot([WerteW, WerteM], vert=False, patch_artist=True)
plt.title("Boxplot für: "+str(Suchbegriff))
plt.xlabel("Häufigkeiten pro 1000 Tokens")
plt.yticks([1, 2], ['W', 'M'])
plt.show()
plt.figure(figsize=(10,5))
sns.distplot(WerteM, kde=True, bins=16, label="M").set(xlim=0)
sns.distplot(WerteW, kde=True, bins=16, label="W").set(xlim=0)
plt.title("Histogramm für: "+str(Suchbegriff))
plt.xlabel("Häufigkeiten pro 1000 Tokens")
plt.ylabel("Anteil der Romane")
plt.legend()
plt.show()
MittelW = np.mean(WerteW)
MittelM = np.mean(WerteM)
print("\nMittelwert W:", MittelW, "\nMittelwert M:", MittelM)
MedianW = np.median(WerteW)
MedianM = np.median(WerteM)
print("\nMedian W:", MedianW)
print("Median M:", MedianM)
StandardabweichungW = np.std(WerteW)
StandardabweichungM = np.std(WerteM)
print("\nStandardabweichung W:", StandardabweichungW)
print("Standardabweichung M:", StandardabweichungM)
Wilcoxon = stats.wilcoxon(WerteW, WerteM)
print("\nWilcoxon p-Wert für", str(Suchbegriff),":", Wilcoxon[1])
if Wilcoxon[1] < 0.05:
print("\nDieser Wert bedeutet, dass der Unterschied SIGNIFIKANT ist.")
else:
print("\nDieser Wert bedeutet, dass der Unterschied NICHT signifikant ist.")
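Note that `stats.wilcoxon` is a paired test and requires two samples of equal length; for two independent groups of possibly different sizes, a permutation test (or `stats.mannwhitneyu`) is an alternative. A minimal NumPy sketch with synthetic data, not the novel corpus above:

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 40)   # synthetic stand-ins for WerteW / WerteM
group_b = rng.normal(0.8, 1.0, 55)

# Permutation test on the absolute difference of group means
observed = abs(group_a.mean() - group_b.mean())
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
hits = 0
n_perm = 2000
for _ in range(n_perm):
    rng.shuffle(pooled)
    if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= observed:
        hits += 1
p_value = hits / n_perm
print(p_value)
```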
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Start
Step2: A closer look at the data
Step3: Splitting the data into two groups
Step4: 3. Selecting a search query
Step5: Now we use the two data tables and the search term to extract the corresponding values.
Step6: 4. Visualization as distributions
Step7: 2. Histogram with density estimation
Step8: 5. Statistical properties of the distributions
Step9: 2. Medians
Step10: 3. Standard deviation
Step11: 6. Statistical tests for the difference between the distributions
|
6,435
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
products['sentiment']
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products['perfect']
def contains_perfect(count):
return 1 if count >= 1 else 0
products['contains_perfect'] = products['perfect'].apply(contains_perfect)
products['contains_perfect'].sum()
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
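For readers without GraphLab, the same extraction can be sketched with pandas instead of an SFrame. The frame and feature names below are toy values, not the Amazon review data:

```python
import numpy as np
import pandas as pd

# Toy frame with two word-count features and a sentiment label
df = pd.DataFrame({"great": [1, 0, 2], "bad": [0, 2, 0], "sentiment": [1, -1, 1]})

def get_numpy_data_pd(frame, features, label):
    frame = frame.copy()
    frame["intercept"] = 1           # constant feature for the intercept term
    X = frame[["intercept"] + features].to_numpy()
    y = frame[label].to_numpy()
    return X, y

X, y = get_numpy_data_pd(df, ["great", "bad"], "sentiment")
print(X.shape, y.shape)  # (3, 3) (3,)
```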
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
sentiment
def prediction(score):
return (1 / (1 + np.exp(-score)))
def predict_probability(feature_matrix, coefficients):
    '''
    Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
    The estimate ranges between 0 and 1.
    '''
# Take dot product of feature_matrix and coefficients
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = np.apply_along_axis(prediction, 0, scores)
# return predictions
return predictions
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(feature, errors)
# Return the derivative
return derivative
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
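The overflow guard above exists because `np.log(1. + np.exp(-scores))` blows up for large negative scores. A small sketch contrasting the naive expression with `np.logaddexp`, a numerically stable way to compute the same quantity:

```python
import numpy as np

scores = np.array([-800.0, 0.0, 800.0])
with np.errstate(over="ignore"):
    naive = np.log(1.0 + np.exp(-scores))   # overflows to inf at score = -800
stable = np.logaddexp(0.0, -scores)         # log(1 + exp(-s)) computed safely
print(naive, stable)
```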
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print correct_scores
print correct_indicators
print correct_first_term
print correct_second_term
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:, j])
# add the step size times the derivative to the current coefficient
coefficients[j] += (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
def get_prediction(score):
if score > 0:
return 1
else:
return -1
predictions = np.zeros(shape=(scores.shape[0],))
idx = 0
for score in scores:
predictions[idx] = int(get_prediction(score))
idx += 1
pos = (predictions == 1).sum()
neg = (predictions == -1).sum()
print ("number of positive predicted reviews: {}".format(pos))
print ("number of negative predicted reviews: {}".format(neg))
sentiment = products['sentiment'].to_numpy()
num_mistakes = np.count_nonzero(sentiment - predictions)
correct = len(sentiment) - num_mistakes
accuracy = float(correct) / len(sentiment)
print "-----------------------------------------------------"
print '# Reviews correctly classified =', correct
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
word_coefficient_tuples[:10]
word_coefficient_tuples_descending = sorted(word_coefficient_tuples, key = lambda x: x[1], reverse=False)
word_coefficient_tuples_descending[0:10]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load review dataset
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Step9: Quiz Question. How many reviews contain the word perfect?
Step10: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step11: Let us convert the data into NumPy arrays.
Step12: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step13: Quiz Question
Step14: Estimating conditional probability with link function
Step15: Aside. How the link function works with matrix algebra
Step16: Compute derivative of log likelihood with respect to a single coefficient
Step17: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
Step18: Checkpoint
Step19: Taking gradient steps
Step20: Now, let us run the logistic regression solver.
Step21: Quiz Question
Step22: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step23: Quiz Question
Step24: Measuring accuracy
Step25: Quiz Question
Step26: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Step27: Quiz Question
Step28: Quiz Question
|
6,436
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow.compat.v1 as tf
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
    """Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
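The `tf.GradientTape` result can be sanity-checked with a plain finite-difference approximation; for `f(w) = w²` the derivative at `w = 1` should be `2w = 2`:

```python
# Finite-difference sanity check of d(w^2)/dw = 2w, no TensorFlow required
def numerical_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

g = numerical_grad(lambda w: w * w, 1.0)
print(g)  # ~2.0
```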
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
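Because this model is linear, the gradient-descent result can be cross-checked against the closed-form least-squares solution. A NumPy sketch on data generated the same way (same `3x + 2 + noise` recipe, different random stream):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=2000)
y = 3.0 * x + 2.0 + rng.normal(size=2000)
# Stack [x, 1] as the design matrix and solve in one call
A = np.stack([x, np.ones_like(x)], axis=1)
(W, B), *_ = np.linalg.lstsq(A, y, rcond=None)
print(W, B)  # close to 3 and 2
```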
if tf.config.list_physical_devices('GPU'):
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
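The running-mean behavior shown above is easy to replicate in pure Python. This `RunningMean` class is an illustrative stand-in, not the actual `tf.keras.metrics.Mean` implementation:

```python
class RunningMean:
    """Minimal stand-in for tf.keras.metrics.Mean, for illustration only."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, values):
        # Accept either a single number or a list of numbers
        if not isinstance(values, (list, tuple)):
            values = [values]
        self.total += sum(values)
        self.count += len(values)

    def result(self):
        return self.total / self.count

m = RunningMean()
m(0)
m(5)
print(m.result())  # 2.5
m([8, 9])
print(m.result())  # 5.5
```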
from tensorflow.compat.v2 import summary
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# your model code goes here
summary.scalar('global_step', global_step, step=global_step)
!ls tb/
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
def log1pexp(x):
return tf.log(1 + tf.exp(x))
class Grad(object):
def __init__(self, f):
self.f = f
def __call__(self, x):
x = tf.convert_to_tensor(x)
with tf.GradientTape() as tape:
tape.watch(x)
r = self.f(x)
g = tape.gradient(r, x)
return g
grad_log1pexp = Grad(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.).numpy()
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = Grad(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.).numpy()
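The instability fixed by the custom gradient can be reproduced in plain NumPy: the derivative of `log(1 + eˣ)` is the sigmoid, and the naive form `eˣ / (1 + eˣ)` degenerates to `inf/inf` for large `x`, while the algebraically equivalent `1 / (1 + e⁻ˣ)` stays finite:

```python
import numpy as np

x = np.array([0.0, 800.0])
with np.errstate(over="ignore", invalid="ignore"):
    naive = np.exp(x) / (1.0 + np.exp(x))   # inf/inf -> nan at x = 800
stable = 1.0 / (1.0 + np.exp(-x))           # exp(-800) underflows harmlessly to 0
print(naive, stable)
```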
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tf.config.list_physical_devices('GPU'):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
if tf.config.list_physical_devices('GPU'):
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tf.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Eager Execution
Step2: Now you can run TensorFlow operations and the results will return immediately
Step3: Enabling eager execution changes how TensorFlow operations behave—now they
Step4: Dynamic control flow
Step5: This has conditionals that depend on tensor values and it prints these values
Step6: Use tf.keras.layers.Dense layer instead of MySimpleLayer above as it has
Step8: Alternatively, organize models in classes by inheriting from tf.keras.Model.
Step9: It's not required to set an input shape for the tf.keras.Model class since
Step10: Train a model
Step11: Even without training, call the model and inspect the output in eager execution
Step12: While Keras models have a built-in training loop (using the fit method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution
Step13: Variables and optimizers
Step14: Use objects for state during eager execution
Step15: Object-based saving
Step16: To save and load models, tf.train.Checkpoint stores the internal state of objects,
Step17: Object-oriented metrics
Step18: Summaries and TensorBoard
Step19: Advanced automatic differentiation topics
Step20: Custom gradients
Step21: Custom gradients are commonly used to provide a numerically stable gradient for a
Step22: Here, the log1pexp function can be analytically simplified with a custom
Step23: Performance
Step24: A tf.Tensor object can be copied to a different device to execute its
Step25: Benchmarks
|
6,437
|
<ASSISTANT_TASK:>
Python Code:
from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
from nilearn.plotting import plot_anat
from nilearn.datasets import load_mni152_template
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain with the MRI image.
dip.plot_locations(fname_trans, 'sample', subjects_dir, mode='orthoview')
trans = mne.read_trans(fname_trans)
subject = 'sample'
mni_pos = mne.head_to_mni(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
mri_pos = mne.head_to_mri(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
t1_fname = op.join(subjects_dir, subject, 'mri', 'T1.mgz')
fig_T1 = plot_anat(t1_fname, cut_coords=mri_pos[0], title='Dipole loc.')
try:
template = load_mni152_template(resolution=1)
except TypeError: # in nilearn < 0.8.1 this did not exist
template = load_mni152_template()
fig_template = plot_anat(template, cut_coords=mni_pos[0],
title='Dipole loc. (MNI Space)')
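Under the hood, coordinate-frame conversions like `head_to_mri` / `head_to_mni` apply 4×4 affine transforms to point sets. A concept sketch with a hypothetical affine (identity rotation plus an arbitrary translation, not the real `sample` transform):

```python
import numpy as np

# Hypothetical 4x4 affine: identity rotation plus a translation
affine = np.eye(4)
affine[:3, 3] = [1.0, -2.0, 3.0]
points = np.array([[10.0, 20.0, 30.0]])
# Append the homogeneous coordinate, apply the affine, drop it again
homog = np.hstack([points, np.ones((len(points), 1))])
transformed = (affine @ homog.T).T[:, :3]
print(transformed)  # [[11. 18. 33.]]
```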
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, cov=None, nave=np.inf)
# find time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
print('Highest GOF %0.1f%% at t=%0.1f ms with confidence volume %0.1f cm^3'
% (dip.gof[best_idx], best_time * 1000,
dip.conf['vol'][best_idx] * 100 ** 3))
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4],
gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1],
top=0.85))
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False, time_unit='s')
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, pred_evoked], weights=[1, -1])
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2:], **plot_params)
fig.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
fig.tight_layout()
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot(time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's localize the N100m (using MEG only)
Step2: Plot the result in 3D brain with the MRI image using Nilearn
Step3: Calculate and visualise magnetic field predicted by dipole with maximum GOF
Step4: Estimate the time course of a single dipole with fixed position and
|
6,438
|
<ASSISTANT_TASK:>
Python Code:
def solvent_langevin(system, kT, gamma):
'''
Implicit solvation model based on Langevin dynamics (Rouse model).
'''
system.thermostat.set_langevin(kT=kT, gamma=gamma, seed=42)
def solvent_lbm(system, kT, gamma):
'''
Lattice-based solvation model based on the LBM (Zimm model).
'''
lbf = espressomd.lb.LBFluidGPU(kT=kT, seed=42, agrid=1, dens=1,
visc=5, tau=system.time_step)
system.actors.add(lbf)
system.thermostat.set_lb(LB_fluid=lbf, gamma=gamma, seed=42)
import logging
import sys
import numpy as np
import scipy.optimize
import espressomd
import espressomd.analyze
import espressomd.accumulators
import espressomd.observables
import espressomd.polymer
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
espressomd.assert_features(['LENNARD_JONES'])
# Setup constants
BOX_L = 12.0
TIME_STEP = 0.01
LOOPS = 4000
STEPS = 100
KT = 1.0
GAMMA = 5.0
POLYMER_PARAMS = {'n_polymers': 1, 'bond_length': 1, 'seed': 42, 'min_distance': 0.9}
POLYMER_MODEL = 'Rouse'
assert POLYMER_MODEL in ('Rouse', 'Zimm')
if POLYMER_MODEL == 'Zimm':
espressomd.assert_features(['CUDA'])
import espressomd.lb
# System setup
system = espressomd.System(box_l=3 * [BOX_L])
system.cell_system.skin = 0.4
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=1.0, sigma=1.0, shift="auto", cutoff=2.0**(1.0 / 6.0))
# Fene interaction
fene = espressomd.interactions.FeneBond(k=7, r_0=1, d_r_max=2)
system.bonded_inter.add(fene)
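As I read the ESPResSo documentation, the FENE bond added above corresponds to the potential U(r) = -½ k Δr_max² ln[1 − ((r − r₀)/Δr_max)²]. A standalone sketch evaluating it with the parameters used here, for intuition only:

```python
import numpy as np

def fene_potential(r, k=7.0, r_0=1.0, d_r_max=2.0):
    # U(r) = -0.5 * k * d_r_max^2 * ln(1 - ((r - r_0) / d_r_max)^2)
    x = (r - r_0) / d_r_max
    return -0.5 * k * d_r_max**2 * np.log(1.0 - x**2)

print(fene_potential(1.0))   # 0 at the equilibrium length r_0
print(fene_potential(2.5))   # grows steeply as r approaches r_0 + d_r_max
```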
N_MONOMERS = np.array([6, 8, 10])
com_pos_tau_results = []
com_pos_msd_results = []
com_vel_tau_results = []
com_vel_acf_results = []
rh_results = []
rf_results = []
rg_results = []
for index, N in enumerate(N_MONOMERS):
logging.info(f"Polymer size: {N}")
build_polymer(system, N, POLYMER_PARAMS, fene)
logging.info("Warming up the polymer chain.")
system.time_step = 0.002
system.integrator.set_steepest_descent(
f_max=1.0,
gamma=10,
max_displacement=0.01)
system.integrator.run(2000)
system.integrator.set_vv()
logging.info("Warmup finished.")
logging.info("Equilibration.")
system.time_step = TIME_STEP
system.thermostat.set_langevin(kT=1.0, gamma=50, seed=42)
system.integrator.run(2000)
logging.info("Equilibration finished.")
system.thermostat.turn_off()
if POLYMER_MODEL == 'Rouse':
solvent_langevin(system, KT, GAMMA)
elif POLYMER_MODEL == 'Zimm':
solvent_lbm(system, KT, GAMMA)
logging.info("Warming up the system with the fluid.")
system.integrator.run(1000)
logging.info("Warming up the system with the fluid finished.")
# configure MSD correlator
com_pos_cor = correlator_msd(np.arange(N), LOOPS * STEPS)
system.auto_update_accumulators.add(com_pos_cor)
# configure Green-Kubo correlator
com_vel_cor = correlator_gk(np.arange(N), LOOPS * STEPS)
system.auto_update_accumulators.add(com_vel_cor)
logging.info("Sampling started.")
rhs = np.zeros(LOOPS)
rfs = np.zeros(LOOPS)
rgs = np.zeros(LOOPS)
for i in range(LOOPS):
system.integrator.run(STEPS)
rhs[i] = system.analysis.calc_rh(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
rfs[i] = system.analysis.calc_re(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
rgs[i] = system.analysis.calc_rg(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
logging.info("Sampling finished.")
# store results
com_pos_cor.finalize()
com_pos_tau_results.append(com_pos_cor.lag_times())
com_pos_msd_results.append(np.sum(com_pos_cor.result(), axis=1))
com_vel_cor.finalize()
com_vel_tau_results.append(com_vel_cor.lag_times())
com_vel_acf_results.append(com_vel_cor.result())
rh_results.append(rhs)
rf_results.append(rfs)
rg_results.append(rgs)
# reset system
system.part.clear()
system.thermostat.turn_off()
system.actors.clear()
system.auto_update_accumulators.clear()
rh_results = np.array(rh_results)
rf_results = np.array(rf_results)
rg_results = np.array(rg_results)
com_pos_tau_results = np.array(com_pos_tau_results)
com_pos_msd_results = np.reshape(com_pos_msd_results, [len(N_MONOMERS), -1])
com_vel_tau_results = np.array(com_vel_tau_results)
com_vel_acf_results = np.reshape(com_vel_acf_results, [len(N_MONOMERS), -1])
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.rcParams.update({'font.size': 18})
def standard_error_mean_autocorrelation(time_series, variable_label):
'''
Calculate the mean and the correlation-corrected standard error
of the mean of time series by integrating the autocorrelation
function. See Janke 2002 [5] and Weigel, Janke 2010 [6].
Due to the short simulation length, it is not possible to fit an
exponential to the long-time tail. Instead, return a percentile.
'''
summary = []
fig = plt.figure(figsize=(10, 6))
for signal, N in zip(time_series, N_MONOMERS):
acf = espressomd.analyze.autocorrelation(signal - np.mean(signal))
# the acf cannot be integrated beyond tau=N/2
integral = np.array([acf[0] + 2 * np.sum(acf[1:j]) for j in np.arange(1, len(acf) // 2)])
# remove the noisy part of the integral
negative_number_list = np.nonzero(integral < 0)
if negative_number_list[0].size:
integral = integral[:int(0.95 * negative_number_list[0][0])]
# compute the standard error of the mean
std_err = np.sqrt(integral / acf.size)
# due to the small sample size, the long-time tail is not
# well resolved and cannot be fitted, so we use a percentile
asymptote = np.percentile(std_err, 75)
# plot the integral and asymptote
p = plt.plot([0, len(std_err)], 2 * [asymptote], '--')
plt.plot(np.arange(len(std_err)) + 1, std_err,
'-', color=p[0].get_color(),
label=rf'$\int {variable_label}$ for N={N}')
summary.append((np.mean(signal), asymptote))
plt.xlabel(r'Lag time $\tau / \Delta t$')
plt.ylabel(rf'$\int_{{-\tau}}^{{+\tau}} {variable_label}$')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
return np.array(summary)
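A minimal numpy sketch of the same idea — integrating the autocovariance to correct the standard error of the mean — on synthetic white noise. Here `np.correlate` stands in for `espressomd.analyze.autocorrelation`, and the 50-lag cutoff is an arbitrary assumption:

```python
import numpy as np

# white-noise sketch of the ACF-integrated standard error of the mean
rng = np.random.RandomState(42)
x = rng.normal(size=2000)
xc = x - x.mean()
# unnormalized autocovariance at lags 0..N-1
acf = np.correlate(xc, xc, mode='full')[len(xc) - 1:] / np.arange(len(xc), 0, -1)
# integrate the autocovariance up to an (assumed) cutoff of 50 lags,
# then form the correlation-corrected standard error
integral = acf[0] + 2 * np.sum(acf[1:50])
std_err = np.sqrt(integral / x.size)
print(std_err)
```

For uncorrelated noise the integral reduces to the variance, so the result stays close to the naive `std / sqrt(N)`.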
def fitting_polymer_theory(polymer_model, n_monomers, diffusion, rh_exponent):
'''
Fit the appropriate polymer diffusion coefficient equation (Rouse or
Kirkwood-Zimm).
'''
def rouse(x, a):
return a / x
def kirkwood_zimm(x, a, b, exponent):
return a / x + b / x**exponent
x = np.linspace(min(n_monomers) - 0.5, max(n_monomers) + 0.5, 20)
if polymer_model == 'Rouse':
popt, _ = scipy.optimize.curve_fit(rouse, n_monomers, diffusion)
label = rf'$D^{{\mathrm{{fit}}}} = \frac{{{popt[0]:.2f}}}{{N}}$'
y = rouse(x, popt[0])
elif polymer_model == 'Zimm':
fitting_function = kirkwood_zimm
popt, _ = scipy.optimize.curve_fit(
lambda x, a, b: kirkwood_zimm(x, a, b, rh_exponent), n_monomers, diffusion)
y = kirkwood_zimm(x, popt[0], popt[1], rh_exponent)
label = f'''\
$D^{{\\mathrm{{fit}}}} = \
\\frac{{{popt[0]:.2f}}}{{N}} + \
\\frac{{{popt[1] * 6 * np.pi:.3f} }}{{6\\pi}} \\cdot \
\\frac{{{1}}}{{N^{{{rh_exponent:.2f}}}}}$ \
'''
return x, y, label, popt
rf_summary = standard_error_mean_autocorrelation(rf_results, r'\operatorname{acf}(R_F)')
rf_exponent, rf_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rf_summary[:, 0]), 1)
rf_prefactor = np.exp(rf_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rf_prefactor * x**rf_exponent, '-',
label=rf'$R_F^{{\mathrm{{fit}}}} = {rf_prefactor:.2f} N^{{{rf_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rf_summary[:, 0],
yerr=rf_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_F^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'End-to-end distance [$\sigma$]')
plt.legend()
plt.show()
rg_summary = standard_error_mean_autocorrelation(rg_results, r'\operatorname{acf}(R_g)')
rg_exponent, rg_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rg_summary[:, 0]), 1)
rg_prefactor = np.exp(rg_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rg_prefactor * x**rg_exponent, '-',
label=rf'$R_g^{{\mathrm{{fit}}}} = {rg_prefactor:.2f} N^{{{rg_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rg_summary[:, 0],
yerr=rg_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_g^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Radius of gyration [$\sigma$]')
plt.legend()
plt.show()
rf2_rg2_ratio = rf_summary[:, 0]**2 / rg_summary[:, 0]**2
print(np.around(rf2_rg2_ratio, 1))
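The ratio printed above probes the ideal-chain prediction referred to in the task description ("For an ideal polymer"), namely:

```latex
\left< R_F^2 \right> = 6 \left< R_g^2 \right>
```

so values close to 6 indicate nearly Gaussian chain statistics, while excluded-volume chains deviate slightly.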
rh_summary = standard_error_mean_autocorrelation(rh_results, r'\operatorname{acf}(R_h)')
rh_exponent, rh_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rh_summary[:, 0]), 1)
rh_prefactor = np.exp(rh_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rh_prefactor * x**rh_exponent, '-',
label=rf'$R_h^{{\mathrm{{fit}}}} = {rh_prefactor:.2f} N^{{{rh_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rh_summary[:, 0],
yerr=rh_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_h^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Hydrodynamic radius [$\sigma$]')
plt.legend()
plt.show()
# cutoff for the diffusive regime (approximative)
tau_f_index = 40
# cutoff for the data series (larger lag times have larger variance due to undersampling)
tau_max_index = 70
plt.figure(figsize=(10, 10))
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
for index, (tau, msd) in enumerate(zip(com_pos_tau_results, com_pos_msd_results)):
plt.loglog(tau[1:120], msd[1:120], label=f'N={N_MONOMERS[index]}')
plt.loglog(2 * [tau[tau_f_index]], [0, np.max(com_pos_msd_results)], '-', color='black')
plt.text(tau[tau_f_index], np.max(com_pos_msd_results), r'$\tau_{f}$')
plt.loglog(2 * [tau[tau_max_index]], [0, np.max(com_pos_msd_results)], '-', color='black')
plt.text(tau[tau_max_index], np.max(com_pos_msd_results), r'$\tau_{max}$')
plt.legend()
plt.show()
diffusion_msd = np.zeros(len(N_MONOMERS))
plt.figure(figsize=(10, 8))
weights = com_pos_cor.sample_sizes()
for index, (tau, msd) in enumerate(zip(com_pos_tau_results, com_pos_msd_results)):
a, b = np.polyfit(tau[tau_f_index:tau_max_index], msd[tau_f_index:tau_max_index],
1, w=weights[tau_f_index:tau_max_index])
x = np.array([tau[1], tau[tau_max_index - 1]])
p = plt.plot(x, a * x + b, '-')
plt.plot(tau[1:tau_max_index], msd[1:tau_max_index], 'o', color=p[0].get_color(),
label=rf'$N=${N_MONOMERS[index]}')
diffusion_msd[index] = a / 6
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
plt.legend()
plt.show()
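The `a / 6` conversion above follows from the three-dimensional Einstein relation for the center-of-mass mean squared displacement, where $a$ is the fitted slope:

```latex
\mathrm{MSD}(\tau) = \left< \left| \vec{r}_c(\tau) - \vec{r}_c(0) \right|^2 \right> = 6 D \tau
\qquad \Longrightarrow \qquad D = \frac{a}{6}
```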
fig = plt.figure(figsize=(10, 6))
x, y, label, popt_msd = fitting_polymer_theory(POLYMER_MODEL, N_MONOMERS, diffusion_msd, rh_exponent)
plt.plot(x, y, '-', label=label)
plt.plot(N_MONOMERS, diffusion_msd, 'o', label=r'$D^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Diffusion coefficient [$\sigma^2/t$]')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
def exponential(x, a, b):
return a * np.exp(-b * x)
fig = plt.figure(figsize=(10, 8))
for N, tau, acf in zip(N_MONOMERS, com_vel_tau_results, com_vel_acf_results):
popt, _ = scipy.optimize.curve_fit(exponential, tau[:20], acf[:20])
x = np.linspace(tau[0], tau[20 - 1], 100)
p = plt.plot(x, exponential(x, *popt), '-')
plt.plot(tau[:20], acf[:20], 'o',
color=p[0].get_color(), label=rf'$R(\tau)$ for N = {N}')
plt.xlabel(r'$\tau$')
plt.ylabel('Autocorrelation function')
plt.legend()
plt.show()
diffusion_gk = []
fig = plt.figure(figsize=(10, 6))
for N, tau, acf in zip(N_MONOMERS, com_vel_tau_results, com_vel_acf_results):
x = np.arange(2, 28)
y = [1 / 3 * np.trapz(acf[:j], tau[:j]) for j in x]
plt.plot(tau[x], y, label=rf'$D(\tau_{{\mathrm{{int}}}})$ for $N = {N}$')
diffusion_gk.append(np.mean(y[10:]))
plt.xlabel(r'$\tau_{\mathrm{int}}$')
plt.ylabel(r'$\frac{1}{3} \int_{\tau=0}^{\tau_{\mathrm{int}}} \left<\vec{v_c}(\tau)\cdot\vec{v_c}(0)\right>\, \mathrm{d}\tau$')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
fig = plt.figure(figsize=(10, 8))
x, y, label, popt_gk = fitting_polymer_theory(POLYMER_MODEL, N_MONOMERS, diffusion_gk, rh_exponent)
plt.plot(x, y, '-', label=label)
plt.plot(N_MONOMERS, diffusion_gk, 'o', label=r'$D^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Diffusion coefficient [$\sigma^2/t$]')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
print(f'N\tMSD\t\tGK\t\tdifference')
for N, d_msd, d_gk in zip(N_MONOMERS, diffusion_msd, diffusion_gk):
print(f'{N}\t{d_msd:.2e}\t{d_gk:.2e}\t{np.ceil(np.abs(d_msd-d_gk) * 100 / d_msd):.0f}%')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Simulating the polymer
Step2: 3. Data analysis
Step3: 3.1 Distance-based macromolecular properties
Step4: Plot the radius of gyration $R_g$ of the polymer as a function of the number of monomers. What relation do you observe?
Step5: For an ideal polymer
Step6: Plot the hydrodynamic radius $R_h$ of the polymers as a function of the number of monomers. What relation do you observe?
Step7: 3.2 Diffusion coefficient using the MSD method
Step8: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
Step9: 3.3 Diffusion coefficient using the Green–Kubo method
Step10: The Green–Kubo integral for the diffusion coefficient takes the following form
Step11: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
Step12: Let us compare the values of the diffusion coefficients calculated with the MSD and Green–Kubo methods
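The equation referenced in Step10 is not reproduced in the text; consistent with the `1/3 * np.trapz(acf[:j], tau[:j])` integration in the code, the standard Green–Kubo form for the diffusion coefficient is:

```latex
D = \frac{1}{3} \int_{0}^{+\infty} \left< \vec{v}_c(\tau) \cdot \vec{v}_c(0) \right> \mathrm{d}\tau
```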
|
6,439
|
<ASSISTANT_TASK:>
Python Code:
# imports (the original import cell is omitted from this excerpt);
# `sc` and `sqlContext` are provided by the PySpark shell
import time
import math
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection
from nltk.corpus import stopwords
from pyspark.sql import Row
from pyspark.ml.feature import Word2Vec, PCA
from pyspark.ml.clustering import KMeans
import word2vecUtilities as wvu  # assumed module name for the author's local helper
t0 = time.time()
datapath = '/Users/jorgecastanon/Documents/github/w2v/data/tweets.gz'
tweets = sqlContext.read.json(datapath)
tweets.registerTempTable("tweets")
twr = tweets.count()
print "Number of tweets read: ", twr
# this line add ~7 seconds (from ~24.5 seconds to ~31.5 seconds)
# Number of tweets read: 239082
print "Elapsed time (seconds): ", time.time() - t0
#Elapsed time (seconds): 31.9646401405
filterPath = '/Users/jorgecastanon/Documents/github/w2v/data/filter.txt'
filter = pd.read_csv(filterPath,header=None)
filter.head()
# Construct SQL Command
t0 = time.time()
sqlString = "("
for substr in filter[0]: #iteration on the list of words to filter (at most 50-100 words)
sqlString = sqlString+"text LIKE '%"+substr+"%' OR "
sqlString = sqlString+"text LIKE '%"+substr.upper()+"%' OR "
sqlString=sqlString[:-4]+")"
sqlFilterCommand = "SELECT lang, text FROM tweets WHERE (lang = 'en') AND "+sqlString
# Query tweets in english that contain at least one of the keywords
tweetsDF = sqlContext.sql(sqlFilterCommand).cache()
twf = tweetsDF.count()
print "Number of tweets after filtering: ", twf
# last line add ~9 seconds (from ~0.72 seconds to ~9.42 seconds)
print "Elapsed time (seconds): ", time.time() - t0
print "Percetage of Tweets Used: ", float(twf)/twr
tweetsRDD = tweetsDF.select('text').rdd
def parseAndRemoveStopWords(text):
t = text[0].replace(";"," ").replace(":"," ").replace('"',' ').replace('-',' ')
t = t.replace(',',' ').replace('.',' ')
t = t.lower().split(" ")
stop = stopwords.words('english')
return [i for i in t if i not in stop]
tw = tweetsRDD.map(parseAndRemoveStopWords)
# map to df
twDF = tw.map(lambda p: Row(text=p)).toDF()
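A driver-side illustration of the same tokenize-and-filter logic, made self-contained with a tiny hardcoded stop list (instead of NLTK's) and with empty tokens dropped as well:

```python
# variant of parseAndRemoveStopWords: same replacements, hardcoded stop list
def parse_and_remove_stop_words(text, stop):
    t = text.replace(";", " ").replace(":", " ").replace('"', ' ').replace('-', ' ')
    t = t.replace(',', ' ').replace('.', ' ')
    # drop stop words and empty tokens left by the replacements
    return [w for w in t.lower().split(" ") if w and w not in stop]

stop = {'the', 'is', 'on'}
print(parse_and_remove_stop_words('The dog is on the porch.', stop))  # ['dog', 'porch']
```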
# default minCount = 5 (we may need to try something larger: 20-100 to reduce cost)
# default vectorSize = 100 (we may want to keep default)
t0 = time.time()
word2Vec = Word2Vec(vectorSize=100, minCount=5, inputCol="text", outputCol="result")
modelW2V = word2Vec.fit(twDF)
wordVectorsDF = modelW2V.getVectors()
print "Elapsed time (seconds) to train Word2Vec: ", time.time() - t0
print sc.version
vocabSize = wordVectorsDF.count()
print "Vocabulary Size: ", vocabSize
topN = 13
synonymsDF = modelW2V.findSynonyms('christmas', topN).toPandas()
synonymsDF
synonymsDF = modelW2V.findSynonyms('dog', 5).toPandas()
synonymsDF
dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features')
numComponents = 3
pca = PCA(k = numComponents, inputCol = 'features', outputCol = 'pcaFeatures')
model = pca.fit(dfW2V)
dfComp = model.transform(dfW2V).select("pcaFeatures")
word = 'christmas'
nwords = 200
#############
r = wvu.topNwordsToPlot(dfComp,wordVectorsDF,word,nwords)
############
fs=20 #fontsize
w = r['word']
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
height = 10
width = 10
fig.set_size_inches(width, height)
ax.scatter(r['X'], r['Y'], r['Z'], color='red', s=100, marker='o', edgecolors='black')
for i, txt in enumerate(w):
if(i<7):
ax.text(r['X'].ix[i],r['Y'].ix[i],r['Z'].ix[i], '%s' % (txt), size=20, zorder=1, color='k')
ax.set_xlabel('1st. Component', fontsize=fs)
ax.set_ylabel('2nd. Component', fontsize=fs)
ax.set_zlabel('3rd. Component', fontsize=fs)
ax.set_title('Visualization of Word2Vec via PCA', fontsize=fs)
ax.grid(True)
plt.show()
t0=time.time()
K = int(math.floor(math.sqrt(float(vocabSize)/2)))
# K ~ sqrt(n/2) this is a rule of thumb for choosing K,
# where n is the number of words in the model
# feel free to choose K with a fancier algorithm
dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features')
kmeans = KMeans(k=K, seed=1)
modelK = kmeans.fit(dfW2V)
labelsDF = modelK.transform(dfW2V).select('prediction').withColumnRenamed('prediction','labels')
print "Number of Clusters (K) Used: ", K
print "Elapsed time (seconds) :", time.time() - t0
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read Keywords
Step2: Use Spark SQL to Filter Tweets
Step3: Parse Tweets and Remove Stop Words
Step4: Word2Vec
Step5: Find top N closest words
Step6: As Expected, Unrelated terms are Not Accurate
Step7: PCA on Top of Word2Vec using DF (spark.ml)
Step8: 3D Visualization
Step9: K-means on top of Word2Vec using DF (spark.ml)
|
6,440
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib notebook
from keras.datasets import mnist
# load data...
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# check dimensions...
print('Train: ', X_train.shape, y_train.shape)
print('Test: ', X_test.shape, y_test.shape)
# select a number [0, 60000)...
idx = 1000
# plot image...
plt.figure()
plt.title('Number: %s'%y_train[idx])
plt.imshow(X_train[idx], cmap='gray')
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1).astype('float32') / 255.
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1).astype('float32') / 255.
print(X_train.shape)
# import the to_categorical function that does the one-hot encoding...
from keras.utils import to_categorical
# encode both training and testing data...
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
y_train[0]
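What `to_categorical` does can be sketched in plain numpy — index rows of an identity matrix by the integer labels (illustrative only):

```python
import numpy as np

def one_hot(labels, num_classes):
    # each label selects the matching row of an identity matrix
    return np.eye(num_classes)[labels]

encoded = one_hot(np.array([5, 0]), 10)
print(encoded)
```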
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten
img_shape = X_train[0].shape
print(img_shape)
model = Sequential()
# Convolutional Section...
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=img_shape))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPool2D((2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(64, (3, 3), activation='relu', input_shape=img_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPool2D((2, 2)))
model.add(Dropout(rate=0.25))
# Fully Connected Section...
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(rate=0.25))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
n_epochs = 2
model.fit(X_train, y_train, batch_size=32, epochs=n_epochs, verbose=True)
loss, accuracy = model.evaluate(X_test, y_test, batch_size=32)
print('Test Accuracy: ', accuracy)
# save model for retrieval at later date...
model.save('./MNIST_CNN')
import cv2
import numpy as np
def record_location(event, x, y, flags, param):
'''callback function that draws a circle at the point x, y...'''
if flags == cv2.EVENT_FLAG_LBUTTON and event == cv2.EVENT_MOUSEMOVE:
cv2.circle(img, (x,y), 10, (255, 255, 255), -1)
img = np.zeros((256, 256, 3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image', record_location)
while(1):
cv2.imshow('image',img)
k = cv2.waitKey(1) & 0xFF
    if k == 27:  # press 'esc' to close the window
        break
cv2.destroyAllWindows()
# copy one color channel and normalize values...
_img = img[:,:,0] / 255.0
# resize image to (28, 28)
_img = cv2.resize(_img, (28, 28), interpolation=cv2.INTER_AREA).reshape(1, 28, 28, 1)
p = model.predict(_img)
print(p)
plt.figure()
plt.title('Guess: %s' %p.argmax())
plt.imshow(_img[0][:,:,0], cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the MNIST dataset using the keras api
Step2: Looks like we have 60k images of 28, 28 pixels. These images are single-channel, i.e. black and white... If these were color images, then we would see dimensions of (60000, 28, 28, 3)... 3 channels for Red-Green-Blue (RGB) or Blue-Green-Red (BGR), depending on the order of the color channels...
Step3: Image Processing...
Step4: now we have explicitly created a one-channel dataset... and normalized it between [0, 1]... alternatively, you might normalize it more correctly as Gaussian distributed about zero with a variance of one... this would help with training but for this example, as you'll see, it doesn't really matter...
Step5: Build CNN Model...
Step6: Train model...
Step7: Build interactive notepad...
Step8: The code below will create an OpenCV popup window... the window can be closed using the 'esc' key... and we can draw in the window by holding the left-mouse button and moving the mouse within the window...
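Step4 mentions zero-mean, unit-variance normalization as an alternative to `[0, 1]` scaling; a minimal sketch on a synthetic stand-in array (not the actual MNIST data):

```python
import numpy as np

# standardize pixel values to zero mean and unit variance
rng = np.random.RandomState(0)
X = rng.randint(0, 256, size=(100, 28, 28, 1)).astype('float32')
X_std = (X - X.mean()) / X.std()
print(X_std.mean(), X_std.std())
```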
|
6,441
|
<ASSISTANT_TASK:>
Python Code:
from ambry import get_library
l = get_library()
b = l.bundle('cdph.ca.gov-hci-0.0.2')
w = b.warehouse('hci_counties')
w.clean()
print w.dsn
w.query("""
-- Get only counties in California
CREATE VIEW geo AS SELECT gvid, name AS county_name, geometry FROM census.gov-tiger-2015-counties
WHERE statefp = 6;
-- Get only records for all race/ethnicities
CREATE VIEW hf_total AS SELECT gvid, mrfei FROM cdph.ca.gov-hci-healthy_food-county
WHERE race_eth_name = 'Total';
-- Get only records for all race/ethnicities
CREATE VIEW aq_total AS SELECT gvid, pm25_concentration FROM cdph.ca.gov-hci-air_quality-county
WHERE race_eth_name = 'Total';
-- The poverty table has a lot of other categories, for report year and type of poverty
CREATE VIEW pr_total AS SELECT gvid, percent FROM cdph.ca.gov-hci-poverty_rate-county
WHERE race_eth_name = 'Total' AND reportyear='2008-2010' AND poverty='Overall';
""").close()
sql = """
SELECT county_name, mrfei, pm25_concentration, percent as percent_poverty FROM geo as counties
JOIN hf_total ON hf_total.gvid = counties.gvid
JOIN aq_total ON aq_total.gvid = counties.gvid
JOIN pr_total ON pr_total.gvid = counties.gvid;
"""
df = w.dataframe(sql)
df.head()
df.corr()
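As a reminder of what `df.corr()` returns, a toy example with synthetic numbers (not the HCI data); perfectly linear columns give a correlation of 1:

```python
import pandas as pd

# pairwise Pearson correlations of two perfectly linear columns
toy = pd.DataFrame({'percent_poverty': [10, 20, 30, 40],
                    'pm25_concentration': [5.0, 6.0, 7.0, 8.0]})
print(toy.corr())
```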
%matplotlib inline
sql = """
SELECT county_name, mrfei, pm25_concentration, percent as percent_poverty, geometry FROM geo as counties
LEFT JOIN hf_total ON hf_total.gvid = counties.gvid
LEFT JOIN aq_total ON aq_total.gvid = counties.gvid
LEFT JOIN pr_total ON pr_total.gvid = counties.gvid;
"""
w.geoframe(sql).plot(column='pm25_concentration')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: First, create a set of views to limit the individual indicators to one record per county. The Ambry SQL parser is
Step4: Now we can run a query to join the indicators.
Step6: Plot the PM2.5 Concentration, a measure of particulate air pollution.
|
6,442
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
%matplotlib inline
names = ['year', 'mager9', 'restatus', 'mbrace', 'mhisp_r',
'mar_p', 'dmar', 'meduc', 'fagerrec11', 'fbrace', 'fhisp_r', 'feduc',
'lbo_rec', 'previs_rec', 'wic', 'height', 'bmi_r', 'pay_rec', 'sex']
colspecs = [(15, 18),
(93, 93),
(138, 138),
(143, 143),
(148, 148),
(152, 152),
(153, 153),
(155, 155),
(186, 187),
(191, 191),
(195, 195),
(197, 197),
(212, 212),
(272, 273),
(281, 281),
(555, 556),
(533, 533),
(413, 413),
(436, 436),
]
colspecs = [(start-1, end) for start, end in colspecs]
df = None
filename = 'Nat2012PublicUS.r20131217.gz'
#df = pd.read_fwf(filename, compression='gzip', header=None, names=names, colspecs=colspecs)
#df.head()
# store the dataframe for faster loading
#store = pd.HDFStore('store.h5')
#store['births2013'] = df
#store.close()
# load the dataframe
store = pd.HDFStore('store.h5')
df = store['births2013']
store.close()
def series_to_ratio(series):
    """Takes a boolean series and computes the sex ratio (boys per 100 girls)."""
boys = np.mean(series)
return np.round(100 * boys / (1-boys)).astype(int)
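A quick sanity check of `series_to_ratio` on synthetic data (illustrative only):

```python
import numpy as np

def series_to_ratio(series):
    """Takes a boolean series and computes the sex ratio (boys per 100 girls)."""
    boys = np.mean(series)
    return np.round(100 * boys / (1 - boys)).astype(int)

# 105 boys and 100 girls should give a ratio of 105
sample = np.array([True] * 105 + [False] * 100)
print(series_to_ratio(sample))  # 105
```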
df['boy'] = (df.sex=='M').astype(int)
df.boy.value_counts().sort_index()
df.year.value_counts().sort_index()
df.mager9.value_counts().sort_index()
var = 'mager9'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.mager9.isnull().mean()
df['youngm'] = df.mager9<=2
df['oldm'] = df.mager9>=7
df.youngm.mean(), df.oldm.mean()
df.restatus.value_counts().sort_index()
var = 'restatus'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.mbrace.value_counts().sort_index()
var = 'mbrace'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.mhisp_r.replace([9], np.nan, inplace=True)
df.mhisp_r.value_counts().sort_index()
def copy_null(df, oldvar, newvar):
df.loc[df[oldvar].isnull(), newvar] = np.nan
df['mhisp'] = df.mhisp_r > 0
copy_null(df, 'mhisp_r', 'mhisp')
df.mhisp.isnull().mean(), df.mhisp.mean()
var = 'mhisp'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.dmar.value_counts().sort_index()
var = 'dmar'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.mar_p.replace(['U'], np.nan, inplace=True)
df.mar_p.replace(['X'], 'Y', inplace=True)
df.mar_p.value_counts().sort_index()
var = 'mar_p'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.meduc.replace([9], np.nan, inplace=True)
df.meduc.value_counts().sort_index()
var = 'meduc'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['lowed'] = df.meduc <= 2
copy_null(df, 'meduc', 'lowed')
df.lowed.isnull().mean(), df.lowed.mean()
df.fagerrec11.replace([11], np.nan, inplace=True)
df.fagerrec11.value_counts().sort_index()
var = 'fagerrec11'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['youngf'] = df.fagerrec11<=2
copy_null(df, 'fagerrec11', 'youngf')
df.youngf.isnull().mean(), df.youngf.mean()
df['oldf'] = df.fagerrec11>=8
copy_null(df, 'fagerrec11', 'oldf')
df.oldf.isnull().mean(), df.oldf.mean()
df.fbrace.replace([9], np.nan, inplace=True)
df.fbrace.value_counts().sort_index()
var = 'fbrace'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.fhisp_r.replace([9], np.nan, inplace=True)
df.fhisp_r.value_counts().sort_index()
df['fhisp'] = df.fhisp_r > 0
copy_null(df, 'fhisp_r', 'fhisp')
df.fhisp.isnull().mean(), df.fhisp.mean()
var = 'fhisp'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.feduc.replace([9], np.nan, inplace=True)
df.feduc.value_counts().sort_index()
var = 'feduc'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.lbo_rec.replace([9], np.nan, inplace=True)
df.lbo_rec.value_counts().sort_index()
var = 'lbo_rec'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['highbo'] = df.lbo_rec >= 5
copy_null(df, 'lbo_rec', 'highbo')
df.highbo.isnull().mean(), df.highbo.mean()
df.previs_rec.replace([12], np.nan, inplace=True)
df.previs_rec.value_counts().sort_index()
df.previs_rec.mean()
df['previs'] = df.previs_rec - 7
var = 'previs'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['no_previs'] = df.previs_rec <= 1
copy_null(df, 'previs_rec', 'no_previs')
df.no_previs.isnull().mean(), df.no_previs.mean()
df.wic.replace(['U'], np.nan, inplace=True)
df.wic.value_counts().sort_index()
var = 'wic'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.height.replace([99], np.nan, inplace=True)
df.height.value_counts().sort_index()
df['mshort'] = df.height<60
copy_null(df, 'height', 'mshort')
df.mshort.isnull().mean(), df.mshort.mean()
df['mtall'] = df.height>=70
copy_null(df, 'height', 'mtall')
df.mtall.isnull().mean(), df.mtall.mean()
var = 'mshort'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
var = 'mtall'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.bmi_r.replace([9], np.nan, inplace=True)
df.bmi_r.value_counts().sort_index()
var = 'bmi_r'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['obese'] = df.bmi_r >= 4
copy_null(df, 'bmi_r', 'obese')
df.obese.isnull().mean(), df.obese.mean()
df.pay_rec.replace([9], np.nan, inplace=True)
df.pay_rec.value_counts().sort_index()
var = 'pay_rec'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.sex.value_counts().sort_index()
def logodds_to_ratio(logodds):
    """Convert log odds to a sex ratio (boys per 100 girls)."""
odds = np.exp(logodds)
return 100 * odds
def summarize(results):
    """Summarize parameters in terms of birth ratio."""
inter_or = results.params['Intercept']
inter_rat = logodds_to_ratio(inter_or)
for value, lor in results.params.iteritems():
if value=='Intercept':
continue
rat = logodds_to_ratio(inter_or + lor)
code = '*' if results.pvalues[value] < 0.05 else ' '
print('%-20s %0.1f %0.1f' % (value, inter_rat, rat), code)
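A standalone check of `logodds_to_ratio` (illustrative): a log odds of `ln(1.05)` should map back to 105 boys per 100 girls.

```python
import numpy as np

def logodds_to_ratio(logodds):
    """Convert log odds to a sex ratio (boys per 100 girls)."""
    odds = np.exp(logodds)
    return 100 * odds

print(round(logodds_to_ratio(np.log(1.05))))  # 105
```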
model = smf.logit('boy ~ mager9', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ youngm + oldm', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(restatus)', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(mbrace)', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ mhisp', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(mar_p)', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(dmar)', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ meduc', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ lowed', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ fagerrec11', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ youngf + oldf', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(fbrace)', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ fhisp', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ feduc', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ lbo_rec', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ highbo', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ previs', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ no_previs + previs', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ wic', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ height', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ mtall + mshort', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ bmi_r', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ obese', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ C(pay_rec)', data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + C(mbrace) + mhisp')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + mar_p')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + dmar')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + lowed')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + highbo')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + wic')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + obese')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + C(pay_rec)')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + dmar')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + C(pay_rec)')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs + mager9')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs + fagerrec11')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
white = df[(df.mbrace==1) & (df.fbrace==1)]
len(white)
var = 'previs'
white[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
formula = ('boy ~ previs + no_previs')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
inter = results.params['Intercept']
slope = results.params['previs']
inter, slope
previs = np.arange(-5, 5)
logodds = inter + slope * previs
odds = np.exp(logodds)
odds * 100
formula = ('boy ~ dmar')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ lowed')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ highbo')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ wic')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ obese')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(pay_rec)')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ mager9')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ youngm + oldm')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ youngf + oldf')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Trivers-Willard
Step3: I have to recode sex as 0 or 1 to make logit happy.
Step4: All births are from 2014.
Step5: Mother's age
Step6: Residence status (1=resident)
Step7: Mother's race (1=White, 2=Black, 3=American Indian or Alaskan Native, 4=Asian or Pacific Islander)
Step8: Mother's Hispanic origin (0=Non-Hispanic)
Step9: Marital status (1=Married)
Step10: Paternity acknowledged, if unmarried (Y=yes, N=no, X=not applicable, U=unknown).
Step11: Mother's education level
Step12: Father's age, in 10 ranges
Step13: Father's race
Step14: Father's Hispanic origin (0=non-hispanic, other values indicate country of origin)
Step15: Father's education level
Step16: Live birth order.
Step17: Number of prenatal visits, in 11 ranges
Step18: Whether the mother is eligible for food stamps
Step19: Mother's height in inches
Step20: Mother's BMI in 6 ranges
Step21: Payment method (1=Medicaid, 2=Private insurance, 3=Self pay, 4=Other)
Step22: Sex of baby
Step25: Regression models
Step26: Now I'll run models with each variable, one at a time.
Step27: The estimated ratio for young mothers is higher, and the ratio for older mothers is lower, but neither is statistically significant.
Step28: Residence status does not have predictive value either.
Step29: Mother's race seems to have predictive value. Relative to whites, black and Native American mothers have more girls; Asians have more boys.
Step30: Hispanic mothers have more girls.
Step31: If the mother is married or unmarried but paternity is acknowledged, the sex ratio is higher (more boys)
Step32: Being unmarried predicts more girls.
Step33: Each level of mother's education predicts a small increase in the probability of a boy.
Step34: Older fathers are slightly more likely to have girls (but this apparent effect could be due to chance).
Step35: Predictions based on father's race are similar to those based on mother's race
Step36: If the father is Hispanic, that predicts more girls.
Step37: Father's education level might predict more boys, but the apparent effect could be due to chance.
Step38: Babies with high birth order are slightly more likely to be girls.
Step39: Strangely, prenatal visits are associated with an increased probability of girls.
Step40: The effect seems to be non-linear at zero, so I'm adding a boolean for no prenatal visits.
Step41: If the mother qualifies for food stamps, she is more likely to have a girl.
Step42: Mother's height seems to have no predictive value.
Step43: Mothers with higher BMI are more likely to have girls.
Step44: If payment was made by Medicaid, the baby is more likely to be a girl. Private insurance, self-payment, and other payment method are associated with more boys.
Step45: Adding controls
Step46: In fact, once we control for father's race and Hispanic origin, almost every other variable becomes statistically insignificant, including acknowledged paternity.
Step47: Being married still predicts more boys.
Step48: The effect of education disappears.
Step49: The effect of birth order disappears.
Step50: WIC is no longer associated with more girls.
Step51: The effect of obesity disappears.
Step52: The effect of payment method is diminished, but self-payment is still associated with more boys.
Step53: But the effect of prenatal visits is still a strong predictor of more girls.
Step54: And the effect is even stronger if we add a boolean to capture the nonlinearity at 0 visits.
Step55: More controls
Step56: The effect of payment method disappears.
Step57: Here's a version with the addition of a boolean for no prenatal visits.
Step58: Now, surprisingly, the mother's age has a small effect.
Step59: So does the father's age. But both age effects are small and borderline significant.
Step60: What's up with prenatal visits?
Step61: And compute sex ratios for each level of previs
Step62: The effect holds up. People with fewer than average prenatal visits are substantially more likely to have boys.
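The log-odds-to-ratio conversion used in Steps 60-61 can be wrapped in a small helper. The function name and the coefficient values below are illustrative choices, not the fitted estimates from the text:

```python
import numpy as np

def logit_to_ratio(inter, slope, x):
    # Convert logit-model parameters to boys per 100 girls, as done above:
    # log-odds -> odds via exp, then scale to "per 100 girls".
    logodds = inter + slope * np.asarray(x, dtype=float)
    odds = np.exp(logodds)      # odds of a boy relative to a girl
    return odds * 100           # boys per 100 girls

# illustrative coefficients, not the fitted values from the text
print(logit_to_ratio(0.05, -0.01, [-5, 0, 5]))
```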
|
6,443
|
<ASSISTANT_TASK:>
Python Code:
lasso = Lasso(random_state=1, max_iter=10000)
lasso.fit(X_train_std, y_train)
rmse(y_test, lasso.predict(X_test_std))
scores = cross_val_score(cv=10, estimator = lasso, scoring="neg_mean_squared_error", X=X_train_std, y = y_train)
scores = np.sqrt(-scores)
scores
from sklearn import linear_model
from sklearn import metrics
from sklearn import tree
from sklearn import ensemble
from sklearn import neighbors
import xgboost as xgb
rs = 1
estimators = {
#'Linear': linear_model.LinearRegression(),
'Ridge': linear_model.Ridge(random_state=rs, max_iter=10000),
'Lasso': linear_model.Lasso(random_state=rs, max_iter=10000),
'ElasticNet': linear_model.ElasticNet(random_state=rs, max_iter=10000),
'BayesRidge': linear_model.BayesianRidge(),
'OMP': linear_model.OrthogonalMatchingPursuit(),
'DecisionTree': tree.DecisionTreeRegressor(max_depth=10, random_state=rs),
'RandomForest': ensemble.RandomForestRegressor(random_state=rs),
'KNN': neighbors.KNeighborsRegressor(n_neighbors=5),
'GradientBoostingRegressor': ensemble.GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.01, loss="ls", random_state=rs),
'xgboost': xgb.XGBRegressor(max_depth=10)
}
errvals = {}
for k in estimators:
e = estimators[k]
e.fit(X_train_std, y_train)
err = np.sqrt(metrics.mean_squared_error(y_test, e.predict(X_test_std)))
errvals[k] = err
result = pd.Series(errvals).sort_values()
result.plot.barh(width = 0.8)
for y, error in enumerate(result):
plt.text(x = 0.01, y = y - 0.1, s = "%.3f" % error, fontweight='bold', color = "white")
plt.title("Performance comparison of algorithms")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This rmse score seems reasonable. Find cross validation scores.
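The `rmse` helper called in the code above is not defined in this excerpt; the following is an assumed definition, not necessarily the author's:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-squared error; assumed definition of the helper used above.
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean(diff ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.1547
```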
|
6,444
|
<ASSISTANT_TASK:>
Python Code:
array1 = np.array([1, 2, 3, 4])
array2 = np.array([[1, 2], [3, 4]])
print type(array1), '\n', array1
print type(array2), '\n', array2
array3 = np.arange(1, 4)
print array3, type(array3)
# 5 evenly spaced points over the range 0 to 10
array4 = np.linspace(0, 10, 5)
print array4
print np.zeros((3, 5))
print np.zeros(5)
# (n1, n2, x, y): builds n2 (x, y) matrices of ones, then n1 groups of those (adding dimensions)
print np.ones((2, 3, 2, 3))
print np.ones(5)
# np.random.rand(n)
## returns n floats in [0, 1) drawn from a uniform distribution
print np.random.rand(3)
# np.random.randn(n)
## returns n floats drawn from the standard normal distribution
print np.random.randn(3)
a = np.random.rand()
print a
np.random.seed(0)
print np.random.rand()
np.random.seed(7)
print np.random.rand()
array1 = np.array([1, 2, 3, 4])
array2 = np.array([[1, 2], [3, 4], [5, 6]])
print array1[0]
print array1[1:-1]
print array2
# print the first row
print array2[0]
# element at row 1, column 2
print array2[0, 1]
# rows 2 through the end, columns 1 to 2
print array2[1:, :2]
# all rows, column 2
print array2[:, 1]
# all rows, from column 2 onward (as a range)
print array2[:, 1:]
# np.random.randint(x, y, n): returns n integers in [x, y)
array = np.random.randint(0, 20, 10)
print array, type(array)
# select the even numbers
even_mask = (array % 2 == 0)
print array[even_mask]
print array[array % 2 == 0]
array = np.arange(1, 6) * 5
print array
array2 = array ** 2
print array2
print array2 ** 0.5
array1 = np.arange(0, 20, 2) + 2
array2 = np.arange(1, 11)
print 'array1:', array1, 'array2:', array2
print '-'*100
print 'array1 - array2', array1 - array2
print 'array1 * array2', array1 * array2
print 'array1 * array1', array1 * array1
array1 = np.array([[1, 1], [1, 1]])
array2 = np.array([[2, 2], [2, 2]])
print array1 * array2 # element-wise multiplication
print array1.dot(array2) # matrix multiplication (dot product)
array = np.random.randint(1, 10, size = (4,4))
print array
print np.all(array < 7)
print np.any(array % 7 == 0)
array = np.array([[1, 2, 3], [7, 8, 9]])
print array
print array.ravel()
array = np.arange(1, 11)
print array
array2 = array.reshape(2, 5)
print array2
# numpy methods for descriptive statistics
print array
print 'min:', np.min(array) # minimum
print 'mean:', np.mean(array) # mean
print 'median:', np.median(array) # median
print 'max:', np.max(array) # maximum
print 'std:', np.std(array) # standard deviation
print 'argmin:', np.argmin(array) # index of the minimum
print 'argmax:', np.argmax(array) # index of the maximum
print 'sum:', np.sum(array) # sum
print 'sqrt:', np.sqrt(array) # square root
#bit-wise XOR
a = 10
print 10 ^ 2
print 17 ^ 8
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: arange
Step2: linspace(start, end, n)
Step3: np.zeros((x, y))
Step4: np.ones((x, y))
Step5: random sub package
Step6: rand() with no arguments returns a single float
Step7: seed(n)
Step8: slicing
Step9: array masking
Step10: operations
Step11: logical operator
Step12: ravel
Step13: .reshape(x, y)
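The masking shown in Step 9 also supports in-place assignment, which the tutorial does not cover; a small sketch (note this sketch uses Python 3 `print`, unlike the Python 2 cells above):

```python
import numpy as np

# Boolean masks can also be used for in-place assignment,
# complementing the selection examples in Step 9.
a = np.arange(10)       # [0 1 2 3 4 5 6 7 8 9]
a[a % 2 == 0] = -1      # replace every even entry
print(a)                # -> [-1  1 -1  3 -1  5 -1  7 -1  9]
```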
|
6,445
|
<ASSISTANT_TASK:>
Python Code:
# import the required libraries
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets
from __future__ import unicode_literals
matplotlib.rc('font', family='Garuda')
np.random.seed(2)
iris = datasets.load_iris()
# X is a 150x2 array, like the table above
X = iris.data[:, [0, 2]]
# Y is the flower type: 0=Setosa, 1=Versicolour, 2=Virginica
Y = iris.target
# plot the data
#plt.figure(1, figsize=(7, 5))
plt.plot(X[Y==0, 0], X[Y==0, 1], 'or', label='Setosa')
plt.plot(X[Y==1, 0], X[Y==1, 1], 'og', label='Versicolour')
plt.plot(X[Y==2, 0], X[Y==2, 1], 'ob', label='Virginica')
plt.xlabel('ความยาวใบ')
plt.ylabel('ความยาวกลีบ')
plt.legend(loc='best')
# generate synthetic data where samples from different classes overlap
nc = 50
cov = np.array([[1, 0.7], [0.3, 0.7]])
X_toy_0 = np.random.randn(nc, 2).dot(cov)
X_toy_1 = np.random.randn(nc, 2).dot(np.diag([2, 0.4])) + np.array([1, 0])
# plot the synthetic data
plt.plot(X_toy_0[:, 0], X_toy_0[:, 1], 'or')
plt.plot(X_toy_1[:, 0], X_toy_1[:, 1], 'og')
from mpl_toolkits.mplot3d import Axes3D
X_toy_d3_0 = np.random.randn(nc)*0.5 + 2.0;
X_toy_d3_1 = np.random.randn(nc)*0.5 - 2.0;
# plot the synthetic data in 3D
fig = plt.figure(figsize=(7,5))
ax = fig.add_subplot(111, projection='3d')
ax.view_init(20, -100)
ax.scatter(X_toy_0[:, 0], X_toy_0[:, 1], X_toy_d3_0, c='r')
ax.scatter(X_toy_1[:, 0], X_toy_1[:, 1], X_toy_d3_1, c='g')
xte = np.array([5.5, 3.1])
plt.plot(X[Y==0, 0], X[Y==0, 1], 'or', label='Setosa')
plt.plot(X[Y==1, 0], X[Y==1, 1], 'og', label='Versicolour')
plt.plot(X[Y==2, 0], X[Y==2, 1], 'ob', label='Virginica')
plt.plot(xte[0], xte[1], '*k', label='ตัวอย่างทดสอบ', markersize=12)
plt.xlabel('ความยาวใบ')
plt.ylabel('ความยาวกลีบ')
plt.legend(loc='best')
xte = np.array([7.5, 5.5])
plt.plot(X[Y==0, 0], X[Y==0, 1], 'or', label='Setosa')
plt.plot(X[Y==1, 0], X[Y==1, 1], 'og', label='Versicolour')
plt.plot(X[Y==2, 0], X[Y==2, 1], 'ob', label='Virginica')
plt.plot(xte[0], xte[1], '*k', label='ตัวอย่างทดสอบ', markersize=12)
plt.xlabel('ความยาวใบ')
plt.ylabel('ความยาวกลีบ')
plt.legend(loc='best')
def classify_1nn(x, X, Y):
x = x.reshape(1, 2)
dists = np.sum((X - x)**2, 1)
i = np.argmin(dists)
return Y[i]
colours = ['r', 'g', 'b']
plt.figure()
# cox = x coordinate
for cox in np.linspace(4.0, 8.0, 50):
for coy in np.linspace(1.0, 7.0, 50):
xte = np.array([cox, coy])
yhat = classify_1nn(xte, X, Y)
plt.plot(xte[0], xte[1], colours[yhat], marker='s', markersize=6, alpha=0.2)
# plot training data
plt.plot(X[Y==0, 0], X[Y==0, 1], 'or', label='Setosa')
plt.plot(X[Y==1, 0], X[Y==1, 1], 'og', label='Versicolour')
plt.plot(X[Y==2, 0], X[Y==2, 1], 'ob', label='Virginica')
plt.xlabel('ความยาวใบ')
plt.ylabel('ความยาวกลีบ')
plt.legend(loc='best', framealpha=1)
X_toy = np.vstack((X_toy_0, X_toy_1))
Y_toy = np.hstack((np.zeros(nc), np.ones(nc))).astype(int)
plt.figure()
xmin = np.min(X_toy[:, 0])
xmax = np.max(X_toy[:, 0])
ymin = np.min(X_toy[:, 1])
ymax = np.max(X_toy[:, 1])
# cox = x coordinate
for cox in np.linspace(xmin, xmax, 50):
for coy in np.linspace(ymin, ymax, 50):
xte = np.array([cox, coy])
yhat = classify_1nn(xte, X_toy, Y_toy)
plt.plot(xte[0], xte[1], colours[yhat], marker='s', markersize=6, alpha=0.2)
# plot the synthetic data
plt.plot(X_toy_0[:, 0], X_toy_0[:, 1], 'or')
plt.plot(X_toy_1[:, 0], X_toy_1[:, 1], 'og')
plt.xlim([xmin, xmax])
plt.ylim([ymin, ymax])
A = (np.random.rand(4,4)*255).astype(int)
A
import matplotlib.cm as cm
plt.imshow(A, interpolation='none', cmap=cm.Greys_r)
plt.imshow(A.reshape(1, -1), interpolation='none', cmap=cm.Greys_r)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see that samples of each flower type cluster together, which means this classification problem is not very hard. One more note: compare this with the following data (randomly generated synthetic data).
Step2: In this problem the data of the two classes overlap, so classification is harder (the error rate will be higher). In such a case, if possible, we should add variables, going from 2 dimensions to 3 or more, so that the classes spread apart. For the flower example, extra variables to collect might be stem length, flower color, and so on. Suppose we collect one more variable and plot in 3D; the data might look like this.
Step3: This is easier to separate than the 2D data. Back to the original flower classification problem: here we don't need extra variables, because the plot shows the data don't overlap much.
Step4: The KNN method works as follows.
Step5: This test example is classified as blue (Virginica) because it is closest to a blue point in the training set. Now imagine sliding this test point (the black dot) over every possible position, without moving the training points, and coloring each position by the class the test point would be assigned there. We get roughly this.
Step6: The picture shows the decision boundary of KNN. In general, the larger K is, the smoother the decision boundary: it does not change color (flower class) abruptly when the test point moves slightly. With a small K, e.g. K=1, the decision boundary can be more complex or jagged. This dataset may not show very clearly what happens for small K, because samples of different classes are already well separated.
Step7: Notice that the decision boundary is quite jagged; with a larger K it would be smoother. If you want to create your own 2D data and choose K freely, try this page: a box appears, and every click on the area creates a training example. You can switch the example's class with the red or blue button at the top; pressing ++ increases K by one and -- decreases it by one, and the decision boundary is redrawn after every button press. I suggest creating some data, pressing ++ several times, then -- several times, to compare how the decision boundary changes with K.
Step8: Converted to an image, this becomes the following.
Step9: The larger the value in a cell, the brighter it appears. To turn the image into a row of a table, just read off each value row by row and lay them out as a single row, like this.
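The K=1 rule in `classify_1nn` generalizes to larger K by majority vote among the K nearest training points; a minimal sketch (the function name and the `k` parameter are choices made here, not from the original):

```python
import numpy as np
from collections import Counter

def classify_knn(x, X, Y, k=5):
    # squared distances from the test point to every training point
    dists = np.sum((X - x.reshape(1, -1)) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]        # indices of the k closest points
    votes = Counter(Y[nearest])            # count class labels among them
    return votes.most_common(1)[0][0]      # majority class wins

# toy check: three class-0 points near the origin, two class-1 points far away
Xt = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 6.0]])
Yt = np.array([0, 0, 0, 1, 1])
print(classify_knn(np.array([0.05, 0.05]), Xt, Yt, k=3))  # -> 0
```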
|
6,446
|
<ASSISTANT_TASK:>
Python Code:
from lightning import Lightning
from numpy import random
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
states = ["NA", "AK", "AL", "AR", "AZ", "CA", "CO","CT",
"DC","DE","FL","GA","HI","IA","ID","IL","IN",
"KS","KY","LA","MA","MD","ME","MI","MN","MO",
"MS","MT","NC","ND","NE","NH","NJ","NM","NV",
"NY","OH","OK","OR","PA","RI","SC","SD","TN",
"TX","UT","VA","VI","VT","WA","WI","WV","WY"]
values = random.randn(len(states))
lgn.map(states, values, colormap='Purples')
values = (random.rand(len(states)) * 5).astype('int')
lgn.map(states, values, colormap='Pastel1')
values = (random.rand(len(states)) * 5).astype('int')
lgn.map(states, values, colormap='Lightning')
countries = ['ISO', 'SLE', 'COD', 'CAF', 'TCD', 'AGO', 'GNB', 'GNQ', 'MLI', 'MWI',
'BDI', 'NGA', 'SOM', 'SSD', 'MOZ', 'CIV', 'CMR', 'GIN', 'BFA', 'AFG',
'ZMB', 'MRT', 'SWZ', 'LSO', 'TGO', 'BEN', 'COG', 'COM', 'LBR', 'PAK',
'UGA', 'NER', 'DJI', 'YEM', 'TZA', 'GMB', 'RWA', 'ETH', 'KEN', 'TJK',
'GHA', 'SEN', 'ERI', 'MMR', 'ZWE', 'ZAF', 'GAB', 'KHM', 'TLS', 'IND',
'TKM', 'PNG', 'HTI', 'LAO', 'UZB', 'STP', 'BOL', 'MDG', 'NPL', 'ESH',
'BGD', 'NAM', 'SLB', 'AZE', 'BTN', 'KIR', 'BWA', 'KGZ', 'FSM', 'IRQ',
'MAR', 'PRY', 'GUY', 'MNG', 'GTM', 'DZA', 'DOM', 'IDN', 'VUT', 'HND',
'PRK', 'KAZ', 'TTO', 'JAM', 'BRA', 'EGY', 'PHL', 'WSM', 'PSE', 'SUR',
'TON', 'GEO', 'CPV', 'NIC', 'ECU', 'ARM', 'PER', 'IRN', 'SLV', 'JOR',
'COL', 'TUN', 'VCT', 'CHN', 'FJI', 'PAN', 'VEN', 'LBY', 'MEX', 'TUR',
'ALB', 'ABW', 'VNM', 'BLZ', 'MDA', 'MDV', 'NCL', 'SYR', 'GUF', 'SAU',
'ARG', 'MUS', 'URY', 'UKR', 'ROU', 'MKD', 'LCA', 'THA', 'BRB', 'GUM',
'MNE', 'VIR', 'LKA', 'GRD', 'SYC', 'BHS', 'ATG', 'LBN', 'CRI', 'BGR',
'OMN', 'KWT', 'BIH', 'PYF', 'BHR', 'LVA', 'MTQ', 'QAT', 'CHL', 'PRI',
'GLP', 'ARE', 'USA', 'BLR', 'SVK', 'POL', 'LTU', 'MLT', 'HRV', 'MYT',
'REU', 'HUN', 'CAN', 'TWN', 'BRN', 'CUB', 'MAC', 'NZL', 'GBR', 'MYS',
'EST', 'KOR', 'AUS', 'CYP', 'GRC', 'CHE', 'NLD', 'ISR', 'DNK', 'BEL',
'AUT', 'IRL', 'DEU', 'FRA', 'ESP', 'ITA', 'PRT', 'CZE', 'NOR', 'SVN',
'FIN', 'JPN', 'SWE', 'LUX', 'SGP', 'ISL', 'HKG', 'FLK', 'SMR', 'TCA',
'VAT', 'RUS', 'GRL']
values = (random.rand(len(countries)) * 5).astype('int')
lgn.map(countries, values, colormap='Pastel1', width=900)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to server
Step2: <hr> US Map
Step3: Discrete values are automatically handled for appriopriate colormaps
Step4: Including our custom Lightning colormap
Step5: <hr> World Map
Step6: Now plot random values. We'll also make it bigger so it's easier to see.
|
6,447
|
<ASSISTANT_TASK:>
Python Code:
# imports
import pandas
import matplotlib.pyplot as plt
from timeit import default_timer as timer
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.grid_search import GridSearchCV
# load dataset from task 1
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)
# split-out dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
# specify parameter space and performance metric
max_n = 30
k = list(range(1, max_n + 1))
parameter_grid = {"n_neighbors": k}
scoring = "accuracy"
cross_val = 10
# parameter for performance test
max_jobs = 8
best_in = 3
# performance test
measurements = []
i = 1
while i <= max_jobs:
min_t = float("inf")
for j in range(best_in):
kneighbors = KNeighborsClassifier()
grid_search = GridSearchCV(kneighbors, parameter_grid, cv=cross_val, scoring=scoring, n_jobs=i)
start = timer()
grid_search.fit(X, y)
stop = timer()
min_t = min(min_t, stop - start)
measurements.append(min_t)
i += 1
fig = plt.figure()
fig.suptitle('Visualization of the runtime depending on the number of used jobs.')
plt.xticks(range(1, max_jobs + 1))
ax = fig.add_subplot(111)
ax.set_xlabel('used jobs')
ax.set_ylabel('runtime in seconds')
ax.plot(range(1, max_jobs + 1), measurements, 'ro')
plt.show()
neighbors = [s[0]["n_neighbors"] for s in grid_search.grid_scores_]
val_score = [s[1] for s in grid_search.grid_scores_]
fig = plt.figure()
fig.suptitle('Visualization of the precision depending on the used parameter n_neighbors.')
plt.xticks(range(1,max_n + 1))
ax = fig.add_subplot(111)
ax.set_xlabel('n_neighbors')
ax.set_ylabel('mean test score')
ax.plot(neighbors, val_score, 'ro')
plt.show()
max_score = max(val_score)
i = val_score.index(max_score)
n = neighbors[i]
print("Maximum precision:", max_score)
print("Is reached with:","n_neighbors =", n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we load the iris data from task 1 and split it into training and validation sets.
Step2: Then we specify our parameter space and performance metric.
Step3: Next we run a performance test on GridSearchCV. Therefor we search mulitple times to maximize the precision save the best time for later comparison. Each time we use a different number of jobs.
Step4: Finally we plot our results
|
6,448
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('YbNE3zhtsoo', width=800, height=450)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout
img_rows, img_cols = 28, 28
num_classes = 10
def data_prep(raw):
out_y = keras.utils.to_categorical(raw.label, num_classes)
num_images = raw.shape[0]
x_as_array = raw.values[:,1:]
x_shaped_array = x_as_array.reshape(num_images, img_rows, img_cols, 1)
out_x = x_shaped_array / 255
return out_x, out_y
train_file = "../input/digit-recognizer/train.csv"
raw_data = pd.read_csv(train_file)
x, y = data_prep(raw_data)
model = Sequential()
model.add(Conv2D(20, kernel_size=(3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
model.fit(x, y,
batch_size=128,
epochs=2,
validation_split = 0.2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sample Code
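`keras.utils.to_categorical` in `data_prep` one-hot encodes the labels; a plain-NumPy sketch of the same transformation:

```python
import numpy as np

def one_hot(labels, num_classes):
    # What keras.utils.to_categorical does in data_prep above:
    # one row per label, with a 1.0 in the label's column.
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2], 3))  # rows are one-hot: [[1,0,0],[0,0,1]]
```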
|
6,449
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import numpy as np
import tensorflow as tf
import tensorflow.compat.v1 as tf1
import tensorflow_datasets as tfds
def normalize_img(image, label):
return tf.cast(image, tf.float32) / 255., label
def _input_fn():
ds_train = tfds.load(
name='mnist',
split='train',
shuffle_files=True,
as_supervised=True)
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.batch(128)
ds_train = ds_train.repeat(100)
return ds_train
def _eval_input_fn():
ds_test = tfds.load(
name='mnist',
split='test',
shuffle_files=True,
as_supervised=True)
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
return ds_test
def _model_fn(features, labels, mode):
flatten = tf1.layers.Flatten()(features)
features = tf1.layers.Dense(128, 'relu')(flatten)
logits = tf1.layers.Dense(10)(features)
loss = tf1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
optimizer = tf1.train.AdagradOptimizer(0.005)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
start_time = time.time()
max_train_seconds = 20
def should_stop_fn():
return time.time() - start_time > max_train_seconds
early_stopping_hook = tf1.estimator.experimental.make_early_stopping_hook(
estimator=estimator,
should_stop_fn=should_stop_fn,
run_every_secs=1,
run_every_steps=None)
train_spec = tf1.estimator.TrainSpec(
input_fn=_input_fn,
hooks=[early_stopping_hook])
eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)
tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.batch(128)
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.005),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# Only around 25 epochs are run during training, instead of 100.
history = model.fit(
ds_train,
epochs=100,
validation_data=ds_test,
callbacks=[callback]
)
len(history.history['loss'])
class LimitTrainingTime(tf.keras.callbacks.Callback):
def __init__(self, max_time_s):
super().__init__()
self.max_time_s = max_time_s
self.start_time = None
def on_train_begin(self, logs):
self.start_time = time.time()
def on_train_batch_end(self, batch, logs):
now = time.time()
if now - self.start_time > self.max_time_s:
self.model.stop_training = True
# Limit the training time to 30 seconds.
callback = LimitTrainingTime(30)
history = model.fit(
ds_train,
epochs=100,
validation_data=ds_test,
callbacks=[callback]
)
len(history.history['loss'])
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(0.005)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
train_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()
train_loss_metric = tf.keras.metrics.SparseCategoricalCrossentropy()
val_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()
val_loss_metric = tf.keras.metrics.SparseCategoricalCrossentropy()
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
train_loss_metric.update_state(y, logits)
return loss_value
@tf.function
def test_step(x, y):
logits = model(x, training=False)
val_acc_metric.update_state(y, logits)
val_loss_metric.update_state(y, logits)
epochs = 100
patience = 5
wait = 0
best = 0
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
for step, (x_batch_train, y_batch_train) in enumerate(ds_train):
loss_value = train_step(x_batch_train, y_batch_train)
if step % 200 == 0:
print("Training loss at step %d: %.4f" % (step, loss_value.numpy()))
print("Seen so far: %s samples" % ((step + 1) * 128))
train_acc = train_acc_metric.result()
train_loss = train_loss_metric.result()
train_acc_metric.reset_states()
train_loss_metric.reset_states()
print("Training acc over epoch: %.4f" % (train_acc.numpy()))
for x_batch_val, y_batch_val in ds_test:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_loss = val_loss_metric.result()
val_acc_metric.reset_states()
val_loss_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
# The early stopping strategy: stop the training if `val_loss` does not
# decrease over a certain number of epochs. (`best` starts at 0, so the
# first epoch always records a baseline.)
wait += 1
if best == 0 or val_loss < best:
best = val_loss
wait = 0
if wait >= patience:
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Migrate early stopping
Step2: TensorFlow 1
Step3: In TensorFlow 1, early stopping works by setting up an early stopping hook with tf.estimator.experimental.make_early_stopping_hook. You pass the hook to the make_early_stopping_hook method as a parameter for should_stop_fn, which can accept a function without any arguments. The training stops once should_stop_fn returns True.
Step4: TensorFlow 2
Step5: In TensorFlow 2, when you use the built-in Keras Model.fit (or Model.evaluate), you can configure early stopping by passing a built-in callback—tf.keras.callbacks.EarlyStopping—to the callbacks parameter of Model.fit.
Step6: TensorFlow 2
Step7: TensorFlow 2
Step8: Define the parameter update functions with tf.GradientTape and the @tf.function decorator for a speedup
Step9: Next, write a custom training loop, where you can implement your early stopping rule manually.
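The patience logic in the custom loop can be isolated into a small framework-free tracker (the class and method names are choices made here, not TF API):

```python
class EarlyStopper:
    # Call update(val_loss) once per epoch; returns True when val_loss
    # has not improved for `patience` consecutive epochs.
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def update(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # improved: reset the counter
            self.wait = 0
        else:
            self.wait += 1         # no improvement this epoch
        return self.wait >= self.patience

stopper = EarlyStopper(patience=2)
stopped_at = None
# simulated validation losses: improves twice, then stalls
for epoch, loss in enumerate([1.0, 0.8, 0.8, 0.8, 0.8]):
    if stopper.update(loss):
        stopped_at = epoch
        break
print(stopped_at)  # -> 3
```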
|
6,450
|
<ASSISTANT_TASK:>
Python Code:
# Python built in support for TCP sockets
import socket
# this just opens a 'porthole' out from my computer
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# this connects me to the other computer
mysock.connect(('www.py4inf.com', 80))
import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send(b'GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')
while True:
data = mysock.recv(512)
if ( len(data) < 1 ) :
break
print(data);
mysock.close()
import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
# since I need to send bytes and not a string... I add a 'b' literal before the GET
mysock.send(b'GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')
while True:
data = mysock.recv(512)
if ( len(data) < 1 ) :
break
print(data);
mysock.close()
import urllib.request
fhand = urllib.request.urlopen('http://www.py4inf.com/code/romeo.txt')
# read() returns the raw response body (as bytes)
fhand_html = fhand.read()
print(fhand_html)
# a nicer version of the code...
import urllib.request
with urllib.request.urlopen('http://www.py4inf.com/code/romeo.txt') as response:
fhand_html = response.read()
#fhand_html = response.readline()
#fhand_html = response.readlines()
print(fhand_html)
# read() - returns the whole response body as a single bytes object
# readline() - returns only the first line
# readlines() - returns the response as a list of lines
# https://pymotw.com/3/urllib.request/
from urllib import request
URL = 'http://data.pr4e.org/intro-short.txt'
response = request.urlopen(URL)
print('RESPONSE:', response)
print('URL :', response.geturl())
headers = response.info()
print('DATE :', headers['date'])
print('HEADERS :')
print('---------')
print(headers)
data = response.read().decode('utf-8')
print('LENGTH :', len(data))
print('DATA :')
print('---------')
print(data)
# getting the response code, with error handling
import urllib.request
URL = 'http://data.pr4e.org/intro-short.txt'
try:
response = urllib.request.urlopen(URL)
if response.getcode() == 200:
print('Bingo')
else:
print('The response code was not 200, but: {}'.format(
response.getcode()))
except urllib.error.HTTPError as e:
print('''An error occurred: {}
The response code was {}'''.format(e, e.getcode()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's write a browser
Step2: Now make the same thing easier with another library
Step3: Doing the assignment
|
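The raw-socket cells in the row above receive status line, headers, and body interleaved in one byte stream. A stdlib sketch (independent of the code above; the helper name is an illustration) of splitting an HTTP/1.0 response at the blank line separating headers from body:

```python
def split_http_response(raw: bytes):
    """Split a raw HTTP response into (status_line, headers_dict, body).

    Assumes the standard CRLF CRLF separator between headers and body.
    """
    head, _, body = raw.partition(b'\r\n\r\n')
    lines = head.decode('iso-8859-1').split('\r\n')
    status_line = lines[0]
    headers = {}
    for line in lines[1:]:
        # Header lines look like "Name: value"; normalize names to lowercase.
        name, _, value = line.partition(':')
        headers[name.strip().lower()] = value.strip()
    return status_line, headers, body

raw = (b'HTTP/1.0 200 OK\r\n'
       b'Content-Type: text/plain\r\n'
       b'Content-Length: 5\r\n'
       b'\r\n'
       b'hello')
status, headers, body = split_http_response(raw)
print(status)                   # HTTP/1.0 200 OK
print(headers['content-type'])  # text/plain
print(body)                     # b'hello'
```

urllib.request does this parsing for you, which is why the later cells never touch raw bytes from the socket.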
6,451
|
<ASSISTANT_TASK:>
Python Code:
from nltk.corpus import stopwords
import string
from transform.normalizer import *
from transform.parser import *
from match.match import *
import inspect
import jellyfish
from retrieve.search import *
punctuation = set(string.punctuation)
language = 'portuguese'
prefix_file = '../data/prefixes.csv'
with open(prefix_file, 'r') as g:
prefixes = g.read().splitlines()
address_prefixes = prefixes
stopw = stopwords.words(language)
address = "Rua XV de Novembro, 123 bloco 23 A"
normalized_address = normalize_address(
address, punctuation, stopw, address_prefixes)
print("Normalized address: ", normalized_address)
inspect.getsourcelines(normalize_address)
parsed_address = parse_address(normalized_address)
print(parsed_address)
inspect.getsourcelines(parse_address)
schema = create_schema()
idx = create_index(schema, 'indexdir')
results = search(parsed_address['street'], 'street', idx)
print(results)
print(address)
similarity(parsed_address['street'],results[0]['street'] )
similarity(parsed_address['street'],results[1]['street'] )
similarity(str(parsed_address['number']),str(results[0]['number'] ))
similarity(str(parsed_address['number']),str(results[1]['number'] ))
similarity(parsed_address['complement'],results[0]['complement'] )
similarity(parsed_address['complement'],results[1]['complement'] )
print("Original Address:", address)
print("Canonical Address:", str(results[1]['street']) + ', ' + str(results[1]['number']) + ' ' + str(results[1]['complement']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's read the data that we're going to use to normalize and parse the addresses
Step2: punctuation is the file with the punctuation characters that we want to remove.
Step3: So what are we doing here? Let's see what normalize_address is doing
Step4: So we are doing several operations in sequence
Step5: So how are we doing that?
Step6: That's the regular expression
Step7: So now we have our candidates! But the other informations are different from what whe have in our address. Is this a match?
Step8: We have some prior information about how addresses are and which parts are more important than others. We can devise a matching algorithm with a linear regression, for example, using the knowledge that street names are more important than complements
Step9: So the similarity of the street name is exactly the same. Let's compare the numbers
Step10: Ops, still the same. Let's go to complements
Step11: Ok, there's a small difference, but we will assume that we can work with that! The second canonical address is a better match than the first one
|
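The similarity() calls above come from the project's match module (backed by jellyfish). A rough stdlib stand-in using difflib's ratio can illustrate the same idea of scoring candidate matches; this scoring function is an assumption for illustration, not the project's actual metric:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two normalized strings."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Identical street names score 1.0; a one-character difference scores lower,
# which is what lets us rank the two candidate canonical addresses.
print(similarity('rua quinze novembro', 'rua quinze novembro'))  # 1.0
print(similarity('bloco 23a', 'bloco 23b'))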
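The similarity() calls above come from the project's match module (backed by jellyfish). A rough stdlib stand-in using difflib's ratio can illustrate the same idea of scoring candidate matches; this scoring function is an assumption for illustration, not the project's actual metric:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two normalized strings."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Identical street names score 1.0; a one-character difference scores lower,
# which is what lets us rank the two candidate canonical addresses.
print(similarity('rua quinze novembro', 'rua quinze novembro'))  # 1.0
print(similarity('bloco 23a', 'bloco 23b'))
```

Any monotone string-distance (Jaro-Winkler, Levenshtein ratio, difflib) fits this role; the weighting of street vs. number vs. complement is where the domain knowledge goes.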
6,452
|
<ASSISTANT_TASK:>
Python Code:
preamble = np.array([1,0,0,0,1,0,1,1], dtype = 'uint8')
preamble_detect = np.where(np.abs(np.correlate(2*bits.astype('int')-1, 2*preamble.astype('int')-1)) == 8)[0]
preamble_offset = np.argmax(np.histogram(preamble_detect % subframe_size, bins = np.arange(0,subframe_size))[0])
subframes = bits[preamble_offset:]
subframes = subframes[:subframes.size//subframe_size*subframe_size].reshape((-1,subframe_size))
words = subframes.reshape((-1,word_size))
# Last bits from previous word, used for parity calculations
words_last = np.roll(words[:,-1], 1)
words_prelast = np.roll(words[:,-2], 1)
# Correct data using last bit from previous word
words_data = words[:, :word_data_size] ^ words_last.reshape((-1,1))
subframes_data = words_data.reshape((-1,subframe_data_size))
# Parity checks for each of the bits (0 means valid)
parity0 = np.bitwise_xor.reduce(words_data[:, np.array([1,2,3,5,6,10,11,12,13,14,17,18,20,23])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size]
parity1 = np.bitwise_xor.reduce(words_data[:, np.array([2,3,4,6,7,11,12,13,14,15,18,19,21,24])-1], axis = 1) ^ words_last ^ words[:,word_data_size+1]
parity2 = np.bitwise_xor.reduce(words_data[:, np.array([1,3,4,5,7,8,12,13,14,15,16,19,20,22])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size+2]
parity3 = np.bitwise_xor.reduce(words_data[:, np.array([2,4,5,6,8,9,13,14,15,16,17,20,21,23])-1], axis = 1) ^ words_last ^ words[:,word_data_size+3]
parity4 = np.bitwise_xor.reduce(words_data[:, np.array([1,3,5,6,7,9,10,14,15,16,17,18,21,22,24])-1], axis = 1) ^ words_last ^ words[:,word_data_size+4]
parity5 = np.bitwise_xor.reduce(words_data[:, np.array([3,5,6,8,9,10,11,13,15,19,22,23,24])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size+5]
# Parity check for word
parity = parity0 | parity1 | parity2 | parity3 | parity4 | parity5
# Parity check for subframe
parity_subframe = np.any(parity.reshape((-1,subframe_size//word_size)), axis = 1)
parity_subframe
correct_frames = (parity[::10] == 0) & (parity[1::10] == 0)
plt.plot(correct_frames)
tow = np.sum(words_data[1::10,:17].astype('int') * 2**np.arange(16,-1,-1), axis = 1) * 6
plt.plot(np.arange(tow.size)[correct_frames], tow[correct_frames])
subframe_id = np.packbits(words_data[1::10,19:22], axis = 1).ravel() >> 5
subframe_id[correct_frames]
np.any(words_data[1::10,17:19][correct_frames])
filler_subframe = subframes[correct_frames & (subframe_id == 1), 60:][0,:]
filler_subframe
np.any(subframes[correct_frames & (subframe_id <= 3), 60:] ^ filler_subframe, axis = 1)
np.all(parity_subframe[correct_frames & (subframe_id <= 3)])
np.all(parity_subframe[correct_frames & (subframe_id >= 4)])
np.any(subframes_data[correct_frames & (subframe_id >= 4), 2*word_data_size:2*word_data_size+2] ^ np.array([0,1]))
svid_subframe4 = np.packbits(subframes_data[correct_frames & (subframe_id == 4), 2*word_data_size+2:2*word_data_size+8], axis = 1).ravel() >> 2
svid_subframe5 = np.packbits(subframes_data[correct_frames & (subframe_id == 5), 2*word_data_size+2:2*word_data_size+8], axis = 1).ravel() >> 2
svid_subframe4
svid_subframe5
subframe5_page25 = subframes_data[correct_frames & (subframe_id == 5), :][svid_subframe5 == 51, :]
toa = np.packbits(subframe5_page25[:,2*word_data_size+8:2*word_data_size+16], axis = 1).ravel().astype('int') * 2**12
wna = np.packbits(subframe5_page25[:,2*word_data_size+16:2*word_data_size+24], axis = 1).ravel() + 2048
toa
wna
subframe5_page25[:,2*word_data_size+24:][:,:6*6*4:].reshape((-1,6*4,6))
subframe4_page25 = subframes_data[correct_frames & (subframe_id == 4), :][svid_subframe4 == 63, :]
anti_spoofing = subframe4_page25[:,2*word_data_size+8:][:,:32*4].reshape((-1,32,4))
anti_spoofing
health = subframe4_page25[:,2*word_data_size+8+32*4+2:][:,:6*8].reshape((-1,8,6))
health
np.packbits(subframes_data[correct_frames & (subframe_id == 5), :][svid_subframe5 <= 24, 3*word_data_size:3*word_data_size+8], axis = 1) * 2**12
np.packbits(subframes_data[correct_frames & (subframe_id == 4), :][svid_subframe4 <= 32, 3*word_data_size:3*word_data_size+8], axis = 1) * 2**12
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Most subframes do not have valid parity, as shown below. We use a weaker heuristic, where only parity of TLM and HOW words are required to be valid. With this criterion, all the subframes are valid, except the first few and last subframes.
Step2: Analysis of the TOW in valid frames. It ranges between 298338 and 301728, corresponding to 10
Step3: The subframe ID in the HOW word cycles as usual.
Step4: Alert and anti-spoofing flags in the HOW word are not set.
Step5: For subframes 1 (WN, health and clock), 2 and 3 (ephemeris), a filler of alternating 1's and 0's is transmitted in all the words after the HOW (including the parity bits). This makes parity invalid for these subframes.
Step6: On the other hand, the parity for subframes 4 and 5 (almanacs) is correct.
Step7: Data ID field for subframes 4 and 5 has the nominal value 01.
Step8: The SVID in subframes 4 and 5 follows the nominal schedule, except that SVID 4 has been replaced with 0 to indicate dummy SV. This is normal, since PRN 4 is not currently assigned.
Step9: For subframe 5, we omit the study of pages 1 through 24, which contain almanac data and we assume to be valid. We study page 25, which is marked by SVID 51 and contains SV health.
Step10: The t_oa and WN_a for page 25 correspond to times near the beginning and end of GPS week 2059.
Step11: SV health in subframe 5 page 25 indicates that all SV except SV 4 are healthy.
Step12: The anti-spoofing and SV configurations flags in subframe 4 page 25 indicate that AS is on for all SVs and different signal capabilities for different SVs.
Step13: The health flags in subframe 4 page 25 indicate that SV 25 to 32 are all healthy.
Step14: Below we show t_oa for almanac entries in subframes 4 and 5.
|
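The vectorized parity checks above apply the standard GPS LNAV Hamming (32,26) equations across all words at once. A scalar sketch of just the first equation (D25), in pure Python, may make the structure easier to see; the tap indices match the index array used for parity0 above, and the previous word's bit D29 plays the role of words_prelast:

```python
def parity_d25(data_bits, prev_d29, received_d25):
    """Check GPS parity bit D25 for one 24-bit data word.

    data_bits: list of 24 ints (0/1), already XOR-corrected with the
    previous word's last bit, matching `words_data` above.
    prev_d29: bit D29 of the previous word; received_d25: the received
    parity bit. Returns 0 when parity is consistent.
    """
    taps = [1, 2, 3, 5, 6, 10, 11, 12, 13, 14, 17, 18, 20, 23]
    acc = prev_d29
    for t in taps:
        acc ^= data_bits[t - 1]  # taps are 1-based, as in IS-GPS-200
    return acc ^ received_d25

# A word of all zeros with zero history is trivially consistent.
assert parity_d25([0] * 24, 0, 0) == 0
# Flipping one tapped data bit must flip the check.
assert parity_d25([1] + [0] * 23, 0, 0) == 1
```

The numpy code above is the same computation broadcast over every word in the capture, one equation per parity bit D25 through D30.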
6,453
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from pydiffexp import DEAnalysis
test_path = "/Users/jfinkle/Documents/Northwestern/MoDyLS/Python/sprouty/data/raw_data/all_data_formatted.csv"
raw_data = pd.read_csv(test_path, index_col=0)
# Initialize analysis object with data. Data is retained
'''
The hierarchy provides the names for each label in the multiindex. 'condition' and 'time' are supplied as the reference
labels, which are used to make contrasts.
'''
hierarchy = ['condition', 'well', 'time', 'replicate']
dea = DEAnalysis(raw_data, index_names=hierarchy, reference_labels=['condition', 'time'] )
raw_data.head()
dea.data.head()
dea.data.columns
dea.experiment_summary
dea.print_experiment_summary()
# Types of contrasts
c_dict = {'Diff0': "(KO_15-KO_0)-(WT_15-WT_0)", 'Diff15': "(KO_60-KO_15)-(WT_60-WT_15)",
'Diff60': "(KO_120-KO_60)-(WT_120-WT_60)", 'Diff120': "(KO_240-KO_120)-(WT_240-WT_120)"}
c_list = ["KO_15-KO_0", "KO_60-KO_15", "KO_120-KO_60", "KO_240-KO_120"]
c_string = "KO_0-WT_0"
dea.fit(c_string)
print(dea.design, '', dea.contrast_robj, '', dea.de_fit)
dea.get_results(p_value=0.01, n=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Let's look at the data that has been added to the object. Notice that the columns are a Multiindex in which the levels correspond to lists of the possible values and the names of each level come from the list supplied to index_names
Step3: Formatted data as Hierarchial Dataframe
Step4: When the data is added, DEA automatically saves a summary of the experiment, which can also be summarized with the print function.
Step5: Model Fitting
Step6: After the fit, we want to see our significant results. DEA calls <a href="http
|
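The hierarchy list above names the levels of the column MultiIndex that DEAnalysis builds from the sample labels. As an illustration of that mapping (the underscore-delimited label format here is an assumption; the raw CSV may encode samples differently), splitting one sample name into the four levels looks like:

```python
HIERARCHY = ['condition', 'well', 'time', 'replicate']

def parse_label(label: str) -> dict:
    """Map a 'condition_well_time_replicate' label onto the hierarchy."""
    parts = label.split('_')
    if len(parts) != len(HIERARCHY):
        raise ValueError(f'expected {len(HIERARCHY)} fields, got {label!r}')
    return dict(zip(HIERARCHY, parts))

print(parse_label('KO_A1_15_2'))
# {'condition': 'KO', 'well': 'A1', 'time': '15', 'replicate': '2'}
```

With condition and time designated as reference_labels, contrast strings such as "KO_15-KO_0" can then be resolved against these parsed levels.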
6,454
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import { display-mode: "form" }
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
datasets, datasets_info = tfds.load(name='mnist',
with_info=True,
as_supervised=False)
def _preprocess(sample):
image = tf.cast(sample['image'], tf.float32) / 255. # Scale to unit interval.
image = image < tf.random.uniform(tf.shape(image)) # Randomly binarize.
return image, image
train_dataset = (datasets['train']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE)
.shuffle(int(10e3)))
eval_dataset = (datasets['test']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE))
input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1),
reinterpreted_batch_ndims=1)
encoder = tfk.Sequential([
tfkl.InputLayer(input_shape=input_shape),
tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5),
tfkl.Conv2D(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(4 * encoded_size, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Flatten(),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(encoded_size),
activation=None),
tfpl.MultivariateNormalTriL(
encoded_size,
activity_regularizer=tfpl.KLDivergenceRegularizer(prior)),
])
decoder = tfk.Sequential([
tfkl.InputLayer(input_shape=[encoded_size]),
tfkl.Reshape([1, 1, encoded_size]),
tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(filters=1, kernel_size=5, strides=1,
padding='same', activation=None),
tfkl.Flatten(),
tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits),
])
vae = tfk.Model(inputs=encoder.inputs,
outputs=decoder(encoder.outputs[0]))
negloglik = lambda x, rv_x: -rv_x.log_prob(x)
vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3),
loss=negloglik)
_ = vae.fit(train_dataset,
epochs=15,
validation_data=eval_dataset)
# We'll just examine ten random digits.
x = next(iter(eval_dataset))[0][:10]
xhat = vae(x)
assert isinstance(xhat, tfd.Distribution)
#@title Image Plot Util
import matplotlib.pyplot as plt
def display_imgs(x, y=None):
if not isinstance(x, (np.ndarray, np.generic)):
x = np.array(x)
plt.ioff()
n = x.shape[0]
fig, axs = plt.subplots(1, n, figsize=(n, 1))
if y is not None:
fig.suptitle(np.argmax(y, axis=1))
for i in range(n):
axs.flat[i].imshow(x[i].squeeze(), interpolation='none', cmap='gray')
axs.flat[i].axis('off')
plt.show()
plt.close()
plt.ion()
print('Originals:')
display_imgs(x)
print('Decoded Random Samples:')
display_imgs(xhat.sample())
print('Decoded Modes:')
display_imgs(xhat.mode())
print('Decoded Means:')
display_imgs(xhat.mean())
# Now, let's generate ten never-before-seen digits.
z = prior.sample(10)
xtilde = decoder(z)
assert isinstance(xtilde, tfd.Distribution)
print('Randomly Generated Samples:')
display_imgs(xtilde.sample())
print('Randomly Generated Modes:')
display_imgs(xtilde.mode())
print('Randomly Generated Means:')
display_imgs(xtilde.mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFP probabilistic layers
Step2: Make it fast!
Step3: Note
Step4: The preprocess() above returns image, image rather than just image because Keras is set up for discriminative models with an (example, label) input format, i.e. $p_\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e., $p_\theta(x|x)$), the data pairs are (example, example).
Step5: Perform inference.
Step6: Look, no ~~hands~~ tensors!
|
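The negloglik loss above evaluates -log p(x) under the decoder's Bernoulli distribution. For a single pixel with logit l and binary value x, the numerically stable closed form is max(l, 0) - l*x + log(1 + exp(-|l|)); a stdlib sketch of that per-pixel term (the function name is illustrative):

```python
import math

def bernoulli_neg_log_prob(logit: float, x: int) -> float:
    """Numerically stable -log Bernoulli(x; sigmoid(logit)).

    Equivalent to binary cross-entropy with logits for one pixel;
    the abs() keeps exp() from overflowing for large |logit|.
    """
    return max(logit, 0.0) - logit * x + math.log1p(math.exp(-abs(logit)))

# At logit 0 the model says p = 0.5, so -log p = log 2 for either outcome.
print(bernoulli_neg_log_prob(0.0, 1))   # ~0.6931
# A confident correct prediction costs almost nothing.
print(bernoulli_neg_log_prob(10.0, 1))  # ~4.5e-05
```

tfd.Independent sums this term over all pixels, which is exactly what rv_x.log_prob(x) computes for a whole image.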
6,455
|
<ASSISTANT_TASK:>
Python Code:
!apt-get install libsdl2-dev
!apt-get install libosmesa6-dev
!apt-get install libffi-dev
!apt-get install gettext
!apt-get install python3-numpy-dev python3-dev
BAZEL_VERSION = '3.6.0'
!wget https://github.com/bazelbuild/bazel/releases/download/{BAZEL_VERSION}/bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!chmod +x bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!./bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!bazel --version
!git clone https://github.com/deepmind/lab.git
%%writefile lab/bazel/python.BUILD
# Description:
# Build rule for Python and Numpy.
# This rule works for Debian and Ubuntu. Other platforms might keep the
# headers in different places, cf. 'How to build DeepMind Lab' in build.md.
cc_library(
name = "python",
hdrs = select(
{
"@bazel_tools//tools/python:PY3": glob([
"usr/include/python3.6m/*.h",
"usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/*.h",
]),
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
includes = select(
{
"@bazel_tools//tools/python:PY3": [
"usr/include/python3.6m",
"usr/local/lib/python3.6/dist-packages/numpy/core/include",
],
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
visibility = ["//visibility:public"],
)
alias(
name = "python_headers",
actual = ":python",
visibility = ["//visibility:public"],
)
!cd lab && bazel build -c opt --python_version=PY3 //python/pip_package:build_pip_package
!cd lab && ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
!pip install /tmp/dmlab_pkg/deepmind_lab-1.0-py3-none-any.whl --force-reinstall
!pip install dm_env
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
# Upgrade to recent commit for latest R2D2 learner.
!pip install --upgrade git+https://github.com/deepmind/acme.git@3dfda9d392312d948906e6c567c7f56d8c911de5
# @title Imports
import copy
import functools
from acme import environment_loop
from acme import specs
from acme.adders import reverb as acme_reverb
from acme.agents.tf import actors
from acme.agents.tf.r2d2 import learning as r2d2
from acme.tf import utils as tf_utils
from acme.tf import networks
from acme.utils import loggers
from acme.wrappers import observation_action_reward
import tree
import deepmind_lab
import dm_env
import numpy as np
import reverb
import sonnet as snt
import tensorflow as tf
import trfl
# @title Environment
_ACTION_MAP = {
0: (0, 0, 0, 1, 0, 0, 0),
1: (0, 0, 0, -1, 0, 0, 0),
2: (0, 0, -1, 0, 0, 0, 0),
3: (0, 0, 1, 0, 0, 0, 0),
4: (-10, 0, 0, 0, 0, 0, 0),
5: (10, 0, 0, 0, 0, 0, 0),
6: (-60, 0, 0, 0, 0, 0, 0),
7: (60, 0, 0, 0, 0, 0, 0),
8: (0, 10, 0, 0, 0, 0, 0),
9: (0, -10, 0, 0, 0, 0, 0),
10: (-10, 0, 0, 1, 0, 0, 0),
11: (10, 0, 0, 1, 0, 0, 0),
12: (-60, 0, 0, 1, 0, 0, 0),
13: (60, 0, 0, 1, 0, 0, 0),
14: (0, 0, 0, 0, 1, 0, 0),
}
class DeepMindLabEnvironment(dm_env.Environment):
"""DeepMind Lab environment."""
def __init__(self, level_name: str, action_repeats: int = 4):
"""Construct environment.
Args:
level_name: DeepMind lab level name (e.g. 'rooms_watermaze').
action_repeats: Number of times the same action is repeated on every
step().
"""
config = dict(fps='30',
height='72',
width='96',
maxAltCameraHeight='1',
maxAltCameraWidth='1',
hasAltCameras='false')
# seekavoid_arena_01 is not part of dmlab30.
if level_name != 'seekavoid_arena_01':
level_name = 'contributed/dmlab30/{}'.format(level_name)
self._lab = deepmind_lab.Lab(level_name, ['RGB_INTERLEAVED'], config)
self._action_repeats = action_repeats
self._reward = 0
def _observation(self):
last_action = getattr(self, '_action', 0)
last_reward = getattr(self, '_reward', 0)
self._last_observation = observation_action_reward.OAR(
observation=self._lab.observations()['RGB_INTERLEAVED'],
action=np.array(last_action, dtype=np.int64),
reward=np.array(last_reward, dtype=np.float32))
return self._last_observation
def reset(self):
self._lab.reset()
return dm_env.restart(self._observation())
def step(self, action):
if not self._lab.is_running():
return dm_env.restart(self.reset())
self._action = action.item()
if self._action not in _ACTION_MAP:
raise ValueError('Action not available')
lab_action = np.array(_ACTION_MAP[self._action], dtype=np.intc)
self._reward = self._lab.step(lab_action, num_steps=self._action_repeats)
if self._lab.is_running():
return dm_env.transition(self._reward, self._observation())
return dm_env.termination(self._reward, self._last_observation)
def observation_spec(self):
return observation_action_reward.OAR(
observation=dm_env.specs.Array(shape=(72, 96, 3), dtype=np.uint8),
action=dm_env.specs.Array(shape=(), dtype=np.int64),
reward=dm_env.specs.Array(shape=(), dtype=np.float32))
def action_spec(self):
return dm_env.specs.DiscreteArray(num_values=15, dtype=np.int64)
# @title Dataset
def _decode_images(pngs):
"""Decode tensor of PNGs."""
decode_rgb_png = functools.partial(tf.io.decode_png, channels=3)
images = tf.map_fn(decode_rgb_png, pngs, dtype=tf.uint8,
parallel_iterations=10)
# [N, 72, 96, 3]
images.set_shape((pngs.shape[0], 72, 96, 3))
return images
def _tf_example_to_step_ds(tf_example: tf.train.Example,
episode_length: int) -> reverb.ReplaySample:
"""Create a Reverb replay sample from a TF example."""
# Parse tf.Example.
def sequence_feature(shape, dtype=tf.float32):
return tf.io.FixedLenFeature(shape=[episode_length] + shape, dtype=dtype)
feature_description = {
'episode_id': tf.io.FixedLenFeature([], tf.int64),
'start_idx': tf.io.FixedLenFeature([], tf.int64),
'episode_return': tf.io.FixedLenFeature([], tf.float32),
'observations_pixels': sequence_feature([], tf.string),
'observations_reward': sequence_feature([]),
# actions are one-hot arrays.
'observations_action': sequence_feature([15]),
'actions': sequence_feature([], tf.int64),
'rewards': sequence_feature([]),
'discounted_rewards': sequence_feature([]),
'discounts': sequence_feature([]),
}
data = tf.io.parse_single_example(tf_example, feature_description)
pixels = _decode_images(data['observations_pixels'])
observation = observation_action_reward.OAR(
observation=pixels,
action=tf.argmax(data['observations_action'],
axis=1, output_type=tf.int64),
reward=data['observations_reward'])
data = acme_reverb.Step(
observation=observation,
action=data['actions'],
reward=data['rewards'],
discount=data['discounts'],
start_of_episode=tf.zeros((episode_length,), tf.bool),
extras={})
# Keys are all zero and probabilities are all one.
info = reverb.SampleInfo(key=tf.zeros((episode_length,), tf.int64),
probability=tf.ones((episode_length,), tf.float32),
table_size=tf.zeros((episode_length,), tf.int64),
priority=tf.ones((episode_length,), tf.float32))
sample = reverb.ReplaySample(info=info, data=data)
return tf.data.Dataset.from_tensor_slices(sample)
def subsequences(step_ds: tf.data.Dataset,
length: int, shift: int = 1
) -> tf.data.Dataset:
"""Dataset of subsequences from a dataset of episode steps."""
window_ds = step_ds.window(length, shift=shift, stride=1)
return window_ds.interleave(_nest_ds).batch(length, drop_remainder=True)
def _nest_ds(nested_ds: tf.data.Dataset) -> tf.data.Dataset:
"""Produces a dataset of nests from a nest of datasets of the same size."""
flattened_ds = tuple(tree.flatten(nested_ds))
zipped_ds = tf.data.Dataset.zip(flattened_ds)
return zipped_ds.map(lambda *x: tree.unflatten_as(nested_ds, x))
def make_dataset(path: str,
episode_length: int,
sequence_length: int,
sequence_shift: int,
num_shards: int = 500) -> tf.data.Dataset:
"""Create dataset of DeepMind Lab sequences."""
filenames = [f'{path}/tfrecord-{i:05d}-of-{num_shards:05d}'
for i in range(num_shards)]
file_ds = tf.data.Dataset.from_tensor_slices(filenames)
file_ds = file_ds.repeat().shuffle(num_shards)
tfrecord_dataset = functools.partial(tf.data.TFRecordDataset,
compression_type='GZIP')
# Dataset of tf.Examples containing full episodes.
example_ds = file_ds.interleave(tfrecord_dataset)
# Dataset of episodes, each represented as a dataset of steps.
_tf_example_to_step_ds_with_length = functools.partial(
_tf_example_to_step_ds, episode_length=episode_length)
episode_ds = example_ds.map(_tf_example_to_step_ds_with_length,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Dataset of sequences.
training_sequences = functools.partial(subsequences, length=sequence_length,
shift=sequence_shift)
return episode_ds.interleave(training_sequences)
# task | episode length | run
# ----------------------------------------------------------------------------
# seekavoid_arena_01 | 301 | training_{0..2}
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.0
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.01
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.1
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.25
# explore_object_rewards_few | 1351 | training_{0..2}
# explore_object_rewards_many | 1801 | training_{0..2}
# rooms_select_nonmatching_object | 181 | training_{0..2}
# rooms_watermaze | 1801 | training_{0..2}
TASK = 'seekavoid_arena_01'
RUN = 'training_0'
EPISODE_LENGTH = 301
BATCH_SIZE = 1
DATASET_PATH = f'gs://rl_unplugged/dmlab/{TASK}/{RUN}'
environment = DeepMindLabEnvironment(TASK, action_repeats=2)
dataset = make_dataset(DATASET_PATH, num_shards=500,
episode_length=EPISODE_LENGTH,
sequence_length=120,
sequence_shift=40)
dataset = dataset.padded_batch(BATCH_SIZE, drop_remainder=True)
# Create network.
def process_observations(x):
return x._replace(observation=tf.image.convert_image_dtype(x.observation, tf.float32))
environment_spec = specs.make_environment_spec(environment)
num_actions = environment_spec.actions.maximum + 1
network = snt.DeepRNN([
process_observations,
networks.R2D2AtariNetwork(num_actions=num_actions)
])
tf_utils.create_variables(network, [environment_spec.observations])
# Create a logger.
logger = loggers.TerminalLogger(label='learner', time_delta=1.)
# Create the R2D2 learner.
learner = r2d2.R2D2Learner(
environment_spec=environment_spec,
network=network,
target_network=copy.deepcopy(network),
discount=0.99,
learning_rate=1e-4,
importance_sampling_exponent=0.2,
target_update_period=100,
burn_in_length=0,
sequence_length=120,
store_lstm_state=False,
dataset=dataset,
logger=logger)
for _ in range(5):
learner.step()
# Create a logger.
logger = loggers.TerminalLogger(label='evaluator', time_delta=1.)
# Create evaluation loop.
eval_network = snt.DeepRNN([
network,
lambda q: trfl.epsilon_greedy(q, epsilon=0.4**8).sample(),
])
eval_loop = environment_loop.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedRecurrentActor(policy_network=eval_network),
logger=logger)
eval_loop.run(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bazel
Step2: DeepMind Lab
Step3: Python dependencies
Step11: Imports and Utils
Step12: Experiment
Step13: Learning
Step14: Evaluation
|
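The evaluation network above wraps the Q-values in trfl.epsilon_greedy with epsilon = 0.4**8. A stdlib sketch of the same policy — take the argmax with probability 1 - epsilon, otherwise a uniformly random action (the function here is an illustration, not trfl's implementation):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Select an action index from Q-values with epsilon-greedy exploration."""
    if rng.random() < epsilon:
        # Explore: uniform over all actions.
        return rng.randrange(len(q_values))
    # Exploit: argmax over Q-values.
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.1, 0.9, 0.3]
print(epsilon_greedy(q, epsilon=0.0))  # 1  (greedy: always the argmax)
```

With epsilon = 0.4**8 (about 6.6e-4), the evaluation policy above is almost purely greedy, with a tiny amount of residual exploration.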
6,456
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2*np.dot(errors, feature)
return(derivative)
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
gradient_magnitude = 0
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
            derivative = feature_derivative(errors, feature_matrix[:, i])
            # add the squared value of the derivative to the gradient magnitude (for assessing convergence)
            gradient_sum_squares += derivative * derivative
            # subtract the step size times the derivative from the current weight
            weights[i] -= step_size * derivative
        # compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
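A quick self-contained sanity check of this update rule, on tiny synthetic data rather than the course's house-price set (the data and learning rate here are illustrative assumptions): gradient descent with the same squared-error gradient should recover known weights.

```python
import numpy as np

# Synthetic check: y = 2 + 3*x, so descent should recover weights near [2, 3].
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
X = np.column_stack([np.ones_like(x), x])  # constant column + feature
y = 2 + 3 * x
w = np.zeros(2)
for _ in range(20000):
    gradient = 2 * X.T @ (X @ w - y)  # same derivative form as feature_derivative
    w -= 1e-4 * gradient
print(w)  # close to [2., 3.]
```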
train_data,test_data = sales.random_split(.8,seed=0)
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
simple_weights = regression_gradient_descent(simple_feature_matrix, output,initial_weights, step_size,tolerance)
print(simple_weights)
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
print(test_predictions[0])
rss = 0
for i in range(0, len(test_predictions)):
error = test_predictions[i] - test_data['price'][i]
rss += error * error
print(rss)
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
simple_weights = regression_gradient_descent(feature_matrix, output,initial_weights, step_size, tolerance)
print(simple_weights)
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
print(test_predictions[0])
test_data[0]
rss = 0
for i in range(0, len(test_predictions)):
error = test_predictions[i] - test_data['price'][i]
rss += error * error
print(rss)
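The RSS loop above can also be written as one vectorized numpy expression; the small arrays below are hypothetical stand-ins for the test predictions and prices.

```python
import numpy as np

# Vectorized RSS on hypothetical predictions/prices standing in for the test set.
predictions = np.array([310000.0, 650000.0, 233000.0])
prices = np.array([300000.0, 660000.0, 230000.0])
rss = np.sum((predictions - prices) ** 2)
print(rss)  # 209000000.0
```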
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Step3: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature (e.g. 'price') and will return two things
Step4: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output
Step5: Predicting output given regression weights
Step6: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights
Step7: If you want to test your code run the following cell
Step8: Computing the Derivative
Step9: To test your feature derivative run the following
Step10: Gradient Descent
Step11: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
Step12: Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model
Step13: Next run your gradient descent with the above parameters.
Step14: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Step15: Now compute your predictions using test_simple_feature_matrix and your weights from above.
Step16: Quiz Question
Step17: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
Step18: Running a multiple regression
Step19: Use the above parameters to estimate the model weights. Record these values for your quiz.
Step20: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Step21: What is the actual price for the 1st house in the test data set?
Step22: Quiz Question
Step23: Quiz Question
|
6,457
|
<ASSISTANT_TASK:>
Python Code:
# reproduction of ia898:conv with changes to illustrate the tutorial
import numpy as np
import sys,os
ia898path = os.path.abspath('../')
if ia898path not in sys.path:
sys.path.append(ia898path)
#import ia898.src as ia
def iaconvdemo(f,h):
f, h = np.asarray(f), np.asarray(h,float)
if len(f.shape) == 1: f = f[np.newaxis,:]
if len(h.shape) == 1: h = h[np.newaxis,:]
if f.size < h.size:
f, h = h, f
g = np.zeros(np.array(f.shape) + np.array(h.shape) - 1)
if f.ndim == 2:
H,W = f.shape
#for (r,c) in np.transpose(np.nonzero(h)):
# g[r:r+H, c:c+W] += f * h[r,c]
if f.ndim == 3:
D,H,W = f.shape
#for (d,r,c) in np.transpose(np.nonzero(h)):
# g[d:d+D, r:r+H, c:c+W] += f * h[d,r,c]
mask_h = np.zeros(h.shape,dtype = bool)
mask_g = np.zeros(g.shape,dtype = bool)
for i in np.arange(h.shape[0]):
for j in np.arange(h.shape[1]):
g[i:i+f.shape[0], j:j+f.shape[1]] += h[i,j] * f
mask_h[i,j] = True
mask_g[i:i+f.shape[0], j:j+f.shape[1]] = True
        print('iteration ', (i, j), ':')
        print('output image mask:\n', mask_g)
        print('mask of h:\n', mask_h, '\nitem of h being used:\n', h[mask_h])
        print('h*f for this iteration:\n', h[i, j] * f)
        print('accumulated output image:\n', g)
        print('\n')
mask_h[i,j] = False
mask_g[i:i+f.shape[0], j:j+f.shape[1]] = False
return g
import numpy as np
f = np.array([0,1,2,3,4,5])
h = np.array([-1,0,1])
print('f =\n', f)
print('\nh = \n', h)
print('\n')
print('\n f*h = \n',iaconvdemo(f, h) )
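Since `iaconvdemo` accumulates `h[i] * f` shifted by `i`, it computes an ordinary full linear convolution, so the 1D result can be cross-checked against numpy's built-in:

```python
import numpy as np

# Cross-check: the demo's 1D result should match numpy's full linear convolution.
f = np.array([0, 1, 2, 3, 4, 5])
h = np.array([-1, 0, 1])
g = np.convolve(f, h)  # 'full' mode: output length len(f) + len(h) - 1
print(g)  # [ 0 -1 -2 -2 -2 -2  4  5]
```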
import numpy as np
f = np.array([[1,1,1,1,1],
[2,2,2,2,2],
[3,3,3,3,3],
[4,4,4,4,4],
[5,5,5,5,5]])
h = np.array([[-1,0,1],
[-2,0,2],
[-1,0,1]])
print('f =\n', f)
print('\nh = \n', h)
print('\n')
print('\n f*h = \n',iaconvdemo(f, h) )
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ia898path = os.path.abspath('../')
if ia898path not in sys.path:
    sys.path.append(ia898path)
import ia898.src as ia  # needed below for ia.conv
os.chdir('../data')
f = mpimg.imread('cameraman.tif')
plt.imshow(f, cmap='gray')
plt.title('original image')
plt.show()
h = np.array([[-1,0,1],
[-2,0,2],
[-1,0,1]])
g = ia.conv(f,h)
plt.imshow(g, cmap='gray')
plt.title('image after convolution')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Illustration of 1D convolution
Step2: Illustration of 2D convolution
Step3: Illustration with an image
|
6,458
|
<ASSISTANT_TASK:>
Python Code:
attendance106 = ia.attendance_tables(106)
attendance106.groupby('Organization') \
.count()['First Name'] \
.sort_values(ascending=False)[:30]
attendance106['Organization'].dropna().unique().shape
N = 250
topN = attendance106.groupby('Organization')\
.count()['First Name']\
.sort_values(ascending=False)[:N]
distance_matrix = process.matricize(topN.index,
process.containment_distance) \
.replace(to_replace=float('inf'), value= 100)
plt.pcolor(distance_matrix)
plt.colorbar()
ents = process.resolve_entities(topN,
process.containment_distance,
threshold=.25)
replacements = {}
for r in [{name: ent for name in ents[ent]} for ent in ents]:
replacements.update(r)
attendance106_clean = attendance106.replace(to_replace=replacements)
attendance106_clean.groupby('Organization') \
.size() \
.sort_values(ascending=False)[:30]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What organizations are best represented?
Step2: Even in this short list, there are repeat names. We need to apply entity resolution.
Step3: This is too many names! It will overwhelm the entity resolver. Let's use a subset of the most relevant entities.
|
6,459
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
INDEX = ['Boiling point of He',
'Boiling point of N',
'Melting point of H2O',
'Body temperature',
'Boiling point of H2O']
X = np.array([-452.1, -320.4, 32.0, 98.6, 212.0])
Y = np.array([4.22, 77.36, 273.2, 310.5, 373.2])
pd.DataFrame(np.stack([X, Y]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)'])
# Lets initialise `a` to between 1.0 and 2.0; it is therefore impossible
# for it to choose a (nearly) correct value at the start, forcing our model to do some work.
a = np.random.uniform(1.0, 2.0, size=())
b = 0.0
print('a={}, b={}'.format(a, b))
Y_pred = X * a + b
pd.DataFrame(np.stack([X, Y, Y_pred]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$'])
sqr_err = (Y_pred - Y)**2
pd.DataFrame(np.stack([X, Y, Y_pred, sqr_err]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$', 'squared err ($\epsilon$)'])
def iterative_gradient_descent_step(a, b, lr):
    """A single gradient descent iteration.

    :param a: current value of `a`
    :param b: current value of `b`
    :param lr: learning rate
    :return: a tuple `(a_next, b_next)` with the values of `a` and `b` after the iteration.
    """
# Derivative of a and b w.r.t. epsilon:
da_depsilon = (2 * a * X**2 + 2 * b * X - 2 * X * Y).mean()
db_depsilon = (2 * b + 2 * a * X - 2 * Y).mean()
# Gradient descent:
a = a - da_depsilon * lr
b = b - db_depsilon * lr
# Return new values
return a, b
def state_as_table(a, b):
    """Helper function to generate a Pandas DataFrame showing the current state, including predicted values and errors.

    :param a: current value of `a`
    :param b: current value of `b`
    :return: tuple `(df, mean_sqr_err)` where `df` is the Pandas DataFrame and `mean_sqr_err` is the mean squared error.
    """
Y_pred = X * a + b
sqr_err = (Y_pred - Y)**2
df = pd.DataFrame(np.stack([X, Y, Y_pred, sqr_err]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$', 'squared err ($\epsilon$)'])
return df, sqr_err.mean()
LEARNING_RATE = 0.00001
N_ITERATIONS = 50000
df, mean_sqr_err = state_as_table(a, b)
print('a = {}, b = {}, mean sqr. err. = {}'.format(a, b, mean_sqr_err))
df
for i in range(N_ITERATIONS):
a, b = iterative_gradient_descent_step(a, b, LEARNING_RATE)
df, mean_sqr_err = state_as_table(a, b)
print('a = {}, b = {}, mean sqr. err. = {}'.format(a, b, mean_sqr_err))
df
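For comparison, the same five data points admit a closed-form least-squares fit, which should land near the exact Fahrenheit-to-Kelvin conversion (a = 5/9, b ≈ 255.37); this is a cross-check sketch, not part of the gradient-descent exercise.

```python
import numpy as np

# Closed-form least squares on the same five (Fahrenheit, Kelvin) points.
X = np.array([-452.1, -320.4, 32.0, 98.6, 212.0])
Y = np.array([4.22, 77.36, 273.2, 310.5, 373.2])
A = np.vstack([X, np.ones_like(X)]).T       # design matrix [x, 1]
(a_ls, b_ls), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(a_ls, b_ls)  # close to a = 5/9 ≈ 0.556, b ≈ 255.37
```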
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our data set
Step2: Show our data set in a table
Step3: Model - linear regression
Step4: How good is our guess?
Step7: Reducing the error
Step8: Define learning rate and show initial state
Step9: Gradient descent
|
6,460
|
<ASSISTANT_TASK:>
Python Code:
from second_folio import (all_repos)
all_repos[:5]
len(all_repos)
repo_name = all_repos[0]
repos = all_repos[:]
def status_for_repo(repo_name):
rs = GitenbergJob(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='update travis',
tag_commit_message='update travis')
return rs.status()
results_iter = apply_to_repos(status_for_repo, repos=repos)
results = []
for (i,result) in enumerate(results_iter):
results.append(result)
print ("\r{}: {}".format(i, result['repo_name']), end="")
from pandas import DataFrame
df = DataFrame(results)
df.head()
import json
print (json.dumps(results))
df.columns
from semantic_version import Version
import re
def my_filter(r):
#return (r['metadata.yaml'] and r['book.asciidoc']) and (r['ebooks_in_release_count'] == 0) and (not r['.travis.yml'])
#return (r['ebooks_in_release_count'] == 0) and (r['book.asciidoc'])
# return (r['ebooks_in_release_count'] > 0)
return len(r['asciidocs'])> 0 and not r['book.asciidoc']
def txts(r):
repo_id = r['repo_name'].split("_")[-1]
return [hash_ for hash_ in r['root_hashes'] if re.match(r'(\d+)\-*(\d*).txt', hash_)]
# how to find .txt or html files
len(df[df.apply(my_filter, axis=1)])
repos = list(df[df.apply(my_filter, axis=1)]['repo_name'])
repos
repos_to_process = repos[0:1]
def build_repo(repo_name):
rs = BuildRepo(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='update travis',
tag_commit_message='update travis',
github_token = GITENBERG_GITHUB_TOKEN)
return rs.run()
results_iter = apply_to_repos(build_repo, repos=repos_to_process)
results = []
for (i,result) in enumerate(results_iter):
results.append(result)
print ("\r{}: {}".format(i, result[0]), end="")
results
repo_name = repos_to_process[0]
rs = BuildRepo(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='update travis',
tag_commit_message='update travis',
github_token = GITENBERG_GITHUB_TOKEN)
rs.run()
repo_md('1079')
repos_to_process = ['The-Life-and-Opinions-of-Tristram-Shandy-Gentleman_1079']
def write_metadata(repo_name):
rs = MetadataWrite(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='update travis',
tag_commit_message='update travis')
return rs.run()
results_iter = apply_to_repos(write_metadata, repos=repos_to_process)
results = []
for (i,result) in enumerate(results_iter):
results.append(result)
print ("\r{}: {}".format(i, result[0]), end="")
repos_to_process
repos[0]
write_metadata(repos[0])
yaml.safe_dump(y,default_flow_style=False,
allow_unicode=True)
# https://gist.githubusercontent.com/rdhyee/9665aa23084a45269cd3c941f702602f/raw/9332e96fa5d3c86137d4a5c49c2ba7b484a4f7c1/repos_status.json
for repo in repos2_to_build[:]:
try:
bj = BuildRepo2(username=username, password=password, repo_name=repo,
repo_owner='GITenberg',
update_travis_commit_msg='update travis',
tag_commit_message='update travis',
github_token=GITENBERG_GITHUB_TOKEN)
bj.run()
except Exception as e:
print (repo, e)
from gitenberg import metadata
import os
def local_yaml_file(id_):
fname = "/Users/raymondyee/C/src/gitenberg-dev/giten_site/metadata/{}.yaml".format(id_)
if os.path.exists(fname):
md = metadata.pandata.Pandata(fname)
return md
else:
return None
for repo in all_repos[:]:
id_ = repo.split("_")[-1]
yaml_file = local_yaml_file(id_)
if yaml_file is None:
print (repo, yaml_file)
gj.travis_encrypt(RDHYEE_DON_QUIXOTE_TOKEN)
u = gj.gh.user()
u.email
gj.fork_and_build_gitenberg_repo()
gj.create_or_update_file(path='JUNK.md', message=b'updated junk.md', content=u'hello'.encode('utf-8'))
print(gj.update_travis_template(write_changes=False,
encrypted_key=gj.travis_encrypt(gj.repo_token()))[0])
from travispy import TravisPy
travis = TravisPy.github_auth(RDHYEE_GITHUB_TOKEN)
t_user = travis.user()
t_user.login
travis_repo = travis.repo('rdhyee/Don-Quixote_996')
travis_repo.active
travis_repo.enable()
travis_encrypt(token_to_encrypt=token.token.encode('utf-8'),
repo_slug="rdhyee/Don-Quixote_996")
import json
from travispy import TravisPy
def create_travispy_test_settings(github_token, repo_slug):
settings = {}
travis = TravisPy.github_auth(github_token)
settings['github_token'] = github_token
settings['repo_slug'] = repo_slug
# account
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_authenticated.py#L31
accounts = travis.accounts()
account = travis.account(accounts[0].id)
settings['account'] = {
'count': len(accounts),
'id': account.id,
'name': account.name,
'login': account.login,
'type': account.type,
'repos_count': account.repos_count,
'subscribed': hasattr(account, 'subscribed')
}
# hook
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_authenticated.py#L73
hooks = travis.hooks()
hook = hooks[0]
settings['hook'] = {
'count': len(hooks),
'name': hook.name,
'description': hook.description,
'owner_name': hook.owner_name,
'active': hook.active,
'private': hook.private,
'admin': hook.admin
}
# user
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_authenticated.py#L110
user = travis.user()
settings['user'] = {
'login': user['login'],
'name': user['name']
}
# branch
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_not_authenticated.py#L19
branches = travis.branches(slug=repo_slug)
branch = travis.branch('master', repo_slug)
settings['branch'] = {
'count': len(branches),
'id': branch.id,
'repository_id': branch.repository_id,
'pull_request': branch.pull_request,
'config': branch.config,
'number': branch.number
}
# build
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_not_authenticated.py#L66
builds = travis.builds(slug=repo_slug)
build = travis.build(builds[0].id)
build_id = builds[0].id
settings['build'] = {
'count': len(builds),
'id': build.id,
'repository_id': build.repository_id,
'number': build.number,
'pull_request': build.pull_request,
'pull_request_title': build.pull_request_title,
'pull_request_number': build.pull_request_number,
'config': build.config
}
# commit
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_not_authenticated.py#L115
commit = build.commit
settings['commit'] = {
'count': len(builds),
'id': commit.id,
'sha': commit.sha,
'branch': commit.branch,
'message': commit.message,
'committed_at': commit.committed_at,
'author_name': commit.author_name,
'author_email': commit.author_email,
'commiter_name': commit.committer_name, # sic
'commiter_email': commit.committer_email, # sic
'compare_url': commit.compare_url,
'pull_request_number': None if not hasattr(commit, 'pull_request_number') else commit.pull_request_number
}
# jobs
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_not_authenticated.py#L140
jobs = travis.jobs(ids=build.job_ids)
job = travis.job(build.job_ids[0])
settings['job'] = {
'count': len(jobs),
'build_id': job.build_id,
'repository_id': job.repository_id,
'number': job.number,
'config': job.config,
'queue': job.queue,
'allow_failure': job.allow_failure,
'annotation_ids': job.annotation_ids
}
# repo
# https://github.com/menegazzo/travispy/blob/v0.3.4/travispy/_tests/test_not_authenticated.py#L252
# let's add fake stuff for now
settings['repo'] = {
"public_count": 25,
"member_count": 5,
"owner_count": 7,
"github_language": "Python",
"id": 2598876,
"description": "TravisPy test project for Python 2.7",
"active": True
}
return json.dumps(settings, indent=2)
print(create_travispy_test_settings(RDHYEE_GITHUB_TOKEN, 'rdhyee/hello-travis-ci'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: gitenberg for book metadata
Step2: changes to make in metadata file to initialize it
Step3: next step
Step4: create test parameters for travispy
|
6,461
|
<ASSISTANT_TASK:>
Python Code:
# add intercept=1 for x0
X = np.insert(raw_X, 0, values=np.ones(raw_X.shape[0]), axis=1)
X.shape
# y have 10 categories here. 1..10, they represent digit 0 as category 10 because matlab index start at 1
# I'll ditit 0, index 0 again
y_matrix = []
for k in range(1, 11):
y_matrix.append((raw_y == k).astype(int))
# last one is k==10, it's digit 0, bring it to the first position
y_matrix = [y_matrix[-1]] + y_matrix[:-1]
y = np.array(y_matrix)
y.shape
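The loop-built indicator matrix above can also be produced in one vectorized broadcast; the labels below are a small hypothetical sample standing in for `raw_y`.

```python
import numpy as np

# Vectorized one-hot construction on hypothetical labels in 1..10
# (with 10 standing for digit 0, as in the raw data).
raw_y = np.array([10, 1, 2, 10, 3])
labels = raw_y % 10                     # map category 10 -> digit 0
y = (labels[np.newaxis, :] == np.arange(10)[:, np.newaxis]).astype(int)
print(y.shape)  # (10, 5); row k is the indicator vector for digit k
```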
t0 = lr.logistic_regression(X, y[0])
print(t0.shape)
y_pred = lr.predict(X, t0)
print('Accuracy={}'.format(np.mean(y[0] == y_pred)))
k_theta = np.array([lr.logistic_regression(X, y[k]) for k in range(10)])
print(k_theta.shape)
prob_matrix = lr.sigmoid(X @ k_theta.T)
np.set_printoptions(suppress=True)
prob_matrix
y_pred = np.argmax(prob_matrix, axis=1)
y_answer = raw_y.copy()
y_answer[y_answer==10] = 0
print(classification_report(y_answer, y_pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: train 1 model
Step2: Is this real......
Step3: making prediction
|
6,462
|
<ASSISTANT_TASK:>
Python Code:
def ins_sort(k):
for i in range(1,len(k)): #since we want to swap an item with previous one, we start from 1
j = i #because we need 2 indexes as one will reduce and we do not want to affect i
while j > 0 and k[j] < k[j-1]: #j>0 since there is no value to the left of the 0 index
k[j], k[j-1] = k[j-1], k[j] # swap the items, if right one is smaller. (sometimes we use a temp variable)
j=j-1 #reduce j and repeat
return k
print(ins_sort([5,8,1,3,6,9]))
C = []
A = [1, 3]
B = [2, 4]
C.append(min(min(A), min(B)))
print(C)
A = [3]
B = [2, 4]
C.append(min(min(A), min(B)))
print(C)
A = [3]
B = [4]
C.append(min(min(A), min(B)))
print(C)
# nothing left in A, so just add what is left of B
C += B
print(C)
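The merge step walked through above — repeatedly taking the smaller head of two sorted lists — is exactly what the standard library's `heapq.merge` does lazily in linear time:

```python
import heapq

# One-call linear-time merge of two already-sorted lists.
A = [1, 3, 7]
B = [2, 4, 6, 8]
C = list(heapq.merge(A, B))
print(C)  # [1, 2, 3, 4, 6, 7, 8]
```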
def merge_sort(x):
result = []
if len(x) < 2:
return x
mid = int(len(x)/2) # calculate mid point
y = merge_sort(x[:mid]) #split up to mid point and sort
z = merge_sort(x[mid:]) #get the rest of the mid point and sort
i = 0
j = 0
while i < len(y) and j < len(z): # loop through both lists
if y[i] > z[j]: #if left is more than right, add to result and increment index
result.append(z[j])
j += 1
else: #if right is more than left, add to result and increment right index
result.append(y[i])
i += 1
result += y[i:] #we can assume this is sorted, so we add the rest of the lists based on the index
result += z[j:]
return result
print(merge_sort([5,8,1,3,6,9]))
print(sorted([5, 100, 7, 2, 8149, 23, 19, 473]))
import time
import random
small_sorted_list = [x for x in range(4)]
rev_small_sorted_list = small_sorted_list[::-1]
sorted_list = [x for x in range(1024)]
rev_sorted_list = sorted_list[::-1]
#Start timing
def time_to_sort(to_sort, print_string):
start = time.time()
to_sort.sort()
elapsed = time.time() - start
print(print_string.format(elapsed))
time_to_sort(small_sorted_list, "Sorted: {}")
time_to_sort(rev_small_sorted_list, "Reverse: {}")
time_to_sort([1,5,2,8,3], "Partially Sorted: {}")
time_to_sort([random.randint(0,1000) for r in range(1024)], "Random: {}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mergesort
Step2: sorted()
Step3: Python has sorting methods built into its standard library.
|
6,463
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import collections
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats
import igraph
sif_data = pandas.read_csv("shared/pathway_commons.sif",
sep="\t", names=["species1","interaction_type","species2"])
interaction_types_ppi = set(["interacts-with",
"in-complex-with"])
interac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)].copy()
boolean_vec = interac_ppi['species1'] > interac_ppi['species2']
interac_ppi.loc[boolean_vec, ['species1', 'species2']] = interac_ppi.loc[boolean_vec, ['species2', 'species1']].values
for rowtuple in interac_ppi.head().iterrows():
row = rowtuple[1]
rowid = rowtuple[0]
print(rowid)
    if row['species1'] > row['species2']:
        interac_ppi.loc[rowid, 'species1'] = row['species2']
        interac_ppi.loc[rowid, 'species2'] = row['species1']
type(interac_ppi.head())
for i in range(0, interac_ppi.shape[0]):
    if interac_ppi.iat[i, 0] > interac_ppi.iat[i, 2]:
        temp_name = interac_ppi.iat[i, 0]
        interac_ppi.iat[i, 0] = interac_ppi.iat[i, 2]
        interac_ppi.iat[i, 2] = temp_name
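The row-swap pattern used here can be sanity-checked on a toy frame with the same column names (the two-row data is made up for illustration):

```python
import pandas as pd

# Tiny check of the vectorized row-swap: enforce species1 < species2 per row.
df = pd.DataFrame({"species1": ["B", "A"], "species2": ["A", "C"]})
swap = df["species1"] > df["species2"]
df.loc[swap, ["species1", "species2"]] = df.loc[swap, ["species2", "species1"]].values
print(df)  # every row now satisfies species1 < species2
```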
interac_ppi_unique = interac_ppi[["species1","species2"]].drop_duplicates()
vertex_degrees_ctr = collections.Counter()
allproteins = interac_ppi_unique["species1"].tolist() + interac_ppi_unique["species2"].tolist()
for proteinname in allproteins:
vertex_degrees_ctr.update([proteinname])
vertex_degrees = list(vertex_degrees_ctr.values())
dict(list(dict(vertex_degrees_ctr).items())[0:9])
vertex_degrees[0:9]
nbins=30
hist_res = plt.hist(np.array(vertex_degrees), bins=nbins)
hist_counts = hist_res[0]
hist_breaks = hist_res[1]
kvals = 0.5*(hist_breaks[:-1] + hist_breaks[1:])  # midpoints of all nbins bins
kvals
hist_counts
plt.loglog(kvals[1:14],
hist_counts[1:14], "o")
plt.xlabel("k")
plt.ylabel("N(k)")
plt.gca().set_xlim([50, 2000])
plt.show()
scipy.stats.linregress(np.log10(kvals[0:3]), np.log10(hist_counts[0:3]))
jeong_slope = -6.5/(np.log(45)-np.log(2))
print("%.2f" % jeong_slope)
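Besides the log-log regression, the exponent can be estimated directly by maximum likelihood using the continuous-approximation formula gamma_hat = 1 + n / Σ ln(k_i / (k_min − 1/2)) from Clauset et al.; the degree sample below is a small hypothetical stand-in for the full `vertex_degrees` list.

```python
import numpy as np

# MLE power-law exponent (continuous approximation) on a hypothetical sample.
degrees = np.array([2, 3, 3, 4, 5, 7, 9, 12, 20, 45])
k_min = 2
gamma_hat = 1 + len(degrees) / np.sum(np.log(degrees / (k_min - 0.5)))
print("%.2f" % gamma_hat)
```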
g = igraph.Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)
xs, ys = zip(*[(left, count) for left, _, count in
g.degree_distribution().bins()])
plt.loglog(xs, ys)
plt.show()
igraph.statistics.power_law_fit(g.degree())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Since iterating is reasonably fast in Python, you could also do this using a for loop through all of the rows of the data frame, swapping species1 and species2 entries as needed (and in-place in the data frame) so that in the resulting data frame interac_ppi satisfies species1 < species2 for all rows.
Step5: Step 4
Step6: Step 5
Step7: Let's print out the vertex degrees of the first 10 vertices, in whatever the key order is. Pythonistas -- anyone know of a less convoluted way to do this?
Step8: Let's print out the first ten entries of the vertex_degrees list. Note that we don't expect it to be in the same order as the output from the previous command above, since dict changes the order in the above.
Step9: Step 6
Step10: Let's print the k values of the bin centers
Step11: Let's print the histogram bin counts
Step12: Step 7
Step13: Step 8
Step14: Slope is -1.87 with SE 0.084, i.e., gamma = 1.87 with a 95% CI of about +/- 0.17.
Step15: How close was your slope from the human PPI, to the slope for the yeast PPI from the Jeong et al. article?
|
6,464
|
<ASSISTANT_TASK:>
Python Code:
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Use a shorter segment of raw just for speed here
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(0, 120) # one minute for speed (looks similar to using all ~800 sec)
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, preload=True)
del raw
# Paths to forward operator and FreeSurfer subject directory
fname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')
freqs = np.logspace(np.log10(12), np.log10(30), 9)
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=-1, tmax=0, decim=20)
# ERS activity starts at 0.5 seconds after stimulus onset
csd_ers = csd_morlet(epochs, freqs, tmin=0.5, tmax=1.5, decim=20)
info = epochs.info
del epochs
csd = csd.mean()
csd_baseline = csd_baseline.mean()
csd_ers = csd_ers.mean()
fwd = mne.read_forward_solution(fname_fwd)
filters = make_dics(info, fwd, csd, noise_csd=csd_baseline,
pick_ori='max-power', reduce_rank=True)
del fwd
baseline_source_power, freqs = apply_dics_csd(csd_baseline, filters)
beta_source_power, freqs = apply_dics_csd(csd_ers, filters)
stc = beta_source_power / baseline_source_power
message = 'DICS source power in the 12-30 Hz frequency band'
brain = stc.plot(hemi='both', views='axial', subjects_dir=subjects_dir,
subject=subject, time_label=message)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the raw data and creating epochs
Step2: We are interested in the beta band. Define a range of frequencies, using a
Step3: Computing the cross-spectral density matrix for the beta frequency band, for
Step4: To compute the source power for a frequency band, rather than each frequency
Step5: Computing DICS spatial filters using the CSD that was computed on the entire
Step6: Applying DICS spatial filters separately to the CSD computed using the
Step7: Visualizing source power during ERS activity relative to the baseline power.
|
6,465
|
<ASSISTANT_TASK:>
Python Code:
F_1 = Matrix( [4,0] )
F_2 = Matrix( [5*cos(30*pi/180), 5*sin(30*pi/180) ] )
F_net = F_1 + F_2
F_net # in Newtons
F_net.evalf() # in Newtons
F_net.norm().evalf() # |F_net| in [N]
(atan2( F_net[1],F_net[0] )*180/pi).n() # angle in degrees
t, a, v_i, x_i = symbols('t a v_i x_i')
v = v_i + integrate(a, (t, 0,t) )
v
x = x_i + integrate(v, (t, 0,t) )
x
(v*v).expand()
((v*v).expand() - 2*a*x).simplify()
x_i = 20 # initial position
v_i = 10 # initial velocity
a = 5 # acceleration (constant during motion)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
x
x.subs({t:3}).n() # x(3) in [m]
diff(x,t).subs({t:3}).n() # v(3) in [m/s]
t, v_i, x_i, k = symbols('t v_i x_i k')
a = sqrt(k*t)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t, 0,t) )
x
x, y = symbols('x y')
m, g, k, h = symbols('m g k h')
F_g = -m*g # Force of gravity on mass m
U_g = - integrate( F_g, (y,0,h) )
U_g # Grav. potential energy
F_s = -k*x # Spring force for displacement x
U_s = - integrate( F_s, (x,0,x) )
U_s # Spring potential energy
t = Symbol('t') # time t
x = Function('x') # position function x(t)
w = Symbol('w', positive=True) # angular frequency w
sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
sol
x = sol.rhs
x
A, phi = symbols("A phi")
(A*cos(w*t - phi)).expand(trig=True)
x = sol.rhs.subs({"C1":0,"C2":A})
x
v = diff(x, t)
v
E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
E_T
E_T.subs({k:m*w**2}).simplify() # = K_max
E_T.subs({w:sqrt(k/m)}).simplify() # = U_max
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To express the answer in length-and-direction notation,
Step2: The net force on the object is $\vec{F}_{\textrm{net}}= 8.697\angle 16.7^\circ$[N].
Step3: You may remember these equations from your high school physics class.
Step4: The above calculation shows $v_f^2 - 2ax_f = -2ax_i + v_i^2$.
Step5: If you think about it,
Step6: Potential energy
Step7: Note the negative sign in the formula defining the potential energy.
Step8: Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t + \phi)$,
Step9: If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$,
|
6,466
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
print(np.sin(np.deg2rad(21)))
import numpy as np
stu1 = 80.0
stu2 = 90.0
stu3 = 66.5
ave = (stu1 + stu2 + stu3)/3
print("Student scores:")
print(stu1)
print(stu2)
print(stu3)
print("Average: %f" %ave)
n1, lef1 = divmod(32,5)
n2, lef2 = divmod(45,7)
n3, lef3 = divmod(51,6)
n = [n1,n2,n3]
lef = [lef1,lef2,lef3]
print("Number of students in each group:")
for i in range(0,3):
print('Class %d: %d' %(i+1,n[i]))
print("Number of students leftover:")
for i in range(0,3):
print('Class %d: %d' %(i+1,lef[i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2
Step2: Exercise 3
|
6,467
|
<ASSISTANT_TASK:>
Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("fF841G53fGo",width=640,height=360) # random numbers
# put your code here.
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("o_wzbAUZWQk",width=640,height=360) # functions
# put your code here.
# put your code here
from IPython.display import HTML
HTML("""
<iframe
	src="https://goo.gl/forms/rTmsyHG72q8pF0cT2?embedded=true"
	width="80%"
	height="1200px"
	frameborder="0"
	marginheight="0"
	marginwidth="0">
	Loading...
</iframe>
""")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some possibly useful links
Step2: Tutorial on functions in python
Step3: Question 3
Step5: Assignment wrapup
|
6,468
|
<ASSISTANT_TASK:>
Python Code:
import gammalib
import ctools
import cscripts
%matplotlib inline
import matplotlib.pyplot as plt
obsfile = 'obs_crab_selected.xml'
emin = 0.66
emax = 100.0
skymap = ctools.ctskymap()
skymap['inobs'] = obsfile
skymap['proj'] = 'CAR'
skymap['coordsys'] = 'CEL'
skymap['xref'] = 83.63
skymap['yref'] = 22.01
skymap['binsz'] = 0.02
skymap['nxpix'] = 200
skymap['nypix'] = 200
skymap['emin'] = emin
skymap['emax'] = emax
skymap['bkgsubtract'] = 'NONE'
skymap.run()
#module used to stretch the color palette
import matplotlib.colors as colors
ax = plt.subplot()
plt.imshow(skymap.skymap().array(),origin='lower',
extent=[83.63+0.02*200,83.63-0.02*200,22.01-0.02*200,22.01+0.02*200],
# boundaries of the coord grid
norm=colors.PowerNorm(gamma=0.5),
cmap=plt.get_cmap('jet')) # square root scale
ax.set_xlabel('R.A. (deg)')
ax.set_ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label('Counts')
srcmodel = 'crab.xml'
regfile = 'crab_sources.ds9'
srcdetect = cscripts.cssrcdetect(skymap.skymap().copy())
srcdetect['srcmodel'] = 'POINT'
srcdetect['bkgmodel'] = 'NONE' # we will determine the background model in a later step
srcdetect['threshold'] = 5
srcdetect['corr_rad'] = 0.1
srcdetect['outmodel'] = srcmodel
srcdetect['outds9file'] = regfile
srcdetect.execute()
models = gammalib.GModels(srcmodel)
print(models)
ax1 = plt.subplot()
plt.imshow(srcdetect._map.array(),origin='lower',
extent=[83.63+0.02*200,83.63-0.02*200,22.01-0.02*200,22.01+0.02*200],
# boundaries of the coord grid
cmap=plt.get_cmap('jet')) # square root scale
ax1.set_xlabel('R.A. (deg)')
ax1.set_ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label('Counts')
crab = models['Src001']
ax1.scatter(crab['RA'].value(),crab['DEC'].value(),
marker='x',c='white')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use the matplotlib package to display the results.
Step2: We will use the events selected in the previous step. Since the data correspond to observations of the Crab nebula we should centre the sky map on this source. The energy limits of the sky map should be set to the energy interval of the data (0.66 - 100 TeV).
Step3: We can visualise the skymap.
Step4: Based on the sky map you can now extract a model for all potentially significant sources in the data. You do this using the cssrcdetect script, and to extract only significant sources, you should set the detection threshold for example to 5 sigma. We will make a copy of the skymap since the detection process involves smoothing and we do not want to modify the original map.
Step5: Let's look at the resulting model. It contains a point source Src001 at the position of the Crab nebula.
Step6: We can overlay the position of the source we found to the smoothed skymap to make sure everything worked well.
|
6,469
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
baseball_dir = "lahman-csv_2015-01-24/"
salaries = pd.read_csv(baseball_dir + "Salaries.csv", sep=",")
batting = pd.read_csv(baseball_dir + "Batting.csv", sep=",")
batting.dropna(inplace=True)
batting.info()
total_salaries = salaries.groupby(["playerID"])["salary"].sum()
total_batting = batting.groupby(["playerID"])[["HR", 'HBP', 'G','stint']].sum()
all_stats = pd.concat((total_batting, total_salaries), axis=1)
all_stats = all_stats[(all_stats.HR > 0) & (all_stats.salary > 0)]
plt.figure(figsize=(12, 4))
plt.scatter(all_stats.HR, all_stats.salary, edgecolor="None",
s=5, c='k', alpha=0.2)
plt.yscale("log")
plt.xlabel("Home Runs", fontsize=12); plt.ylabel("Salary ($)", fontsize=12)
plt.minorticks_on()
plt.xlim(-50, 800)
plt.show()
from sklearn import linear_model
import sklearn.cross_validation as cv
kfolds = cv.KFold(len(all_stats), n_folds=10)
regressor = linear_model.LinearRegression()
Xvals = np.array(all_stats.HR)[:, np.newaxis]
yvals = np.array(all_stats.salary)
slopes, intercepts = [], []
for train_index, test_index in kfolds:
X_train, X_test = Xvals[train_index], Xvals[test_index]
y_train, y_test = yvals[train_index], yvals[test_index]
regressor.fit(X_train, y_train)
slopes.append(regressor.coef_)
intercepts.append(regressor.intercept_)
slope = np.mean(slopes)
intercept = np.mean(intercepts)
regressor.coef_ = slope
regressor.intercept_ = intercept
print("Our model is:\n\tSalary = %.2f x N_HomeRuns + %.2f" % (slope, intercept))
plt.figure(figsize=(12, 4))
plt.scatter(all_stats.HR, all_stats.salary, edgecolor="None",
s=5, c='k', alpha=0.2)
plt.scatter(Xvals, regressor.predict(Xvals), edgecolor="None",
s=2, c='r')
plt.yscale("log")
plt.xlabel("Home Runs", fontsize=12); plt.ylabel("Salary ($)", fontsize=12)
plt.minorticks_on()
plt.xlim(-50, 800)
plt.show()
print("Score: {0}".format(regressor.score(Xvals, yvals)))
N_folds = 10
kfolds = cv.KFold(len(all_stats), n_folds=N_folds)
regressor = linear_model.LinearRegression()
valid_data = ["HR", 'HBP', 'G', 'stint']
Xvals = np.array(all_stats[valid_data])
yvals = np.array(all_stats.salary)
coeffs, intercepts = [], []
for train_index, test_index in kfolds:
X_train, X_test = Xvals[train_index], Xvals[test_index]
y_train, y_test = yvals[train_index], yvals[test_index]
regressor.fit(X_train, y_train)
coeffs.append(regressor.coef_)
intercepts.append(regressor.intercept_)
coeffs = np.array(coeffs).mean(axis=0) #averages each column
intercept = np.array(intercepts).mean(axis=0)
regressor.coef_ = coeffs
regressor.intercept_ = intercept
print("Score: {0}".format(regressor.score(Xvals, yvals)))
fig = plt.figure(figsize=(12, 4))
fig.subplots_adjust(wspace=0)
ax = plt.subplot(121)
ax.scatter(all_stats.HR, all_stats.salary, edgecolor="None",
s=5, c='k', alpha=0.2)
ax.set_yscale("log")
ax.set_xlabel("Home Runs", fontsize=12); ax.set_ylabel("Salary ($)", fontsize=12)
ax.set_xlim(-50, 800); ax.minorticks_on()
ax = plt.subplot(122)
ax.scatter(Xvals[:, 1], regressor.predict(Xvals), edgecolor="None",
s=2, c='r')
ax.set_xlabel("Home Runs", fontsize=12)
ax.set_ylim(1E4, 1E9)
ax.set_yscale("log"); ax.set_yticklabels([])
ax.set_xlim(-50, 800); ax.minorticks_on()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We combine the two sheets by linking on player ID below, merging them into one giant table, then create a plot of all data points for batters who have hit at least one home run.
Step2: We are creating a linear regression univariate model to best fit our data points that takes the home runs as our 'x' input and salary as our 'y'.
Step3: Our r^2 value is .376 as seen below.
Step4: Multivariate Linear Regression Model
Step5: Using these four categories we were able to get an r^2 value of .414 as seen below.
Step6: Below gives us our model versus the actual data.
|
6,470
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
6,471
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed by this snippet (not shown in the original cell)
import numpy as np
import pandas as pd
import sklearn as sk
import sklearn.preprocessing
import statsmodels.formula.api as smapi

dfnum = pd.read_csv('transformed_numerical_dataset_imputed.csv', index_col=['Dataset','Id'])
dfnum.head()
dfcat = pd.read_csv('cleaned_categorical_vars_with_colz_sorted_by_goodness.csv', index_col=['Dataset','Id'])
dfcat.head()
df = pd.concat([dfnum, dfcat.iloc[:, :ncat]], axis=1)
df.shape
target = pd.read_csv('../data/train_target.csv')
scaler = sk.preprocessing.StandardScaler()
def transform_target(target):
logtarget = np.log1p(target / 1000)
return scaler.fit_transform(logtarget)
def inverse_transform_target(target_t):
logtarget = scaler.inverse_transform(target_t)
return np.expm1(logtarget) * 1000
target_t = transform_target(target)
# Test
assert all(target == inverse_transform_target(target_t))
data = df.loc['train',:].copy()
data['SalePrice'] = target_t
data.columns
desc = 'SalePrice' + \
' ~ ' + \
' + '.join(data.drop('SalePrice', axis=1).iloc[:, :-ncat]) + \
' + ' + \
' + '.join('C({})'.format(col) for col in data.drop('SalePrice', axis=1).iloc[:, -ncat:])
desc
regression2 = smapi.ols(desc, data=data).fit()
regression2.summary()
def get_data(X, y):
df = X.copy()
df['SalePrice'] = y
return df
def ols3(X, y):
data = get_data(X, y)
return smapi.ols(desc, data=data)
submission_t = regression2.predict(df.loc['test',:])
submission = inverse_transform_target(submission_t)
submission
def save(filename, submission):
df = pd.DataFrame(data={
"Id": np.arange(len(submission)) + 1461,
"SalePrice": submission
})
df.to_csv(filename, index=False)
save('ols_full_{}.csv'.format(ncat), submission)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recreate transformed (standardized) sale price
Step2: Ordinary Least Squares
Step3: As can be seen below, using more numerical values improves R-squared to 0.88 which is pretty good, though there's of course a risk of overfitting.
Step4: Cross validation
Step5: Make a submission
Step6: Scale the result
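The round trip described in these steps, log1p-transform then standardize on the way in, and the exact inverse on the way out, can be sketched without sklearn. A minimal sketch; the price values here are made up for illustration:

```python
import math

def transform(prices, mean, std):
    # log1p-scale (prices in thousands), then standardize
    return [(math.log1p(p / 1000) - mean) / std for p in prices]

def inverse_transform(scaled, mean, std):
    # undo standardization, then expm1 back to the original scale
    return [math.expm1(s * std + mean) * 1000 for s in scaled]

prices = [208500.0, 181500.0, 223500.0]
logs = [math.log1p(p / 1000) for p in prices]
mean = sum(logs) / len(logs)
std = (sum((x - mean) ** 2 for x in logs) / len(logs)) ** 0.5

recovered = inverse_transform(transform(prices, mean, std), mean, std)
print(recovered)
```

The round trip should recover the original prices up to floating-point error, which is what the `assert` in the notebook checks with its fitted `StandardScaler`.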
|
6,472
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sys
from casadi import *
# Add do_mpc to path. This is not necessary if it was installed via pip
sys.path.append('../../../')
# Import do_mpc package:
import do_mpc
import matplotlib.pyplot as plt
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
# States struct (optimization variables):
C_a = model.set_variable(var_type='_x', var_name='C_a', shape=(1,1))
C_b = model.set_variable(var_type='_x', var_name='C_b', shape=(1,1))
T_R = model.set_variable(var_type='_x', var_name='T_R', shape=(1,1))
T_K = model.set_variable(var_type='_x', var_name='T_K', shape=(1,1))
# Input struct (optimization variables):
F = model.set_variable(var_type='_u', var_name='F')
Q_dot = model.set_variable(var_type='_u', var_name='Q_dot')
# Certain parameters
K0_ab = 1.287e12 # K0 [h^-1]
K0_bc = 1.287e12 # K0 [h^-1]
K0_ad = 9.043e9 # K0 [l/mol.h]
R_gas = 8.3144621e-3 # Universal gas constant
E_A_ab = 9758.3*1.00 #* R_gas# [kj/mol]
E_A_bc = 9758.3*1.00 #* R_gas# [kj/mol]
E_A_ad = 8560.0*1.0 #* R_gas# [kj/mol]
H_R_ab = 4.2 # [kj/mol A]
H_R_bc = -11.0 # [kj/mol B] Exothermic
H_R_ad = -41.85 # [kj/mol A] Exothermic
Rou = 0.9342 # Density [kg/l]
Cp = 3.01 # Specific Heat capacity [kj/Kg.K]
Cp_k = 2.0 # Coolant heat capacity [kj/kg.k]
A_R = 0.215 # Area of reactor wall [m^2]
V_R = 10.01 #0.01 # Volume of reactor [l]
m_k = 5.0 # Coolant mass[kg]
T_in = 130.0 # Temp of inflow [Celsius]
K_w = 4032.0 # [kj/h.m^2.K]
C_A0 = (5.7+4.5)/2.0*1.0 # Concentration of A in input Upper bound 5.7 lower bound 4.5 [mol/l]
# Uncertain parameters:
alpha = model.set_variable(var_type='_p', var_name='alpha')
beta = model.set_variable(var_type='_p', var_name='beta')
# Auxiliary terms
K_1 = beta * K0_ab * exp((-E_A_ab)/((T_R+273.15)))
K_2 = K0_bc * exp((-E_A_bc)/((T_R+273.15)))
K_3 = K0_ad * exp((-alpha*E_A_ad)/((T_R+273.15)))
T_dif = model.set_expression(expr_name='T_dif', expr=T_R-T_K)
model.set_rhs('C_a', F*(C_A0 - C_a) -K_1*C_a - K_3*(C_a**2))
model.set_rhs('C_b', -F*C_b + K_1*C_a - K_2*C_b)
model.set_rhs('T_R', ((K_1*C_a*H_R_ab + K_2*C_b*H_R_bc + K_3*(C_a**2)*H_R_ad)/(-Rou*Cp)) + F*(T_in-T_R) +(((K_w*A_R)*(-T_dif))/(Rou*Cp*V_R)))
model.set_rhs('T_K', (Q_dot + K_w*A_R*(T_dif))/(m_k*Cp_k))
# Build the model
model.setup()
mpc = do_mpc.controller.MPC(model)
setup_mpc = {
'n_horizon': 20,
'n_robust': 1,
'open_loop': 0,
't_step': 0.005,
'state_discretization': 'collocation',
'collocation_type': 'radau',
'collocation_deg': 2,
'collocation_ni': 2,
'store_full_solution': True,
# Use MA27 linear solver in ipopt for faster calculations:
#'nlpsol_opts': {'ipopt.linear_solver': 'MA27'}
}
mpc.set_param(**setup_mpc)
mpc.scaling['_x', 'T_R'] = 100
mpc.scaling['_x', 'T_K'] = 100
mpc.scaling['_u', 'Q_dot'] = 2000
mpc.scaling['_u', 'F'] = 100
_x = model.x
mterm = (_x['C_b'] - 0.6)**2 # terminal cost
lterm = (_x['C_b'] - 0.6)**2 # stage cost
mpc.set_objective(mterm=mterm, lterm=lterm)
mpc.set_rterm(F=0.1, Q_dot = 1e-3) # input penalty
# lower bounds of the states
mpc.bounds['lower', '_x', 'C_a'] = 0.1
mpc.bounds['lower', '_x', 'C_b'] = 0.1
mpc.bounds['lower', '_x', 'T_R'] = 50
mpc.bounds['lower', '_x', 'T_K'] = 50
# upper bounds of the states
mpc.bounds['upper', '_x', 'C_a'] = 2
mpc.bounds['upper', '_x', 'C_b'] = 2
mpc.bounds['upper', '_x', 'T_K'] = 140
# lower bounds of the inputs
mpc.bounds['lower', '_u', 'F'] = 5
mpc.bounds['lower', '_u', 'Q_dot'] = -8500
# upper bounds of the inputs
mpc.bounds['upper', '_u', 'F'] = 100
mpc.bounds['upper', '_u', 'Q_dot'] = 0.0
mpc.set_nl_cons('T_R', _x['T_R'], ub=140, soft_constraint=True, penalty_term_cons=1e2)
alpha_var = np.array([1., 1.05, 0.95])
beta_var = np.array([1., 1.1, 0.9])
mpc.set_uncertainty_values(alpha = alpha_var, beta = beta_var)
mpc.setup()
estimator = do_mpc.estimator.StateFeedback(model)
simulator = do_mpc.simulator.Simulator(model)
params_simulator = {
'integration_tool': 'cvodes',
'abstol': 1e-10,
'reltol': 1e-10,
't_step': 0.005
}
simulator.set_param(**params_simulator)
p_num = simulator.get_p_template()
tvp_num = simulator.get_tvp_template()
# function for time-varying parameters
def tvp_fun(t_now):
return tvp_num
# uncertain parameters
p_num['alpha'] = 1
p_num['beta'] = 1
def p_fun(t_now):
return p_num
simulator.set_tvp_fun(tvp_fun)
simulator.set_p_fun(p_fun)
simulator.setup()
# Set the initial state of mpc, simulator and estimator:
C_a_0 = 0.8 # This is the initial concentration inside the tank [mol/l]
C_b_0 = 0.5 # This is the controlled variable [mol/l]
T_R_0 = 134.14 #[C]
T_K_0 = 130.0 #[C]
x0 = np.array([C_a_0, C_b_0, T_R_0, T_K_0]).reshape(-1,1)
mpc.x0 = x0
simulator.x0 = x0
estimator.x0 = x0
mpc.set_initial_guess()
%%capture
for k in range(50):
u0 = mpc.make_step(x0)
y_next = simulator.make_step(u0)
x0 = estimator.make_step(y_next)
mpc_graphics = do_mpc.graphics.Graphics(mpc.data)
from matplotlib import rcParams
rcParams['axes.grid'] = True
rcParams['font.size'] = 18
%%capture
fig, ax = plt.subplots(5, sharex=True, figsize=(16,12))
# Configure plot:
mpc_graphics.add_line(var_type='_x', var_name='C_a', axis=ax[0])
mpc_graphics.add_line(var_type='_x', var_name='C_b', axis=ax[0])
mpc_graphics.add_line(var_type='_x', var_name='T_R', axis=ax[1])
mpc_graphics.add_line(var_type='_x', var_name='T_K', axis=ax[1])
mpc_graphics.add_line(var_type='_aux', var_name='T_dif', axis=ax[2])
mpc_graphics.add_line(var_type='_u', var_name='Q_dot', axis=ax[3])
mpc_graphics.add_line(var_type='_u', var_name='F', axis=ax[4])
ax[0].set_ylabel('c [mol/l]')
ax[1].set_ylabel('T [K]')
ax[2].set_ylabel(r'$\Delta$ T [K]')
ax[3].set_ylabel('Q [kW]')
ax[4].set_ylabel('Flow [l/h]')
ax[4].set_xlabel('time [h]')
# Update properties for all prediction lines:
for line_i in mpc_graphics.pred_lines.full:
line_i.set_linewidth(2)
# Highlight nominal case:
for line_i in np.sum(mpc_graphics.pred_lines['_x', :, :,0]):
line_i.set_linewidth(5)
for line_i in np.sum(mpc_graphics.pred_lines['_u', :, :,0]):
line_i.set_linewidth(5)
for line_i in np.sum(mpc_graphics.pred_lines['_aux', :, :,0]):
line_i.set_linewidth(5)
# Add labels
label_lines = mpc_graphics.result_lines['_x', 'C_a']+mpc_graphics.result_lines['_x', 'C_b']
ax[0].legend(label_lines, ['C_a', 'C_b'])
label_lines = mpc_graphics.result_lines['_x', 'T_R']+mpc_graphics.result_lines['_x', 'T_K']
ax[1].legend(label_lines, ['T_R', 'T_K'])
fig.align_ylabels()
from matplotlib.animation import FuncAnimation, ImageMagickWriter
def update(t_ind):
print('Writing frame: {}.'.format(t_ind), end='\r')
mpc_graphics.plot_results(t_ind=t_ind)
mpc_graphics.plot_predictions(t_ind=t_ind)
mpc_graphics.reset_axes()
lines = mpc_graphics.result_lines.full
return lines
n_steps = mpc.data['_time'].shape[0]
anim = FuncAnimation(fig, update, frames=n_steps, blit=True)
gif_writer = ImageMagickWriter(fps=5)
anim.save('anim_CSTR.gif', writer=gif_writer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model
Step2: States and control inputs
Step3: The control inputs are the feed $F$ and the heat flow $\dot{Q}$
Step4: ODE and parameters
Step5: In the next step, we formulate the $k_i$-s
Step6: Additionally, we define an artificial variable of interest, that is not a state of the system, but will be later used for plotting
Step7: With the help of the $k_i$-s and $T_{\text{dif}}$ we can define the ODEs
Step8: Finally, the model setup is completed
Step9: Controller
Step10: We choose the prediction horizon n_horizon and set the robust horizon n_robust to 1. The time step t_step is set to 0.005 and the parameters of the applied discretization scheme, orthogonal collocation, are as seen below
Step11: Because the magnitude of the states and inputs is very different, we introduce scaling factors
Step12: Objective
Step13: Constraints
Step14: If a constraint is not critical, it is possible to implement it as a soft constraint.
Step15: Uncertain values
Step16: This means with n_robust=1, that 9 different scenarios are considered.
Step17: Estimator
Step18: Simulator
Step19: For the simulation, we use the same time step t_step as for the optimizer
Step20: Realizations of uncertain parameters
Step21: We define two functions which are called in each simulation step, which return the current realizations of the parameters, with respect to defined inputs (in this case t_now)
Step22: These two custom functions are used in the simulation via
Step23: By defining p_fun as above, the function will always return the value 1.0 for both $\alpha$ and $\beta$.
Step24: Closed-loop simulation
Step25: Now, we simulate the closed-loop for 50 steps (and suppress the output of the cell with the magic command %%capture)
Step26: Animating the results
Step27: We quickly configure Matplotlib.
Step28: We then create a figure, configure which lines to plot on which axis and add labels.
Step29: Some "cosmetic" modifications are easily achieved with the structure pred_lines and result_lines.
Step30: After importing the necessary package
Step31: We obtain the animation with
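The reaction-rate terms $K_i$ in the model above follow an Arrhenius law. A minimal standalone sketch of that computation, reusing the same constants as the model (the temperatures chosen below are just illustrative):

```python
import math

K0_ab = 1.287e12   # pre-exponential factor [h^-1]
E_A_ab = 9758.3    # activation-energy term as used in the model's exponent

def k_arrhenius(T_celsius, K0=K0_ab, E_A=E_A_ab):
    # rate constant at reactor temperature T (in Celsius),
    # matching the exponent form K0 * exp(-E_A / (T + 273.15)) used above
    return K0 * math.exp(-E_A / (T_celsius + 273.15))

# the rate grows quickly with temperature, which is why T_R must be bounded
print(k_arrhenius(120.0), k_arrhenius(140.0))
```

This steep temperature dependence is the reason the controller keeps the soft upper bound on `T_R`.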
|
6,473
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x_dists = np.array([[ 0, -1, -2],
[ 1, 0, -1],
[ 2, 1, 0]])
y_dists = np.array([[ 0, -1, -2],
[ 1, 0, -1],
[ 2, 1, 0]])
dists = np.vstack(([x_dists.T], [y_dists.T])).T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
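A quick shape check clarifies what the stacking in the code above produces: `dists[i, j]` holds the (x, y) displacement between points j and i. A minimal sketch re-deriving the same arrays from assumed 1-D coordinates:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])            # assumed point coordinates
x_dists = xs[:, None] - xs[None, :]       # pairwise x displacements
y_dists = xs[:, None] - xs[None, :]       # same points reused for y
dists = np.vstack(([x_dists.T], [y_dists.T])).T

print(dists.shape)    # one (x, y) pair per ordered point pair
print(dists[2, 0])    # displacement between points 2 and 0
```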
|
6,474
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import json
import time
import pickle
import requests
from io import BytesIO
from zipfile import ZipFile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction import text
import seaborn as sns
sns.set(font_scale=1.5)
URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
SMS_PATH = os.path.join('datasets', 'sms')
response = requests.get(URL)
zipfile = ZipFile(BytesIO(response.content))
zip_names = zipfile.namelist()
def fetch_data():
for file in zip_names:
if not os.path.isdir(SMS_PATH):
os.makedirs(SMS_PATH)
outpath = os.path.join(SMS_PATH, file)
extracted_file = zipfile.read(file)
with open(outpath, 'wb') as f:
f.write(extracted_file)
return outpath
DATA = fetch_data()
df = pd.read_csv(DATA, sep='\t', header=None)
df.columns = ['Label', 'Text']
pd.set_option('max_colwidth', 220)
df.head(20)
df.describe()
df.info()
# Add a field to our dataframe with the length of each message.
df['Length'] = df['Text'].apply(len)
df.head()
df.groupby('Label').describe()
df.Length.plot(bins=100, kind='hist')
df.hist(column='Length', by='Label', bins=50, figsize=(10,4))
text_data = df['Text']
text_data.shape
# Give our target labels numbers.
df['Label_'] = df['Label'].map({'ham': 0, 'spam': 1})
#stop_words = text.ENGLISH_STOP_WORDS
#Adding stop words did not significantly improve the model.
#textWithoutNums = text_data.replace('\d+', 'NUM_', regex=True)
#Removing all of the numbers in the messages and replacing with a text string did not improve the model either.
vectorizer = CountVectorizer(analyzer='word') #, stop_words=stop_words)
#vectorizer.fit(textWithoutNums)
vectorizer.fit(text_data)
vectorizer.get_feature_names()
pd.DataFrame.from_dict(vectorizer.vocabulary_, orient='index').sort_values(by=0, ascending=False).head()
dtm = vectorizer.transform(text_data)
features = pd.DataFrame(dtm.toarray(), columns=vectorizer.get_feature_names())
features.shape
features.head()
X = features
y = np.array(df['Label_'].tolist())
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
model = MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
model.fit(X_train, y_train)
y_pred_class = model.predict(X_test)
print(metrics.classification_report(y_test, y_pred_class))
print('Accuracy Score: ', metrics.accuracy_score(y_test, y_pred_class))
from yellowbrick.classifier import ClassificationReport
bayes = MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
visualizer = ClassificationReport(bayes, classes=['ham', 'spam'])
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_class)
sns.set(font_scale=1.5)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, fmt='g', cbar=False)
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Ham', 'Spam'])
ax.yaxis.set_ticklabels(['Ham', 'Spam'])
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
# Split the dataset in two equal parts
X_train_, X_test_, y_train_, y_test_ = train_test_split(
X, y, test_size=0.5, random_state=1)
# Set the parameters by cross-validation
tuned_parameters = [{'alpha': [0.5, 1.0, 1.5, 2.0, 2.5, 3.0], 'class_prior':[None], 'fit_prior': [True, False]}]
scores = ['precision', 'recall']
for score in scores:
print("### Tuning hyper-parameters for %s ###" % score)
print()
clf = GridSearchCV(MultinomialNB(), tuned_parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train_, y_train_)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test_, clf.predict(X_test_)
print(classification_report(y_true, y_pred))
print()
print('Accuracy Score: ', metrics.accuracy_score(y_true, y_pred))
print()
from yellowbrick.classifier import ClassificationReport
bayes = MultinomialNB(alpha=3.0, class_prior=None, fit_prior=True)
visualizer = ClassificationReport(bayes, classes=['ham', 'spam'])
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
model = MultinomialNB(alpha=3.0, class_prior=None, fit_prior=True)
model.fit(X_train, y_train)
y_pred_class = model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_class)
sns.set(font_scale=1.5)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, fmt='g', cbar=False)
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Ham', 'Spam'])
ax.yaxis.set_ticklabels(['Ham', 'Spam'])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data from the UCI Machine Learning Repository
Step2: Data Exploration
Step3: Since the data is labeled for us, we can do further data exploration by taking a look at how spam and ham differ.
Step4: In addition to the difference in the number of ham vs. spam messages, it appears that spam messages are generally longer than ham messages and more normally distributed than ham messages.
Step5: Define the feature set through vectorization.
Step6: Using Yellowbrick
Step7: Using the default settings for our model does a pretty good job predicting spam and ham, although not perfectly. The confusion matrix shows us that there are 12 misclassifications (5 actual spam messages predicted as ham, and 7 actual ham messages predicted as spam).
Step8: Since we are more concerned with minimizing the false positives especially with ham classified as spam, we will use an alpha value of 3.0 with fit_prior = True.
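The document-term matrix built by CountVectorizer in this notebook is conceptually just token counting. A minimal pure-Python sketch of the same idea, on toy messages rather than the SMS data:

```python
import re
from collections import Counter

def vectorize(docs):
    # build a sorted vocabulary and per-document token counts,
    # roughly what CountVectorizer(analyzer='word') does by default
    tokenized = [re.findall(r"\b\w\w+\b", d.lower()) for d in docs]
    vocab = sorted(set(tok for doc in tokenized for tok in doc))
    rows = []
    for doc in tokenized:
        counts = Counter(doc)
        rows.append([counts.get(tok, 0) for tok in vocab])
    return vocab, rows

vocab, dtm = vectorize(["free prize now", "call me now now"])
print(vocab)
print(dtm)
```

Each row of `dtm` is then a feature vector like the rows of the `features` DataFrame above.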
|
6,475
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.integrate import odeint
from matplotlib import rc
import matplotlib.pyplot as plt
%matplotlib inline
rc("text", usetex=True)
rc("font", size=18)
rc("figure", figsize=(6,4))
rc("axes", grid=True)
# Problem constants:
M1 = 3
M2 = 3
g = 9.81
# Initial conditions of the problem:
r0 = 2
r_punto0 = 0
tita0 = 0
tita_punto0 = 1
C1 = (M2*g)/(M1+M2) # Define useful constants
C2 = (M1)/(M1+M2)
cond_iniciales = [r0, r_punto0, tita0, tita_punto0]
def derivada(X, t, c1, c2): # this would be the f in the case { x' = f(x,t) }
r, r_punto, tita, tita_punto = X
deriv = [0, 0, 0, 0] # like the column vector above, but laid out as a row
deriv[0] = r_punto # derivative of r
deriv[1] = -c1 + c2*r*(tita_punto)**2 # r double dot
deriv[2] = tita_punto # derivative of tita
deriv[3] = -2*r_punto*tita_punto/r
return deriv
def resuelvo_sistema(m1, m2, tmax = 20):
t0 = 0
c1 = (m2*g)/(m1+m2) # Define useful constants
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.001)
# here we could define our own integration algorithm
# or use the ready-made one that comes with scipy.
# Careful, it's not perfect, sometimes it's better to write it yourself
out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))
return [t, out.T]
t, (r, rp, tita, titap) = resuelvo_sistema(M1, M2, tmax=10)
plt.figure()
plt.plot(t, r/r0, 'r')
plt.ylabel(r"$r / r_0$")
plt.xlabel(r"tiempo")
# plt.savefig("directorio/r_vs_t.pdf", dpi=300)
plt.figure()
plt.plot(t, tita-tita0, 'b')
plt.ylabel(r"$\theta - \theta_0$")
plt.xlabel(r"tiempo")
# plt.savefig("directorio/tita_vs_t.pdf", dpi=300)
plt.figure()
plt.plot(r*np.cos(tita-tita0)/r0, r*np.sin(tita-tita0)/r0, 'g')
plt.ylabel(r"$r/r_0\ \sin\left(\theta - \theta_0\right)$")
plt.xlabel(r"$r/r_0\ \cos\left(\theta - \theta_0\right)$")
# plt.savefig("directorio/trayectoria.pdf", dpi=300)
from scipy.integrate import solve_ivp
def resuelvo_sistema(m1, m2, tmax = 20, metodo='RK45'):
t0 = 0
c1 = (m2*g)/(m1+m2) # Define useful constants
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.001)
# here I use lambda functions, just so we can reuse
# the same function we defined before. But since now
# I'm going to use another integration function (not odeint)
# that asks for the function in a different form: instead of
# f(x,t) this one asks for f(t, x), so we just have to swap
# the parameters and nothing more...
deriv_bis = lambda t, x: derivada(x, t, c1, c2)
out = solve_ivp(fun=deriv_bis, t_span=(t0, tmax), y0=cond_iniciales,\
method=metodo, t_eval=t)
return out
# Here I build two arrays, one with the possible methods and another with colors
all_metodos = ['RK45', 'RK23', 'Radau', 'BDF', 'LSODA']
all_colores = ['r', 'b', 'm', 'g', 'c']
# Here's the neat way to loop over two arrays in parallel
for met, col in zip(all_metodos, all_colores):
result = resuelvo_sistema(M1, M2, tmax=30, metodo=met)
t = result.t
r, rp, tita, titap = result.y
plt.plot(t, r/r0, col, label=met)
plt.xlabel("tiempo")
plt.ylabel(r"$r / r_0$")
plt.legend(loc=3)
from matplotlib import animation
%matplotlib notebook
result = resuelvo_sistema(M1, M2, tmax=30, metodo='Radau')
t = result.t
r, rp, tita, titap = result.y
fig, ax = plt.subplots()
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0, 'm', lw=0.2)
line, = ax.plot([], [], 'ko', ms=5)
N_SKIP = 50
N_FRAMES = int(len(r)/N_SKIP)
def animate(frame_no):
i = frame_no*N_SKIP
r_i = r[i]/r0
tita_i = tita[i]
line.set_data(r_i*np.cos(tita_i), r_i*np.sin(tita_i))
return line,
anim = animation.FuncAnimation(fig, animate, frames=N_FRAMES,
interval=50, blit=False)
from ipywidgets import interactive, interact, FloatProgress
from IPython.display import clear_output, display
%matplotlib inline
@interact(m1=(0,5,0.5), m2=(0,5,0.5), tmax=(0.01,20,0.5)) # Lets you change the equation's parameters
def resuelvo_sistema(m1, m2, tmax = 20):
t0 = 0
c1 = (m2*g)/(m1+m2) # Define useful constants
c2 = (m1)/(m1+m2)
t = np.arange(t0, tmax, 0.05)
# out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))
r, rp, tita, titap = odeint(derivada, cond_iniciales, t, args=(c1, c2,)).T
plt.xlim((-1,1))
plt.ylim((-1,1))
plt.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0,'b-')
# plt.xlabel("tiempo")
# plt.ylabel(r"$r / r_0$")
# plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Physical problem
Step2: All looking very nice!!
Step3: See how the different methods modify the $r(t)$ curve more and more as the integration steps go by. Homework for you: run the same code checking conservation of energy.
Step4: Remember that this animation will not stop on its own; we know that watching it leaves you in a kind of mystical trance, but remember to stop it once enough time has passed
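The note in the code about writing your own integration algorithm can be made concrete. A minimal fixed-step RK4 sketch for a first-order system x' = f(x, t), tested here on exponential decay rather than the Atwood-machine equations:

```python
def rk4_step(f, x, t, h):
    # classic 4th-order Runge-Kutta step for x' = f(x, t)
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, t0, t1, n):
    # march n fixed steps from t0 to t1
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x = rk4_step(f, x, t, h)
        t += h
    return x

# sanity check on x' = -x, whose exact solution at t=1 is exp(-1)
x_end = integrate(lambda x, t: -x, 1.0, 0.0, 1.0, 100)
print(x_end)
```

For the actual problem one would pass a vectorized `derivada` (e.g. with numpy arrays) in place of the scalar lambda.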
|
6,476
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
batch_size = 128 # number of samples to include in each mini-batch
nb_classes = 10 # there are ten digit classes in the MNIST data set
nb_epoch = 10 # number of epochs to train for
img_rows, img_cols = 28, 28 # input image dimensions
nb_filters = 32 # number of convolutional filters to use
pool_size = (2, 2) # size of pooling area for max pooling
kernel_size = (3, 3) # convolution kernel size
(X_train, y_train), (X_test, y_test) = mnist.load_data()
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
y_train
Y_train
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='valid', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure Model
Step2: Load data, shuffle it, and split between test and training sets
Step3: Convert class vectors to binary class matrices
Step4: Build Model
|
6,477
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.graph_width = 320
sn.graph_height = 180
g = sn.load_graph('3-bellman.gml', has_pos=True)
for n, m in g.edges():
g.edge[n][m]['label'] = g.edge[n][m]['c']
sn.show_graph(g, elab=True)
from math import inf, isinf
s = 0
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
for i in range(g.number_of_nodes() - 1):
for n, m in g.edges():
d = g.node[n]['d'] + g.edge[n][m]['c']
if g.node[m]['d'] > d:
g.node[m]['d'] = d
for n in g.nodes():
print('distance from {}: {}'.format(n, g.node[n]['d']))
from math import inf, isinf
def snapshot(g, frames):
for n in g.nodes():
if isinf(g.node[n]['d']):
g.node[n]['label'] = '∞'
else:
g.node[n]['label'] = str(g.node[n]['d'])
frame = sn.generate_frame(g, nlab=True)
frames.append(frame)
red = (255, 0, 0)
blue = (0, 0, 255)
frames = []
s = 0
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
sn.reset_node_colors(g)
sn.reset_edge_colors(g)
snapshot(g, frames)
for i in range(g.number_of_nodes() - 1):
for n, m in g.edges():
d = g.node[n]['d'] + g.edge[n][m]['c']
g.edge[n][m]['color'] = red
# snapshot(g, frames)
if g.node[m]['d'] > d:
g.node[m]['d'] = d
g.edge[n][m]['color'] = blue
snapshot(g, frames)
g.edge[n][m]['color'] = sn.edge_color
snapshot(g, frames)
sn.reset_edge_colors(g)
sn.show_animation(frames)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's configure the visual properties
Step2: Finally, let's load and visualize a graph
Step3: Minimum-cost walks
Step4: However, this demonstration depends on certain hypotheses about the graph. I have one piece of bad news and one piece of good news
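The relaxation loop used above can be expressed without the graph library. A minimal Bellman-Ford sketch on an edge list with costs, using a toy graph rather than the one loaded from the .gml file:

```python
INF = float('inf')

def bellman_ford(edges, n_nodes, source):
    # relax every edge n_nodes - 1 times; d[n] converges to the
    # minimum cost of a walk from source to n (assuming no negative cycles)
    d = [INF] * n_nodes
    d[source] = 0
    for _ in range(n_nodes - 1):
        for u, v, c in edges:
            if d[u] + c < d[v]:
                d[v] = d[u] + c
    return d

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(edges, 4, 0))
```

The animated version in the notebook is exactly this loop, with snapshots taken each time an edge is inspected or relaxed.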
|
6,478
|
<ASSISTANT_TASK:>
Python Code:
from lea import *
# the canonical random variable : a fair coin
faircoin = Lea.fromVals('Head', 'Tail')
# toss the coin a few times
faircoin.random(10)
# Amitabh Bachan's coin from Sholay
sholaycoin = Lea.fromVals('Head', 'Head')
# Amitabh always wins (and, heroically, sacrifices himself for Dharamendra!)
sholaycoin.random(10)
# more reasonably, a biased coin
biasedcoin = Lea.fromValFreqs(('Head', 1), ('Tail', 2))
# toss it a few times
biasedcoin.random(10)
# random variables with more states : a fair die
die = Lea.fromVals(1, 2, 3, 4, 5, 6)
# throw the die a few times
die.random(20)
# Lea does standard statistics
# die.mean
# die.mode
# die.var
# die.entropy
# Let's create a pair of dice
die1 = die.clone()
die2 = die.clone()
# The throw of dice
dice = die1 + die2
dice
dice.random(10)
dice.mean
dice.mode
print(dice.histo())
## We can create a new distribution, conditioned on our state of knowledge : P(sum | sum <= 6)
conditionalDice = dice.given(dice<=6)
## What is our best guess for the result of the throw ?
conditionalDice.mode
## Conditioning can be done in many ways : suppose we know that the first die came up 3.
dice.given(die1 == 3)
## Conditioning can be done in still more ways : suppose we know that **either** of the two dice came up 3
dice.given((die1 == 3) | (die2 == 3))
# Species is a random variable with states "common" and "rare", with probabilities determined by the population. Since
# there are only two states, species states are, equivalently, "rare" and "not rare". Species can be a Boolean!
rare = Lea.boolProb(1,1000)
# Similarly, pattern is either "present" or "not present". It too is a Boolean, but, its probability distribution
# is conditioned on "rare" or "not rare"
patternIfrare = Lea.boolProb(98, 100)
patternIfNotrare = Lea.boolProb(5, 100)
# Now, lets build the conditional probability table for P(pattern | species)
pattern = Lea.buildCPT((rare , patternIfrare), ( ~rare , patternIfNotrare))
# Sanity check : do we get what we put in ?
pattern.given(rare)
# Finally, our moment of truth : Bayesian inference - what is P(rare | pattern )?
rare.given(pattern)
# And, now some show off : what is the probability of being rare and having a pattern ?
rare & pattern
# All possible outcomes
Lea.cprod(rare,pattern)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summary
Step2: Summary
Step3: Summary
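The inference `rare.given(pattern)` computed above is Bayes' theorem. The same posterior by hand, using the same numbers as the Lea distributions:

```python
p_rare = 1 / 1000
p_pattern_given_rare = 98 / 100
p_pattern_given_common = 5 / 100

# total probability of observing the pattern
p_pattern = (p_rare * p_pattern_given_rare
             + (1 - p_rare) * p_pattern_given_common)

# Bayes' theorem: P(rare | pattern)
p_rare_given_pattern = p_rare * p_pattern_given_rare / p_pattern
print(p_rare_given_pattern)
```

Despite the pattern being a strong indicator (98% vs. 5%), the posterior stays small because rare species are so uncommon in the first place.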
|
6,479
|
<ASSISTANT_TASK:>
Python Code:
from pomegranate import *
%pylab inline
d1 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
d2 = ConditionalProbabilityTable([['A', 'A', 0.10],
['A', 'C', 0.50],
['A', 'G', 0.30],
['A', 'T', 0.10],
['C', 'A', 0.10],
['C', 'C', 0.40],
['C', 'T', 0.40],
['C', 'G', 0.10],
['G', 'A', 0.05],
['G', 'C', 0.45],
['G', 'G', 0.45],
['G', 'T', 0.05],
['T', 'A', 0.20],
['T', 'C', 0.30],
['T', 'G', 0.30],
['T', 'T', 0.20]], [d1])
clf = MarkovChain([d1, d2])
clf.log_probability( list('CAGCATCAGT') )
clf.log_probability( list('C') )
clf.log_probability( list('CACATCACGACTAATGATAAT') )
clf.fit( list(map( list, ('CAGCATCAGT', 'C', 'ATATAGAGATAAGCT', 'GCGCAAGT', 'GCATTGC', 'CACATCACGACTAATGATAAT') )) )
print(clf.log_probability( list('CAGCATCAGT') ))
print(clf.log_probability( list('C') ))
print(clf.log_probability( list('CACATCACGACTAATGATAAT') ))
print(clf.distributions[0])
print(clf.distributions[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Step2: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on.
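For a first-order chain, the log-probability pomegranate computes is just the initial-state log-probability plus the summed transition log-probabilities. A minimal sketch using a subset of the numbers from the distributions above:

```python
import math

initial = {'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10}
# subset of the conditional probability table above, enough for short examples
transition = {('C', 'A'): 0.10, ('A', 'G'): 0.30, ('G', 'C'): 0.45}

def log_probability(seq):
    # log P(s0) + sum_i log P(s_{i+1} | s_i)
    logp = math.log(initial[seq[0]])
    for prev, cur in zip(seq, seq[1:]):
        logp += math.log(transition[(prev, cur)])
    return logp

print(log_probability('CA'))   # log(0.40 * 0.10)
```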
|
6,480
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors
df = pd.read_csv('datasets/exam_dataset1.csv', encoding='utf-8')
n_neighbors = 5
X = np.array(df[['exam1','exam2']])
y = np.array(df[['admission']]).ravel()
h = .02 # step size in the mesh
# # Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
print(clf.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import linear_model
df = pd.read_csv('datasets/exam_dataset1.csv', encoding='utf-8')
X = np.array(df[['exam1','exam2']])
y = np.array(df[['admission']]).ravel()
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X, y)
print(logreg.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Exam 1')
plt.ylabel('Exam 2')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 1
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
print(clf.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X, Y)
print(logreg.score(X, Y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
logreg.coef_
logreg.intercept_
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors
from matplotlib.colors import ListedColormap
from sklearn import linear_model
df_reg = pd.read_csv('datasets/example2.csv', encoding='utf-8')
X = np.array(df_reg[['x']])
y = np.array(df_reg[['y']]).ravel()
# X = np.array(df_reg[['x1','x2']])
# y = np.array(df_reg[['label']]).ravel()
plt.scatter(X,y)
plt.show()
X.shape
df_reg["x_2"] = df_reg["x"]**2
df_reg["x_3"] = df_reg["x"]**3
df_reg["x_4"] = df_reg["x"]**4
X = np.array(df_reg[['x','x_2','x_3','x_4']])
reg = linear_model.Ridge(alpha=100)
# we create an instance of Neighbours Classifier and fit the data.
reg.fit(X, y)
print(reg.score(X,y))
x_line = np.linspace(0,8,100)
x_line = np.array([x_line,x_line**2,x_line**3,x_line**4]).T
y_line = reg.predict(x_line)
reg.intercept_
plt.scatter(X[:,0],y)
plt.plot(x_line[:,0],y_line)
plt.show()
reg.coef_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logistic Regression
Step2: <br>
Step3: <br>
Step4: Regularization Example
|
6,481
|
<ASSISTANT_TASK:>
Python Code:
## Loading the model with `gensim`
# helper imports used later in this notebook (pp = pretty-printer)
from pprint import pprint as pp
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# import word2vec model from gensim
from gensim.models.word2vec import Word2Vec
# load Google News pre-trained network
model = Word2Vec.load_word2vec_format('GNvectors.bin', binary=True)
pp(model['table'])
plt.plot(model['car'][:50], label = 'car')
plt.plot(model['vehicle'][:50], label = 'vehicle')
plt.legend()
pp(model.most_similar(positive=['car']))
result = model.most_similar(negative=['man'],
positive=['woman', 'king'])
pp(result)
def plot_data(orig_data, labels):
pca = PCA(n_components=2)
data = pca.fit_transform(orig_data)
plt.plot(data[:,0], data[:,1], '.')
for i in range(len(data)):
plt.annotate(labels[i], xy = data[i])
for i in range(len(data) // 2):  # integer division for Python 3
plt.annotate("",
xy=data[i],
xytext=data[i + len(data) // 2],
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3")
)
def analogy(worda, wordb, wordc):
result = model.most_similar(negative=[worda],
positive=[wordb, wordc])
return result[0][0]
adjectives = ['big', 'small', 'large', 'wide', 'strong']
comparatives = [analogy('good', 'better', adjective) for adjective in adjectives]
pp(list(zip(adjectives, comparatives)))  # list() so the pairs print under Python 3
labels = comparatives + adjectives
data = [model[w] for w in labels]
plot_data(data, labels)
pp(model.most_similar(positive=['Colombia', 'currency']))
model_es = Word2Vec.load_word2vec_format('eswikinews.bin', binary=True)
pp(model_es.most_similar(positive=['yo_soy_betty']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Continuous representation of words
Step2: Semantically related words have similar representations
Step3: Vector representation similarity = semantic similarity
Step4: Word vector space encodes linguistic regularities
Step5: Calculating comparatives from adjectives
Step6: Comparative vector
Step7: Compositionality
Step8: Spanish WikiNews dataset
|
6,482
|
<ASSISTANT_TASK:>
Python Code:
# sphinx_gallery_thumbnail_number = 9
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
from mayavi import mlab
print(__doc__)
data_path = bst_phantom_elekta.data_path(verbose=True)
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG2421']
raw.plot_psd(tmax=60., average=False)
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
raw.filter(None, 40., fir_design='firwin')
raw.plot(events=events)
tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
decim=3, preload=True)
epochs['1'].average().plot(time_unit='s')
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(raw.info, subject='sample', show_axes=True,
bem=sphere, dig=True, surfaces='inner_skull')
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(
epochs, tmax=0, method='oas', rank=None)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
evoked = epochs[str(ii)].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs, raw
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1)
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
def plot_pos_ori(pos, ori, color=(0., 0., 0.), opacity=1.):
x, y, z = pos.T
u, v, w = ori.T
mlab.points3d(x, y, z, scale_factor=0.005, opacity=opacity, color=color)
q = mlab.quiver3d(x, y, z, u, v, w,
scale_factor=0.03, opacity=opacity,
color=color, mode='arrow')
q.glyph.glyph_source.glyph_source.shaft_radius = 0.02
q.glyph.glyph_source.glyph_source.tip_length = 0.1
q.glyph.glyph_source.glyph_source.tip_radius = 0.05
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull',
coord_frame='head', meg='helmet', show_axes=True)
# Plot the position and the orientation of the actual dipole
plot_pos_ori(actual_pos, actual_ori, color=(0., 0., 0.), opacity=0.5)
# Plot the position and the orientation of the estimated dipole
plot_pos_ori(dip.pos, dip.ori, color=(0.2, 1., 0.5))
mlab.view(70, 80, distance=0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
Step2: Data channel array consisted of 204 MEG planar gradiometers,
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
Step4: Let's use Maxwell filtering to clean the data a bit.
Step5: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
Step6: Now we epoch our data, average it, and look at the first dipole response.
Step7: Let's use a sphere head geometry model and let's see the coordinate
Step8: Let's do some dipole fits. We first compute the noise covariance,
Step9: Do a quick visualization of how much variance we explained, putting the
Step10: Now we can compare to the actual locations, taking the difference in mm
Step11: Let's plot the positions and the orientations of the actual and the estimated
|
6,483
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
"""Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
x_min = image_data.min()
x_max = image_data.max()
a = 0.1
b = 0.9
mult = (b - a)/(x_max - x_min)
return np.add(np.multiply(np.subtract(image_data, x_min), mult),0.1)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10,), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean Variance - Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn Rate Tune - Image.png" style="height
Step9: Test
|
6,484
|
<ASSISTANT_TASK:>
Python Code:
import formulae as fm
import numpy as np
import pandas as pd
fm.model_description('y ~ x')
fm.model_description('y ~ 0 + x') # same with -1
fm.model_description('1|x')
fm.model_description('a + (1|x)')
fm.model_description('(x | g1 + g2)')
fm.model_description('y ~ a + b - c')
fm.model_description('y ~ a + c + b - c')
fm.model_description('y ~ a - c + b + c')
fm.model_description('y ~ a:b + c:d')
fm.model_description('y ~ 0 + a*b + c*d')
fm.model_description('y ~ 0 + a/b + c/d')
fm.model_description('a**3')
fm.model_description('(a + b + c)**3')
fm.model_description('y ~ center(x) + d')
fm.model_description('np.log(y) ~ center(x) + d')
fm.model_description('y ~ a * (b + c)')
fm.model_description('y ~ (a+b)*(c+d)')
fm.model_description('y ~ a / (b+c)')
fm.model_description('y ~ (a + b) / c')
fm.model_description("(x + y) * u * v")
fm.model_description("y ~ a + b + (0 + a | g) + (b | g)")
fm.model_description("np.sqrt(y) ~ -1 + (x1 + x2 + x3) ** 2 - x2:x3 - x1")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Operator precedence
Step2: Group specific terms (a.k.a random terms)
Step3: Note that if we don't use parenthesis here, formulae will understand that the LHS of the | operator is a + 1. That's why you will almost always see parenthesis with random terms.
Step4: The operator is associative.
Step5: Add and remove terms
Step6: And here we don't see it either.
Step7: But since terms are left-associative, there's no c when we remove it and we end up adding it in the end.
Step8: Below we're going to see better usages for the - operator.
Step9: * operator
Step10: / operator
Step11: Power operator
Step12: Function calls
Step13: You can also have calls on the left side of the formula. Of course, function names will have to be bound to a value for calls to work.
Step14: Some examples to see associativity rules
Step15: See the following about the / operator
Step16: but (a + b) / c is not equivalent to a + a
Step17: Some group level effects specifications
Step18: (x1 + x2 + x3) ** 2 computes all the pairwise interactions between the terms between parenthesis. We then remove x2
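The expansion that the power operator performs (Step12, Step18) — all main effects plus interactions up to the given order — can be sketched with itertools. This illustrates the expansion rule, not formulae's internals:

```python
from itertools import combinations

def power_expand(terms, k):
    """All main effects plus interactions up to order k, like (a + b + c) ** k."""
    out = []
    for order in range(1, k + 1):
        out.extend(':'.join(c) for c in combinations(terms, order))
    return out

print(power_expand(['a', 'b', 'c'], 2))  # -> ['a', 'b', 'c', 'a:b', 'a:c', 'b:c']
```

Removing a term afterwards, as in `- x2:x3`, simply drops that entry from the expanded list.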
|
6,485
|
<ASSISTANT_TASK:>
Python Code:
2+2
san = 2
print san
diego = 2
san + diego
string = "Hello"
decimal = 1.2
list_of_strings = ["a", "b", "c", "d"]
list_of_integers = [1, 2, 3, 4]
list_of_whatever = ["a", 2, "c", 4]
my_phonebook = {'Mom': '713-555-5555', 'Chinese Takeout': '573-555-5555'}
data_file = open("./first-python-notebook.csv", "r")
print data_file
data = data_file.read()
print data
import pandas
pandas.read_csv("./first-python-notebook.csv")
table = pandas.read_csv("./first-python-notebook.csv")
table.info()
table.head()
print len(table)
table.sort_values("AMOUNT")
table.sort_values("AMOUNT", ascending=False)
table.sort_values("AMOUNT", ascending=False).head(10)
table['AMOUNT']
table['AMOUNT'].sum()
table['COMMITTEE_POSITION']
table[table['COMMITTEE_POSITION'] == 'SUPPORT']
support_table = table[table['COMMITTEE_POSITION'] == 'SUPPORT']
print len(support_table)
support_table.sort_values("AMOUNT", ascending=False).head(10)
oppose_table = table[table['COMMITTEE_POSITION'] == 'OPPOSE']
print len(oppose_table)
oppose_table.sort_values("AMOUNT", ascending=False).head(10)
support_table['AMOUNT'].sum()
oppose_table['AMOUNT'].sum()
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum()
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index()
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
table.groupby(["FIRST_NAME", "LAST_NAME"])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
table.groupby([
"FIRST_NAME",
"LAST_NAME",
"COMMITTEE_POSITION"
])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
import matplotlib.pyplot as plt
%matplotlib inline
top_supporters = support_table.groupby(
["FIRST_NAME", "LAST_NAME"]
)['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False).head(10)
top_supporters
top_supporters['AMOUNT'].plot.bar()
top_supporters['AMOUNT'].plot.barh()
top_supporters.head(5)['AMOUNT'].plot.barh()
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['LAST_NAME'])
top_supporters.head(5)
print string
print string + "World"
print string + " " + "World"
top_supporters['FULL_NAME'] = top_supporters['FIRST_NAME'] + " " + top_supporters['LAST_NAME']
top_supporters.head()
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['FULL_NAME'])
top_supporters.head(5).to_csv("top_supporters.csv")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There. You've just written your first Python code. You've entered two integers (the 2's) and added them together using the plus sign operator. Not so bad, right?
Step2: In this case, we’ve created a variable called san and assigned it the integer value 2.
Step3: We can do the same thing again with a different variable name
Step4: Then add those two together the same way we added the numbers at the top.
Step5: Variables can contain many different kinds of data types. There are integers, strings, floating point numbers (decimals), lists and dictionaries.
Step6: Playing with data we invent can be fun, but it's a long way from investigative journalism.
Step7: Print that variable and you see that open has created a file "object" that offers a number of different ways to interact with the contents of the file.
Step8: One thing a file object can do is read in all of the data from the file. Let's do that next and store the contents in a new variable.
Step9: That's all good, but the data is printing out as one big long string. If we're going to do some real analysis, we need Python to recognize and respect the structure of our data, in the way an Excel spreadsheet would.
Step10: Opening our CSV isn't any harder than with open, you just need to know the right trick to make it work.
Step11: Great. Now let's do it again and assign it to a variable this time
Step12: Now let's see what that returns when we print it.
Step13: Here's how you can see the first few rows
Step14: How many rows are there? Here's how to find out.
Step15: Even with that simple question and answer, we've begun the process of interviewing our data.
Step16: We've got it sorted the wrong way. Let's reverse it.
Step17: Now let's limit it to the top 10.
Step18: What is the total sum of contributions that have been reported?
Step19: Now adding it up is this easy.
Step20: There's our big total. Why is it lower than the ones I quoted above? That's because campaigns are only required to report the names of donors over $200, so our data is missing all of the donors who gave smaller amounts of money.
Step21: Now let's filter using that column using pandas oddball method
Step22: Stick that in a variable
Step23: So now we can ask
Step24: Next
Step25: Now let's ask the same questions of the opposing side.
Step26: How about the sum total of contributions for each?
Step27: Grouping
Step28: Wow. That's pretty ugly. Why? Because pandas is weird.
Step29: Now let's sort it by size
Step30: Okay. Committees are good. But what about something a little more interesting. Who has given the most money?
Step31: But which side where they are? Add in the position column to see that too.
Step32: Pretty cool, right? Now now that we've got this interesting list of people, let's see if we can make a chart out of it.
Step33: Before we'll get started, let's run one more trick to configure matplotlib to show its charts in our notebook.
Step34: Now let's save the data we want to chart into a variable
Step35: Making a quick bar chart is as easy as this.
Step36: It's really those first five that are the most interesting, so let's trim our chart.
Step37: What are those y axis labels? Those are the row number (pandas calls them indexes) of each row. We don't want that. We want the names.
Step38: Okay, but what if I want to combine the first and last name?
Step39: In plain old Python, we created a string at the start of our lesson. Remember this?
Step40: Combining strings can be as easy as addition.
Step41: And if we want to get a space in there, we can do something like
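For example, in plain Python (the names here are made up):

```python
first = "Jane"   # hypothetical first name
last = "Doe"     # hypothetical last name

# Addition concatenates; the " " literal supplies the space.
full = first + " " + last
print(full)  # Jane Doe
```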
Step42: And guess what: we can do the same thing with two columns in our table, using a pandas trick that will apply it to every row.
Step43: Now let's see the results
Step44: Now let's chart that.
Step45: That's all well and good, but this chart is pretty ugly. If you wanted to hand this data off to your graphics department, or try your hand at a simple chart yourself using something like Chartbuilder, you'd need to export this data into a spreadsheet.
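Exporting can be done with the standard-library csv.writer, sketched here against an in-memory buffer (real code would open an output file path instead, and pandas users would reach for to_csv):

```python
import csv
import io

# Hypothetical (name, amount) rows to export.
rows = [("Jane Doe", 500), ("John Smith", 250)]

buf = io.StringIO()  # stands in for open("export.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerow(["name", "amount"])  # header row
writer.writerows(rows)               # one row per record

lines = buf.getvalue().splitlines()
print(lines)  # ['name,amount', 'Jane Doe,500', 'John Smith,250']
```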
|
6,486
|
<ASSISTANT_TASK:>
Python Code:
import xgboost as xgb
import pandas as pd
from sklearn import *
import matplotlib.pyplot as plt
%matplotlib inline
df_train = pd.read_csv("/data/churn-bigml-80.csv")
df_train.head()
df_train.info()
df_train.Churn.value_counts()
df_train.Churn.value_counts()/len(df_train)
df_train.columns
df_test = pd.read_csv("/data/churn-bigml-20.csv")
df_test.info()
df_test.Churn.value_counts()/len(df_test)
len(df_test)/len(df_train)
cat_columns = ['State', 'Area code', 'International plan', 'Voice mail plan']
num_columns = ['Account length', 'Number vmail messages', 'Total day minutes',
'Total day calls', 'Total day charge', 'Total eve minutes',
'Total eve calls', 'Total eve charge', 'Total night minutes',
'Total night calls', 'Total night charge', 'Total intl minutes',
'Total intl calls', 'Total intl charge', 'Customer service calls']
target = "Churn"
X_train = df_train.drop(columns=target)
y_train = df_train[target]
X_test = df_test.drop(columns=target)
y_test = df_test[target]
cat_pipe = pipeline.Pipeline([
('imputer', impute.SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', preprocessing.OneHotEncoder(handle_unknown='error', drop="first"))
])
num_pipe = pipeline.Pipeline([
('imputer', impute.SimpleImputer(strategy='median')),
('scaler', preprocessing.StandardScaler()),
])
preprocessing_pipe = compose.ColumnTransformer([
("cat", cat_pipe, cat_columns),
("num", num_pipe, num_columns)
])
X_train = preprocessing_pipe.fit_transform(X_train)
X_test = preprocessing_pipe.transform(X_test)
pd.DataFrame(X_train.toarray()).describe()
est = linear_model.LogisticRegression(solver="liblinear")
est.fit(X_train, y_train)
y_test_pred = est.predict(X_test)
est.score(X_test, y_test)
est = tree.DecisionTreeClassifier(max_depth=6)
est.fit(X_train, y_train)
y_test_pred = est.predict(X_test)
est.score(X_test, y_test)
print(metrics.classification_report(y_test, y_test_pred))
metrics.confusion_matrix(y_test, y_test_pred)
eval_sets = [
(X_train, y_train),
(X_test, y_test)
]
cls = xgb.XGBRFClassifier(silent=False,
scale_pos_weight=1,
learning_rate=0.1,
colsample_bytree = 0.99,
subsample = 0.8,
objective='binary:logistic',
n_estimators=100,
reg_alpha = 0.003,
max_depth=10,
gamma=10,
min_child_weight = 1
)
print(cls.fit(X_train
, y_train
, eval_set = eval_sets
, early_stopping_rounds = 10
, eval_metric = ["error", "logloss"]
, verbose = True
))
print("test accuracy: " , cls.score(X_test, y_test))
cls.evals_result()
y_test_pred = cls.predict(X_test)
metrics.confusion_matrix(y_test, y_test_pred)
y_test_prob = cls.predict_proba(X_test)[:, 1]
y_test_prob
auc = metrics.roc_auc_score(y_test, y_test_prob)
auc
ftr, tpr, thresholds = metrics.roc_curve(y_test, y_test_prob)
plt.rcParams['figure.figsize'] = 8,8
plt.plot(ftr, tpr)
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.title("ROC, auc: " + str(auc))
params = { 'objective': "binary:logistic"
, 'colsample_bytree': 0.9
, 'learning_rate': 0.01
, 'max_depth': 10
, 'alpha': 0.5
, 'min_child_weight': 1
, 'subsample': 1
, 'eval_metric': "auc"
, 'n_estimators': 300
, 'verbose': True
}
data_dmatrix = xgb.DMatrix(data=X_train,label=y_train)
cv_results = xgb.cv(dtrain=data_dmatrix
, params=params
, nfold=5
, maximize=True
, num_boost_round=100
, early_stopping_rounds=10
, metrics=["logloss", "error", "auc"]
, as_pandas=True
, seed=123
, verbose_eval=True
)
cv_results
cv_results[["train-error-mean"]].plot()
plt.rcParams['figure.figsize'] = 50,50
xgb.plot_tree(cls, num_trees=0, rankdir='LR')
plt.rcParams['figure.figsize'] =15, 15
xgb.plot_importance(cls)
cls.feature_importances_
one_hot_encoder = preprocessing_pipe.transformers_[0][1].steps[1][1]
one_hot_encoder
one_hot_encoder.get_feature_names()
preprocessing_pipe.transformers_[0][1]
parameters = {
'max_depth': range (2, 10, 1),
'n_estimators': range(60, 220, 40),
'learning_rate': [0.1, 0.01, 0.05]
}
cls = xgb.XGBRFClassifier(silent=False,
scale_pos_weight=1,
learning_rate=0.01,
colsample_bytree = 0.99,
subsample = 0.8,
objective='binary:logistic',
n_estimators=100,
reg_alpha = 0.003,
max_depth=10,
gamma=10,
min_child_weight = 1
)
grid_search = model_selection.GridSearchCV(
estimator=cls,
param_grid=parameters,
scoring = 'roc_auc',
n_jobs = 12,
cv = 10,
verbose=True,
return_train_score=True
)
grid_search.fit(X_train, y_train)
grid_search.best_estimator_
grid_search.best_params_
grid_search.best_score_
pd.DataFrame(grid_search.cv_results_)
folds = 5
param_comb = 5
cls = xgb.XGBRFClassifier(silent=False,
scale_pos_weight=1,
learning_rate=0.01,
colsample_bytree = 0.99,
subsample = 0.8,
objective='binary:logistic',
n_estimators=100,
reg_alpha = 0.003,
max_depth=10,
gamma=10,
min_child_weight = 1
)
skf = model_selection.StratifiedKFold(n_splits=folds, shuffle = True, random_state = 1001)
random_search = model_selection.RandomizedSearchCV(cls,
param_distributions=parameters,
n_iter=param_comb,
scoring='accuracy',
n_jobs=12,
cv=skf.split(X_train,y_train),
verbose=3,
random_state=1001 )
random_search.fit(X_train, y_train)
random_search.best_score_, random_search.best_params_
pd.DataFrame(random_search.cv_results_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the training data
Step2: Let's check number of records, number of columns, types of columns and whether the data contains NULL values.
Step3: Let's check the distribution of the output class. As it shows, about 85% of the records are negative. This gives a sense of the desired accuracy, which should be closer to 90% or more.
Step4: Load the test data and perform a similar analysis as before.
Step5: Sort out the categorical and numeric columns so they can be passed to the pipeline for pre-processing. In the preprocessing steps, we are doing the following
Step6: Build basic logistic regression and decision tree models and check the accuracy. The basic logistic regression model gives an accuracy of 85%.
Step7: Print the classification report. The report shows that the precision and recall scores are quite poor. Accuracy is 85%. The confusion matrix shows a high number of false positives and false negatives.
Step8: Next, we build a similar model using XGBoost. Performance of this model is slightly better than the logistic regression model.
Step9: Cross validate the model
Step10: Install graphviz to display the decision graph
Step11: These plots provide insight into how the model arrived at its final decisions and what splits it made to arrive at those decisions.
|
6,487
|
<ASSISTANT_TASK:>
Python Code:
%%writefile server.py
from flask import Flask, request, jsonify
import tempfile
app = Flask(__name__)
@app.route('/pitch_track', methods=['POST'])
def pitch_track():
import parselmouth
# Save the file that was sent, and read it into a parselmouth.Sound
with tempfile.NamedTemporaryFile() as tmp:
tmp.write(request.files['audio'].read())
sound = parselmouth.Sound(tmp.name)
# Calculate the pitch track with Parselmouth
pitch_track = sound.to_pitch().selected_array['frequency']
# Convert the NumPy array into a list, then encode as JSON to send back
return jsonify(list(pitch_track))
import os
import subprocess
import sys
import time
# Start a subprocess that runs the Flask server
p = subprocess.Popen([sys.executable, "-m", "flask", "run"], env=dict(**os.environ, FLASK_APP="server.py"), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Start two subthreads that forward the output from the Flask server to the output of the Jupyter notebook
def forward(i, o):
while p.poll() is None:
l = i.readline().decode('utf-8')
if l:
o.write("[SERVER] " + l)
import threading
threading.Thread(target=forward, args=(p.stdout, sys.stdout)).start()
threading.Thread(target=forward, args=(p.stderr, sys.stderr)).start()
# Let's give the server a bit of time to make sure it has started
time.sleep(2)
from IPython.display import Audio
Audio(filename="audio/the_north_wind_and_the_sun.wav")
import requests
import json
# Load the file to send
files = {'audio': open("audio/the_north_wind_and_the_sun.wav", 'rb')}
# Send the HTTP request and get the reply
reply = requests.post("http://127.0.0.1:5000/pitch_track", files=files)
# Extract the text from the reply and decode the JSON into a list
pitch_track = json.loads(reply.text)
print(pitch_track)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set() # Use seaborn's default style to make attractive graphs
plt.rcParams['figure.dpi'] = 100 # Show nicely large images in this notebook
plt.figure()
plt.plot([float('nan') if x == 0.0 else x for x in pitch_track], '.')
plt.show()
# Let's shut down the server
p.kill()
# Cleaning up the file that was written to disk
!rm server.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normally, we can then run the server by typing FLASK_APP=server.py flask run on the command line, as explained in the Flask documentation. Please do note that to run this server publicly, securely, and as part of a bigger setup, other deployment options are available! Refer to the Flask deployment documentation.
Step2: Now that the server is up and running, we can make a standard HTTP request to this web service. For example, we can send a Wave file with an audio recording of someone saying "The north wind and the sun [...]"
Step3: To do so, we use the requests library in this example, but we could use any library to send a standard HTTP request.
Step4: Since we used the standard json library from Python to decode the reply from server, pitch_track is now a normal list of floats and we can for example plot the estimated pitch track
Step5: Refer to the examples on plotting for more details on using Parselmouth for plotting.
|
6,488
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
# For presentation purposes only.
%matplotlib inline
x = [1, 2, 3, 5] # List of x coordinates.
y = [4, 3, 6, 2] # List of y coordinates.
y_error = [0.1, 0.5, .2, 0.25] # Errors associated with the y readings.
plt.scatter(x, y)
plt.show() # Displays the plot.
plt.clf() # Clears the plot.
plt.scatter(x, y)
plt.xlabel("X Axis")
plt.ylabel("Y Axis")
plt.title("Basic Scatterplot")
plt.grid()
plt.show()
plt.clf()
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Plot with Interpolated Lines")
plt.grid()
plt.show()
plt.clf()
plt.plot(x, y, marker="D") # D is for diamond
plt.xlabel("x")
plt.ylabel("y")
plt.title("Plot with Interpolated Lines and Points Displayed")
plt.grid()
plt.show()
plt.clf()
plt.plot(x, y, marker="D")
plt.xlim([0, 5.2]) # Displays the x-axis from 0 to 5.2
plt.ylim([0, 6.4]) # Displays the y-axis from 0 to 6.4
plt.xlabel("x")
plt.ylabel("y")
plt.title("Plot Showing Origin")
plt.grid()
plt.show()
plt.clf()
plt.plot(x, y, marker="x") # x is for x
plt.errorbar(x, y, yerr=y_error) # Adding in error bars.
plt.xlabel("x")
plt.ylabel("y")
plt.title("Realistic Plot")
plt.grid()
plt.show()
plt.clf()
x = np.linspace(-np.pi, np.pi, 10, endpoint=True)
y = np.sin(x)
plt.plot(x, y, marker=".")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.title("Sine Function")
plt.grid()
plt.show()
plt.clf()
x = np.linspace(-np.pi, np.pi, 100, endpoint=True) # Switch from 10 to 100 points.
y = np.sin(x)
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.title("Sine Function")
plt.grid()
plt.show()
plt.clf()
x = np.linspace(-np.pi, np.pi, 100, True)
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x))
plt.xlabel("x")
plt.ylabel("y")
plt.title("Sine and Cosine Function")
plt.grid()
plt.show()
plt.clf()
x = np.linspace(-np.pi, np.pi, 100, True)
plt.plot(x, np.sin(x), label="Sin(x)") # label associates a name with the plot.
plt.plot(x, np.cos(x), label="Cos(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Sine and Cosine Function")
plt.legend(loc='upper left') # Display the legend in the upper right.
plt.grid()
plt.show()
plt.clf()
x = np.linspace(-100, 10, 100, True)
plt.plot(x, 10**x)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Exponential Function")
plt.grid()
plt.show()
plt.semilogy(x, 10**x) # Semi-Log Plot
plt.xlabel("x")
plt.ylabel("y")
plt.title("Exponential Function - Semi-Log Plot")
plt.grid()
plt.show()
x = np.linspace(1, 10000, 50, True)
plt.loglog(x, x) # Log-log Plot
plt.xlabel("x")
plt.ylabel("y")
plt.title("Log-Log Plot")
plt.grid()
plt.show()
x = np.linspace(1, 10**3, 50, True)
plt.loglog(x, x, marker="x", linestyle='None')
plt.xlabel("x")
plt.ylabel("y")
plt.title("Skewed Log-Log")
plt.grid()
plt.show()
x = np.logspace(1, 3, 50, endpoint=True, base=10)
plt.loglog(x, x, marker="x", linestyle='None')
plt.xlabel("x")
plt.ylabel("y")
plt.title("Clean Log-Log")
plt.grid()
plt.show()
x = np.linspace(0, 10, 50)
plt.xlabel("x")
plt.ylabel("y")
plt.title("A Line")
plt.grid()
plt.plot(x, x)
plt.savefig("Line.pdf") # Save the plot.
plt.clf()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting Datapoints
Step2: Three functions were called during the generation of this plot
Step3: Let's say we wanted a line drawn between the points. Matplotlib has got us covered with the plot function.
Step4: Matplotlib has gone ahead and interpolated a line between consecutive points. However it would be useful to still show the original points. This can be achieved using the marker keyword argument.
Step5: There are lots of different marker styles available. A list of them can be found here.
Step6: This plot looks pretty good. Too good. Let's add in error bars using the errorbar function.
Step7: To specify x-errors use the xerr keyword.
Step8: A whole bunch of stuff just happened. Let's take a minute to go through it
Step9: That looks much smoother. Now let's plot multiple figures in a single plot.
Step10: This is as simple as making two calls to plot. However with two different things on the plot it's a good idea to label them.
Step11: And that's the general gist of generating plots.
Step12: It's hard to see what's happening to see what's happening with this power function on a normal scale. Let's clean it up using a semi-log plot.
Step13: If you wanted the log scale on the x-axis you would use the semilogx function.
Step14: One thing to note is that when using a logarithmic scale, linspace isn't the best function to use because the distribution of sample points becomes skewed visually.
Step15: This can be improved by using the logspace function which takes as its first three arguments a starting power, ending power and the number of points to sample.
Step16: Saving Plots to a File
|
6,489
|
<ASSISTANT_TASK:>
Python Code:
import json
import great_expectations as ge
import great_expectations.jupyter_ux
from great_expectations.datasource.types import BatchKwargs
import datetime
context = ge.data_context.DataContext()
context.list_expectation_suite_names()
expectation_suite_name = # TODO: set to a name from the list above
# list datasources of the type PandasDatasource in your project
[datasource['name'] for datasource in context.list_datasources() if datasource['class_name'] == 'PandasDatasource']
datasource_name = # TODO: set to a datasource name from above
# If you would like to validate a file on a filesystem:
batch_kwargs = {'path': "YOUR_FILE_PATH", 'datasource': datasource_name}
# If you already loaded the data into a Pandas Data Frame:
batch_kwargs = {'dataset': "YOUR_DATAFRAME", 'datasource': datasource_name}
batch = context.get_batch(batch_kwargs, expectation_suite_name)
batch.head()
# This is an example of invoking a validation operator that is configured by default in the great_expectations.yml file
"""
Create a run_id. The run_id must be of type RunIdentifier, with optional run_name and run_time instantiation
arguments (or a dictionary with these keys). The run_name can be any string (this could come from your pipeline
runner, e.g. Airflow run id). The run_time can be either a dateutil parsable string or a datetime object.
Note - any provided datetime will be assumed to be a UTC time. If no instantiation arguments are given, run_name will
be None and run_time will default to the current UTC datetime.
"""
run_id = {
"run_name": "some_string_that_uniquely_identifies_this_run", # insert your own run_name here
"run_time": datetime.datetime.now(datetime.timezone.utc)
}
results = context.run_validation_operator(
"action_list_operator",
assets_to_validate=[batch],
run_id=run_id)
context.open_data_docs()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Get a DataContext
Step2: 2. Choose an Expectation Suite
Step3: 3. Load a batch of data you want to validate
Step5: 4. Validate the batch with Validation Operators
Step6: 5. View the Validation Results in Data Docs
|
6,490
|
<ASSISTANT_TASK:>
Python Code:
def compute_sum(n):
total = 0
for i in range(n):
m = int(input('请输入一个正整数为加数,以回车结束。 '))
total += m
return total
n = int(input('请输入一个正整数为次数,以回车结束。 '))
print ('total=',compute_sum(n))
def computer_sum(num):
total = 1
for i in range(1,num+1):
total *= i
return total
m = int(input('请输入第一个整数,回车结束 '))
n = int(input('请输入第二个整数,回车结束 '))
k = int(input('请输入第三个整数,回车结束 '))
print ('total =',computer_sum(m)+computer_sum(n)+computer_sum(k))
def computer_add(num):
total = 0
for i in range(num):
j = 2*i + 1
if (i+1)%2 != 0:
total += 1/j
else:
total -= 1/j
return total
m = int(input('请输入第一个整数,回车结束 '))
print ('4倍的和:total1 =',4*computer_add(m))
n = int(input('请输入第二个整数,回车结束 '))
print ('4倍的和:total2 =',4*computer_add(n))
def computer_sum(m,n,k):
# Sum the integers from m to n (inclusive) in steps of k.
total = 0
for i in range(m, n + 1, k):
total += i
return total
m = int(input('请输入第一个整数即m,回车结束 '))
n = int(input('请输入第二个整数即n,回车结束 '))
k = int(input('请输入第三个整数即k,回车结束 '))
print ('total =',computer_sum(m,n,k))
import random,math
def compute_sqrt(n,m,k,total,avg):
for i in range(1,n):
num = random.randint(m,k)
print ('num',i,': ',num,sep='')
total += num
avg = total/n  # mean of the n random numbers
sqr = math.sqrt(avg)
return sqr
m = int(input('please input a smaller number '))
k = int(input('please input a bigger number '))
n = int(input('please input a number for times '))
total = 0
avg = 0
num = random.randint(m,k)
print ('num0:',num)
total += num
print (n,'个随机整数均值的平方根等于:',compute_sqrt(n,m,k,total,avg))
import random,math
def compute_sqrt(n,m,k):
total1 = 0
total2 = 0
for i in range(n):
num = random.randint(m,k)
print ('num',i,': ',num,sep='')
num1 = math.log(num)
num2 = 1/math.log(num)
total1 += num1
total2 += num2
i += 1
print ('西格玛log总和为:',total1,'\n西格玛1/log总和为:',total2)
return
m = int(input('please input a smaller number '))
k = int(input('please input a bigger number '))
n = int(input('please input a number for times '))
compute_sqrt(n,m,k)
import random,math
def compute(n,a):
b = a
total = a
for i in range(n-1):
b = b*10+a
total += b
print ('total:',total)
return
a = random.randint(1,9)
print ('a:',a)
n = int(input('please input a number for times '))
print ('n:',n)
compute(n,a)
def compute_sum(numbers):
numTotal = 0
for num in numbers:
numTotal += num
print ('numTotal:',numTotal)
numbers = [i for i in range(1,10)]
compute_sum(numbers)
def compute_min(numbers):
min = numbers[0]
for num in numbers:
if num < min:
min = num
print ('Min:',min)
return
numbers = [i for i in range(1,10)]
compute_min(numbers)
def compute_pos(numbers):
n = int(input('please input a number: '))
num = 0
for i in range(len(numbers)):
if numbers[i] == n :
print ('pos:',i+1)
break
num += 1
if num >= len(numbers):
print ('pos:',-1)
numbers = [i for i in range(1,10)]
print (numbers)
compute_pos(numbers)
def compute_cos():
vector_frt = []
vector_sec = []
for i in range(2):
print ('请输入第',i+1,'个向量的坐标')
x = int(input('请输入x坐标: '))
y = int(input('请输入y坐标: '))
if i == 0:
vector_frt.append(x)
vector_frt.append(y)
else:
vector_sec.append(x)
vector_sec.append(y)
print ('vector1: [',vector_frt[0],',',vector_frt[1],']',sep = '')
print ('vector2: [',vector_sec[0],',',vector_sec[1],']',sep = '')
vect_a = math.sqrt(vector_frt[0]*vector_frt[0] + vector_frt[1]*vector_frt[1])
vect_b = math.sqrt(vector_sec[0]*vector_sec[0] + vector_sec[1]*vector_sec[1])
vect_ab = vector_frt[0]*vector_sec[0] + vector_frt[1]*vector_sec[1]
vect_cos = vect_ab/(vect_a*vect_b)
print ('两个向量的夹角余弦值为:',vect_cos)
compute_cos()
def compute_counts():
num = 0
for i in range(5,100):
for j in range(5,100):
for k in range(5,100):
if i+j+k == 100:
num += 1
#print (i,j,k,sep = '--')
print ('total:',num)
compute_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Previous version:
Step2: Previous version:
Step3: Previous version:
Step4: Previous version:
Step5: Previous version:
Step6: Previous version:
Step7: 6.8 Exercises
Step8: 6.8 Exercises
Step9: 6.8 Exercises
Step10: 6.8 Exercises
Step11: 6.8 Exercises
|
6,491
|
<ASSISTANT_TASK:>
Python Code:
# Import all the libraries we will use.
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import special
import numpy as np
from ipywidgets import *
# Plot Bessel functions of order n = 0, 1, ..., 4
r = np.linspace(0, 10,100)
for n in range(5):
plt.plot(r, special.jn(n, r), label = '$J_{%s}(r)$'%n)
plt.xlabel('$r$', fontsize = 18)
plt.ylabel('$J_{n}(r)$', fontsize = 18)
plt.axhline(y = 0, color = 'k') # Draw a horizontal line at y = 0
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5), prop={'size': 14})
plt.show()
def f_shape(r):
return 1 - r**4
a = 1
r = np.linspace(0, 1, 100)
angle = np.linspace(0, 2*np.pi, 200)
r_shape = f_shape(r)
# Build the u, x, y arrays over the polar (r, theta) grid
u = np.array([np.full(len(angle), radi) for radi in r_shape])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
# How do we plot in polar coordinates?
plt.figure(figsize = (6, 5))
plt.pcolor(x, y, u, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
# To print with LaTeX formatting
from sympy import init_printing; init_printing(use_latex='mathjax')
import sympy as sym
sym.var('r theta', real = True)
#r, theta, k = sym.symbols('r theta k')
r, theta
sym.var('n k', positive = True, integer=True)
#n, k = sym.symbols('n k', positive = True, integer=True)
n, k
def lamb(n,k):
return sym.Symbol('lambda_%s%s'%(n,k), positive = True, real = True)
lamb(0,k)
f = 1 - r**4; f
integrand = f * sym.besselj(n, lamb(n,k) * r) * sym.cos(n *theta) * r
integrand
ank = sym.Integral(integrand, (r, 0, 1), (theta, 0, 2*sym.pi))
ank
solution_ank = ank.doit()
solution_ank
integ = lambda n: f * sym.besselj(n, lamb(n,k) * r) * sym.cos(n*theta) * r
integ(0)
a0k = sym.Integral(integ(0), (r, 0, 1), (theta, 0, 2*sym.pi))
a0k
solution_a0k = a0k.doit()
solution_a0k
a0k_solution = solution_a0k/(sym.pi*sym.besselj(1, lamb(0,k))**2)
a0k_solution
sym.simplify(a0k_solution)
integrand_b = f * sym.besselj(n, lamb(n,k) * r) * sym.sin(n *theta) * r
integrand_b
bnk = sym.Integral(integrand_b, (r, 0, 1), (theta, 0, 2*sym.pi))
bnk
solution_bnk = bnk.doit()
solution_bnk
a0k_solution
def a0k_sym(lambd):
solucion = 2*(-4*special.jn(0, lambd)/lambd**2
+16*special.jn(1, lambd)/lambd**3 +
32*special.jn(0, lambd)/lambd**4 -
64*special.jn(1, lambd)/lambd**5)/special.jn(1, lambd)**2
return solucion
def tambor(v, kth_zero, nt, t):
r = np.r_[0:1:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nt)
lambd = ceros[kth_zero]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u
x1, y1, u1 = tambor(1, 0, 15, 7)
plt.figure(figsize = (6, 5))
plt.pcolor(x1 , y1 , u1, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
def tambor_nk(t = 0, kth=0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor(1, kth, 50, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_nk, t = (0, 15,.01), kth = (0, 10, 1));
def tambor_n_allk(v, nk_zeros, t):
r = np.linspace(0, 1, 100)
angle = np.linspace(0, 2*np.pi, 200)
ceros = special.jn_zeros(0, nk_zeros)
lambd = ceros[0]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u0 = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
for cero in range(1, nk_zeros):
lambd = ceros[cero]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
def tambor_0(t = 0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor_n_allk(1, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_0, t = (0, 15,.01));
def tambor(n, r_max, v, kth_zero, nt, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nt)
lamb = ceros[kth_zero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u
x, y, u = tambor(1, 1, 1, 0, 15, 0)
plt.figure(figsize = (6, 5))
plt.pcolor(x, y, u, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
def tambor_nk(t = 0, n = 0, kth=0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor(n, 1, 1, kth, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_nk, t = (0, 15,.01), n = (0, 10, 1), kth = (0, 10, 1));
def tambor_n_allk(n, r_max, v, nk_zeros, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nk_zeros)
lamb = ceros[0]
u0 = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
for cero in range(1, nk_zeros):
lamb = ceros[cero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
def tambor_n(t = 0, n = 0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor_n_allk(n, 1, 1, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_n, t = (0, 15,.01), n = (0, 10, 1));
def order_n(n, ceros, nk_zeros, angle, v, r, t):
lamb = ceros[0]
u0 = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
for cero in range(1, nk_zeros):
lamb = ceros[cero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
u0 += u
return u0
def tambor(orden_n, r_max, v, nk_zeros, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:100j]
ceros = special.jn_zeros(0, nk_zeros)
u0 = order_n(0, ceros, nk_zeros, angle, v, r, t)
for n in range(1, orden_n):
u = order_n(n, ceros, nk_zeros, angle, v, r, t)
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
x, y, u = tambor(10, 1, 1, 5, 5)
plt.figure(figsize = (5, 5))
plt.pcolor(x, y, u, cmap = 'inferno')
plt.axis('on')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For simplicity we will assume that $a = 1$; determining the zeros means finding all the intersections of the previous curves with the horizontal axis.
Step2: Since the initial speed is zero, then $a^{}_{nk} = b^{}_{nk} = 0$, and the solution for the displacement in time is simply,
Step3: So for any $n>0$ there is no contribution.
Step4: Now $b_{nk}$.
Step5: \begin{equation}
Step6: First we will program it for some mode $k$.
Step7: And now, the complete solution.
Step8: Note carefully that the initial condition at $t = 0$ is satisfied by the solution found.
Step9: So, for example, if $n = 1$, $a = 1$, $v = 1$, $k = 1$ and $t = 0$, this would be the vibration mode $(n,k)\rightarrow (1,1)$.
Step10: Now, let's see what all the other vibration modes $(n,k)$ look like.
Step11: Now, perhaps we would be interested in the behavior of the membrane when we sum over a set of modes $k$. That is,
Step12: Finally, we are left with the case where we sum over all modes $n$. That is,
|
6,492
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from dillinger.gaussian_process import GaussianProcess
from dillinger.kernel_functions import PeriodicKernel
%matplotlib inline
sns.set(font_scale=1.3, palette='deep', color_codes=True)
np.random.seed(0)
# setting up the objective function
def objective_func(x):
return np.sin(x) + np.cos(x)**2
n_training_points = 10
x = np.linspace(0, 10, 100)
y_true = objective_func(x)
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g')
plt.title('Objective function');
# plot the prior
GP = GaussianProcess(x, noise = .3, kernel_function='periodic', kernel_args={'ell': 1, 'p': np.pi*2, 'sigma': 1.1})
GP.plot()
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g', label='True objective')
plt.legend()
plt.title('Regression example (prior)');
# create random observations with noisy targets
x_train = 10*np.random.rand(n_training_points)
y_train = objective_func(x_train) + .3*np.random.randn(n_training_points)
x_train.shape = -1, 1
y_train.shape = -1, 1
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g', label='true objective')
plt.scatter(x_train, y_train, c='r', label='noisy observations')
plt.legend();
# and the posterior
GP.fit(x_train, y_train)
GP.plot()
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g', label='True objective')
plt.legend()
plt.title('Regression example (posterior)');
# perform more function evals and update the GP
x_train_new = 10*np.random.rand(n_training_points)
y_train_new = objective_func(x_train_new) + .3*np.random.randn(n_training_points)
x_train_new.shape = -1, 1
y_train_new.shape = -1, 1
# simply call the fit method again and pass in the new data points
GP.fit(x_train_new, y_train_new)
GP.plot()
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g', label='True objective')
plt.legend()
plt.title('Regression example (posterior)');
GP = GaussianProcess(x, kernel_function='periodic', kernel_args={'ell': 1, 'p': np.pi*2, 'sigma': 1.1})
kernel = PeriodicKernel
GP = GaussianProcess(x, noise=.3, kernel_function=kernel)
GP.fit(x_train, y_train)
GP.plot()
plt.plot(x, y_true, linestyle='dashed', linewidth=3, c='g', label='True objective')
plt.legend()
plt.title('Regression example w/automatic hyperparameter tuning');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The plot of the GP shows the mean in black, along with confidence intervals in purple. The plot above shows a blank prior.
Step2: The model object remembers data points it's already seen, so it's easy to continue to make observations and fit the posterior.
Step3: Kernel Hyperparameters
Step4: If you are using the Squared Exponential or Periodic kernels, there is support for automatic hyperparameter selection
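The custom `dillinger` package used in this row isn't publicly available, so as a hedged illustration of the same idea, here is automatic hyperparameter selection with scikit-learn's `GaussianProcessRegressor`, whose `fit()` maximizes the log marginal likelihood over the kernel hyperparameters. Everything below is a stand-in sketch, not part of `dillinger`'s API:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.RandomState(0)
x_train = 10 * rng.rand(20).reshape(-1, 1)
y_train = np.sin(x_train).ravel() + 0.1 * rng.randn(20)

# fit() tunes length_scale, periodicity and noise_level automatically,
# starting from the initial values given here
kernel = ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_train, y_train)
y_fit = gp.predict(x_train)  # should track the noisy targets closely
```

The tuned hyperparameters are available afterwards via `gp.kernel_`.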
|
6,493
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
import scipy.io as sio
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_e as bicorr_e
import bicorr_math as bicorr_math
%load_ext autoreload
%autoreload 2
data_path = '../datar'
os.listdir(data_path)
os.listdir('../meas_info/')
os.path.join(data_path,'timeOffset.txt')
timeOffsetData = np.genfromtxt(os.path.join(data_path,'timeOffset.txt'))
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag=True)
X, Y = np.meshgrid(chList, chList)
fig = plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(X, Y, c=timeOffsetData, s=14,edgecolor='none', marker='s', cmap = 'viridis')
cbar = plt.colorbar(sc, fraction = 0.043, pad=0.1)
cbar.set_label('Time offset (ns)')
ax.set_aspect('equal')
plt.xlabel('Detector channel 1')
plt.ylabel('Detector channel 2')
plt.title('Time offset values')
plt.show()
ccedType = np.dtype([('event', np.int32), ('detector', np.int8), ('particle_type', np.int8), ('time', np.float16), ('integral', np.float32), ('height', np.float32)])
data = np.genfromtxt(os.path.join(data_path,'cced1'),dtype=ccedType)
data[0]
print_flag = False
# l is the line number of the current line, starting at 0.
# e is the event number of the current line, starting at 1
# eventNum is the current event number, extending from lines i to j.
eventNum = data[0]['event']; # Start with the first event in the data chunk.
# If reading entire file, this is 1.
# If reading a chunk, this may be higher.
i = 0; # First line number of first event is always 0
# This is a clever way of keeping track what line you're on. Enumerate through the event numbers, `e`, and python also keeps track of the line number `l`.
for l, e in enumerate(data[:200]['event']):
if print_flag: print("Reading line: ",l,"; event: ",e)
if e == eventNum: # Still on the same event
pass
if e != eventNum: # Store info from current event, move onto next event.
j = l # Store line number
n_ints = j-i # Number interactions in current event
if print_flag: print(n_ints)
if n_ints >= 2:# At least two channels
ccedEvent = data[i:j][:] # Data in this event
chs_present = ccedEvent[:]['detector'] # What channels triggered?
chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
dets_present = chs_present[chs_bool]
fc_corr = (16*np.floor(dets_present/16)).astype(int) # Corr fc for each det ch
fc_bool = np.in1d(fc_corr, chs_present) # Did fc corr trigger?
if print_flag: print(i,j,ccedEvent)
if print_flag: print('Chs:', chs_present,chs_bool,'Dets:',dets_present,fc_corr,fc_bool)
if sum(fc_bool) >=1 : # At least one det-fc pair triggered
dets_present = dets_present[fc_bool]
fc_corr = fc_corr[fc_bool]
if print_flag: print(e-1, dets_present, fc_corr)
# Set up vectors
det_indices = np.zeros(len(dets_present),dtype=np.int8) # det in chs_present
fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # fc in chs_present
time_offset = np.zeros(len(dets_present),dtype=np.float16) # time offset
for d in range(0,len(dets_present),1):
det_indices[d] = np.where(chs_present == dets_present[d])[0]
fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
time_offset[d] = timeOffsetData[fc_corr[d]][dets_present[d]]
if print_flag: print(det_indices, fc_indices, time_offset)
# Store dt and particle type for each detector event
dt = ccedEvent[det_indices]['time']-ccedEvent[fc_indices]['time']+time_offset
par_type = ccedEvent[det_indices]['particle_type']
if print_flag: print(dt, par_type)
# Store to histogram here! (Filled in later section)
eventNum = e # Move onto the next event
i = l # Current line is the first line for next event
num_dets
dt_bin_edges, num_dt_bins = bicorr.build_dt_bin_edges(-300,300,0.25,True)
e_bin_edges, num_e_bins = bicorr_e.build_energy_bin_edges()
det_df = bicorr.load_det_df()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
dict_det_dist = bicorr.build_dict_det_dist()
e_min = np.min(e_bin_edges)
e_max = np.max(e_bin_edges)
e_step = e_bin_edges[1]-e_bin_edges[0]
singles_hist = np.zeros((2,num_dets,num_dt_bins),dtype=np.uint64)
singles_hist.shape
singles_hist_e_n = np.zeros((num_dets,num_e_bins),dtype=np.uint64)
singles_hist_e_n.shape
det_df.head()
det_indices = np.arange(num_dets)
dict_det_to_index = dict(zip(detList,det_indices))
dict_index_to_det = dict(zip(det_indices,detList))
dict_det_to_index[44]
detList
np.argwhere(detList==1)
%timeit dict_det_to_index[44]
%timeit np.argwhere(detList==44)
print_flag = False
# l is the line number of the current line, starting at 0.
# e is the event number of the current line, starting at 1
# eventNum is the current event number, extending from lines i to j.
eventNum = data[0]['event']; # Start with the first event in the data chunk.
# If reading entire file, this is 1.
# If reading a chunk, this may be higher.
i = 0; # First line number of first event is always 0
# Calculate important things about dt_bin_edges
# Time indices
dt_min = np.min(dt_bin_edges); dt_max = np.max(dt_bin_edges)
dt_step = dt_bin_edges[1]-dt_bin_edges[0]
num_dt_bins = len(dt_bin_edges)-1
# This is a clever way of keeping track what line you're on. Enumerate through the event numbers, `e`, and python also keeps track of the line number `l`.
for l, e in enumerate(data['event']):
if print_flag: print("Reading line: ",l,"; event: ",e)
if e == eventNum: # Still on the same event
pass
if e != eventNum: # Store info from current event, move onto next event.
j = l # Store line number
n_ints = j-i # Number interactions in current event
if print_flag: print(n_ints)
if n_ints >= 2:# At least two channels
ccedEvent = data[i:j][:] # Data in this event
chs_present = ccedEvent[:]['detector'] # What channels triggered?
chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
dets_present = chs_present[chs_bool]
fc_corr = (16*np.floor(dets_present/16)).astype(int) # Corr fc for each det ch
fc_bool = np.in1d(fc_corr, chs_present) # Did fc corr trigger?
if print_flag: print(i,j,ccedEvent)
if print_flag: print('Chs:', chs_present,chs_bool,'Dets:',dets_present,fc_corr,fc_bool)
if sum(fc_bool) >=1 : # At least one det-fc pair triggered
dets_present = dets_present[fc_bool]
fc_corr = fc_corr[fc_bool]
if print_flag: print(e-1, dets_present, fc_corr)
# Set up vectors
det_indices = np.zeros(len(dets_present),dtype=np.int8) # det in chs_present
fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # fc in chs_present
time_offset = np.zeros(len(dets_present),dtype=np.float16) # time offset
for d in range(0,len(dets_present),1):
det_indices[d] = np.where(chs_present == dets_present[d])[0]
fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
time_offset[d] = timeOffsetData[fc_corr[d]][dets_present[d]]
if print_flag: print(det_indices, fc_indices, time_offset)
# Store dt and particle type for each detector event
dt = ccedEvent[det_indices]['time']-ccedEvent[fc_indices]['time']+time_offset
par_type = ccedEvent[det_indices]['particle_type']
if print_flag: print(dt, par_type)
# Store to histogram here! (Filled in later section)
for d in np.arange(len(dets_present)): # Loop through verified singles
# Store to time histogram
if print_flag: print(d,'of:',len(dt))
if print_flag: print('Time:', dt[d])
if print_flag: print('Particle:', par_type[d])
t_i = int(np.floor((dt[d]-dt_min)/dt_step))
t_i_check = np.logical_and(t_i>=0, t_i<num_dt_bins) # Within range?
if print_flag: print('t_i:',t_i)
if t_i_check:
singles_hist[par_type[d]-1,dict_det_to_index[dets_present[d]],t_i]+= 1
# Store to energy histogram
if np.logical_and(par_type[d] == 1,dt[d] > 0):
dist = dict_det_dist[dets_present[d]]
energy = bicorr_math.convert_time_to_energy(dt[d],dist)
if (e_min < energy < e_max):
e_i = int(np.floor((energy-e_min)/e_step))
singles_hist_e_n[dict_det_to_index[dets_present[d]],e_i] += 1
eventNum = e # Move onto the next event
i = l # Current line is the first line for next event
np.sum(singles_hist)
singles_hist.shape
dt_bin_centers = (dt_bin_edges[:-1]+dt_bin_edges[1:])/2
plt.plot(dt_bin_centers,np.sum(singles_hist,axis=(0,1)))
plt.xlabel('Time (ns)')
plt.ylabel('Number of events')
plt.title('TOF distribution, all events')
plt.yscale('log')
plt.show()
singles_hist[0,:,:].shape
dt_bin_centers = (dt_bin_edges[:-1]+dt_bin_edges[1:])/2
plt.plot(dt_bin_centers,np.sum(singles_hist[0,:,:],axis=(0)))
plt.plot(dt_bin_centers,np.sum(singles_hist[1,:,:],axis=(0)))
plt.xlabel('Time (ns)')
plt.ylabel('Number of events')
plt.title('TOF distribution, all detectors')
plt.legend(['N','G'])
plt.yscale('log')
plt.show()
e_bin_centers = bicorr_math.calc_centers(e_bin_edges)
plt.plot(e_bin_centers, np.sum(singles_hist_e_n[:,:],axis=(0)))
plt.xlabel('Energy (MeV)')
plt.ylabel('Number of events')
plt.title('Singles energy distribution, all channels')
plt.yscale('log')
plt.show()
singles_hist.nbytes
np.savez(os.path.join(data_path,'singles_hist'),singles_hist=singles_hist, dict_det_to_index=dict_det_to_index, dt_bin_edges = dt_bin_edges)
np.savez(os.path.join(data_path,'singles_hist_e_n'),
singles_hist_e_n=singles_hist_e_n,dict_det_to_index=dict_det_to_index,
e_bin_edges=e_bin_edges)
help(bicorr_e.build_singles_hist_both)
os.chdir('../methods/')
bicorr_e.build_singles_hist_both(['cced1'],data_path,show_flag=True, save_flag = False);
bicorr_e.build_singles_hist_both(['cced1','cced1','cced1'],data_path,show_flag=True, save_flag = False);
os.listdir(data_path)
filenames = ['1/cced1','2/cced2']
bicorr_e.build_singles_hist_both?
bicorr_e.build_singles_hist_both(filenames,data_path,show_flag=True);
os.listdir(data_path)
npzfile_t = np.load(os.path.join(data_path,'singles_hist.npz'))
npzfile_e = np.load(os.path.join(data_path,'singles_hist_e_n.npz'))
npzfile_t.files
npzfile_e.files
bicorr_e.load_singles_hist_both(filepath=data_path,plot_flag=True,show_flag=True);
num_folders = 5
cced_filenames = []
for folder_num in (np.arange(num_folders)+1):
cced_filenames.append(str(folder_num) + '/cced' + str(folder_num))
print(cced_filenames)
for filename in cced_filenames:
print(filename[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Look at what I did before
Step2: Import time offset data, build channel lists
Step3: The syntax for calling a value from timeOffsetData is timeOffsetData[fc_channel][det_channel]
Step4: Loop through each event
Step5: Set up histogram
Step6: What should my time bins be? I want to store more information than I need but not take up too much disk space. This is only a 2D array, so that should not be a problem.
Step7: Energy
Step8: Allocate memory
Step9: Implement det # -> channel index
Step10: Actually... can I just use the channel list directly?
Step11: Do a quick time test to compare the two.
Step12: The dictionary is about 50x faster. Go with that.
Step13: This looks good to me. This is only a few events, so I want to functionalize this and run it on the larger data sets on the cluster.
Step14: Store histogram to file
Step15: So approximately 1.7 MB. That is perfectly acceptable.
Step16: Functionalize this
Step17: Looks good
Step18: Reload data
Step19: Put together a script
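The bin-index arithmetic used inside the event loop can be checked in isolation; the values below mirror the -300 to 300 ns, 0.25 ns binning built by `bicorr.build_dt_bin_edges`:

```python
import numpy as np

# mirrors the notebook's choice: -300 to 300 ns in 0.25 ns steps
dt_bin_edges = np.arange(-300.0, 300.25, 0.25)
dt_min = dt_bin_edges[0]
dt_step = dt_bin_edges[1] - dt_bin_edges[0]
num_dt_bins = len(dt_bin_edges) - 1  # 2400 bins

dt = 12.37  # example detector - fission chamber time difference (ns)
t_i = int(np.floor((dt - dt_min) / dt_step))  # histogram bin index
in_range = 0 <= t_i < num_dt_bins             # guard used before incrementing
```

This is the same `floor((dt - dt_min)/dt_step)` expression the loop uses before incrementing `singles_hist`.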
|
6,494
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tflite-model-maker
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import image_classifier
from tflite_model_maker import ImageClassifierDataLoader
from tflite_model_maker import model_spec
import matplotlib.pyplot as plt
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
model = image_classifier.create(train_data, validation_data=validation_data)
model.summary()
loss, accuracy = model.evaluate(test_data)
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
model.export(export_dir='.')
model.evaluate_tflite('model.tflite', test_data)
config = configs.QuantizationConfig.create_full_integer_quantization(representative_data=test_data, is_integer_only=True)
model.export(export_dir='.', tflite_filename='model_quant.tflite', quantization_config=config)
model = image_classifier.create(train_data, model_spec=model_spec.mobilenet_v2_spec, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)
inception_v3_spec = model_spec.ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
loss, accuracy = model.evaluate(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image classification with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Simple End-to-End Example
Step4: You can replace image_path with your own image folder. To upload data to Colab, use the upload button in the left sidebar (marked with a red rectangle in the image below). Try uploading a zip file and unzipping it; the root file path is the current path.
Step5: Step 2. Customize the TensorFlow model.
Step6: Step 3. Evaluate the model.
Step7: Step 4. Export to TensorFlow Lite model.
Step8: After these simple 4 steps, we can use the TensorFlow Lite model file and label file in on-device applications, as in the image classification reference app.
Step 1
Step9: Use ImageClassifierDataLoader class to load data.
Step10: Split it to training data (80%), validation data (10%, optional) and testing data (10%).
Step11: Show 25 image examples with labels.
Step12: Step 2
Step13: Have a look at the detailed model structure.
Step14: Step 3
Step15: We can plot the predicted results for 100 test images. Predicted labels shown in red are incorrect; the rest are correct.
Step16: If the accuracy doesn't meet the app requirement, refer to Advanced Usage to explore alternatives such as switching to a larger model or adjusting re-training parameters.
Step 4
Step17: The TensorFlow Lite model file and label file could be used in image classification reference app.
Step18: Advanced Usage
Step19: Then we export TensorFlow Lite model with such configuration.
Step20: In Colab, you can download the model named model_quant.tflite from the left sidebar, same as the uploading part mentioned above.
Step21: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
Step22: Change to the model in TensorFlow Hub
Step23: Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.
Step24: Evaluate the newly retrained model with 10 training epochs.
|
6,495
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip -q install lucid>=0.3.8
!pip -q install umap-learn>=0.3.7
# General support
import math
import tensorflow as tf
import numpy as np
# For plots
import matplotlib.pyplot as plt
# Dimensionality reduction
import umap
from sklearn.manifold import TSNE
# General lucid code
from lucid.misc.io import save, show, load
import lucid.modelzoo.vision_models as models
# For rendering feature visualizations
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
model = models.InceptionV1()
model.load_graphdef()
# model.layers[7] is "mixed4c"
layer = "mixed4c"
print(model.layers[7])
raw_activations = model.layers[7].activations
activations = raw_activations[:100000]
print(activations.shape)
def whiten(full_activations):
correl = np.matmul(full_activations.T, full_activations) / len(full_activations)
correl = correl.astype("float32")
S = np.linalg.inv(correl)
S = S.astype("float32")
return S
S = whiten(raw_activations)
def normalize_layout(layout, min_percentile=1, max_percentile=99, relative_margin=0.1):
"""Removes outliers and scales layout to between [0,1]."""
# compute percentiles
mins = np.percentile(layout, min_percentile, axis=(0))
maxs = np.percentile(layout, max_percentile, axis=(0))
# add margins
mins -= relative_margin * (maxs - mins)
maxs += relative_margin * (maxs - mins)
# `clip` broadcasts, `[None]`s added only for readability
clipped = np.clip(layout, mins, maxs)
# embed within [0,1] along both axes
clipped -= clipped.min(axis=0)
clipped /= clipped.max(axis=0)
return clipped
layout = umap.UMAP(n_components=2, verbose=True, n_neighbors=20, min_dist=0.01, metric="cosine").fit_transform(activations)
## You can optionally use TSNE as well
# layout = TSNE(n_components=2, verbose=True, metric="cosine", learning_rate=10, perplexity=50).fit_transform(d)
layout = normalize_layout(layout)
plt.figure(figsize=(10, 10))
plt.scatter(x=layout[:,0],y=layout[:,1], s=2)
plt.show()
#
# Whitened, euclidean neuron objective
#
@objectives.wrap_objective
def direction_neuron_S(layer_name, vec, batch=None, x=None, y=None, S=None):
def inner(T):
layer = T(layer_name)
shape = tf.shape(layer)
x_ = shape[1] // 2 if x is None else x
y_ = shape[2] // 2 if y is None else y
if batch is None:
raise RuntimeError("requires batch")
acts = layer[batch, x_, y_]
vec_ = vec
if S is not None: vec_ = tf.matmul(vec_[None], S)[0]
# mag = tf.sqrt(tf.reduce_sum(acts**2))
dot = tf.reduce_mean(acts * vec_)
# cossim = dot/(1e-4 + mag)
return dot
return inner
#
# Whitened, cosine similarity objective
#
@objectives.wrap_objective
def direction_neuron_cossim_S(layer_name, vec, batch=None, x=None, y=None, cossim_pow=1, S=None):
def inner(T):
layer = T(layer_name)
shape = tf.shape(layer)
x_ = shape[1] // 2 if x is None else x
y_ = shape[2] // 2 if y is None else y
if batch is None:
raise RuntimeError("requires batch")
acts = layer[batch, x_, y_]
vec_ = vec
if S is not None: vec_ = tf.matmul(vec_[None], S)[0]
mag = tf.sqrt(tf.reduce_sum(acts**2))
dot = tf.reduce_mean(acts * vec_)
cossim = dot/(1e-4 + mag)
cossim = tf.maximum(0.1, cossim)
return dot * cossim ** cossim_pow
return inner
#
# Renders a batch of activations as icons
#
def render_icons(directions, model, layer, size=80, n_steps=128, verbose=False, S=None, num_attempts=2, cossim=True, alpha=True):
image_attempts = []
loss_attempts = []
# Render multiple attempts, and pull the one with the lowest loss score.
for attempt in range(num_attempts):
# Render an image for each activation vector
param_f = lambda: param.image(size, batch=directions.shape[0], fft=True, decorrelate=True, alpha=alpha)
if(S is not None):
if(cossim is True):
obj_list = ([
direction_neuron_cossim_S(layer, v, batch=n, S=S, cossim_pow=4) for n,v in enumerate(directions)
])
else:
obj_list = ([
direction_neuron_S(layer, v, batch=n, S=S) for n,v in enumerate(directions)
])
else:
obj_list = ([
objectives.direction_neuron(layer, v, batch=n) for n,v in enumerate(directions)
])
obj = objectives.Objective.sum(obj_list)
transforms = []
if alpha:
transforms.append(transform.collapse_alpha_random())
transforms.append(transform.pad(2, mode='constant', constant_value=1))
transforms.append(transform.jitter(4))
transforms.append(transform.jitter(4))
transforms.append(transform.jitter(8))
transforms.append(transform.jitter(8))
transforms.append(transform.jitter(8))
transforms.append(transform.random_scale([0.995**n for n in range(-5,80)] + [0.998**n for n in 2*list(range(20,40))]))
transforms.append(transform.random_rotate(list(range(-20,20))+list(range(-10,10))+list(range(-5,5))+5*[0]))
transforms.append(transform.jitter(2))
# This is the tensorflow optimization process.
# We can't use the lucid helpers here because we need to know the loss.
print("attempt: ", attempt)
with tf.Graph().as_default(), tf.Session() as sess:
learning_rate = 0.05
losses = []
trainer = tf.train.AdamOptimizer(learning_rate)
T = render.make_vis_T(model, obj, param_f, trainer, transforms)
loss_t, vis_op, t_image = T("loss"), T("vis_op"), T("input")
losses_ = [obj_part(T) for obj_part in obj_list]
tf.global_variables_initializer().run()
for i in range(n_steps):
loss, _ = sess.run([losses_, vis_op])
losses.append(loss)
if (i % 100 == 0):
print(i)
img = t_image.eval()
img_rgb = img[:,:,:,:3]
if alpha:
print("alpha true")
k = 0.8
bg_color = 0.0
img_a = img[:,:,:,3:]
img_merged = img_rgb*((1-k)+k*img_a) + bg_color * k*(1-img_a)
image_attempts.append(img_merged)
else:
print("alpha false")
image_attempts.append(img_rgb)
loss_attempts.append(losses[-1])
# Use the icon with the lowest loss
loss_attempts = np.asarray(loss_attempts)
loss_final = []
image_final = []
print("Merging best scores from attempts...")
for i, d in enumerate(directions):
# note, this should be max, it is not a traditional loss
mi = np.argmax(loss_attempts[:,i])
loss_final.append(loss_attempts[mi, i])
image_final.append(image_attempts[mi][i])
return (image_final, loss_final)
#
# Takes a list of x,y layout and bins them into grid cells
#
def grid(xpts=None, ypts=None, grid_size=(8,8), x_extent=(0., 1.), y_extent=(0., 1.)):
xpx_length = grid_size[0]
ypx_length = grid_size[1]
xpt_extent = x_extent
ypt_extent = y_extent
xpt_length = xpt_extent[1] - xpt_extent[0]
ypt_length = ypt_extent[1] - ypt_extent[0]
xpxs = ((xpts - xpt_extent[0]) / xpt_length) * xpx_length
ypxs = ((ypts - ypt_extent[0]) / ypt_length) * ypx_length
ix_s = range(grid_size[0])
iy_s = range(grid_size[1])
xs = []
for xi in ix_s:
ys = []
for yi in iy_s:
xpx_extent = (xi, (xi + 1))
ypx_extent = (yi, (yi + 1))
in_bounds_x = np.logical_and(xpx_extent[0] <= xpxs, xpxs <= xpx_extent[1])
in_bounds_y = np.logical_and(ypx_extent[0] <= ypxs, ypxs <= ypx_extent[1])
in_bounds = np.logical_and(in_bounds_x, in_bounds_y)
in_bounds_indices = np.where(in_bounds)[0]
ys.append(in_bounds_indices)
xs.append(ys)
return np.asarray(xs)
def render_layout(model, layer, S, xs, ys, activ, n_steps=512, n_attempts=2, min_density=10, grid_size=(10, 10), icon_size=80, x_extent=(0., 1.0), y_extent=(0., 1.0)):
grid_layout = grid(xpts=xs, ypts=ys, grid_size=grid_size, x_extent=x_extent, y_extent=y_extent)
icons = []
for x in range(grid_size[0]):
for y in range(grid_size[1]):
indices = grid_layout[x, y]
if len(indices) > min_density:
average_activation = np.average(activ[indices], axis=0)
icons.append((average_activation, x, y))
icons = np.asarray(icons)
icon_batch, losses = render_icons(icons[:,0], model, alpha=False, layer=layer, S=S, n_steps=n_steps, size=icon_size, num_attempts=n_attempts)
canvas = np.ones((icon_size * grid_size[0], icon_size * grid_size[1], 3))
for i, icon in enumerate(icon_batch):
y = int(icons[i, 1])
x = int(icons[i, 2])
canvas[(grid_size[0] - x - 1) * icon_size:(grid_size[0] - x) * icon_size, (y) * icon_size:(y + 1) * icon_size] = icon
return canvas
#
# Given a layout, renders an icon for the average of all the activations in each grid cell.
#
xs = layout[:, 0]
ys = layout[:, 1]
canvas = render_layout(model, layer, S, xs, ys, raw_activations, n_steps=512, grid_size=(20, 20), n_attempts=1)
show(canvas)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple Activation Atlas
Step2: Load model and activations
Step3: Whiten
Step5: Dimensionality reduction
Step6: Feature visualization
Step7: Grid
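The whitening step above just inverts the (uncentered) correlation matrix of the activations; a tiny self-check with random stand-in data:

```python
import numpy as np

acts = np.random.RandomState(0).randn(1000, 4).astype("float32")
correl = np.matmul(acts.T, acts) / len(acts)  # uncentered correlation, as in whiten()
S = np.linalg.inv(correl)                     # the whitening matrix fed to the objectives

identity_check = np.matmul(S, correl)         # should be close to the identity
```

Multiplying an activation-direction vector by `S` (as `direction_neuron_S` does) corrects for correlations between channels.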
|
6,496
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from welly import Well
% matplotlib inline
ls
data_dir = 'data/'
fname = 'L30.las'
file = data_dir + '/' + fname
# your code here
# it should look like this
well
pslow = well.data['DT']
rhob = well.data['RHOB']
gr = well.data['GRD']
# your code here
# your code here
# your code here
# your code here
lithfile = 'data/L30_samples.txt'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q
Step2: Q. Write a function that converts slowness (us/m) into velocity (m/s).
Step3: Q. Create an acoustic impedance log
Step4: Q. Using a Sand-Shale cutoff for Gamma-Ray of 60 API, what is the Gross sand thickness from 2200 - 3000 m?
Step5: Q. Use the porosity log, NPHISS to determine the average value of porosity for a) sands, b) shales, c) carbonates.
Step6: Explore the data distributions based on the actual lithologies in data/L30_samples.txt
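Hedged sketches of the first few questions, using synthetic stand-in values rather than the real curves (the actual curves come from `well.data`, and the 0.1524 m sampling interval below is an assumption):

```python
import numpy as np

def slowness_to_velocity(dt_us_per_m):
    """Convert slowness (us/m) to velocity (m/s)."""
    return 1e6 / np.asarray(dt_us_per_m, dtype=float)

# synthetic stand-ins for the DT, RHOB and GRD curves
pslow = np.array([400.0, 500.0, 250.0])        # slowness, us/m
rhob = np.array([2.2, 2.4, 2.65])              # bulk density, g/cc
gr = np.array([45.0, 80.0, 55.0, 30.0, 90.0])  # gamma-ray, API

vp = slowness_to_velocity(pslow)               # velocity, m/s
ai = rhob * vp                                 # acoustic impedance log

# gross sand: count samples below the 60 API cutoff, times the sample step
step = 0.1524                                  # assumed sampling interval, m
gross_sand = np.sum(gr < 60.0) * step
```

With the real logs, the same cutoff logic would be restricted to the 2200-3000 m interval first.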
|
6,497
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(113) #set seed before any keras import
import pandas as pd
import random
from sklearn.model_selection import train_test_split
from collections import defaultdict
from keras.preprocessing import sequence
from collections import Counter
import pydot
seed=0
corpus = pd.read_csv('twistytest.csv',
index_col=0,
header=1,
names=['user_id', 'lang', 'text', 'mbti'])
corpus.sample(5)
#here we limit the corpus size. The SVM with all the text can learn something
corpus.text = corpus.text.apply(lambda x: x[:1000])
corpus.mbti = corpus.mbti.apply(lambda x: x[0])
#corpus = tmp.sample(frac=1, random_state=seed)
e = corpus[corpus.mbti.apply(lambda x: x == 'E')]
i = corpus[corpus.mbti.apply(lambda x: x == 'I')].sample(226)
corpus = pd.concat([e,i]).sample(frac=0.3, random_state=seed)
print(corpus.shape)
## set max length of doc per author
sentences = corpus.text#.apply(lambda x: x[:100000])
## trim labels: convert problem to binary classification I vs E
labels = corpus.mbti
## make sure we have a label for every data instance
assert(len(sentences)==len(labels))
data={}
np.random.seed(113) #seed
data['target']= np.random.permutation(labels)
np.random.seed(113) # use same seed!
data['data'] = np.random.permutation(sentences)
# preview the dataset
print(corpus.shape)
corpus.head()
# plot the distribution of labels
import matplotlib.pyplot as plt
l, v = zip(*Counter(data['target']).items())  # use the full label set; y_train is only defined after the split below
indexes = np.arange(len(l))
width = 1
plt.bar(indexes, v, width, color=['r', 'b'])
plt.xticks(indexes + width * 0.5, l)
plt.show()
#split the data into train, dev, test
X_rest, X_test, y_rest, y_test = train_test_split(data['data'], data['target'], test_size=0.2)
X_train, X_dev, y_train, y_dev = train_test_split(X_rest, y_rest, test_size=0.2)
del X_rest, y_rest
print("#train instances: {} #dev: {} #test: {}".format(len(X_train),len(X_dev),len(X_test)))
from sklearn.svm import LinearSVC
from sklearn.dummy import DummyClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
pipeline = Pipeline([('features', FeatureUnion([('wngram', TfidfVectorizer(ngram_range=(1,2))),
('cngram', TfidfVectorizer(analyzer='char'))])),
('cls', LinearSVC())])
pipeline.fit(X_train, y_train)
testpred = pipeline.predict(X_test)
print(accuracy_score(testpred, y_test))
print(classification_report(testpred, y_test))
from keras.utils import np_utils
y2i = defaultdict(lambda: len(y2i))
y_train_num = [y2i[mbti] for mbti in y_train]
y_dev_num = [y2i[mbti] for mbti in y_dev]
y_test_num = [y2i[mbti] for mbti in y_test]
num_classes = len(np.unique(y_train_num))
print(num_classes)
from collections import defaultdict
# convert words to indices, taking care of UNKs
def get_characters(sentence, c2i):
out = []
for word in sentence.split(" "):
chars = []
for c in word:
chars.append(c2i[c])
out.append(chars)
return out
c2i = defaultdict(lambda: len(c2i))
PAD = c2i["<pad>"] # index 0 is padding
UNK = c2i["<unk>"] # index 1 is for UNK
X_train_num = [get_characters(sentence, c2i) for sentence in X_train]
c2i = defaultdict(lambda: UNK, c2i) # freeze - cute trick!
X_dev_num = [get_characters(sentence, c2i) for sentence in X_dev]
X_test_num = [get_characters(sentence, c2i) for sentence in X_test]
max_sentence_length=max([len(s.split(" ")) for s in X_train]
+ [len(s.split(" ")) for s in X_dev]
+ [len(s.split(" ")) for s in X_test] )
max_word_length = max([len(word) for sentence in X_train_num for word in sentence])
### we need both max sent and word length
print(max_sentence_length)
print(max_word_length)
print(X_train[0:2])
print(X_train_num[0][:100]) # example how the first two sentences are encoded
def pad_words(tensor_words, max_word_len, pad_symbol_id, max_sent_len=None):
"""Pad character lists to the same word length."""
padded = []
for words in tensor_words:
if max_sent_len: #pad all to same sentence length (insert empty word list)
words = [[[0]]*(max_sent_len-len(words))+ words][0] #prepending empty words
padded.append(sequence.pad_sequences(words, maxlen=max_word_len, value=pad_symbol_id))
return np.array(padded)
X_train_pad_char = pad_words(X_train_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_dev_pad_char = pad_words(X_dev_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_test_pad_char = pad_words(X_test_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_train_pad_char.shape
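The resulting tensor is (num_sentences, max_sentence_length, max_word_length): `pad_words` pre-pads each word's characters (Keras `sequence.pad_sequences` pads at the front by default) and prepends empty words up to the sentence length. A Keras-free sketch of the same scheme on toy ids:

```python
import numpy as np

def pad_pre(seq, length, value=0):
    # prepend `value` until the sequence reaches `length`
    return [value] * (length - len(seq)) + list(seq)

sentence = [[2, 3], [4]]                         # 2 words of char ids
words = [pad_pre(w, 4) for w in sentence]        # pre-pad chars to width 4
padded = pad_pre(words, 3, value=[0, 0, 0, 0])   # prepend an empty word
arr = np.array(padded)                           # shape (3, 4)
```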
from keras.models import Model, Sequential
from keras.layers import Dense, Input, GRU, TimeDistributed, Embedding, Bidirectional
import keras
batch_size=8
max_chars = len(c2i)
c_dim=50
c_h_dim=32
w_h_dim=32
char_vocab_size = len(c2i)
## lower-level character LSTM
input_chars = Input(shape=(max_sentence_length, max_word_length), name='main_input')
embedded_chars = TimeDistributed(Embedding(char_vocab_size, c_dim,
input_length=max_word_length), name='char_embedding')(input_chars)
char_lstm = TimeDistributed(Bidirectional(GRU(c_h_dim)), name='GRU_on_char')(embedded_chars)
word_lstm_from_char = Bidirectional(GRU(w_h_dim), name='GRU_on_words')(char_lstm)
# And add a prediction node on top
# multi-class output: one softmax unit per MBTI type, integer class labels
predictions = Dense(num_classes, activation='softmax', name='output_layer')(word_lstm_from_char)
model = Model(inputs=input_chars, outputs=predictions)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot, plot_model
SVG(model_to_dot(model).create(prog='dot', format='svg'))
model.fit(X_train_pad_char, y_train_num, epochs=10, batch_size=8)
loss, accuracy = model.evaluate(X_test_pad_char, y_test_num)
print(accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Baseline
Step3: Results
Step4: Neural network
Step6: Text representation
Step7: My model
|
6,498
|
<ASSISTANT_TASK:>
Python Code:
import gym
env = gym.make('Blackjack-v0')
STICK = 0
HIT = 1
print(env.observation_space)
print(env.action_space)
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
def generate_episode_from_limit(bj_env):
episode = []
state = bj_env.reset()
while True:
action = 0 if state[0] > 18 else 1
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
for i in range(3):
print(generate_episode_from_limit(env))
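Each generated episode is a list of (state, action, reward) tuples; the MC methods below need the discounted return from every time step. A minimal helper, computed backwards in one pass (gamma and the toy episode are illustrative):

```python
def discounted_returns(rewards, gamma=1.0):
    # G_t = r_{t+1} + gamma * G_{t+1}
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

episode = [((13, 10, False), 1, 0.0), ((19, 10, False), 0, 1.0)]
returns = discounted_returns([r for (_, _, r) in episode], gamma=0.9)
```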
# Expected form of the result, e.g.: {(4, 7, False): -0.38775510204081631, (18, 6, False): -0.58434296365330851, (13, 2, False): -0.43409090909090908, (6, 7, False): -0.3783783783783784, ...
from collections import defaultdict
import numpy as np
import sys
def mc_prediction_v(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionary of lists
returns = defaultdict(list)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return V
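One possible way to fill in the TODO is every-visit MC (for Blackjack, first-visit and every-visit coincide, since a state cannot repeat within an episode). The sketch below replaces the `env`/`generate_episode` plumbing with a pre-built episode list so it stands alone; it is an assumed solution, not the notebook's official answer:

```python
from collections import defaultdict
import numpy as np

def mc_prediction_v_sketch(episodes, gamma=1.0):
    # Every-visit MC: V(s) is the average of all discounted returns
    # observed from visits to s.
    returns = defaultdict(list)
    for episode in episodes:
        G = 0.0
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            returns[state].append(G)
    return {s: np.mean(g) for s, g in returns.items()}

# toy data: two one-step episodes from state 'A' with rewards +1 and -1
V = mc_prediction_v_sketch([[('A', 0, 1.0)], [('A', 0, -1.0)]])
```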
from plot_utils import plot_blackjack_values
# obtain the value function
V = mc_prediction_v(env, 500000, generate_episode_from_limit)
# plot the value function
plot_blackjack_values(V)
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
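The action-value version follows the same pattern, but accumulates per-(state, action) return sums and visit counts, as the `returns_sum`/`N` dictionaries above suggest. A self-contained sketch on pre-built episodes (an assumed completion, not the official solution):

```python
from collections import defaultdict
import numpy as np

def mc_prediction_q_sketch(episodes, nA, gamma=1.0):
    # Every-visit MC estimate of Q(s, a): sum of returns and visit counts
    # per (state, action), averaged at the end.
    returns_sum = defaultdict(lambda: np.zeros(nA))
    N = defaultdict(lambda: np.zeros(nA))
    for episode in episodes:
        G = 0.0
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            returns_sum[state][action] += G
            N[state][action] += 1
    return {s: returns_sum[s] / np.maximum(N[s], 1) for s in returns_sum}

# state 'A': action 1 observed returns {1.0, 0.0}, action 0 observed {-1.0}
Q = mc_prediction_q_sketch([[('A', 1, 1.0)], [('A', 1, 0.0)], [('A', 0, -1.0)]], nA=2)
```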
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
def mc_control_GLIE(env, num_episodes, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionaries of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
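The GLIE loop needs an epsilon-greedy policy derived from the current Q, with epsilon decaying over episodes (e.g. `eps = 1.0 / i_episode`). The action-probability helper can be sketched on its own:

```python
import numpy as np

def epsilon_greedy_probs(q_values, eps):
    # eps/nA probability on every action, remaining 1-eps mass on the greedy one
    nA = len(q_values)
    probs = np.full(nA, eps / nA)
    probs[np.argmax(q_values)] += 1.0 - eps
    return probs

probs = epsilon_greedy_probs(np.array([0.0, 1.0]), eps=0.2)
```

Inside the episode loop one would sample actions with `np.random.choice(nA, p=probs)` and update Q with the 1/N incremental mean.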
# obtain the estimated optimal policy and action-value function
policy_glie, Q_glie = mc_control_GLIE(env, 500000)
# obtain the state-value function
V_glie = dict((k,np.max(v)) for k, v in Q_glie.items())
# plot the state-value function
plot_blackjack_values(V_glie)
from plot_utils import plot_policy
# plot the policy
plot_policy(policy_glie)
def mc_control_alpha(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
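The constant-alpha variant replaces the 1/N average with `Q <- Q + alpha * (G - Q)`, an exponentially weighted average that gives recent returns more weight. A tiny numeric check of the update:

```python
def constant_alpha_update(q_old, G, alpha):
    # move the estimate a fraction alpha of the way toward the new return
    return q_old + alpha * (G - q_old)

q = 0.0
for G in [1.0, 1.0, 1.0]:
    q = constant_alpha_update(q, G, alpha=0.5)
```

After three identical returns of 1.0 the estimate is 0.875, still short of 1.0 because old evidence is never fully discarded.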
# obtain the estimated optimal policy and action-value function
policy_alpha, Q_alpha = mc_control_alpha(env, 500000, 0.008)
# obtain the state-value function
V_alpha = dict((k,np.max(v)) for k, v in Q_alpha.items())
# plot the state-value function
plot_blackjack_values(V_alpha)
# plot the policy
plot_policy(policy_alpha)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 每个状态都是包含以下三个元素的 3 元组:
Step2: 通过运行以下代码单元格进行验证。
Step3: 执行以下代码单元格以按照随机策略玩二十一点。
Step4: 第 1 部分:MC 预测 - 状态值
Step5: 执行以下代码单元格以按照该策略玩二十一点。
Step6: 现在你已经准备好自己编写 MC 预测的实现了。你可以选择实现首次经历或所有经历 MC 预测;对于 Blackjack 环境,这两种技巧是对等的。
Step7: 则状态 (4, 7, False) 的值估算为 -0.38775510204081631。
Step8: 使用以下单元格计算并绘制状态值函数估算值。 (用于绘制值函数的代码来自此源代码,并且稍作了修改。)
Step9: 第 2 部分:MC 预测 - 动作值
Step10: 现在你已经准备好自己编写 MC 预测的实现了。你可以选择实现首次经历或所有经历 MC 预测;对于 Blackjack 环境,这两种技巧是对等的。
Step11: 请使用以下单元格获取动作值函数估值 $Q$。我们还绘制了相应的状态值函数。
Step12: 第 3 部分:MC 控制 - GLIE
Step13: 通过以下单元格获取估算的最优策略和动作值函数。
Step14: 接着,我们将绘制相应的状态值函数。
Step15: 最后,我们将可视化估算为最优策略的策略。
Step16: 真最优策略 $\pi_*$ 可以在该教科书的第 82 页找到(下文也提供了)。请将你的最终估算值与最优策略进行比较——它们能够有多接近?如果你对算法的效果不满意,请花时间调整 $\epsilon$ 的衰减率和/或使该算法运行更多个阶段,以获得更好的结果。
Step17: 通过以下单元格获得估算的最优策略和动作值函数。
Step18: 接着,我们将绘制相应的状态值函数。
Step19: 最后,我们将可视化估算为最优策略的策略。
|
6,499
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
###source_sent = [ sent for sent in source_text.split("\n") ]
###target_sent = [ sent + ' <EOS>' for sent in target_text.split("\n") ]
###source_ids = [ [ source_vocab_to_int[word] for word in sent.split() ] for sent in source_sent ]
###target_ids = [ [ target_vocab_to_int[word] for word in sent.split() ] for sent in target_sent ]
# Advice from Udacity Reviewer
target_ids = [[target_vocab_to_int[w] for w in s.split()] + [target_vocab_to_int['<EOS>']] for s in target_text.split('\n')]
source_ids = [[source_vocab_to_int[w] for w in s.split()] for s in source_text.split('\n')]
return source_ids, target_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
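On hypothetical two-word vocabularies, `text_to_ids` behaves like this: `<EOS>` is appended to every target sentence but not to source sentences:

```python
# toy vocabularies (illustrative ids, not the notebook's real mappings)
src_v = {'new': 0, 'jersey': 1}
tgt_v = {'new': 0, 'jersey': 1, '<EOS>': 2}
source_ids = [[src_v[w] for w in s.split()] for s in 'new jersey'.split('\n')]
target_ids = [[tgt_v[w] for w in s.split()] + [tgt_v['<EOS>']] for s in 'new jersey'.split('\n')]
```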
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
input_ = tf.placeholder( tf.int32, [None, None], name = "input" )
target_ = tf.placeholder( tf.int32, [None, None], name = "target" )
learn_rate_ = tf.placeholder( tf.float32, None, name = "learn_rate" )
keep_prob_ = tf.placeholder( tf.float32, None, name = "keep_prob" )
target_sequence_length = tf.placeholder( tf.int32, [None], name="target_sequence_length" )
max_target_sequence_length = tf.reduce_max( target_sequence_length )
source_sequence_length = tf.placeholder( tf.int32, [None], name="source_sequence_length" )
return input_, target_, learn_rate_, keep_prob_, target_sequence_length, max_target_sequence_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
    go_id = target_vocab_to_int['<GO>']  # decoder tokens come from the target vocabulary
ending_text = tf.strided_slice( target_data, [0, 0], [batch_size, -1], [1, 1] )
decoded_text = tf.concat( [ tf.fill([batch_size, 1], go_id), ending_text ], 1)
return decoded_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
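`tf.strided_slice` from `[0, 0]` to `[batch_size, -1]` drops the last token of every row, and `tf.concat` then prepends the `<GO>` id. The same transformation in plain Python (the ids below are illustrative):

```python
GO_ID = 0
batch = [[5, 6, 7, 1], [8, 9, 1, 0]]   # target ids; 1 = <EOS>, trailing 0 = <PAD>
# drop each row's final token, then prepend <GO>
decoder_input = [[GO_ID] + row[:-1] for row in batch]
```

This shifts the targets right by one step, so at training time the decoder sees the previous gold token as input.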
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
encod_inputs = tf.contrib.layers.embed_sequence( rnn_inputs, source_vocab_size, encoding_embedding_size )
rnn_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell( rnn_size ) for _ in range(num_layers) ] )
# Adding dropout layer
rnn_cell = tf.contrib.rnn.DropoutWrapper( rnn_cell, output_keep_prob = keep_prob )
rnn_output, rnn_state = tf.nn.dynamic_rnn( rnn_cell, encod_inputs, source_sequence_length, dtype = tf.float32 )
return rnn_output, rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
decode_helper = tf.contrib.seq2seq.TrainingHelper( dec_embed_input, target_sequence_length )
decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer )
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,
maximum_iterations= max_summary_length )
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile( tf.constant( [start_of_sequence_id], dtype=tf.int32),
[ batch_size ], name = "start_tokens" )
decode_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( dec_embeddings, start_tokens, end_of_sequence_id )
decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer = output_layer )
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,
maximum_iterations = max_target_sequence_length )
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
from tensorflow.python.layers import core as layers_core
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
decode_embed = tf.Variable( tf.random_uniform( [ target_vocab_size, decoding_embedding_size ] ) )
decode_embed_input = tf.nn.embedding_lookup( decode_embed, dec_input )
decode_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers) ] )
# Adding dropout layer
decode_cell = tf.contrib.rnn.DropoutWrapper( decode_cell, output_keep_prob = keep_prob )
output_layer = layers_core.Dense( target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer( mean = 0.0, stddev=0.1 ) )
with tf.variable_scope( "decoding" ) as decoding_scope:
decode_outputs_train = decoding_layer_train( encoder_state, decode_cell, decode_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob )
SOS_id = target_vocab_to_int[ "<GO>" ]
EOS_id = target_vocab_to_int[ "<EOS>" ]
with tf.variable_scope( "decoding", reuse=True) as decoding_scope:
decode_outputs_infer = decoding_layer_infer( encoder_state, decode_cell, decode_embed, SOS_id,EOS_id,
max_target_sequence_length,target_vocab_size, output_layer, batch_size, keep_prob )
return decode_outputs_train, decode_outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
encode_output, encode_state = encoding_layer( input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size )
decode_input = process_decoder_input( target_data, target_vocab_to_int, batch_size )
decode_outputs_train, decode_outputs_infer = decoding_layer( decode_input, encode_state,
target_sequence_length, tf.reduce_max( target_sequence_length ), rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size )
return decode_outputs_train, decode_outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
display_step = 10
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
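`get_accuracy` right-pads the shorter of the two id matrices with zeros before comparing element-wise, so `<PAD>` positions count toward the score. Restated on toy arrays so the check is self-contained:

```python
import numpy as np

def pad_to(a, width):
    # right-pad a 2-D array with zeros up to `width` columns
    return np.pad(a, [(0, 0), (0, width - a.shape[1])], 'constant')

target = np.array([[1, 2, 3]])
logits = np.array([[1, 2]])
width = max(target.shape[1], logits.shape[1])
acc = np.mean(np.equal(pad_to(target, width), pad_to(logits, width)))  # 2 of 3 match
```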
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sequence = [ vocab_to_int.get( word, vocab_to_int[ "<UNK>"] ) for word in sentence.lower().split() ]
return sequence
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
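With a hypothetical four-entry vocabulary, out-of-vocabulary words fall back to the `<UNK>` id after lowercasing and whitespace splitting, mirroring `sentence_to_seq` above:

```python
vocab = {'he': 0, 'saw': 1, 'truck': 2, '<UNK>': 3}
seq = [vocab.get(w, vocab['<UNK>']) for w in 'He saw a truck .'.lower().split()]
```

Here `'a'` and `'.'` are not in the toy vocabulary, so both map to the `<UNK>` id.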
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|