| Unnamed: 0<br>int64, 0–15.9k | cleaned_code<br>string, lengths 67–124k, ⌀ | cleaned_prompt<br>string, lengths 168–30.3k, ⌀ |
|---|---|---|
7,900
|
<ASSISTANT_TASK:>
Python Code:
from collections import defaultdict  # needed for defaultdict below
last_match_winner = defaultdict(int)
dataset['HomeTeamWonLast'] = 0
for index, row in dataset.iterrows():
    home_team = row['home']
    visitor_team = row['visitor']
    teams = tuple(sorted([home_team, visitor_team]))
    row['HomeTeamWonLast'] = 1 if last_match_winner[teams] == home_team else 0
    dataset.loc[index] = row  # .ix is deprecated; use .loc
    last_match_winner[teams] = home_team if row["HomeWin"] else visitor_team
x_lastWinner = dataset[["HomeTeamWonLast", "HomeTeamRanksHigher"]].values
clf = DecisionTreeClassifier(random_state=14)
scores3 = cross_val_score(clf, x_lastWinner, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores3) * 100))
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
encoding = LabelEncoder()
encoding.fit(dataset['home'].values)
home_teams = encoding.transform(dataset["home"].values)
visitor_teams = encoding.transform(dataset["visitor"].values)
X_teams = np.vstack([home_teams,visitor_teams]).T
onehot = OneHotEncoder()
X_teams_expanded = onehot.fit_transform(X_teams).todense()
clf = DecisionTreeClassifier(random_state=14)
scores4 = cross_val_score(clf, X_teams_expanded, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores4) * 100))
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=14)
scores5 = cross_val_score(clf,X_teams_expanded,y_true,scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores5) * 100))
X_all = np.hstack([x_homehighter, X_teams])
clf = RandomForestClassifier(random_state=14)
scores6 = cross_val_score(clf, X_all, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores6) * 100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After the number of features increases, the decision
Step2: A decision tree can be trained on these feature values, but DecisionTreeClassifier still treats them as continuous features
Step3: Use a random forest
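The one-hot idea from Step2 can be sketched on toy data (the team names here are invented, not from the original dataset):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Toy data: one-hot encode team names so a tree does not read the
# integer codes produced by LabelEncoder as an ordered feature.
teams = np.array(["Heat", "Bulls", "Heat", "Lakers"])
codes = LabelEncoder().fit_transform(teams)   # alphabetical: Bulls=0, Heat=1, Lakers=2
onehot = OneHotEncoder().fit_transform(codes.reshape(-1, 1)).toarray()
print(onehot.shape)  # (4, 3): one column per team
```

Each row now has exactly one 1, so no artificial ordering between teams leaks into the tree.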
|
7,901
|
<ASSISTANT_TASK:>
Python Code:
# Import modules
import numpy as np
# Import PySwarms
import pyswarms as ps
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def distance(query, target):
    x_dist = (target[0] - query[0])**2
    y_dist = (target[1] - query[1])**2
    z_dist = (target[2] - query[2])**2
    dist = np.sqrt(x_dist + y_dist + z_dist)
    return dist
swarm_size = 20
dim = 6 # Dimension of X
epsilon = 1.0
options = {'c1': 1.5, 'c2':1.5, 'w':0.5}
constraints = (np.array([-np.pi , -np.pi/2 , 1 , -np.pi , -5*np.pi/36 , -np.pi]),
np.array([np.pi , np.pi/2 , 3 , np.pi , 5*np.pi/36 , np.pi]))
d1 = d2 = d3 = d4 = d5 = d6 = 3
def getTransformMatrix(theta, d, a, alpha):
    T = np.array([[np.cos(theta), -np.sin(theta)*np.cos(alpha),  np.sin(theta)*np.sin(alpha), a*np.cos(theta)],
                  [np.sin(theta),  np.cos(theta)*np.cos(alpha), -np.cos(theta)*np.sin(alpha), a*np.sin(theta)],
                  [0,              np.sin(alpha),                np.cos(alpha),               d],
                  [0,              0,                            0,                           1]])
    return T
def get_end_tip_position(params):
    # Create the transformation matrices for the respective joints
    t_00 = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
    t_01 = getTransformMatrix(params[0], d2, 0, -np.pi/2)
    t_12 = getTransformMatrix(params[1], d2, 0, -np.pi/2)
    t_23 = getTransformMatrix(0, params[2], 0, -np.pi/2)
    t_34 = getTransformMatrix(params[3], d4, 0, -np.pi/2)
    t_45 = getTransformMatrix(params[4], 0, 0, np.pi/2)
    t_56 = getTransformMatrix(params[5], d6, 0, 0)
    # Get the overall transformation matrix
    end_tip_m = t_00.dot(t_01).dot(t_12).dot(t_23).dot(t_34).dot(t_45).dot(t_56)
    # The coordinates of the end tip are the 3 upper entries in the 4th column
    pos = np.array([end_tip_m[0,3], end_tip_m[1,3], end_tip_m[2,3]])
    return pos
def opt_func(X):
    n_particles = X.shape[0]  # number of particles
    target = np.array([-2, 2, 3])
    dist = [distance(get_end_tip_position(X[i]), target) for i in range(n_particles)]
    return np.array(dist)
%%time
# Call an instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=swarm_size,
dimensions=dim,
options=options,
bounds=constraints)
# Perform optimization
cost, joint_vars = optimizer.optimize(opt_func, iters=1000)
print(get_end_tip_position(joint_vars))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IK as an Optimization Problem
Step2: We are going to use the distance function to compute the cost; the farther away, the more costly the position is.
Step3: In order to obtain the current position, we need to calculate the rotation and translation matrices for every joint. Here we use the Denavit-Hartenberg parameters for that, so we define a function that calculates these. The function uses the rotation angle and the extension $d$ of a prismatic joint as input
Step4: Now we can calculate the transformation matrix to obtain the end tip position. For this we create another function that takes our vector $\mathbf{X}$ with the joint variables as input
Step5: The last thing we need to prepare in order to run the algorithm is the actual function that we want to optimize. We just need to calculate the distance between the position of each swarm particle and the target point
Step6: Running the algorithm
Step7: Now let's see if the algorithm really worked and test the output for joint_vars
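The Denavit-Hartenberg transform from Step3 can be sanity-checked in isolation. This sketch mirrors the `getTransformMatrix` above: with theta = alpha = a = 0, the matrix reduces to a pure translation by d along z.

```python
import numpy as np

# Single Denavit-Hartenberg transform, same layout as getTransformMatrix().
def dh(theta, d, a, alpha):
    return np.array([
        [np.cos(theta), -np.sin(theta)*np.cos(alpha),  np.sin(theta)*np.sin(alpha), a*np.cos(theta)],
        [np.sin(theta),  np.cos(theta)*np.cos(alpha), -np.cos(theta)*np.sin(alpha), a*np.sin(theta)],
        [0,              np.sin(alpha),                np.cos(alpha),               d],
        [0,              0,                            0,                           1],
    ])

T = dh(0.0, 3.0, 0.0, 0.0)
print(T[:3, 3])  # translation part: [0. 0. 3.]
```

The rotation block is the identity and only the z-component of the translation is non-zero, which is what a prismatic extension of 3 along z should give.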
|
7,902
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from deep4cast.forecasters import Forecaster
from deep4cast.models import WaveNet
from deep4cast.datasets import TimeSeriesDataset
import deep4cast.transforms as transforms
import deep4cast.metrics as metrics
# Make RNG predictable
np.random.seed(0)
torch.manual_seed(0)
# Use a gpu if available, otherwise use cpu
device = ('cuda' if torch.cuda.is_available() else 'cpu')
%matplotlib inline
if not os.path.exists('data/Daily-train.csv'):
    !wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Train/Daily-train.csv -P data/
if not os.path.exists('data/Daily-test.csv'):
    !wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Test/Daily-test.csv -P data/
data_arr = pd.read_csv('data/Daily-train.csv')
data_arr = data_arr.iloc[:, 1:].values
data_arr = list(data_arr)
for i, ts in enumerate(data_arr):
    data_arr[i] = ts[~np.isnan(ts)][None, :]
horizon = 14
lookback = 128
import random
data_train = []
for time_series in data_arr:
    data_train.append(time_series[:, :-horizon])
data_train = random.sample(data_train, int(len(data_train) * 0.1))
transform = transforms.Compose([
transforms.ToTensor(),
transforms.LogTransform(targets=[0], offset=1.0),
transforms.RemoveLast(targets=[0]),
transforms.Target(targets=[0]),
])
data_train = TimeSeriesDataset(
data_train,
lookback,
horizon,
step=1,
transform=transform
)
# Create mini-batch data loader
dataloader_train = DataLoader(
data_train,
batch_size=512,
shuffle=True,
pin_memory=True,
num_workers=1
)
# Define the model architecture
model = WaveNet(input_channels=1,
output_channels=1,
horizon=horizon,
hidden_channels=89,
skip_channels=199,
n_layers=7)
print('Number of model parameters: {}.'.format(model.n_parameters))
print('Receptive field size: {}.'.format(model.receptive_field_size))
# Enable multi-gpu if available
if torch.cuda.device_count() > 1:
    print('Using {} GPUs.'.format(torch.cuda.device_count()))
    model = torch.nn.DataParallel(model)
# .. and the optimizer
optim = torch.optim.Adam(model.parameters(), lr=0.0008097436666349985)
# .. and the loss
loss = torch.distributions.StudentT
# Fit the forecaster
forecaster = Forecaster(model, loss, optim, n_epochs=5, device=device)
forecaster.fit(dataloader_train, eval_model=True)
data_train = pd.read_csv('data/Daily-train.csv')
data_test = pd.read_csv('data/Daily-test.csv')
data_train = data_train.iloc[:, 1:].values
data_test = data_test.iloc[:, 1:].values
data_arr = []
for ts_train, ts_test in zip(data_train, data_test):
    ts_a = ts_train[~np.isnan(ts_train)]
    ts_b = ts_test
    ts = np.concatenate([ts_a, ts_b])[None, :]
    data_arr.append(ts)
# Sequentialize the training and testing dataset
data_test = []
for time_series in data_arr:
    data_test.append(time_series[:, -horizon-lookback:])
data_test = TimeSeriesDataset(
data_test,
lookback,
horizon,
step=1,
transform=transform
)
dataloader_test = DataLoader(
data_test,
batch_size=1024,
shuffle=False,
num_workers=2
)
# Get time series of actuals for the testing period
y_test = []
for example in dataloader_test:
    example = dataloader_test.dataset.transform.untransform(example)
    y_test.append(example['y'])
y_test = np.concatenate(y_test)
# Get corresponding predictions
y_samples = forecaster.predict(dataloader_test, n_samples=100)
# Evaluate forecasts
test_smape = metrics.smape(y_samples, y_test)
print('SMAPE: {}%'.format(test_smape.mean()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Divide into train and test
Step3: We've also found that it is not necessary to train on the full dataset, so we here select a 10% random sample of time series for training. We will evaluate on the full dataset later.
Step4: We follow Torchvision in processing examples using Transforms chained together by Compose.
Step5: TimeSeriesDataset inherits from Torch Datasets for use with Torch DataLoader. It handles the creation of the examples used to train the network using lookback and horizon to partition the time series.
Step6: Modeling and Forecasting
Step7: Evaluation
Step8: We need to transform the output forecasts. The output from the forecaster is of the form (n_samples, n_time_series, n_variables, n_timesteps).
Step9: We calculate the symmetric MAPE.
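The symmetric MAPE of Step9 can be written directly; this is a sketch of the standard formula, and the deep4cast `metrics` module may differ in how it averages over samples and series.

```python
import numpy as np

# Symmetric MAPE: absolute error scaled by the mean magnitude of
# actual and forecast, averaged and expressed as a percentage.
def smape(y_pred, y_true):
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)

print(smape(np.array([110.0, 90.0]), np.array([100.0, 100.0])))
```

Unlike plain MAPE, over- and under-forecasts of equal size are penalized symmetrically because the denominator uses both values.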
|
7,903
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
def strip_parentheses(col, df):
    '''
    Splits single-column strings of "value (error)" into two columns of value and error.
    input:
        - string name of column to split in two
        - dataframe to apply to
    returns dataframe
    '''
    out1 = df[col].str.replace(")", "").str.split(pat="(")
    df_out = out1.apply(pd.Series)
    # Split the column name on the whitespace
    base, sufx = col.split(" ")
    df[base] = df_out[0].copy()
    df[base+"_e"] = df_out[1].copy()
    del df[col]
    return df
names = ["Name","R.A. (J2000.0)","Decl. (J2000.0)","Spectral Type","SpectralType Ref.","Parallax (error)(arcsec)",
"Parallax Ref.","J (error)","H (error)","Ks (error)","JHKRef.","PhotSys"]
tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb1.txt",
sep='\t', names=names, na_values='\ldots')
cols_to_fix = [col for col in tbl1.columns.values if "(error)" in col]
for col in cols_to_fix:
    print(col)
    tbl1 = strip_parentheses(col, tbl1)
tbl1.head()
names = ["Name","Spectral Type","[3.6] (error)","n1","[4.5] (error)","n2",
"[5.8] (error)","n3","[8.0] (error)","n4","[3.6]-[4.5]","[4.5]-[5.8]","[5.8]-[8.0]","Notes"]
tbl3 = pd.read_csv("http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb3.txt",
sep='\t', names=names, na_values='\ldots')
cols_to_fix = [col for col in tbl3.columns.values if "(error)" in col]
cols_to_fix
for col in cols_to_fix:
    print(col)
    tbl3 = strip_parentheses(col, tbl3)
tbl3.head()
pd.options.display.max_columns = 50
del tbl3["Spectral Type"] #This is repeated
patten2006 = pd.merge(tbl1, tbl3, how="outer", on="Name")
patten2006.head()
import gully_custom
patten2006["SpT_num"], _1, _2, _3= gully_custom.specTypePlus(patten2006["Spectral Type"])
sns.set_context("notebook", font_scale=1.5)
for color in ["[3.6]-[4.5]", "[4.5]-[5.8]", "[5.8]-[8.0]"]:
    plt.plot(patten2006["SpT_num"], patten2006[color], '.', label=color)
plt.xlabel(r'Spectral Type (M0 = 0)')
plt.ylabel(r'$[3.6]-[4.5]$')
plt.title("IRAC colors as a function of spectral type")
plt.legend(loc='best')
patten2006.to_csv('../data/Patten2006/patten2006.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The tables define the value and error as a string
Step2: Table 1 - Basic data on sources
Step3: Table 3- IRAC photometry
Step4: Convert spectral type to number
Step5: Make a plot of mid-IR colors as a function of spectral type.
Step6: Save the cleaned data.
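The value/error splitting of Step1 works like this on hypothetical strings (the parallax numbers below are made up for illustration):

```python
import pandas as pd

# Split "value (error)" strings into two columns, the same idea as
# strip_parentheses() above: drop ")" then split on "(".
df = pd.DataFrame({"Parallax (error)": ["0.0714 (0.0020)", "0.1000 (0.0050)"]})
parts = df["Parallax (error)"].str.replace(")", "", regex=False).str.split("(")
vals = parts.apply(pd.Series)   # column 0 = value, column 1 = error
print(vals[0].tolist(), vals[1].tolist())
```

Note the values keep a trailing space from the original string, so a `.str.strip()` (or a numeric cast) is usually wanted before analysis.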
|
7,904
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df = pd.read_csv('kyphosis.csv')
df.head()
sns.pairplot(df,hue='Kyphosis',palette='Set1')
from sklearn.model_selection import train_test_split
X = df.drop('Kyphosis',axis=1)
y = df['Kyphosis']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in modern scikit-learn
from sklearn.tree import export_graphviz
import pydot
features = list(df.columns[1:])
features
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data,feature_names=features,filled=True,rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
print(confusion_matrix(y_test,rfc_pred))
print(classification_report(y_test,rfc_pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the Data
Step2: EDA
Step3: Train Test Split
Step4: Decision Trees
Step5: Prediction and Evaluation
Step6: Tree Visualization
Step7: Random Forests
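The confusion matrix of Step5 can be read off a toy example (the labels below are invented, not the kyphosis data): scikit-learn puts true labels on rows and predictions on columns, in the order given by `labels`.

```python
from sklearn.metrics import confusion_matrix

# Toy labels: with labels=["absent", "present"], row 0 counts true
# "absent" cases and row 1 true "present" cases.
y_true = ["present", "absent", "absent", "present", "absent"]
y_pred = ["present", "absent", "present", "present", "absent"]
cm = confusion_matrix(y_true, y_pred, labels=["absent", "present"])
print(cm)  # [[2 1] [0 2]]
```

Here one true "absent" case was misclassified as "present" (the off-diagonal 1), which is the kind of error the classification report summarizes as recall for the "absent" class.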
|
7,905
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
%matplotlib inline
np.random.seed(987654321)
data = st.norm.rvs(size=1000)
analyze(xdata=data)
pets = ['dog', 'cat', 'rat', 'cat', 'rabbit', 'dog', 'hamster', 'cat', 'rabbit', 'dog', 'dog']
analyze(pets)
from inspect import signature
print(analyze.__name__, signature(analyze))
print(analyze.__doc__)
example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6]
example2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58]
analyze(example1, example2)
np.random.seed(987654321)
group_a = st.norm.rvs(size=50)
group_b = st.norm.rvs(size=25)
group_c = st.norm.rvs(size=30)
group_d = st.norm.rvs(size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.norm.rvs(0.1, 1, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.weibull_max.rvs(1.2, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This will tell python to import the sci-analysis function analyze().
Step2: Now, sci-analysis should be ready to use. Try the following code
Step3: A histogram, box plot, summary stats, and test for normality of the data should appear above.
Step4: Let's examine the analyze() function in more detail. Here's the signature for the analyze() function
Step5: analyze() will detect the desired type of data analysis to perform based on whether the ydata argument is supplied, and whether the xdata argument is a two-dimensional array-like object.
Step6: If xdata is a sequence or dictionary of vectors, a location test and summary statistics for each vector will be performed. If each vector is normally distributed and they all have equal variance, a one-way ANOVA is performed. If the data is not normally distributed or the vectors do not have equal variance, a non-parametric Kruskal-Wallis test will be performed instead of a one-way ANOVA.
Step7: In the example above, sci-analysis is telling us the four groups are normally distributed (by use of the Bartlett Test, Oneway ANOVA and the near straight line fit on the quantile plot), the groups have equal variance and the groups have matching means. The only significant difference between the four groups is the sample size we specified. Let's try another example, but this time change the variance of group B
Step8: In the example above, group B has a standard deviation of 2.75 compared to the other groups that are approximately 1. The quantile plot on the right also shows group B has a much steeper slope compared to the other groups, implying a larger variance. Also, the Kruskal-Wallis test was used instead of the Oneway ANOVA because the pre-requisite of equal variance was not met.
|
7,906
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import active
import experiment
import logistic_regression as logr
from sklearn import datasets # The Iris dataset is imported from here.
from IPython.display import display
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 1
%aimport active
%aimport experiment
%aimport logistic_regression
np.set_printoptions(precision=4)
plt.rcParams['axes.labelsize'] = 15
plt.rcParams['axes.titlesize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['legend.fontsize'] = 15
plt.rcParams['figure.titlesize'] = 18
# This code was adapted from
# http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html#
iris = datasets.load_iris()
X = iris.data[:, :2] # Take the first two features.
Y = iris.target
print('X has shape', X.shape)
print('Y has shape', Y.shape)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
plt.figure(2, figsize=(12, 7))
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.title('The Iris flower data set')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.savefig('iris.png', dpi=600, bbox_inches='tight', transparent=True)
plt.show()
bias = np.ones((X.shape[0], 1)) # Add a bias variable set to 1.
X = np.hstack((X, bias))
Y[Y==1] = -1
Y[Y==2] = -1
Y[Y==0] = +1
np.random.seed(1)
size = X.shape[0]
index = np.arange(size)
np.random.shuffle(index)
training_index = index[:int(size/2)]
testing_index = index[int(size/2):]
X_iris_training = X[training_index]
Y_iris_training = Y[training_index]
X_iris_testing = X[testing_index]
Y_iris_testing = Y[testing_index]
n = 10
iterations = 75
X_testing = X_iris_testing
Y_testing = Y_iris_testing
X_training = X_iris_training
Y_training = Y_iris_training
Y_training[Y_training== -1] = 0
Y_testing[Y_testing==-1] = 0
Y_training
Y_testing
average_accuracies_logr = \
logr.experiment(n, iterations, X_testing, Y_testing, X_training, Y_training)
print(average_accuracies_logr)
w_best = logr.train(X_training, Y_training)
print('w_best is', w_best)
predictions = logr.predict(w_best, X_testing)
print('Using w_best the accuracy is', \
logr.compute_accuracy(predictions, Y_testing))
Y_training[Y_training==0] = -1
Y_testing[Y_testing==0] = -1
Y_training
Y_testing
average_accuracies_ac = \
experiment.experiment(n, iterations, X_testing, Y_testing,
X_training, Y_training, center='ac',
sample=1, M=None)
average_accuracies_cc = \
experiment.experiment(n, iterations, X_testing, Y_testing,
X_training, Y_training, center='cc',
sample=1, M=None)
average_accuracies_rand = \
experiment.experiment(n, iterations, X_testing, Y_testing,
X_training, Y_training, center='random',
sample=1, M=None)
plt.figure(figsize=(12,7))
queries = np.arange(1, iterations + 1)
plt.plot(queries, average_accuracies_logr, 'mx-', label='LR',
markevery=5,
lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5,
markeredgecolor = 'm')
plt.plot(queries, average_accuracies_ac, 'r^-', label='AC',
markevery=5,
lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5,
markeredgecolor = 'r')
plt.plot(queries, average_accuracies_cc, 'go-', label='CC',
markevery=5,
lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5,
markeredgecolor = 'g')
plt.plot(queries, average_accuracies_rand, 'bs-', label='Random',
markevery=5,
lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5,
markeredgecolor = 'b')
plt.xlabel('Number of iterations')
plt.ylabel('Accuracy averaged over %d tests' % n)
plt.title('Average accuracy of a cutting plane active learning procedure (Iris flower data set)')
plt.legend(loc='best')
plt.savefig('iris_experiment.png', dpi=600, bbox_inches='tight', transparent=True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing and processing the Iris data set
Step2: Experimental procedure
Step3: The experiment
|
7,907
|
<ASSISTANT_TASK:>
Python Code:
upload_dir = './sketch'
import boto
runThis = 0
import os  # needed for os.listdir below
if runThis:
    conn = boto.connect_s3()
    b = conn.create_bucket('sketchpad_basic_pilot2_sketches')
    all_files = [i for i in os.listdir(upload_dir) if i != '.DS_Store']
    for a in all_files:
        print(a)
        k = b.new_key(a)
        k.set_contents_from_filename(os.path.join(upload_dir, a))
        k.set_acl('public-read')
## read in experimental metadata file
path_to_metadata = '../../analysis/sketchpad_basic_pilot2_group_data.csv'
meta = pd.read_csv(path_to_metadata)
## clean up and add filename column
meta2 = meta.drop(['svg','png','Unnamed: 0'],axis=1)
filename = []
games = []
for i, row in meta2.iterrows():
    filename.append('gameID_{}_trial_{}.png'.format(row['gameID'], row['trialNum']))
    games.append([])
meta2['filename'] = filename
meta2['games'] = games
## write out metadata to json file
stimdict = meta2.to_dict(orient='records')
import json
with open('sketchpad_basic_recog_meta.js', 'w') as fout:
    json.dump(stimdict, fout)
J = json.loads(open('sketchpad_basic_recog_meta.js', mode='r').read())
assert len(J)==len(meta2)
'{} unique games.'.format(len(np.unique(meta2.gameID.values)))
# set vars
auth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user
pswd = auth.values[0][0]
user = 'sketchloop'
host = 'rxdhawkins.me' ## cocolab ip address
# have to fix this to be able to analyze from local
import pymongo as pm
conn = pm.MongoClient('mongodb://sketchloop:' + pswd + '@127.0.0.1')
db = conn['stimuli']
coll = db['sketchpad_basic_pilot2_sketches']
## actually add data now to the database
for (i, j) in enumerate(J):
    if i % 100 == 0:
        print('%d of %d' % (i, len(J)))
    coll.insert_one(j)
## How many sketches have been retrieved at least once? equivalent to: coll.find({'numGames':{'$exists':1}}).count()
coll.find({'numGames':{'$gte':0}}).count()
## stashed away handy querying things
# coll.find({'numGames':{'$gte':1}}).sort('trialNum')[0]
# from bson.objectid import ObjectId
# coll.find({'_id':ObjectId('5a9a003d47e3d54db0bf33cc')}).count()
import os
from PIL import Image
def RGBA2RGB(image, color=(255, 255, 255)):
    '''
    Alpha composite an RGBA Image with a specified color.
    Simpler, faster version than the solutions above.
    Source: http://stackoverflow.com/a/9459208/284318
    Keyword Arguments:
    image -- PIL RGBA Image object
    color -- Tuple r, g, b (default 255, 255, 255)
    '''
    image.load()  # needed for split()
    background = Image.new('RGB', image.size, color)
    background.paste(image, mask=image.split()[3])  # 3 is the alpha channel
    return background
def load_and_crop_image(path, dest='object_cropped', imsize=224):
    im = Image.open(path)
    # if np.array(im).shape[-1] == 4:
    #     im = RGBA2RGB(im)
    # crop to sketch only
    arr = np.asarray(im)
    if len(arr.shape) == 2:
        w, h = np.where(arr != 127)
    else:
        w, h, d = np.where(arr != 127)  # where the image is not background
    if len(h) == 0:
        print(path)
    xlb = min(h)
    xub = max(h)
    ylb = min(w)
    yub = max(w)
    lb = min([xlb, ylb])
    ub = max([xub, yub])
    im = im.crop((lb, lb, ub, ub))
    im = im.resize((imsize, imsize), Image.ANTIALIAS)
    objname = path.split('/')[-1]
    if not os.path.exists(dest):
        os.makedirs(dest)
    im.save(os.path.join(dest, objname))
run_this = 0
if run_this:
    ## actually crop images now
    data_dir = './object'
    allobjs = ['./object/' + i for i in os.listdir(data_dir)]
    for o in allobjs:
        load_and_crop_image(o)
run_this = 0
if run_this:
    ## rename objects in folder
    data_dir = './object'
    allobjs = [data_dir + '/' + i for i in os.listdir(data_dir) if i != '.DS_Store']
    for o in allobjs:
        if len(o.split('_')) == 4:
            os.rename(o, os.path.join(data_dir, o.split('/')[-1].split('_')[2] + '.png'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: build stimulus dictionary
Step2: upload stim dictionary to mongo (db = 'stimuli', collection='sketchpad_basic_recog')
Step4: crop 3d objects
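The cropping logic of Step4, a bounding box of non-background pixels, in miniature (the 10×10 array is illustrative; 127 is the gray background value used above):

```python
import numpy as np

# Find the bounding box of all pixels that differ from the background.
img = np.full((10, 10), 127, dtype=np.uint8)
img[2:5, 3:8] = 0  # pretend sketch strokes
rows, cols = np.where(img != 127)
box = (rows.min(), rows.max(), cols.min(), cols.max())
print(box)  # (2, 4, 3, 7)
```

`np.where` on the boolean mask returns the row and column indices of every stroke pixel, and the min/max of each gives the tightest crop.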
|
7,908
|
<ASSISTANT_TASK:>
Python Code:
width = 20
height = 5*9
width * height
tax = 8.25 / 100
price = 100.50
price * tax
price + _
round(_, 2)
print('spam email')
# This would cause error
print('doesn't')
# One way of doing it correctly
print('doesn\'t')
# Another way of doing it correctly
print("doesn't")
print('''
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
''')
print('''Cheng highly recommends Python programming language''')
word = 'HELP' + 'A'
word
word[0]
word[4]
# endding index not included
word[0:2]
word[2:4]
# length of a string
len(word)
a = ['spam', 'eggs', 100, 1234]
a
a[0]
a[3]
a[2:4]
sum(a[2:4])
a
a[2] = a[2] + 23
a
q = [2, 3]
p = [1, q, 4]
p
len(p)
p[1]
p[1][0]
x=(1,2,3,4)
x[0]
x[0]=7 # it will raise error since tuple is immutable
tel = {'jack': 4098, 'sam': 4139}
tel['dan'] = 4127
tel
tel['jack']
del tel['sam']
tel
tel['mike'] = 4127
tel
# Is dan in the dict?
'dan' in tel
for key in tel:
print('key:', key, '; value:', tel[key])
x = int(input("Please enter an integer for x: "))
if x < 0:
    x = 0
    print('Negative; changed to zero')
elif x == 0:
    print('Zero')
elif x == 1:
    print('Single')
else:
    print('More')
a, b = 0, 1  # multiple assignment
while a < 10:
    print(a)
    a, b = b, a+b
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for i in words:
    print(i, len(i))
def fib(n):  # write Fibonacci series up to n
    '''Print a Fibonacci series up to n.'''
    a, b = 0, 1
    while a < n:
        print(a)
        a, b = b, a+b
fib(200)
fib(2000000000000000) # do not need to worry about the type of a,b
# output for viewing first
import string
import random
# fix the pseudo-random sequences for easy replication
# It will generate the same random sequences
# of nubmers/letters with the same seed.
random.seed(123)
for i in range(50):
    # Data values separated by comma (csv file)
    print(i+1, random.choice(string.ascii_uppercase),
          random.choice(range(6)), sep=',')
# write the data to a file
random.seed(123)
out_file = open('data.csv', 'w')
columns = ['id', 'name', 'age']
out_file.write(','.join(columns) + '\n')
for i in range(50):
    row = [str(i+1), random.choice(string.ascii_uppercase),
           str(random.choice(range(6)))]
    out_file.write(','.join(row) + '\n')
else:
    out_file.close()
# read data into Python
for line in open('data.csv', 'r'):
    print(line)
# crawl_UTD_reviews
# Author: Cheng Nie
# Email: me@chengnie.com
# Date: Feb 8, 2016
# Updated: Feb 12, 2016
from urllib.request import urlopen
num_pages = 2
reviews_per_page = 20
# the file we will save the rating and date
out_file = open('UTD_reviews.csv', 'w')
# the url that we need to locate the page for UTD reviews
url = 'http://www.yelp.com/biz/university-of-texas-at-dallas-\
richardson?start={start_number}'
# the three string patterns we just explained
review_start_pattern = '<div class="review-wrapper">'
rating_pattern = '<i class="star-img stars_'
date_pattern = '"datePublished" content="'
reviews_count = 0
for page in range(num_pages):
    print('processing page', page)
    # open the url and save the source code string to page_content
    html = urlopen(url.format(start_number = page * reviews_per_page))
    page_content = html.read().decode('utf-8')
    # locate the beginning of an individual review
    review_start = page_content.find(review_start_pattern)
    while review_start != -1:
        # it means there is at least one more review to be crawled
        reviews_count += 1
        # get the rating
        cut_front = page_content.find(rating_pattern, review_start) \
            + len(rating_pattern)
        cut_end = page_content.find('" title="', cut_front)
        rating = page_content[cut_front:cut_end]
        # get the date
        cut_front = page_content.find(date_pattern, cut_end) \
            + len(date_pattern)
        cut_end = page_content.find('">', cut_front)
        date = page_content[cut_front:cut_end]
        # save the data into out_file
        out_file.write(','.join([rating, date]) + '\n')
        review_start = page_content.find(review_start_pattern, cut_end)
    print('crawled', reviews_count, 'reviews so far')
out_file.close()
word
# first index default to 0 and second index default to the size
word[:2]
# It's equivalent to
word[0:2]
# Everything except the first two characters
word[2:]
# It's equivalent to
word[2:len(word)]
# start: end: step
word[0::2]
word[0:len(word):2]
word[-1] # The last character
word[-2] # The last-but-one character
word[-2:] # The last two characters
word[:-2] # Everything except the last two characters
a
a[-2]
a[1:-1]
a[:2] + ['bacon', 2*2]
3*a[:3] + ['Boo!']
# Replace some items:
a[0:2] = [1, 12]
a
# Remove some:
a[0:2] = [] # or del a[0:2]
a
# Insert some:
a[1:1] = ['insert', 'some']
a
# inserting at one position is not the same as changing one element
# a=[1, 12, 100, 1234]
a = [123, 1234]
sum(a)
a[1] = ['insert', 'some']
a
# loop way
cubes = []
for x in range(11):
    cubes.append(x**3)
cubes
# map way
def cube(x):
    return x*x*x
list(map(cube, range(11)))
# list comprehension way
[x**3 for x in range(11)]
result = []
for i in range(11):
    if i % 2 == 0:
        result.append(i)
else:
    print(result)
[i for i in range(11) if i%2==0]
l=[1,3,5,6,8,10]
[i for i in l if i%2==0]
#
# ----------------------- In Python ------------------
# access table from Python
# connect to MySQL in Python
import mysql.connector
cnx = mysql.connector.connect(user='root',
password='pythonClass',
database='test')
# All DDL (Data Definition Language) statements are
# executed using a handle structure known as a cursor
cursor = cnx.cursor()
#cursor.execute("")
# write the same data to the example table
query0 = '''insert into example (id, name, age) \
values ({id_num},"{c_name}",{c_age});'''
random.seed(123)
for i in range(50):
    query1 = query0.format(id_num = i+1,
                           c_name = random.choice(string.ascii_uppercase),
                           c_age = random.choice(range(6)))
    print(query1)
    cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
#
cursor.execute('select * from e_copy;')
for i in cursor:
    print(i)
#
# ----------------------- In Python ------------------
#
# # example for adding new info for existing record
# cursor.execute('alter table e_copy add mother_name varchar(1) default null')
query='update e_copy set mother_name="{m_name}" where id={id_num};'
# random.seed(333)
for i in range(50):
query1=query.format(m_name = random.choice(string.ascii_uppercase),id_num = i+1)
print(query1)
cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
#
# example for insert new records
query2='insert into e_copy (id, name,age,mother_name) \
values ({id_num},"{c_name}",{c_age},"{m_name}")'
for i in range(10):
query3=query2.format(id_num = i+60,
c_name = random.choice(string.ascii_uppercase),
c_age = random.randint(0,6),
m_name = random.choice(string.ascii_uppercase))
print(query3)
cursor.execute(query3)
cnx.commit()
# check if you've updated the data successfully in MySQL
import re
# digits
# find all the numbers
infile=open('digits.txt','r')
content=infile.read()
print(content)
# Find all the numbers in the file
numbers=re.findall(r'\d+',content)
for n in numbers:
print(n)
# find equations
equations=re.findall(r'(\d+)=\d+',content)
for e in equations:
print(e)
# substitute equations to correct them
print(re.sub(r'(\d+)=\d+',r'\1=\1',content))
# Save to file
print(re.sub(r'(\d+)=\d+',r'\1=\1',content), file = open('digits_corrected.txt', 'w'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculator
Step2: Strings
Step3: show ' and " in a string
Step4: span multiple lines
Step5: slice and index
Step6: Index in the Python way
Step7: List
Step8: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Step9: Nest lists
Step10: tuple
Step11: dict
Step12: Quiz
Step13: while
Step14: for
Step16: Define function
Step17: Data I/O
Step18: MySQL
Step19: Quiz
Step20: Target
Step21: Negative index
Step22: More about list
Step23: Versatile features
Step24: Target
Step25: Use if in list comprehension
Step26: Use Python to access MySQL database
Step27: To get better understanding of the table we just created. We will use MySQL command line again.
Step28: Regular expression in Python
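Step28 corrects equations with a backreference substitution. As a hedged aside (a sketch, not from the original notebook), `re.sub` also accepts a function as the replacement, so the right-hand side can be recomputed instead of copied:

```python
import re

text = "2+2=5 and 10+1=11 and 3*3=10"

# replacement callback: rebuild each equation with a recomputed result
def fix(m):
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    result = a + b if op == "+" else a * b
    return "{}{}{}={}".format(m.group(1), op, m.group(3), result)

fixed = re.sub(r"(\d+)([+*])(\d+)=\d+", fix, text)
print(fixed)  # 2+2=4 and 10+1=11 and 3*3=9
```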
|
7,909
|
<ASSISTANT_TASK:>
Python Code:
NAME = "Michelle Appel"
NAME2 = "Verna Dankers"
NAME3 = "Yves van Montfort"
EMAIL = "michelle.appel@student.uva.nl"
EMAIL2 = "verna.dankers@student.uva.nl"
EMAIL3 = "yves.vanmontfort@student.uva.nl"
%pylab inline
plt.rcParams["figure.figsize"] = [20,10]
def true_mean_function(x):
return np.cos(2*pi*(x+1))
def add_noise(y, sigma):
return y + sigma*np.random.randn(len(y))
def generate_t(x, sigma):
return add_noise(true_mean_function(x), sigma)
sigma = 0.2
beta = 1.0 / pow(sigma, 2)
N_test = 100
x_test = np.linspace(-1, 1, N_test)
mu_test = np.zeros(N_test)
y_test = true_mean_function(x_test)
t_test = add_noise(y_test, sigma)
plt.plot( x_test, y_test, 'b-', lw=2)
plt.plot( x_test, t_test, 'go')
plt.show()
def k_n_m(xn, xm, thetas):
theta0, theta1, theta2, theta3 = thetas # Unpack thetas
if(xn == xm):
k = theta0 + theta2 + theta3*xn*xm
else:
k = theta0 * np.exp(-(theta1/2)*(xn-xm)**2) + theta2 + theta3*xn*xm
return k
def computeK(x1, x2, thetas):
K = np.zeros(shape=(len(x1), len(x2))) # Create empty array
for xn, row in zip(x1, range(len(x1))): # Iterate over x1
for xm, column in zip(x2, range(len(x2))): # Iterate over x2
K[row, column] = k_n_m(xn, xm, thetas) # Add kernel to matrix
return K
x1 = [0, 0, 1]
x2 = [0, 0, 1]
thetas = [1, 2, 3, 1]
K = computeK(x1, x2, thetas)
### Test your function
x1 = [0, 1, 2]
x2 = [1, 2, 3, 4]
thetas = [1, 2, 3, 4]
K = computeK(x1, x2, thetas)
assert K.shape == (len(x1), len(x2)), "the shape of K is incorrect"
import matplotlib.pyplot as plt
# The thetas
thetas0 = [1, 4, 0, 0]
thetas1 = [9, 4, 0, 0]
thetas2 = [1, 64, 0, 0]
thetas3 = [1, 0.25, 0, 0]
thetas4 = [1, 4, 10, 0]
thetas5 = [1, 4, 0, 5]
f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3) # Subplot setup
all_thetas = [thetas0, thetas1, thetas2, thetas3, thetas4, thetas5] # List of all thetas
all_plots = [ax1, ax2, ax3, ax4, ax5, ax6] # List of all plots
n = 5 # Number of samples per subplot
for subplot_, theta_ in zip(all_plots, all_thetas): # Iterate over all plots and thetas
K = computeK(x_test, x_test, theta_) # Compute K
# Fix numerical error: eigenvalues that should be 0 can come out slightly negative (~1e-15)
min_eig = np.min(np.real(np.linalg.eigvals(K)))
if min_eig < 0:
K -= 10*min_eig * np.eye(*K.shape)
mean = np.zeros(shape=len(K)) # Generate Means
random = numpy.random.multivariate_normal(mean, K)
# Draw n random samples
samples = [numpy.random.multivariate_normal(mean, K) for i in range(n)]
# Calculate expected y and variance
expected_y = numpy.mean(np.array(samples), axis=0)
uncertainties = numpy.sqrt(K.diagonal())
for sample in samples:
subplot_.plot(sample)
x = np.arange(0, 100) # 100 Steps
# Plot uncertainty
subplot_.fill_between(
x, expected_y - 2 * uncertainties, expected_y + 2 * uncertainties,
alpha=0.3, color='pink'
)
subplot_.plot(expected_y, 'g--') # Plot ground truth
# subplot_.legend(['Sampled y', 'Expected y']) # Add legend
subplot_.set_title(theta_) # Set title
plt.show()
def computeC(x1, x2, theta, beta):
K = computeK(x1, x2, theta)
return K + np.diag(np.array([1/beta for x in x1]))
def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None):
# Calculate or reuse C
if C is None:
C = computeC(x_train, x_train, theta, beta)
# Calculate mean and variance
c = computeC(x_test, x_test, theta, beta)
K = computeK(x_train, x_test, theta)
KC = np.matmul(np.linalg.inv(C), K)
mean_test = np.asarray(np.matrix(t_train) * KC)
var_test = c - np.matmul(KC.T, K)
return mean_test.squeeze(), var_test.squeeze(), C
### Test your function
N = 2
train_x = np.linspace(-1, 1, N)
train_t = 2*train_x
test_N = 3
test_x = np.linspace(-1, 1, test_N)
theta = [1, 2, 3, 4]
beta = 25
test_mean, test_var, C = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=None)
assert test_mean.shape == (test_N,), "the shape of mean is incorrect"
assert test_var.shape == (test_N, test_N), "the shape of var is incorrect"
assert C.shape == (N, N), "the shape of C is incorrect"
C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])
_, _, C_out = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=C_in)
assert np.allclose(C_in, C_out), "C is not reused!"
import math
def gp_log_likelihood(x_train, t_train, theta, beta, C=None, invC=None):
if C is None:
C = computeC(x_train, x_train, theta, beta)
if invC is None:
invC = np.linalg.inv(C)
t_train = np.matrix(t_train)
# Data likelihood as represented in Bishop page 311
lp = -0.5 * np.log(np.linalg.det(C)) - 0.5 * t_train * \
invC * t_train.T - len(x_train) / 2 * np.log(2*np.pi)
lp = np.asscalar(lp)
return lp, C, invC
### Test your function
N = 2
train_x = np.linspace(-1, 1, N)
train_t = 2 * train_x
theta = [1, 2, 3, 4]
beta = 25
lp, C, invC = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=None)
assert lp < 0, "the log-likelihood should smaller than 0"
assert C.shape == (N, N), "the shape of var is incorrect"
assert invC.shape == (N, N), "the shape of C is incorrect"
C_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])
_, C_out, _ = gp_log_likelihood(train_x, train_t, theta, beta, C=C_in, invC=None)
assert np.allclose(C_in, C_out), "C is not reused!"
invC_in = np.array([[1.26260453, 0.15416407], [0.15416407, 1.26260453]])
_, _, invC_out = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=invC_in)
assert np.allclose(invC_in, invC_out), "invC is not reused!"
def gp_plot( x_test, y_test, mean_test, var_test, x_train, t_train, theta, beta ):
# x_test:
# y_test: the true function at x_test
# mean_test: predictive mean at x_test
# var_test: predictive covariance at x_test
# t_train: the training values
# theta: the kernel parameters
# beta: the precision (known)
# the reason for the manipulation is to allow plots separating model and data stddevs.
std_total = np.sqrt(np.diag(var_test)) # includes all uncertainty, model and target noise
std_model = np.sqrt(std_total**2 - 1.0/beta) # remove data noise to get model uncertainty in stddev
std_combo = std_model + np.sqrt(1.0/beta) # add stddev (note: not the same as full)
plt.plot(x_test, y_test, 'b', lw=3)
plt.plot(x_test, mean_test, 'k--', lw=2)
plt.fill_between(x_test, mean_test+2*std_combo,mean_test-2*std_combo, color='k', alpha=0.25)
plt.fill_between(x_test, mean_test+2*std_model,mean_test-2*std_model, color='r', alpha=0.25)
plt.plot(x_train, t_train, 'ro', ms=10)
np.random.seed(70121327)
# Number of data points
n = 2
def plot_conditioned_on_training(n):
# Use the periodic data generator to create 2 training points
x_train = np.random.uniform(low=-1.0, high=1.0, size=n)
t_train = generate_t(x_train, sigma)
# 100 data points for testing
x_test = np.linspace(-1, 1, 100)
y_test = true_mean_function(x_test)
# Iterate over all plots and thetas
for i, theta in enumerate(all_thetas):
plt.subplot(2, 3, i+1)
mean, var, C = gp_predictive_distribution(x_train, t_train, x_test, theta, beta)
lp, C, invC = gp_log_likelihood(x_train, t_train, theta, beta, C)
# Put theta info and log likelihood in title
plt.title("thetas : {}, lp : {}".format(theta, lp))
gp_plot( x_test, y_test, mean, var, x_train, t_train, theta, beta)
plt.show()
plot_conditioned_on_training(n)
# Number of data points
n = 10
plot_conditioned_on_training(n)
np.random.seed(1)
plt.rcParams["figure.figsize"] = [10,10]
# Cov should be diagonal (independency) and have the same values (identical), i.e. a*I.
def create_X(mean, sig, N):
return np.random.multivariate_normal(mean, sig * np.identity(2), N)
m1 = [1, 1]; m2 = [3, 3]
s1 = 1/2; s2 = 1/2
N1 = 20; N2 = 30
X1 = create_X(m1, s1, N1)
X2 = create_X(m2, s2, N2)
plt.figure()
plt.axis('equal')
plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')
plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')
plt.show()
def create_X_and_t(X1, X2):
X1_len = X1.shape[0]
X2_len = X2.shape[0]
X = np.vstack((X1, X2))
t = np.hstack((-np.ones(X1_len), np.ones(X2_len)))
# Shuffle data?
indices = np.arange(X1_len + X2_len)
np.random.shuffle(indices)
return X[indices], t[indices]
### Test your function
dim = 2
N1_test = 2
N2_test = 3
X1_test = np.arange(4).reshape((N1_test, dim))
X2_test = np.arange(6).reshape((N2_test, dim))
X_test, t_test = create_X_and_t(X1_test, X2_test)
assert X_test.shape == (N1_test + N2_test, dim), "the shape of X is incorrect"
assert t_test.shape == (N1_test + N2_test,), "the shape of t is incorrect"
def computeK(X):
K = np.dot(X, X.T).astype('float')
return K
dim = 2
N_test = 3
X_test = np.arange(6).reshape((N_test, dim))
K_test = computeK(X_test)
assert K_test.shape == (N_test, N_test)
import cvxopt
def compute_multipliers(X, t):
K = computeK(np.dot(np.diag(t), X))
q = cvxopt.matrix(-np.ones_like(t, dtype='float'))
G = cvxopt.matrix(np.diag(-np.ones_like(t, dtype='float')))
A = cvxopt.matrix(t).T
h = cvxopt.matrix(np.zeros_like(t, dtype='float'))
b = cvxopt.matrix(0.0)
P = cvxopt.matrix(K)
sol = cvxopt.solvers.qp(P, q, G, h, A, b)
a = np.array(sol['x'])
return a
### Test your function
dim = 2
N_test = 3
X_test = np.arange(6).reshape((N_test, dim))
t_test = np.array([-1., 1., 1.])
a_test = compute_multipliers(X_test, t_test)
assert a_test.shape == (N_test, 1)
np.random.seed(420)
X, t = create_X_and_t(X1, X2)
a_opt = compute_multipliers(X, t)
sv_ind = np.nonzero(np.around(a_opt[:, 0]))
X_sv = X[sv_ind]
t_sv = t[sv_ind]
a_sv = a_opt[sv_ind]
plt.figure()
plt.axis('equal')
plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')
plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')
plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth=3)
plt.show()
w_opt = np.squeeze(np.dot(a_opt.T, np.dot(np.diag(t), X)))
K_sv = computeK(X_sv)
N_sv = size(sv_ind)
atk_sv = np.dot(a_sv.T * t_sv, K_sv)
b = np.sum(t_sv - atk_sv)/N_sv
x_lim = np.array([1, 4])
y_lim = (-w_opt[0] * x_lim - b) / w_opt[1]
plt.figure()
plt.axis('equal')
plt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')
plt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')
plt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth=3)
plt.plot(x_lim, y_lim, c='black')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab 3
Step2: Part 1
Step3: 1. Sampling from the Gaussian process prior (30 points)
Step4: 1.2 computeK( X1, X2, thetas ) (10 points)
Step5: 1.3 Plot function samples (15 points)
Step6: 2. Predictive distribution (35 points)
Step7: 2.2 gp_log_likelihood(...) (10 points)
Step8: 2.3 Plotting (10 points)
Step9: 2.4 More plotting (5 points)
Step10: Part 2
Step11: b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector t containing the labels. Write a function create_X_and_t(X1, X2); it should return the combined data set X and the corresponding target vector t.
Step12: 2.2 Finding the support vectors (15 points)
Step13: Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes
Step14: 2.3 Plot support vectors (5 points)
Step15: 2.4 Plot the decision boundary (10 Points)
|
7,910
|
<ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim import corpora, models, similarities
dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
corpus = corpora.MmCorpus('/tmp/deerwester.mm') # comes from the first tutorial, "From strings to vectors"
print(corpus)
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
doc = "Human computer interaction"
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
print(vec_lsi)
index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it
index.save('/tmp/deerwester.index')
index = similarities.MatrixSimilarity.load('/tmp/deerwester.index')
sims = index[vec_lsi] # perform a similarity query against the corpus
print(list(enumerate(sims))) # print (document_number, document_similarity) 2-tuples
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Similarity Interface
Step2: To follow Deerwester’s example, we first use this tiny corpus to define a 2-dimensional LSI space
Step3: Now suppose a user typed in the query “Human computer interaction”. We would like to sort our nine corpus documents in decreasing order of relevance to this query. Unlike modern search engines, here we only concentrate on a single aspect of possible similarities—on apparent semantic relatedness of their texts (words). No hyperlinks, no random-walk static ranks, just a semantic extension over the boolean keyword match
Step4: In addition, we will be considering cosine similarity to determine the similarity of two vectors. Cosine similarity is a standard measure in Vector Space Modeling, but wherever the vectors represent probability distributions, different similarity measures may be more appropriate.
Step5: <B>Warning</B>
Step6: This is true for all similarity indexing classes (similarities.Similarity, similarities.MatrixSimilarity and similarities.SparseMatrixSimilarity). Also in the following, index can be an object of any of these. When in doubt, use similarities.Similarity, as it is the most scalable version, and it also supports adding more documents to the index later.
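A minimal sketch of the cosine measure these index classes use (plain numpy, independent of gensim; the toy vectors are made up):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity: dot product scaled by the vector norms
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([0.46, -0.07])   # toy 2-d LSI-style vectors
b = np.array([0.46, -0.07])
c = np.array([0.07, 0.46])

print(cosine(a, b))  # identical directions -> 1.0 (up to rounding)
print(cosine(a, c))  # orthogonal -> 0.0
```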
|
7,911
|
<ASSISTANT_TASK:>
Python Code:
import h2o
import imp
from h2o.estimators.kmeans import H2OKMeansEstimator
# Start a local instance of the H2O engine.
h2o.init();
iris = h2o.import_file(path="https://github.com/h2oai/h2o-3/raw/master/h2o-r/h2o-package/inst/extdata/iris_wheader.csv")
iris.describe()
try:
imp.find_module('pandas')
can_pandas = True
import pandas as pd
except:
can_pandas = False
try:
imp.find_module('seaborn')
can_seaborn = True
import seaborn as sns
except:
can_seaborn = False
%matplotlib inline
if can_seaborn:
sns.set()
if can_seaborn:
sns.set_context("notebook")
sns.pairplot(iris.as_data_frame(True), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="class");
results = [H2OKMeansEstimator(k=clusters, init="Random", seed=2, standardize=True) for clusters in range(2,13)]
for estimator in results:
estimator.train(x=iris.col_names[0:-1], training_frame = iris)
import math as math
def diagnostics_from_clusteringmodel(model):
total_within_sumofsquares = model.tot_withinss()
number_of_clusters = len(model.centers()[0])
number_of_dimensions = len(model.centers())
number_of_rows = sum(model.size())
aic = total_within_sumofsquares + 2 * number_of_dimensions * number_of_clusters
bic = total_within_sumofsquares + math.log(number_of_rows) * number_of_dimensions * number_of_clusters
return {'Clusters':number_of_clusters,
'Total Within SS':total_within_sumofsquares,
'AIC':aic,
'BIC':bic}
if can_pandas:
diagnostics = pd.DataFrame( [diagnostics_from_clusteringmodel(model) for model in results])
diagnostics.set_index('Clusters', inplace=True)
if can_pandas:
diagnostics.plot(kind='line');
clusters = 4
predicted = results[clusters-2].predict(iris)
iris["Predicted"] = predicted["predict"].asfactor()
if can_seaborn:
sns.pairplot(iris.as_data_frame(True), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="Predicted");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next step of using H2O is to parse and load data into H2O's in-memory columnar compressed storage. Today we will be using the Iris flower data set.
Step2: H2O provides convenient commands to understand the H2OFrame object, the data structure for data that will be used by H2O's machine learning algorithms. Because H2O is often used for very large datasets in a cluster computing configuration, it reports how much the data is compressed in memory and how the data is distributed across the H2O nodes, along with standard summary statistics on the data in the H2OFrame.
Step3: The iris data set is labeled into three classes; there are four measurements that were taken for each iris. While we will not be using the labeled data for clustering, it does provide us a convenient comparison and visualization of the data as it was provided. In this example I use Seaborn for the visualization of the data.
Step4: The next step is to model the data using H2O's kmeans algorithm. We will do this across a range of cluster options and collect each H2O model object as an element in an array. In this example the initial position of the cluster centers is selected at random and the random number seed is set for reproducibility. Because H2O is designed for high performance it is quick and easy to explore many different hyper-parameter settings during modeling to find the model that best suits your needs.
Step5: There are three diagnostics that will be demonstrated to help with determining the number of clusters
Step6: From the plot below, to me, it is difficult to find a 'knee' in the rate of decrease of the total within-cluster sum of squares. It might be at 4 clusters, it might be 7. AIC is minimized at 7 clusters, and BIC is minimized at 4 clusters.
Step7: For demonstration purposes, I will select the number of clusters to be 4. I will use the H2O model for 4 clusters previously created, and use that to assign the cluster membership of each of the original data points. This predicted cluster assignment is then added to the original iris data frame as a new vector (mostly to make plotting easy).
Step8: Finally, I will plot the predicted cluster membership using the same layout as on the original data earlier in the notebook.
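As a hedged aside on the model-selection rule used here (all within-SS numbers below are made up for illustration; only the AIC/BIC formulas match the ones in the notebook):

```python
import math

def cluster_aic_bic(within_ss, k, d, n):
    # penalized scores, same form as diagnostics_from_clusteringmodel above:
    # within-cluster SS plus a complexity penalty per cluster-centre coordinate
    aic = within_ss + 2 * d * k
    bic = within_ss + math.log(n) * d * k
    return aic, bic

# hypothetical within-SS values for k = 2..5 on 150 rows with 4 dimensions
wss = {2: 152.3, 3: 78.8, 4: 57.2, 5: 49.9}
scores = {k: cluster_aic_bic(w, k, 4, 150) for k, w in wss.items()}
best_bic = min(scores, key=lambda k: scores[k][1])
print(best_bic)  # BIC picks 4 clusters for these made-up numbers
```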
|
7,912
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# returns a random d dimensional vector, a direction to peturb in
def direction(d,t):
# if type == uniform
if(t == 'u'):
return np.random.uniform(-2/np.sqrt(d), 2/np.sqrt(d), d)
elif(t == 'n'):
return np.random.normal(0, 1/np.sqrt(d), d)
elif(t == 's'):
# a point on the N-Sphere r = 1 so it is ommited
angles = np.random.uniform(0, np.pi, d-2)
angleLast = np.random.uniform(0, 2*np.pi,1)[0]
x = np.zeros(d)
x[0] = np.cos(angles[0])
for i in range(1,d-1):
temp = 1
for j in range(i):
temp = temp * np.sin(angles[j])
if(i == d-2):
x[i] = temp * np.cos(angleLast)
else:
x[i] = temp*np.cos(angles[i])
x[d-1] = x[d-2]*np.tan(angleLast)
return x
#N = 10000
N = 100 # number of directions sampled AND the number of dimensions.
d = N
hN = []
normal = direction(d,'n').reshape(d,1)
for i in range(N-1):
hN.append(np.linalg.norm(direction(d,'n')))
normal = np.concatenate((normal,direction(d,'n').reshape(d,1)), axis = 1)
import matplotlib.pyplot as plt
plt.hist(hN)
plt.show()
hU = []
uniform = direction(d,'u').reshape(d,1)
for i in range(N-1):
hU.append(np.linalg.norm(direction(d,'u')))
uniform = np.concatenate((uniform,direction(d,'u').reshape(d,1)), axis = 1)
import matplotlib.pyplot as plt
plt.hist(hU)
plt.show()
N = 1000
hS = []
spherical = direction(d,'s').reshape(d,1)
for i in range(N-1):
hS.append(np.linalg.norm(direction(d,'s')))
spherical = np.concatenate((spherical,direction(d,'s').reshape(d,1)), axis = 1)
for i in hS:
# All vectors are close enough to 1 in length. Even for small d
if((i-1)>10**-16):
print(i-1)
np.matmul(normal.T, normal)
np.matmul(uniform.T, uniform)
np.matmul(spherical.T, spherical)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From the histograms above we can see that all these methods give us points on the unit sphere (the uniform method only approximately). But are the sampled directions uncorrelated? Let us see,
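One common alternative worth noting (a sketch, assuming numpy's `Generator` API is available): normalizing i.i.d. Gaussian vectors also yields points uniform on the unit sphere, and in high dimension their pairwise dot products concentrate near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 50

V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # project onto the unit sphere

norms = np.linalg.norm(V, axis=1)
G = V @ V.T                                     # pairwise dot products
off_diag = G[~np.eye(n, dtype=bool)]

print(norms.min(), norms.max())       # all 1 up to rounding
print(float(np.abs(off_diag).max()))  # small: typical size is about 1/sqrt(d)
```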
|
7,913
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from cStringIO import StringIO
import matplotlib.pyplot as plt
import caffe
from IPython.display import clear_output, Image, display
import cv2
import PIL.Image
import os
os.chdir("start_deep/")
caffe.set_mode_cpu()
caffe.set_device(0)
caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', "facenet_iter_200000.caffemodel", caffe.TEST)
#training = caffe.Net('facenet_train_test.prototxt', "facenet_iter_200000.caffemodel", caffe.TRAIN)
#solver = caffe.SGDSolver('facenet_solver.prototxt')
#test_net = solver.testnets[0]
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
#im = np.array(PIL.Image.open('train_images/0/137021_102_88_72_72.pgm'))/256.0
im = np.array(PIL.Image.open('train_images/1/image000619.pgm'))/256.0
im_input = im[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = im
showarray(im)
print (len(im))
print (len(im[0]))
print (im)
output = net.forward()
print(output)
if output['prob'][0][0] >0.9:
print "face"
print cv2.__version__
orig_scale = 4
def process_chunk(chunk):
im_input = chunk[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = im_input
return net.forward()['prob'][0][1]
def get_face_prob(chunk):
return process_chunk(chunk)
def add_outline(img, i, j, scale, confidence):
color = (255,255,255)
if confidence > 0.985:
color = (0,255,0)
elif confidence >0.96:
color = (255,255,0)
else:
color =(255,0,0)
for x in range (int(i*scale), int((i+36)*scale)):
img[x][int(j*scale)]=color
img[x][int((j+36)*scale-1)]=color
for y in range(int(j*scale), int((j+36)*scale)):
img[int(i*scale)][y]=color
img[int((i+36)*scale-1)][y]=color
return img
img = PIL.Image.open('test_vis4.jpg')
w, h = img.size
img = img.resize((int(w/orig_scale),int(h/orig_scale)), PIL.Image.NEAREST)
imbase = img
img = img.convert('L')
w, h = img.size
scale = 1
print "starting processing"
found_pairs = []
next_scale = 1.3
next_i = 4
while w >=36*2 and h >36*2:
img = img.resize((int(w/next_scale),int(h/next_scale)), PIL.Image.NEAREST)
w, h = img.size
print img.size
im = np.array(img)
scale*=next_scale
i = 0
j = 0
last_result = 0
while i < int(h-36):
next_i = 4
while j < int(w-36):
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
face_prob = get_face_prob(imtmp)
if face_prob > 0.5:
next_i = 2
while j < int(w-36) and face_prob > last_result:
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
last_result = face_prob
face_prob = get_face_prob(imtmp)
j += 1
if last_result > 0.92:
next_i = 1
matched = False
print last_result
print "face found @ %i, %i" % (i, j)
showarray(imtmp*255)
for pair in found_pairs[:]: # copy the list to remove while working in it
if (abs(pair[0]*pair[3] - i*scale) + abs(pair[1]*pair[3] - j*scale)) < 20*scale :
matched = True
if pair[2] < last_result:
found_pairs.remove(pair)
found_pairs.append((i, j, last_result, scale))
if not matched:
found_pairs.append((i, j, last_result, scale))
j+=36
last_result = 0
j+= 4
i+=next_i
j = 0
print "adding overlay"
for pair in found_pairs:
imbase = add_outline(np.array(imbase),pair[0],pair[1],pair[3], pair[2])
showarray(imbase)
threshold = 0.96
img = PIL.Image.open('test_vis4.jpg')
w, h = img.size
img = img.resize((int(w/orig_scale),int(h/orig_scale)), PIL.Image.NEAREST)
imbase = img
print "adding overlay"
for pair in found_pairs:
if pair[2]>threshold:
imbase = add_outline(np.array(imbase),pair[0],pair[1],pair[3], pair[2])
showarray(imbase)
img = PIL.Image.open('test_vis4.jpg').convert('LA')
arr = np.array(img)
w, h = img.size
print w
img = PIL.Image.open('neg1.jpg')
w, h = img.size
#img = img.resize((int(w/4),int(h/4)), PIL.Image.NEAREST)
imbase = img
img = img.convert('L')
w, h = img.size
scale = 1
width = 36
height = 36
import array
findex = 4327
with open('posneg.txt', 'a') as f:
print "starting processing"
while w >=36*2 and h >36*2:
img = img.resize((int(w/1.3),int(h/1.3)), PIL.Image.NEAREST)
w, h = img.size
print img.size
im = np.array(img)
scale*=1.3
found_pairs = []
i = 0
j = 0
last_result = 0
while i < int(h-36):
while j < int(w-36):
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
face_prob = get_face_prob(imtmp)
if face_prob > 0.9:
showarray(imtmp*255)
buff=array.array('B')
for k in range(0, 36):
for l in range(0, 36):
buff.append(int(imtmp[k][l]*255))
findex += 1
# open file for writing
filename = '0/lbe%i.pgm'%findex
try:
fout=open("train_images/"+filename, 'wb')
except IOError, er:
print "Cannot open file ", filename, "Exiting … \n", er
# define PGM Header
pgmHeader = 'P5' + '\n' + str(width) + ' ' + str(height) + ' ' + str(255) + '\n'
# write the header to the file
fout.write(pgmHeader)
# write the data to the file
buff.tofile(fout)
# close the file
fout.close()
f.write(filename + " 0\n")
j+=4
i+=4
j = 0
print "adding overlay"
for pair in found_pairs:
imbase = add_outline(np.array(imbase),pair[0],pair[1],scale, pair[2])
showarray(imbase)
print findex
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Caffe computation mode
Step2: GPU
Step3: Network loading and tests
Step4: The cell bellows checks that opencv and its python bindings are properly installed
Step6: Face detection in image
Step7: Redraw
Step8: Train dataset appender
|
7,914
|
<ASSISTANT_TASK:>
Python Code:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
def fun1(head):
    if head == None:
        return
    fun1(head.next)
    print(head.data, end=" ")
def fun2(start):
    if start == None:
        return
    print(start.data, end=" ")
    if start.next != None:
        fun2(start.next.next)
    print(start.data, end=" ")
def push(head, new_data):
    new_node = Node(new_data)
    new_node.next = head
    head = new_node
    return head
head = None
head = Node(5)
head = push(head, 4)
head = push(head, 3)
head = push(head, 2)
head = push(head, 1)
print("Output of fun1() for list 1->2->3->4->5")
fun1(head)
print("Output of fun2() for list 1->2->3->4->5")
fun2(head)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,915
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
from scipy.interpolate import interp1d
from scipy.interpolate import interp2d
xb=np.array([-5,-4,-3,-2,-1,0,1,2,3,4,5])
yb=np.array([-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5])
yt=np.array([5]*11)
yc=np.array(0)
x=np.hstack((xb,xb,yb[1:10],yt[1:10],yc))
y=np.hstack((yb,yt,xb[1:10],xb[1:10],yc))
f1=np.array([0]*40)
f2=[1]
f=np.hstack((f1,f2))
plt.scatter(x,y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
# F=np.meshgrid(f,y)
xnew=np.linspace(-5,5,100)
ynew=xnew
Xnew,Ynew=np.meshgrid(xnew,ynew)
Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic') # worked with Jessi Pilgram
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
plt.figure(figsize=(12,8))
plt.contourf(Xnew,Ynew,Fnew)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sparse 2d Interpolation')
plt.colorbar(shrink=.9)
plt.figure(figsize=(12,8))
plt.pcolor(Xnew,Ynew,Fnew)
plt.xlabel('x')
plt.ylabel('y')
plt.title('2d Sparse Interpolation')
plt.colorbar(shrink=.9)
assert True # leave this to grade the plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
|
7,916
|
<ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
# Initialization of setup
# --------------------------------------------------------------------------
nx = 2000 # number of grid points
xmax = 8000 # in m
c = 2500 # Advected speed
eps = 0.5 # CFL
tmax = 2 # simulation time in s
sig = 200 # Gaussian width, in m
x0 = 1000 # Gaussian position, in m
method = 'Lax-Wendroff' # 'Lax-Wendroff', 'upwind'
isnap = 10
# Initialize Space
x = np.linspace(0,xmax,nx)
dx = min(np.diff(x))
# use wave based CFL criterion
dt = eps*dx/c # calculate tim step from stability criterion
# Simulation time
nt = int(np.floor(tmax/dt))
# Initialize shape of fields
Q = np.zeros(nx)
dQ = np.zeros(nx)
dQ1 = np.zeros(nx)
dQ2 = np.zeros(nx)
Qa = np.zeros(nx)
# Spatial initial condition
#---------------------------------------------------------------
sx = np.exp(-1./sig**2 * (x-x0)**2)
# Set Initial condition
Q = sx
# ---------------------------------------------------------------
# Plot initial condition
# ---------------------------------------------------------------
plt.plot(x, sx, color='b', lw=2, label='Initial condition')
plt.ylabel('Amplitude', size=16)
plt.xlabel('x', size=16)
plt.legend()
plt.grid(True)
plt.show()
# ---------------------------------------------------------------
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(12,6))
line = plt.plot(x, Q, 'k', x, Qa, 'r--')
plt.ylabel('Amplitude')
plt.xlabel('x')
plt.title('Scalar Advection - %s method'%method, size=16)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for i in range(nt):
# upwind method
if method == 'upwind':
for j in range(1, nx-1):
# Forward (upwind) (c>0)
dQ[j] = Q[j] - Q[j-1]
# Time extrapolation
Q = Q - dt/dx*c*dQ
# Lax wendroff method
if method == 'Lax-Wendroff':
for j in range(1, nx-1):
# Forward (upwind) (c>0)
dQ1[j] = Q[j+1] - 2*Q[j] + Q[j-1]
dQ2[j] = Q[j+1] - Q[j-1]
# Time extrapolation
Q = Q - 0.5*c*dt/dx*dQ2 + 0.5*(c*dt/dx)**2 *dQ1
# Boundary condition
Q[0] = Q[nx-2] # Periodic
Q[nx-1] = Q[nx-2] # Absorbing
# --------------------------------------
# Animation plot. Display solution
if not i % isnap:
for l in line:
l.remove()
del l
# --------------------------------------
# Analytical solution
xd = c*i*dt+x0
Qa = np.exp(-1./sig**2 * (x - xd)**2)
# --------------------------------------
# Display lines
line = plt.plot(x, Q, 'k', x, Qa, 'r--', lw=1.5)
plt.legend(iter(line), ('F. Volume', 'Analytic'))
plt.gcf().canvas.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Initialization of setup
Step2: 2. Finite Volumes setup
Step3: 3. Initial condition
Step4: 4. Solution for the scalar advection problem
|
7,917
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import random, datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import statsmodels.api as sm
from scipy.stats import norm
from scipy.stats.stats import pearsonr
# str, int, float
str(3)
"chengjun wang"
# int
int('5')
# float
float('7.1')
range(10)
# for i in range(1, 10):
# print(i)
range(1,10)
dir
dir(str)[-10:]
help(str)
x = ' Hello WorlD '
dir(x)[-10:]
# lower
x.lower()
# upper
x.upper()
# rstrip
x.rstrip()
# strip
x.strip()
# replace
x.replace('lo', '')
# split
x.split('lo')
# join
','.join(['a', 'b'])
x = 'hello world'
type(x)
l = [1,2,3,3] # list
t = (1, 2, 3, 3) # tuple
s = {1, 2, 3, 3} # set([1,2,3,3]) # set
d = {'a':1,'b':2,'c':3} # dict
a = np.array(l) # array
print(l, t, s, d, a)
l = [1,2,3,3] # list
l.append(4)
l
d = {'a':1,'b':2,'c':3} # dict
d.keys()
d = {'a':1,'b':2,'c':3} # dict
d.values()
d = {'a':1,'b':2,'c':3} # dict
d['b']
d = {'a':1,'b':2,'c':3} # dict
d.items()
def devidePlus(m, n): # the line ends with a colon
    y = m/n + 1 # note the indentation (spaces)
    return y # note the return statement
range(10)
range(1, 10)
for i in range(10):
print(i, i*10, i**2)
for i in range(10):
print(i*10)
for i in range(10):
print(devidePlus(i, 2))
# list comprehension: a for loop inside a list
r = [devidePlus(i, 2) for i in range(10)]
r
m1 = map(devidePlus, [4,3,2], [2, 1, 5])
print(*m1)
#print(*map(devidePlus, [4,3,2], [2, 1, 5]))
# note: (4, 2) is evaluated as one pair, (3, 1) as another
m2 = map(lambda x, y: x + y, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
print(*m2)
m3 = map(lambda x, y, z: x + y - z, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [3, 3, 2, 2, 5])
print(*m3)
j = 5
if j%2 == 1:
    print('The remainder is 1')
elif j%2 == 0:
    print('The remainder is 0')
else:
    print('The remainder is neither 1 nor 0')
x = 5
if x < 5:
y = -1
z = 5
elif x > 5:
y = 1
z = 11
else:
y = 0
z = 10
print(x, y, z)
j = 0
while j <10:
print(j)
    j+=1 # avoid an infinite loop
j = 0
while j <10:
if j%2 != 0:
print(j**2)
    j+=1 # avoid an infinite loop
j = 0
while j <50:
if j == 30:
break
if j%2 != 0:
print(j**2)
    j+=1 # avoid an infinite loop
a = 4
while a: # 0, None, False
print(a)
a -= 1
if a < 0:
a = None # []
def devidePlus(m, n): # the line ends with a colon
    return m/n + 1 # note the indentation (spaces)
for i in [2, 0, 5]:
try:
print(devidePlus(4, i))
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
try:
for a in aa:
print(10 / a)
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
for a in aa:
try:
print(10 / a)
except Exception as e:
print(e)
pass
data =[[i, i**2, i**3] for i in range(10)]
data
for i in data:
print('\t'.join(map(str, i)))
type(data)
len(data)
data[0]
help(f.write)
# save the data
data =[[i, i**2, i**3] for i in range(10000)]
f = open("../data/data_write_to_file1.txt", "w")
for i in data:
f.write('\t'.join(map(str,i)) + '\n')
f.close()
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines()
data[:5]
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines(1000) #bytes
len(data)
with open('../data/data_write_to_file.txt','r') as f:
print(f.readline())
f = [1, 2, 3, 4, 5]
for k, i in enumerate(f):
print(k, i)
with open('../data/data_write_to_file.txt','r') as f:
for i in f:
print(i)
with open('../data/data_write_to_file.txt','r') as f:
for k, i in enumerate(f):
if k%2000 == 0:
print(i)
data = []
line = '0\t0\t0\n'
line = line.replace('\n', '')
line = line.split('\t')
line = [int(i) for i in line] # convert str to int
data.append(line)
data
# read the data
data = []
with open('../data/data_write_to_file1.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
# read the data
data = []
with open('../data/data_write_to_file.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
import pandas as pd
help(pd.read_csv)
df = pd.read_csv('../data/data_write_to_file.txt',
sep = '\t', names = ['a', 'b', 'c'])
df[-5:]
import json
data_dict = {'a':1, 'b':2, 'c':3}
with open('../data/save_dict.json', 'w') as f:
json.dump(data_dict, f)
dd = json.load(open("../data/save_dict.json"))
dd
data_list = list(range(10))
with open('../data/save_list.json', 'w') as f:
json.dump(data_list, f)
dl = json.load(open("../data/save_list.json"))
dl
%matplotlib inline
import matplotlib.pyplot as plt
x = range(1, 100)
y = [i**-3 for i in x]
plt.plot(x, y, 'b-s')
plt.ylabel('$p(k)$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('Degree Distribution')
plt.show()
import numpy as np
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--')
plt.plot(t, t**2, 'bs')
plt.plot(t, t**3, 'g^')
plt.show()
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t**2, 'b-s', label = '1')
plt.plot(t, t**2.5, 'r-o', label = '2')
plt.plot(t, t**3, 'g-^', label = '3')
plt.annotate(r'$\alpha = 3$', xy=(3.5, 40), xytext=(2, 80),
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize = 20)
plt.ylabel('$f(t)$', fontsize = 20)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc=2,numpoints=1,fontsize=10)
plt.show()
# plt.savefig('/Users/chengjun/GitHub/cjc/figure/save_figure.png',
# dpi = 300, bbox_inches="tight",transparent = True)
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(5,5))
sns.set(style="whitegrid")
plt.figure(1)
plt.subplot(221)
plt.plot(t, t, 'r--')
plt.text(2, 0.8*np.max(t), r'$\alpha = 1$', fontsize = 20)
plt.subplot(222)
plt.plot(t, t**2, 'bs')
plt.text(2, 0.8*np.max(t**2), r'$\alpha = 2$', fontsize = 20)
plt.subplot(223)
plt.plot(t, t**3, 'g^')
plt.text(2, 0.8*np.max(t**3), r'$\alpha = 3$', fontsize = 20)
plt.subplot(224)
plt.plot(t, t**4, 'r-o')
plt.text(2, 0.8*np.max(t**4), r'$\alpha = 4$', fontsize = 20)
plt.show()
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo')
plt.plot(t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
import matplotlib.gridspec as gridspec
import numpy as np
t = np.arange(0., 5., 0.2)
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
plt.plot(t, t**2, 'b-s')
ax2 = plt.subplot(gs[1,:-1])
plt.plot(t, t**2, 'g-s')
ax3 = plt.subplot(gs[1:, -1])
plt.plot(t, t**2, 'r-o')
ax4 = plt.subplot(gs[-1,0])
plt.plot(t, t**2, 'g-^')
ax5 = plt.subplot(gs[-1,1])
plt.plot(t, t**2, 'b-<')
plt.tight_layout()
def OLSRegressPlot(x,y,col,xlab,ylab):
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant, beta = res.params
r2 = res.rsquared
lab = r'$\beta = %.2f, \,R^2 = %.2f$' %(beta,r2)
plt.scatter(x,y,s=60,facecolors='none', edgecolors=col)
plt.plot(x,constant + x*beta,"red",label=lab)
plt.legend(loc = 'upper left',fontsize=16)
plt.xlabel(xlab,fontsize=26)
plt.ylabel(ylab,fontsize=26)
x = np.random.randn(50)
y = np.random.randn(50) + 3*x
pearsonr(x, y)
fig = plt.figure(figsize=(10, 4),facecolor='white')
OLSRegressPlot(x,y,'RoyalBlue',r'$x$',r'$y$')
plt.show()
fig = plt.figure(figsize=(7, 4),facecolor='white')
data = norm.rvs(10.0, 2.5, size=5000)
mu, std = norm.fit(data)
plt.hist(data, bins=25, density=True, alpha=0.6, color='g')  # density replaces the removed normed argument
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = r"$\mu = %.2f, \, \sigma = %.2f$" % (mu, std)
plt.title(title,size=16)
plt.show()
import pandas as pd
df = pd.read_csv('../data/data_write_to_file.txt', sep = '\t', names = ['a', 'b', 'c'])
df[:5]
df.plot.line()
plt.yscale('log')
plt.ylabel('$values$', fontsize = 20)
plt.xlabel('$index$', fontsize = 20)
plt.show()
df.plot.scatter(x='a', y='b')
plt.show()
df.plot.hexbin(x='a', y='b', gridsize=25)
plt.show()
df['a'].plot.kde()
plt.show()
bp = df.boxplot()
plt.yscale('log')
plt.show()
df['c'].diff().hist()
plt.show()
df.plot.hist(stacked=True, bins=20)
# plt.yscale('log')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variable Type
Step2: dir & help
Step3: type
Step4: Data Structure
Step5: Defining functions
Step6: For loops
Step7: map
Step8: if elif else
Step9: While loops
Step10: try except
Step11: Write and Read data
Step12: Saving intermediate dictionary data to JSON
Step13: Reloading the JSON data
Step14: Plotting with matplotlib
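The save-then-reload pattern of Steps 12–13 can be sketched end-to-end in a self-contained way; the path below uses a throwaway temporary directory (a stand-in for the notebook's `../data/` folder, which is an assumption of its environment):

```python
import json
import os
import tempfile

data_dict = {'a': 1, 'b': 2, 'c': 3}

# write the dict out as JSON, then read it straight back
path = os.path.join(tempfile.mkdtemp(), 'save_dict.json')
with open(path, 'w') as f:
    json.dump(data_dict, f)
with open(path) as f:
    restored = json.load(f)
print(restored == data_dict)  # True
```

The round trip preserves keys and values exactly, which is why JSON is a convenient format for intermediate results.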
|
7,918
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
% matplotlib inline
from thinkbayes2 import Hist, Pmf, Suite
pmf = Pmf()
for x in [1,2,3,4,5,6]:
pmf[x] = 1
pmf.Print()
pmf.Normalize()
pmf.Print()
pmf = Pmf([1,2,3,4,5,6])
pmf.Print()
pmf.Prob(1)
pmf[1]
pmf = Pmf()
pmf['Bowl 1'] = 0.5
pmf['Bowl 2'] = 0.5
pmf.Print()
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
pmf.Print()
pmf = Pmf(['Bowl 1', 'Bowl 2'])
pmf.Print()
pmf['Bowl 1'] *= 0.75
pmf['Bowl 2'] *= 0.5
pmf.Print()
pmf.Normalize()
pmf.Print()
class Cookie(Pmf):
    """A map from string bowl ID to probability."""
def __init__(self, hypos):
        """Initialize self.

        hypos: sequence of string bowl IDs
        """
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
        """Updates the PMF with new data.

        data: string cookie type
        """
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
mixes = {
'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}
def Likelihood(self, data, hypo):
        """The likelihood of the data under the hypothesis.

        data: string cookie type
        hypo: string bowl ID
        """
mix = self.mixes[hypo]
like = mix[data]
return like
pmf = Cookie(['Bowl 1', 'Bowl 2'])
pmf.Update('vanilla')
pmf.Print()
dataset = ['vanilla', 'chocolate', 'vanilla']
for data in dataset:
pmf.Update(data)
pmf.Print()
class Monty(Pmf):
    """Map from string location of car to probability."""
def __init__(self, hypos):
        """Initialize the distribution.

        hypos: sequence of hypotheses
        """
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
        """Updates each hypothesis based on the data.

        data: any representation of the data
        """
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
def Likelihood(self, data, hypo):
        """Compute the likelihood of the data under the hypothesis.

        hypo: string name of the door where the prize is
        data: string name of the door Monty opened
        """
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
class Monty(Suite):
def Likelihood(self, data, hypo):
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
class M_and_M(Suite):
    """Map from hypothesis (A or B) to probability."""
mix94 = dict(brown=30,
yellow=20,
red=20,
green=10,
orange=10,
tan=10,
blue=0)
mix96 = dict(blue=24,
green=20,
orange=16,
yellow=14,
red=13,
brown=13,
tan=0)
hypoA = dict(bag1=mix94, bag2=mix96)
hypoB = dict(bag1=mix96, bag2=mix94)
hypotheses = dict(A=hypoA, B=hypoB)
def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: string hypothesis (A or B)
        data: tuple of string bag, string color
        """
bag, color = data
mix = self.hypotheses[hypo][bag]
like = mix[color]
return like
suite = M_and_M('AB')
suite.Update(('bag1', 'yellow'))
suite.Update(('bag2', 'green'))
suite.Print()
suite.Update(('bag1', 'blue'))
suite.Print()
# Solution goes here
# Solution goes here
# Solution goes here
from sympy import symbols
p = symbols('p')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Pmf class
Step2: To be true probabilities, they have to add up to 1. So we can normalize the Pmf
Step3: The return value from Normalize is the sum of the probabilities before normalizing.
Step4: A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes
Step5: To extract a value from a Pmf, you can use Prob
Step6: Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.
Step7: The cookie problem
Step8: And we can update it using Mult
Step9: Or here's the shorter way to construct the prior.
Step10: And we can use *= for the update.
Step11: Either way, we have to normalize the posterior distribution.
Step16: The Bayesian framework
Step17: We can confirm that we get the same result.
Step18: But this implementation is more general; it can handle any sequence of data.
Step23: The Monty Hall problem
Step24: And here's how we use it.
Step25: The Suite class
Step26: And it works.
Step29: The M&M problem
Step30: And here's an update
Step31: Exercise
Step32: Exercise
Step33: Exercises
Step34: Exercise
Step35: Exercise
Step36: Exercise In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.
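The cookie-problem update at the heart of these steps can also be done with a plain dict, without the `thinkbayes2` classes — a minimal sketch of prior × likelihood followed by normalisation, using the same numbers as the text (75% vanilla in Bowl 1, 50% in Bowl 2):

```python
prior = {'Bowl 1': 0.5, 'Bowl 2': 0.5}
likelihood = {'Bowl 1': 0.75, 'Bowl 2': 0.5}   # P(vanilla | bowl)

# unnormalised posterior = prior * likelihood, then normalise
posterior = {h: prior[h] * likelihood[h] for h in prior}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}
print(posterior['Bowl 1'])  # 0.6
```

This is exactly what `Suite.Update` does internally: multiply each hypothesis by its likelihood and renormalise.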
|
7,919
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
%matplotlib inline
import openpathsampling as paths
import numpy as np
old_store = paths.Storage("mstis_bootstrap.nc", mode='r')
print("PathMovers: "+ str(len(old_store.pathmovers)))
print("Samples: " + str(len(old_store.samples)))
print("Ensembles: " + str(len(old_store.ensembles)))
print("SampleSets: " + str(len(old_store.samplesets)))
print("Snapshots: " + str(len(old_store.snapshots)))
print("Networks: " + str(len(old_store.networks)))
template = old_store.snapshots[0]
engine = old_store.engines['toy_engine']
opA = old_store.cvs['opA']
opB = old_store.cvs['opB']
opC = old_store.cvs['opC']
stateA = old_store.volumes['A']
stateB = old_store.volumes['B']
stateC = old_store.volumes['C']
# we could also load the interfaces, but it is just as easy:
interfacesA = paths.VolumeInterfaceSet(opA, 0.0,[0.2, 0.3, 0.4])
interfacesB = paths.VolumeInterfaceSet(opB, 0.0,[0.2, 0.3, 0.4])
interfacesC = paths.VolumeInterfaceSet(opC, 0.0,[0.2, 0.3, 0.4])
ms_outers = paths.MSOuterTISInterface.from_lambdas(
{ifaces: 0.5
for ifaces in [interfacesA, interfacesB, interfacesC]}
)
mstis = paths.MSTISNetwork(
[(stateA, interfacesA),
(stateB, interfacesB),
(stateC, interfacesC)],
ms_outers=ms_outers
).named('mstis')
scheme = paths.DefaultScheme(mstis, engine=engine).named("scheme")
# load the sampleset we have saved before; there is only one in the file
old_sampleset = old_store.samplesets[0]
from_old_sset = scheme.initial_conditions_from_trajectories(old_sampleset)
# CODE IN THIS CELL IS NEEDED FOR TEST SUITE, NOT NORMAL USE
def shoot_until_A_to_A(initial_ensemble, desired_ensemble, sample, engine):
# we only shoot forward because we know the final frame is the problem
mover = paths.ForwardShootMover(ensemble=initial_ensemble,
selector=paths.UniformSelector(),
engine=engine)
while not desired_ensemble(sample):
change = mover.move_core([sample])
if desired_ensemble(change.trials[0]):
sample = change.trials[0]
return sample
minus_samples = []
for minus_ensemble in mstis.special_ensembles['minus']:
# tis_ensemble allows A->B; desired_ensemble only allows A->A
initial_state = minus_ensemble.state_vol
tis_ensemble = mstis.from_state[initial_state].ensembles[0]
desired_ensemble = paths.TISEnsemble(initial_state, initial_state,
tis_ensemble.interface)
initial_sample = from_old_sset[tis_ensemble]
# ensure we're A->A, not A->B
sample_A_to_A = shoot_until_A_to_A(tis_ensemble, desired_ensemble,
initial_sample, engine)
# with an A->A segment, just use this to extend into the minus ensemble
sample = minus_ensemble.extend_sample_from_trajectories(
sample_A_to_A,
engine=engine,
replica=-len(minus_samples) - 1
)
minus_samples.append(sample)
init_conds = scheme.initial_conditions_from_trajectories(
minus_samples,
sample_set=from_old_sset
)
# verify that every trajectory satisfies its ensemble
init_conds.sanity_check()
# verify that this initial conditions are valid for this move scheme
scheme.assert_initial_conditions(init_conds)
equilibration = paths.PathSampling(
storage=None,
sample_set=init_conds,
move_scheme=scheme
)
# NBVAL_SKIP
# don't run this during testing
equilibration.run_until_decorrelated()
# get the final sample set; normally we'd save to a file and reload
equilibrated = equilibration.sample_set
# logging creates ops_output.log file with details of what is going on
#import logging.config
#logging.config.fileConfig("../resources/logging.conf",
# disable_existing_loggers=False)
storage = paths.Storage("mstis.nc", "w")
storage.save(template)
mstis_calc = paths.PathSampling(
storage=storage,
sample_set=equilibrated,
move_scheme=scheme
)
mstis_calc.save_frequency = 50
# NBVAL_SKIP
# skip this during testing, but leave it for demo purposes
# we use the %run magic because this isn't in a package
%run ../resources/toy_plot_helpers.py
xval = paths.FunctionCV("xval", lambda snap : snap.xyz[0][0])
yval = paths.FunctionCV("yval", lambda snap : snap.xyz[0][1])
mstis_calc.live_visualizer = paths.StepVisualizer2D(mstis, xval, yval,
[-1.0, 1.0], [-1.0, 1.0])
background = ToyPlot()
background.contour_range = np.arange(-1.5, 1.0, 0.1)
background.add_pes(engine.pes)
mstis_calc.live_visualizer.background = background.plot()
# increase update frequency to speed things up, but it isn't as pretty
mstis_calc.status_update_frequency = 1
mstis_calc.run(100)
representative_mover = scheme.movers['shooting'][0]
n_steps = int(scheme.n_steps_for_trials(representative_mover, 1000))
print(n_steps)
# NBVAL_SKIP
# don't run all those steps in testing!
mstis_calc.live_visualizer = None # turn off the live visualization
mstis_calc.run_until(n_steps)
storage.close()
from openpathsampling import strategies
all_ensembles = scheme.find_used_ensembles()
shooting_strat = strategies.OneWayShootingStrategy(
selector=paths.UniformSelector(),
ensembles=all_ensembles,
engine=engine
)
# all custom strategies need a global-level "OrganizeBy" strategy
# this is the standard one to use
global_strat = strategies.OrganizeByMoveGroupStrategy()
equil_scheme = paths.MoveScheme(mstis)
equil_scheme.append([shooting_strat, global_strat])
custom_equil = paths.PathSampling(
storage=None,
move_scheme=equil_scheme,
sample_set=init_conds
)
# NBVAL_SKIP
custom_equil.run_until_decorrelated()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting up the simulation
Step2: A lot of information can be recovered from the old storage, and so we don't have the recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
Step3: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create.
Step4: Named objects can be found in storage by using their name as a dictionary key. This allows us to load our old engine, collective variables, and states.
Step5: Creating new interface set, network, and move scheme
Step6: Once again, we have everything we need to build the MSTIS network. Recall that this will create all the ensembles we need for the simulation. However, even though the ensembles are semantically the same, these are not the same objects. We'll need to deal with that later.
Step7: Finally, we'll create the move scheme. For this, we'll use the default TIS move scheme
Step8: Preparing initial conditions
Step9: We are missing trajectories that satisfy the minus ensembles. In real simulations, you usually will have run trajectories in each state (you'll use those to create state/interface definitions). You can (and should) feed those trajectories to the initial_conditions_from_trajectories, instead of the complicated code below. That function is smart enough to select a section of the trajectory that satisfies the minus ensemble.
Step10: Now that we have the necessary trajectories, we create a new sample set using initial_conditions_from_trajectories. By adding the sample_set keyword, we retain any assignments that existing in the given sample set.
Step11: Now we have a sample set with a trajectory for all the ensembles required to start the simulation. We can (and should) double-check that everything is okay with a few simple checks
Step12: Equilibration
Step13: When using one-way shooting, as we are, part of the trajectory is reused after each shooting move. Therefore, an absolute minimum requirement for equilibration is that all frames from each initial trajectory have been replaced by other frames. We refer to such trajectories as "decorrelated," and OPS has a convenience for running a move scheme until all trajectories are decorrelated.
Step14: Production
Step15: The next block sets up a live visualization. This is optional, and only recommended if you're using OPS interactively (which would only be for very small systems). Some of the same tools can be used to play back the behavior after the fact if you want to see the behavior for more complicated systems. You can create a background (here we use the PES contours), and the visualization will plot the trajectories.
Step16: Now everything is ready
Step17: In RETIS, there are several different move types (shooting, replica exchange, etc.), and each move type can have a different probability of being selected. Moreover, different move types may have different numbers of specific moves (ensembles affected) within them.
Step18: Finally, let's run for a lot longer to get enough statistics. Note that this time, we'll run the simulation using run_until, which picks up from where we left off, and finishes when a total of n_steps trials have been performed.
Step19: Appendix
Step20: We want to create a move scheme that consists of a shooting mover for each ensemble that is used in the move scheme called scheme, which we created above. The easiest way to get that is the scheme's find_used_ensembles method. However, we could get equivalent information from the network object mstis, by looking at its sampling_ensembles (normal TIS), ms_outers (multiple state outer), and minus_ensembles.
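The decorrelation criterion mentioned in the steps — trajectories count as decorrelated once they share no frames with the initial ones — can be illustrated independently of OpenPathSampling. This is a conceptual sketch with frames represented by plain ids, not OPS's actual implementation:

```python
def decorrelated(traj_a, traj_b):
    # two trajectories are decorrelated when they share no frames
    return not set(traj_a) & set(traj_b)

initial = [0, 1, 2, 3, 4]
after_one_shot = [10, 11, 2, 3, 4]   # one-way shooting reuses frames 2-4
after_many = [20, 21, 22, 23]
print(decorrelated(initial, after_one_shot))  # False
print(decorrelated(initial, after_many))      # True
```

Because one-way shooting keeps part of the old path, several accepted moves are typically needed before every original frame has been replaced.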
|
7,920
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1,2,3,4,5,6])
# Print the contents of a
a
print("The vector a has " + str(a.ndim) + " dimension(s) and has the shape " + str(a.shape) + ".")
m = np.array([[1,2,3], [4,5,6]])
m
print("The matrix m has " + str(m.ndim) + " dimension(s) and has the shape " + str(m.shape) + ".")
a_squared = [i**2 for i in a]
a_squared
m[0]
m[:,0]
m[1,1]
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot([1,2,3,4,5])
# x**y is shorthand for x to the power of y
plt.plot([1,2,3,4,5],[1**2,2**2,3**2,4**2,5**2])
p = np.arange(1,10) # Get a range of numbers form 1 to 10
plt.plot(p, p, p, p**2, p, p**3)
import math
def sigmoid(x):
a = []
for item in x:
a.append(1/(1+math.exp(-item)))
return a
x = np.arange(-10., 10., 0.1)
sig = sigmoid(x)
plt.plot(x,sig)
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/mdbloice/Machine-Learning-for-Health-Informatics/master/data/breast-cancer-wisconsin.csv")
df.describe()
df.columns = ["ID","Clump_Thickness","Size_Uniformity","Shape_Uniformity","Marginal_Adhesion","Epithelial_Size","Bare_Nucleoli","Bland_Chromatin","Normal_Nucleoli","Mitoses","Class"]
# Print the new header names:
df.columns
df.head()
df.Class.unique()
df.Class.value_counts()
df = df.replace({"Class": {2: 0, 4: 1}})
df.Class.value_counts()
df = df.drop(["ID"], axis=1)
df.describe()
df.Size_Uniformity[0:10]
df['Size_Uniformity'][0:10]
df.dtypes
df.std()
df.Clump_Thickness.std()
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in modern pandas
# Manually select three of the table's columns by passing an array of column names:
df_subset = df[['Clump_Thickness','Size_Uniformity', 'Shape_Uniformity']]
# The semicolon at the end of this line is to suppress informational output (we only want to see the plot)
scatter_matrix(df_subset.head(100), alpha=0.2, figsize=(6,6), diagonal='kde');
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
iris = datasets.load_iris()
X = iris.data
y = iris.target
np.random.seed(376483)
random_indices = np.random.permutation(len(y))
random_indices
X_train = X[random_indices[:-10]]
X_test = X[random_indices[-10:]]
y_train = y[random_indices[:-10]]
y_test = y[random_indices[-10:]]
print("Number of training samples: %d. Number of test samples: %d." % (len(X_train), len(X_test)) )
knn = KNeighborsClassifier() # Initialise the classifier.
knn.fit(X_train, y_train) # Fit the classifier.
y_pred = knn.predict(X_test)
# The classifier's predicted labels are now contained in y_pred:
y_pred
y_pred == y_test
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make a vector with 6 elements
Step2: Get some information about the vector
Step3: Create a matrix like this
Step4: Get some information about the matrix
Step5: A very powerful feature of NumPy and Python are List Comprehensions. These can replace many for loops and are much more efficient to run. Here we square every element in the vector a from above
Step6: Using NumPy we can select rows and columns of data very easily (known as array slicing).
Step7: Or we can slice the first column only. Using the , symbol we can ask for specfic rows and columns. The first integer always specifies the rows, which is followed by , and the second integer specifies the columns. The colon character
Step8: We can select a specific element using the matrix's column and row index, for example we want to select the item in the second column's second row
Step9: Entire books have been written about NumPy. Let's move on to Matplotlib.
Step10: Using the plt.plot() function, you can plot many types of data and Matplotlib will try to figure out what you want to do with the data
Step11: The plt.plot() function will take an arbritrary number of x y argument pairs
Step12: Or plot three lines at once by supplying three pairs of x y values
Step13: Combined with functions, more complex curves can be plotted. Let's plot the sigmoid curve
Step14: Pandas
Step15: Let's load the well known Wisconsin Breast Cancer dataset
Step16: You can view a summary of the data using the describe() function
Step17: Let's rename the column names
Step18: See the first few rows
Step19: Notice how the Class column (the last column in the table) consists of 2s and 4s. In this case 2 stands for malignant and 4 stands for benign. You can check this quickly using
Step20: So you see only 2s and 4s are contained in this column. See a breakdown of the counts using the value_counts() function
Step21: However, it is convention in machine learning to use a 0-based index to represent classes. Let's replace the 2s with 0s and the 4s with 1s
Step22: Perhaps now we would like to drop the ID column
Step23: As you can see you can use Pandas to quickly manipulate and access tabular data. Here we access the first 10 rows of the Size_Uniformity column
Step24: Columns can also be accessed using the name of the column as an index
Step25: You can examine the data types (Pandas dataframes can contain multiple types)
Step26: The column Bare_Nucleoli appears as type object as it contains some missing data, which appear as ? in the dataset. Later in the course we will learn how to handle missing data.
Step27: Or calculate the standard deviation for a certain column
Step28: Pandas also provides useful plotting tools. To look for correlations in data, a scatter matrix is often useful.
Step29: SciKit-Learn
Step30: Load the Iris dataset (a flower data set often used for demonstration purposes)
Step31: Convention states that matrices are represented using uppercase letters, often the letter X, and label vectors are represented using lower case letters, often y
Step32: The k-Nearest Neighbour algorithm is possibly the simplest classifier. Given a new observation, take the label of the sample closest to it in the n-dimensional feature space.
Step33: Then, the data must be split into a test set and a training set (again we are using naming conventions here for the training and test data X_train and X_test and their labels y_train and y_test)
Step34: Now we will try to fit the k-nearest neighbours classifier to the training data
Step35: The classifier has now been trained on the training data (X_train). We can now check how well it predicts newly seen data (using our test set, X_test)
Step36: We can now element-wise compare our predicted results, in y_pred, with the true labels stored in y_test
|
7,921
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('Ti5zUD08w5s')
YouTubeVideo('jmsFC0mNayM')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def parab(x):
return x**2
x = np.linspace(0,1)
y = parab(x)
plt.fill_between(x,y)
plt.text(0.8,0.2,'$\mathcal{D}$',fontsize=20)
plt.show()
help(np.random.uniform)
N = 100000
x = np.random.uniform(0, 1, N)
A_Dapprox = np.sum(parab(x))/N
A_Dapprox
import pandas as pd
A_D = 1/3
N = np.logspace(1,7,7)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion', 'Error_relativo'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
x = np.random.uniform(0, 1, n.astype(int))
df.loc[n,"Valor_aproximacion"] = np.sum(parab(x))/n
df.loc[n,"Error_relativo"] = np.abs(df.loc[n,"Valor_aproximacion"]-A_D)/A_D
df
# Write the function here
def int_montecarlo1(f, a, b, N):
return (b-a)/N*np.sum(f(np.random.uniform(a,b,N)))
# Solve
def func1(x):
return np.exp(x**2)
def func2(x):
return 1/np.log(x)
def func3(x):
return np.sin(x)/x
a, b = 4, 5
N = np.logspace(1,5,5)
df = pd.DataFrame(index=N,columns=['Funcion1', 'Funcion2', 'Funcion3'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
df.loc[n,"Funcion1"] = int_montecarlo1(func1, a, b, n.astype(int))
df.loc[n,"Funcion2"] = int_montecarlo1(func2, a, b, n.astype(int))
df.loc[n,"Funcion3"] = int_montecarlo1(func3, a, b, n.astype(int))
df
x = np.linspace(-0.1,1.1)
y = parab(x)
plt.plot(x,x,'k--',label='$y=x$')
plt.plot(x,y,'k',label='$y=x^2$')
plt.fill_between(x,x,y)
plt.text(0.5,0.4,'$\mathcal{D}$',fontsize=20)
plt.legend(loc='best')
plt.show()
# Use the int_montecarlo1 function
def f(x):
return x-x**2
A_Daprox = int_montecarlo1(f, 0, 1, 100000000)
A_Daprox
YouTubeVideo('G8fOTMYDPEA')
def circ_arriba(x, r):
return np.sqrt(r**2-x**2)
def circ_abajo(x, r):
return -np.sqrt(r**2-x**2)
x = np.linspace(-1,1,100)
y1 = circ_arriba(x, 1)
y2 = circ_abajo(x, 1)
plt.figure(figsize=(5,5))
plt.plot(x,y1,'k')
plt.plot(x,y2,'k')
plt.fill_between(x,y1,y2)
plt.text(0,0,'$\mathcal{D}$',fontsize=20)
plt.text(0.8,0.8,'$\mathcal{R}$',fontsize=20)
plt.show()
N = 1000000
x = np.random.uniform(-1, 1, N)
y = np.random.uniform(-1, 1, N)
plt.figure(figsize=(5,5))
plt.scatter(x, y, s=1)
plt.show()
def reg_circ(x,y):
return x**2+y**2<=1
A_R = 4
A_Dapprox = A_R*np.sum(reg_circ(x,y))/N
A_Dapprox
A_D = np.pi
N = np.logspace(1,7,7)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion', 'Error_relativo'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
x = np.random.uniform(-1, 1, n.astype(int))
y = np.random.uniform(-1, 1, n.astype(int))
df.loc[n,"Valor_aproximacion"] = A_R*np.sum(reg_circ(x,y))/n
df.loc[n,"Error_relativo"] = np.abs(df.loc[n,"Valor_aproximacion"]-A_D)/A_D
df
# Write the function here
def int_montecarlo2(region, a1, b1, a2, b2, N):
A_R = (b1-a1)*(b2-a2)
x = np.random.uniform(a1, b1, N.astype(int))
y = np.random.uniform(a2, b2, N.astype(int))
return A_R*np.sum(region(x,y))/N
N = 100
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
def region(x,y):
return 4*(2*x-1)**4+8*(2*y-1)**8 < 1+2*(2*y-1)**3*(3*x-2)**2
X, Y = np.meshgrid(x,y)
plt.figure(figsize=(5,5))
plt.scatter(X,Y,c=~region(X,Y),cmap='bone')
plt.show()
# Solve
a1, a2, b1, b2 = 0, 0, 1, 1
N = np.logspace(1,5,5)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
df.loc[n,"Valor_aproximacion"] = int_montecarlo2(region, a1, b1, a2, b2, n)
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Type 1 Monte Carlo integration
Step2: So, what we want is to approximate the area of the region $\mathcal{D}$. We will call this area $A(\mathcal{D})$.
Step3: In this case, the integral can be computed easily. Let's compare the result with the true value
Step4: Note that the results are different each time (why?). However, they are accurate to roughly the same degree.
Step5: Activity. Use the previous function to compute the following integrals. Put the results in a table whose rows correspond to the number of terms used in the approximation (use 10, 100, 1000, 10000 and 100000 terms) and whose columns correspond to the functions.
Step6: Type 2 Monte Carlo integration
Step7: From integral calculus, we know that
Step8: So if the region can be described easily, there is no problem (we can use type 1 Monte Carlo).
Step9: But what happens if the geometry of the region cannot be described easily?
Step10: If we approximate $A(\mathcal{D})$ we approximate the value of $\pi$, since the area of the unit circle is
Step11: The probability that the point $(X,Y)$ lies in the unit circle $\mathcal{D}$ is
Step12: Again, let's compare with the true value
Step13: Let's write a function whose inputs are
Step14: Activity. Use the previous function to approximate the area of the region described by
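As a compact recap of the type-1 estimator built in this notebook, the sketch below re-implements it with a fixed seed (an addition for reproducibility, not in the original) and checks it against the known value $\int_0^1 x^2\,dx = 1/3$:

```python
import numpy as np

def int_montecarlo1(f, a, b, n, rng=None):
    """Type-1 Monte Carlo: (b - a) times the average of f at uniform samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(a, b, n)
    return (b - a) * np.mean(f(x))

approx = int_montecarlo1(lambda x: x**2, 0.0, 1.0, 200_000)
error = abs(approx - 1 / 3)  # true value of the integral is 1/3
```

With 200,000 samples the standard error is below 0.001, so the estimate lands well within 1% of the true value.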
|
7,922
|
<ASSISTANT_TASK:>
Python Code:
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# In this example, raw data will be simulated for the sample subject, so its
# information needs to be loaded. This step will download the data if it not
# already on your machine. Subjects directory is also set so it doesn't need
# to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
# To simulate raw data, we need to define when the activity occurs using events
# matrix and specify the IDs of each event.
# Noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
# Take only a few events for speed
events = events[:80]
activations = {
'auditory/left':
[('G_temp_sup-G_T_transv-lh', 30), # label, activation (nAm)
('G_temp_sup-G_T_transv-rh', 60)],
'auditory/right':
[('G_temp_sup-G_T_transv-lh', 60),
('G_temp_sup-G_T_transv-rh', 30)],
'visual/left':
[('S_calcarine-lh', 30),
('S_calcarine-rh', 60)],
'visual/right':
[('S_calcarine-lh', 60),
('S_calcarine-rh', 30)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
for activation_list in activations.values()
for activation in activation_list))
region_names = list(activations.keys())
def data_fun(times, latency, duration):
    """Function to generate source time courses for evoked responses,
    parametrized by latency and duration."""
f = 15 # oscillating frequency, beta band [Hz]
sigma = 0.375 * duration
sinusoid = np.sin(2 * np.pi * f * (times - latency))
gf = np.exp(- (times - latency - (sigma / 4.) * rng.rand(1)) ** 2 /
(2 * (sigma ** 2)))
return 1e-9 * sinusoid * gf
times = np.arange(150, dtype=np.float64) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
for i in range(2):
label_name = activations[region_name][i][0]
label_tmp = mne.read_labels_from_annot(subject, annot,
subjects_dir=subjects_dir,
regexp=label_name,
verbose=False)
label_tmp = label_tmp[0]
amplitude_tmp = activations[region_name][i][1]
if region_name.split('/')[1][0] == label_tmp.hemi[0]:
latency_tmp = 0.115
else:
latency_tmp = 0.1
wf_tmp = data_fun(times, latency_tmp, duration)
source_simulator.add_data(label_tmp,
amplitude_tmp * wf_tmp,
events_tmp)
# To obtain a SourceEstimate object, we need to use `get_stc()` method of
# SourceSimulator class.
stc_data = source_simulator.get_stc()
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd)
raw_sim.set_eeg_reference(projection=True)
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')
epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()
# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=True)
method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
evoked_aud_left, inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
evoked_vis_right, inv, lambda2, method)
stc_diff = stc_aud - stc_vis
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
hemi='split', views=['lat', 'med'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to simulate source time courses, labels of desired active regions
Step3: Create simulated source activity
Step4: Here,
Step5: Simulate raw data
Step6: Extract epochs and compute evoked responses
Step7: Reconstruct simulated source time courses using dSPM inverse operator
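The waveform generated by `data_fun` above — a beta-band sinusoid under a Gaussian envelope — can be reproduced in plain NumPy. The sampling frequency below is invented for this sketch rather than read from the sample `info`, and the small random jitter of the envelope centre used in `data_fun` is omitted:

```python
import numpy as np

sfreq = 600.0                  # sampling frequency in Hz (invented for this sketch)
times = np.arange(150) / sfreq
latency, duration = 0.1, 0.03  # latency and duration, as in the loop above
f = 15.0                       # beta-band oscillation frequency, as in data_fun
sigma = 0.375 * duration

# A sinusoid multiplied by a Gaussian envelope centred at the latency.
sinusoid = np.sin(2 * np.pi * f * (times - latency))
envelope = np.exp(-(times - latency) ** 2 / (2 * sigma ** 2))
waveform = 1e-9 * sinusoid * envelope  # scaled to nAm-range amplitudes
```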
|
7,923
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
my_list = [2, 5, 7, 8]
my_list
type(my_list)
multi_list = [[1, 2, 3], [4, 5, 6]]
#
my_array = np.array(my_list)
type(my_array)
my_array.dtype
multi_array = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
multi_array.shape
#
#
#
# Pandas DataFrames as table elements
import pandas as pd
df = pd.DataFrame({'A': [1,2,3], 'B': [2, 4, 6], 'ccc': [1.0, 33, 4]})
df
sub_df = df[['A', 'ccc']]
sub_df
df['A'] + 2*df['B']
!ls
!curl -o pronto.csv https://data.seattle.gov/api/views/tw7j-dfaw/rows.csv?accessType=DOWNLOAD
import pandas as pd
df = pd.read_csv('pronto.csv')
type(df)
len(df)
df.head()
df.columns
df.index
smaller_df = df.loc[[1,4,6,7,9,34],:]
smaller_df.index
df.shape
df.dtypes
df_small = df['stoptime']
type(df_small)
df_small.tolist()
trip_duration_hours = df['tripduration']/3600
trip_duration_hours[:2]
trip_duration_hours.head()
df['trip_duration_hours'] = df['tripduration']/3600
del df['trip_duration_hours']
df.head()
df.loc[[0,1],:]
df_long_trips = df[df['tripduration'] >10000]
sel = df['tripduration'] > 10000
df_long_trips = df[sel]
df_long_trips
df[sel].shape
# Make a copy of a slice
df_subset = df[['starttime', 'stoptime']].copy()
df_subset['trip_hours'] = df['tripduration']/3600
pd.value_counts(df["gender"])
pd.value_counts(2019 - df["birthyear"])
pd.value_counts(df["birthyear"], sort=False)
#
df.head()
df_count = df.groupby(['from_station_id']).count()
df_count.head()
df_mean = df.groupby(['from_station_id']).mean()
df_mean.head()
dfgroup = df.groupby(['from_station_id'])
dfgroup.groups
df.groupby("gender").mean()
%matplotlib inline
import matplotlib.pyplot as plt
df['tripduration'].hist()
plt.style.use("ggplot")
plt.show()
# A script for creating a dataframe with counts of the occurrence of a columns' values
df_count = df.groupby('from_station_id').count()
df_count1 = df_count[['trip_id']]
df_count2 = df_count1.rename(columns={'trip_id': 'count'})
df_count2.head()
def make_table_count(df_arg, groupby_column):
df_count = df_arg.groupby(groupby_column).count()
    column_name = df_count.columns[0]
df_count1 = df_count[[column_name]]
df_count2 = df_count1.rename(columns={column_name: 'count'})
return df_count2
dff = make_table_count(df, 'from_station_id')
dff.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lists in native Python
Step2: This list is one-dimensional, let's make it multidimensional!
Step3: How do we access the element 6 in the second row, third column of a native Python list?
Step4: Converting to numpy Arrays
Step5: How do we access the 6 element in the second row, third column for numpy array?
Step6: How do we access the element 6 in the second row, third column of a numpy array?
Step7: How do we retrieve the second column of the array?
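The element, slice, and column access patterns asked about in the steps above can be answered in one short sketch:

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]], np.int32)

element = arr[1, 2]     # second row, third column: the value 6
block = arr[:2, :2]     # slice: array([[1, 2], [4, 5]])
second_col = arr[:, 1]  # second column: array([2, 5])
```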
Step8: 4. Introduction to Pandas DataFrames
Step9: What operations do we perform on tables?
Step10: Operations on a Pandas DataFrame
Step11: uncomment this to download the data
Step12: Loading Data into a DataFrame
Step13: Now we can use the read_csv command to read the comma-separated-value data
Step14: The shape attribute shows us the number of elements
Step15: The columns attribute gives us the column names
Step16: Sophisticated Data Manipulation
Step17: Mathematical operations on columns happen element-wise
Step18: Columns can be created (or overwritten) with the assignment operator.
Step19: Or to break down rides by age
Step20: By default, the values rather than the index are sorted. Use sort=False to turn this behavior off
Step21: We can explore other things as well
Step22: Group-by Operation
Step23: The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)
Step24: It's also possible to index the grouped object like it is a dataframe
Step25: Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data
Step26: Adjusting the Plot Style
Step27: Other plot types
Step28: For example, we can create a histogram of trip durations
|
7,924
|
<ASSISTANT_TASK:>
Python Code:
empty_dictionary = {}
print empty_dictionary
filled_dictionary = {'first_name': 'abhinav', 'last_name': 'upadhyay'}
print filled_dictionary
food_menu = {}
food_menu['pizza'] = 300
food_menu['sandwich'] = 30
food_menu['tea'] = 10
print food_menu
pizza_price = food_menu['pizza']
print pizza_price
tea_price = food_menu['tea']
print tea_price
food_menu.pop('tea')
print food_menu
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating dictionary with values
Step2: Adding values to the dictionary
Step3: Notice the ordering of the items in the output above
Step4: Deleting item from the dictionary
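The same add/read/delete cycle in one compact sketch; note that `pop` also returns the removed value, which `del` would not:

```python
# A fresh menu like the one above.
food_menu = {"pizza": 300, "sandwich": 30}
food_menu["tea"] = 10             # add an item
tea_price = food_menu.pop("tea")  # remove it and get its price back
```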
|
7,925
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%matplotlib notebook
from threeML import *
import os
trigger="GRB110731A"
dec=-28.546
ra=280.52
xrt_dir='xrt'
xrt = SwiftXRTLike("XRT",pha_file=os.path.join(xrt_dir,"xrt_src.pha"),
bak_file=os.path.join(xrt_dir,"xrt_bkg.pha"),
rsp_file=os.path.join(xrt_dir,"xrt.rmf"),
arf_file=os.path.join(xrt_dir,"xrt.arf"))
xrt.view_count_spectrum()
data_dir_gbm=os.path.join('gbm','bn110731A')
trigger_number = 'bn110731465'
gbm_data = download_GBM_trigger_data(trigger_number,detectors=['n3'],destination_directory=data_dir_gbm,compress_tte=True)
# Select the time interval
src_selection = "100.169342-150.169342"
nai3 = FermiGBMTTELike('NAI3',
os.path.join(data_dir_gbm,"glg_tte_n3_bn110731465_v00.fit.gz"),
"20-90,160-250", # background selection
src_selection, # source interval
rsp_file=os.path.join(data_dir_gbm, "glg_cspec_n3_bn110731465_v00.rsp2"))
nai3.view_lightcurve(20,250)
nai3.set_active_measurements("8-900")
nai3.view_count_spectrum()
xspec_abund('angr')
spectral_model = XS_phabs()* XS_zphabs() * XS_powerlaw()
spectral_model.nh_1=0.101
spectral_model.nh_1.fix = True
spectral_model.nh_2=0.1114424
spectral_model.nh_2.fix = True
spectral_model.redshift_2 = 0.618
spectral_model.redshift_2.fix =True
spectral_model.display()
ptsrc = PointSource(trigger,ra,dec,spectral_shape=spectral_model)
model = Model(ptsrc)
data = DataList(xrt,nai3)
jl = JointLikelihood(model, data, verbose=False)
model.display()
res = jl.fit()
res = jl.get_errors()
res = jl.get_contours(spectral_model.phoindex_3,1.5,2.5,50)
res = jl.get_contours(spectral_model.norm_3,.1,.3,25,spectral_model.phoindex_3,1.5,2.5,50)
spectral_model.phoindex_3.prior = Uniform_prior(lower_bound=-5.0, upper_bound=5.0)
spectral_model.norm_3.prior = Log_uniform_prior(lower_bound=1E-5, upper_bound=1)
bayes = BayesianAnalysis(model, data)
samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000)
fig = bayes.corner_plot(plot_contours=True, plot_density=False)
bayes.get_highest_density_interval()
cleanup_downloaded_GBM_data(gbm_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load XRT data
Step2: Load GBM data
Step3: View the light curve
Step4: Make energy selections and check them out
Step5: Setup the model
Step6: Setup the joint likelihood
Step7: Fitting
Step8: And then go Bayesian!
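threeML does the heavy lifting above. As a toy illustration of the joint-likelihood idea only — one model constrained by two datasets at once — here is a grid search over a power-law index on two invented noisy datasets; this is not the actual XRT/GBM fit, and every number below is made up:

```python
import numpy as np

# Two invented "instruments" observing the same power-law spectrum
# with independent 1% multiplicative noise.
rng = np.random.default_rng(14)
E1 = np.linspace(1.0, 10.0, 50)
E2 = np.linspace(0.5, 5.0, 40)
true_index = 2.0
d1 = E1 ** -true_index * (1 + 0.01 * rng.standard_normal(E1.size))
d2 = E2 ** -true_index * (1 + 0.01 * rng.standard_normal(E2.size))

def joint_chi2(index):
    """Sum of squared residuals over BOTH datasets for a trial index."""
    return (np.sum((d1 - E1 ** -index) ** 2)
            + np.sum((d2 - E2 ** -index) ** 2))

grid = np.linspace(1.5, 2.5, 201)
best_index = grid[np.argmin([joint_chi2(g) for g in grid])]
```

Because both datasets enter the single objective, the recovered index is pulled toward a value consistent with both at once, which is the point of a joint fit.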
|
7,926
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import functools
import os
from matplotlib import gridspec
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))
# @title Define image loading and visualization functions { display-mode: "form" }
def crop_center(image):
    """Returns a cropped square image."""
shape = image.shape
new_shape = min(shape[1], shape[2])
offset_y = max(shape[1] - shape[2], 0) // 2
offset_x = max(shape[2] - shape[1], 0) // 2
image = tf.image.crop_to_bounding_box(
image, offset_y, offset_x, new_shape, new_shape)
return image
@functools.lru_cache(maxsize=None)
def load_image(image_url, image_size=(256, 256), preserve_aspect_ratio=True):
    """Loads and preprocesses images."""
# Cache image file locally.
image_path = tf.keras.utils.get_file(os.path.basename(image_url)[-128:], image_url)
# Load and convert to float32 numpy array, add batch dimension, and normalize to range [0, 1].
img = tf.io.decode_image(
tf.io.read_file(image_path),
channels=3, dtype=tf.float32)[tf.newaxis, ...]
img = crop_center(img)
img = tf.image.resize(img, image_size, preserve_aspect_ratio=True)
return img
def show_n(images, titles=('',)):
n = len(images)
image_sizes = [image.shape[1] for image in images]
w = (image_sizes[0] * 6) // 320
plt.figure(figsize=(w * n, w))
gs = gridspec.GridSpec(1, n, width_ratios=image_sizes)
for i in range(n):
plt.subplot(gs[i])
plt.imshow(images[i][0], aspect='equal')
plt.axis('off')
plt.title(titles[i] if len(titles) > i else '')
plt.show()
# @title Load example images { display-mode: "form" }
content_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/Golden_Gate_Bridge_from_Battery_Spencer.jpg/640px-Golden_Gate_Bridge_from_Battery_Spencer.jpg' # @param {type:"string"}
style_image_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg' # @param {type:"string"}
output_image_size = 384 # @param {type:"integer"}
# The content image size can be arbitrary.
content_img_size = (output_image_size, output_image_size)
# The style prediction model was trained with image size 256 and it's the
# recommended image size for the style image (though, other sizes work as
# well but will lead to different results).
style_img_size = (256, 256) # Recommended to keep it at 256.
content_image = load_image(content_image_url, content_img_size)
style_image = load_image(style_image_url, style_img_size)
style_image = tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME')
show_n([content_image, style_image], ['Content image', 'Style image'])
# Load TF Hub module.
hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
hub_module = hub.load(hub_handle)
# Stylize content image with given style image.
# This is pretty fast within a few milliseconds on a GPU.
outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
stylized_image = outputs[0]
# Visualize input images and the generated stylized image.
show_n([content_image, style_image, stylized_image], titles=['Original content image', 'Style image', 'Stylized image'])
# @title To Run: Load more images { display-mode: "form" }
content_urls = dict(
sea_turtle='https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg',
tuebingen='https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg',
grace_hopper='https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
)
style_urls = dict(
kanagawa_great_wave='https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg',
kandinsky_composition_7='https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg',
hubble_pillars_of_creation='https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg',
van_gogh_starry_night='https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
turner_nantes='https://upload.wikimedia.org/wikipedia/commons/b/b7/JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg',
munch_scream='https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg',
picasso_demoiselles_avignon='https://upload.wikimedia.org/wikipedia/en/4/4c/Les_Demoiselles_d%27Avignon.jpg',
picasso_violin='https://upload.wikimedia.org/wikipedia/en/3/3c/Pablo_Picasso%2C_1911-12%2C_Violon_%28Violin%29%2C_oil_on_canvas%2C_Kr%C3%B6ller-M%C3%BCller_Museum%2C_Otterlo%2C_Netherlands.jpg',
picasso_bottle_of_rum='https://upload.wikimedia.org/wikipedia/en/7/7f/Pablo_Picasso%2C_1911%2C_Still_Life_with_a_Bottle_of_Rum%2C_oil_on_canvas%2C_61.3_x_50.5_cm%2C_Metropolitan_Museum_of_Art%2C_New_York.jpg',
fire='https://upload.wikimedia.org/wikipedia/commons/3/36/Large_bonfire.jpg',
derkovits_woman_head='https://upload.wikimedia.org/wikipedia/commons/0/0d/Derkovits_Gyula_Woman_head_1922.jpg',
amadeo_style_life='https://upload.wikimedia.org/wikipedia/commons/8/8e/Untitled_%28Still_life%29_%281913%29_-_Amadeo_Souza-Cardoso_%281887-1918%29_%2817385824283%29.jpg',
derkovtis_talig='https://upload.wikimedia.org/wikipedia/commons/3/37/Derkovits_Gyula_Talig%C3%A1s_1920.jpg',
amadeo_cardoso='https://upload.wikimedia.org/wikipedia/commons/7/7d/Amadeo_de_Souza-Cardoso%2C_1915_-_Landscape_with_black_figure.jpg'
)
content_image_size = 384
style_image_size = 256
content_images = {k: load_image(v, (content_image_size, content_image_size)) for k, v in content_urls.items()}
style_images = {k: load_image(v, (style_image_size, style_image_size)) for k, v in style_urls.items()}
style_images = {k: tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME') for k, style_image in style_images.items()}
#@title Specify the main content image and the style you want to use. { display-mode: "form" }
content_name = 'sea_turtle' # @param ['sea_turtle', 'tuebingen', 'grace_hopper']
style_name = 'munch_scream' # @param ['kanagawa_great_wave', 'kandinsky_composition_7', 'hubble_pillars_of_creation', 'van_gogh_starry_night', 'turner_nantes', 'munch_scream', 'picasso_demoiselles_avignon', 'picasso_violin', 'picasso_bottle_of_rum', 'fire', 'derkovits_woman_head', 'amadeo_style_life', 'derkovtis_talig', 'amadeo_cardoso']
stylized_image = hub_module(tf.constant(content_images[content_name]),
tf.constant(style_images[style_name]))[0]
show_n([content_images[content_name], style_images[style_name], stylized_image],
titles=['Original content image', 'Style image', 'Stylized image'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Fast Style Transfer for Arbitrary Styles
Step4: Let's also get some images to play with.
Step5: Import TF Hub module
Step6: The signature of this hub module for image stylization is
Step7: Let's try it on more images
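The `crop_center` helper above can be exercised without TensorFlow; below is a NumPy version of the same centred-square crop, applied to invented zero images (one wide, one tall):

```python
import numpy as np

def crop_center_np(image):
    """Centred square crop of a batched H x W x C image, mirroring crop_center."""
    _, h, w, _ = image.shape
    side = min(h, w)
    oy = max(h - w, 0) // 2
    ox = max(w - h, 0) // 2
    return image[:, oy:oy + side, ox:ox + side, :]

wide = np.zeros((1, 200, 320, 3), dtype=np.float32)
tall = np.zeros((1, 320, 200, 3), dtype=np.float32)
square_w = crop_center_np(wide)
square_t = crop_center_np(tall)
```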
|
7,927
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import numpy as np
# Install and pin to versions that seem to work together
!pip3 install pandas-gbq==0.10.0 google-cloud-bigquery==1.11.2 google-api-core==1.8.2
!pip3 install matplotlib
# Inline all matplotlib plots
%matplotlib inline
from google.cloud import bigquery
# NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
import pandas as pd
from pandas.io import gbq
# Note: access to this project is restricted
PROJECT="kubeflow-usage"
# We need to filter out empty timestamps because they can't be converted to dates
# Why would pings be missing a timestamp
# Get data for last year
max_age_hours = 24 * 365
query = """
select TIMESTAMP_SECONDS(cast(timestamp as int64)) as timestamp, clusteriD FROM usage.collector
where timestamp is not null and length(timestamp) > 0 and
timestamp_diff(CURRENT_TIMESTAMP(), TIMESTAMP_SECONDS(cast(timestamp as int64)), HOUR) < {max_age_hours}
""".format(max_age_hours=max_age_hours)
data=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
# Good reference for working with aggregations
# https://www.shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/
aggregations = {
"timestamp": {
"min_timestamp": "min",
"max_timestamp": "max",
"count": "count"
}
}
stats = data.groupby("clusteriD").agg(aggregations)
# Drop the multi-level index to make it easier to work with
stats.columns = stats.columns.droplevel(level=0)
stats["age"] = stats["max_timestamp"] - stats["min_timestamp"]
long_stats = stats[stats["age"] > datetime.timedelta(hours=24)]
new_per_day = pd.Series(data=1, index=stats["min_timestamp"])
new_per_day = new_per_day.sort_index()
new_deployments = new_per_day.rolling('28d').sum()
from matplotlib import pyplot as plt
plt.plot(new_deployments.index, new_deployments.values)
plt.title("New Kubeflow Deployments Last 28 Days")
plt.ylabel('Number of created deployments', fontsize='large');
new_per_day_long = pd.Series(data=1, index=long_stats["min_timestamp"])
new_per_day_long = new_per_day_long.sort_index()
new_deployments_long = new_per_day_long.rolling('28d').sum()
from matplotlib import pyplot as plt
plt.plot(new_deployments_long.index, new_deployments_long.values)
plt.title("New Kubeflow Deployments Last 28 Days with minimum duration 24 hours")
plt.ylabel('Number of created deployments', fontsize='large');
# We need to drop unknown_cluster and empty string because those represent
# multiple clusters and the min and max timestamp will not correspond to the duration
# of those clusters
filtered_stats = stats[np.logical_and(stats.index != "" , stats.index != "unknown_cluster")]
create_times = pd.Series(data=1, index=filtered_stats["min_timestamp"])
delete_times = pd.Series(data=-1, index=filtered_stats["max_timestamp"])
# Drop delete_times and create_times which are less than 24 hours old.
# We assume those clusters are still active. So we don't want to interpret the
# max ping timestamp as a delete timestamp
now = datetime.datetime.now(delete_times.index.tzinfo)
min_hours = 48
create_times = create_times[now - create_times.index > datetime.timedelta(hours=min_hours)]
delete_times = delete_times[now - delete_times.index > datetime.timedelta(hours=min_hours)]
deltas = pd.concat([create_times, delete_times])
deltas = deltas.sort_index()
active_deployments= deltas.cumsum()
plt.plot(active_deployments.index, active_deployments.values)
plt.title("Active Kubeflow Deployments with minimum duration 24 hours")
plt.ylabel('Number of deployments', fontsize='large');
last_28 = stats[now - stats["min_timestamp"] < datetime.timedelta(days=28)]
age_counts = pd.Series(data=1, index=last_28["age"])
age_counts=age_counts.sort_index()
age_stats = age_counts.cumsum()
plt.plot(age_stats.index.total_seconds() / (24.0 * 3600), age_stats.values)
plt.title("Kubeflow deployment duration for most recent 28 days")
plt.ylabel('Kubeflow deployment count', fontsize='large');
# Compute a plot of age stats for clusters that have a minimum age of 1 day
# Excluding short lived clusters filters out short lived automated clusters from testing
# and other data
last_28_long = last_28[last_28["age"] > datetime.timedelta(hours=24)]
age_counts_long = pd.Series(data=1, index=last_28_long["age"])
age_counts_long=age_counts_long.sort_index()
age_stats_long = age_counts_long.cumsum()
age_cpdf_long = age_stats_long / age_stats_long.max()
plt.plot(age_stats_long.index.total_seconds() / (24.0 * 3600), age_stats_long.values)
plt.title("Age of Kubeflow deployments for clusters created in last 28 days")
plt.xlabel("Deployment Age(days)")
plt.ylabel('Cumulative deployment count', fontsize='large');
plt.plot(age_cpdf_long.index.total_seconds() / (24.0 * 3600), age_cpdf_long.values)
now_day = now.strftime("%Y-%m-%d")
plt.title("Age of Kubeflow deployments for clusters created in last 28 days\nTotal deployments={0}\n{1}".format(age_stats_long.max(), now_day))
plt.xlabel("Deployment Age(days)")
plt.ylabel('Cumulative distribution', fontsize='large');
last_28_long["age"].quantile([.25, .5, .75, .9])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kubeflow Stats
Step2: Compute cluster stats
Step3: Number of new deployments
Step4: Number of active deployments
Step5: Compute histogram of Kubeflow deployment age for
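The active-deployment count above comes from the +1/−1 cumulative-sum trick over create and delete timestamps; here is the same idea in plain Python on invented cluster lifetimes:

```python
# Invented (create_time, delete_time) pairs for a handful of clusters.
clusters = [(0, 10), (2, 5), (3, 12), (7, 8)]

events = []
for start, stop in clusters:
    events.append((start, +1))  # a creation adds one active deployment
    events.append((stop, -1))   # a deletion removes one

events.sort()                   # chronological order; a running sum gives the count
active = []
count = 0
for _, delta in events:
    count += delta
    active.append(count)

peak_active = max(active)
```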
|
7,928
|
<ASSISTANT_TASK:>
Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda'
'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils
except ImportError:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
"the zip file or cloning the github repo. If you are still")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
from libs import vgg16, inception, i2v
# Stick w/ Inception for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
print(net.keys())
# First, let's get an image:
og = plt.imread('clinton.png')[..., :3]
plt.imshow(og)
print(og.min(), og.max())
# Now call the preprocess function. This will preprocess our
# image ready for being input to the network, except for changes
# to the dimensions. I.e., we will still need to convert this
# to a 4-dimensional Tensor once we input it to the network.
# We'll see how that works later.
img = net['preprocess'](og)
print(img.min(), img.max())
deprocessed = ...
plt.imshow(deprocessed)
plt.show()
nb_utils.show_graph(net['graph_def'])
net['labels']
label_i = 851
print(net['labels'][label_i])
# Load the VGG network. Scroll back up to where we loaded the inception
# network if you are unsure. It is inside the "vgg16" module...
net = ...
assert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca'))
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
# Now we can get all the operations that belong to the graph `g`:
names = [op.name for op in g.get_operations()]
print(names)
# First find all the pooling layers in the network. You can
# use list comprehension to iterate over all the "names" we just
# created, finding whichever ones have the name "pool" in them.
# Then be sure to append a ":0" to the names
features = ...
# Let's print them
print(features)
# This is what we want to have at the end. You could just copy this list
# if you are stuck!
assert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0'])
# Use the function 'get_tensor_by_name' and the 'names' array to help you
# get the first tensor in the network. Remember you have to add ":0" to the
# name to get the output of an operation which is the tensor.
x = ...
assert(x.name == 'net/images:0')
def plot_gradient(img, x, feature, g, device='/cpu:0'):
    """Visualize the network's gradient activation when backpropagated
    to the original input image. This is effectively telling us which
    pixels contribute to the predicted layer, class, or given neuron
    within the layer.
    """
# We'll be explicit about the graph and the device
# by using a context manager:
with tf.Session(graph=g) as sess, g.device(device):
saliency = tf.gradients(tf.reduce_mean(feature), x)
this_res = sess.run(saliency[0], feed_dict={x: img})
grad = this_res[0] / np.max(np.abs(this_res))
return grad
og = plt.imread('clinton.png')[..., :3]
img = net['preprocess'](og)[np.newaxis]
fig, axs = plt.subplots(1, len(features), figsize=(20, 10))
for i in range(len(features)):
axs[i].set_title(features[i])
grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g)
axs[i].imshow(utils.normalize(grad))
def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10):
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10))
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Or we could use the `utils.normalize function:
# this_res = utils.normalize(this_res)
# Experiment with all of the above options. They will drastically
            # affect the resulting dream, and really depend on the network
# you use, and the way the network handles normalization of the
# input image, and the step size you choose! Lots to explore!
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
axs[it_i // plot_step].imshow(m)
# We'll run it for 3 iterations
n_iterations = 3
# Think of this as our learning rate. This is how much of
# the gradient we'll add back to the input image
step = 1.0
# Every 1 iterations, we'll plot the current deep dream
plot_step = 1
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = g.get_tensor_by_name(features[feature_i])
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = tf.gradients(tf.reduce_mean(layer), x)
# Dream w/ our image
dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step)
noise = net['preprocess'](
np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis]
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = ...
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = ...
# Dream w/ the noise image. Complete this!
dream(...)
# Load your own image here
og = ...
plt.imshow(og)
# Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension:
img = ...
assert(img.ndim == 4)
# Let's get the softmax layer
print(names[-2])
layer = g.get_tensor_by_name(names[-2] + ":0")
# And find its shape
with tf.Session(graph=g) as sess, g.device(device):
layer_shape = tf.shape(layer).eval(feed_dict={x:img})
# We can find out how many neurons it has by feeding it an image and
# calculating the shape. The number of output channels is the last dimension.
n_els = layer_shape[-1]
# Let's pick a label. First let's print out every label and then find one we like:
print(net['labels'])
# Pick a neuron. Or pick a random one. This should be 0-n_els
neuron_i = ...
print(net['labels'][neuron_i])
assert(neuron_i >= 0 and neuron_i < n_els)
# And we'll create an activation of this layer which is very close to 0
layer_vec = np.ones(layer_shape) / 100.0
# Except for the randomly chosen neuron which will be very close to 1
layer_vec[..., neuron_i] = 0.99
# Explore different parameters for this section.
n_iterations = 51
plot_step = 5
# If you use a different network, you will definitely need to experiment
# with the step size, as each network normalizes the input image differently.
step = 0.2
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Save the gif
gif.build_gif(imgs, saveto='softmax.gif')
ipyd.Image(url='softmax.gif?i={}'.format(
np.random.rand()), height=300, width=300)
n_iterations = 101
plot_step = 10
step = 0.1
crop = 1
imgs = []
n_imgs, height, width, *ch = img.shape
with tf.Session(graph=g) as sess, g.device(device):
# Explore changing the gradient here from max to mean
# or even try using different concepts we learned about
# when creating style net, such as using a total variational
# loss on `x`.
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer
# we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# This is just one way we could normalize the
# gradient. It helps to look at the range of your image's
# values, e.g. if it is 0 - 1, or -115 to +115,
# and then consider the best way to normalize the gradient.
# For some networks, it might not even be necessary
# to perform this normalization, especially if you
# leave the dream to run for enough iterations.
# this_res = this_res / (np.std(this_res) + 1e-10)
this_res = this_res / (np.max(np.abs(this_res)) + 1e-10)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Optionally, we could apply any number of regularization
        # techniques... Try exploring different ways of regularizing the
        # gradient ascent process. If you are adventurous, you can
# also explore changing the gradient above using a
# total variational loss, as we used in the style net
# implementation during the lecture. I leave that to you
# as an exercise!
# Crop a 1 pixel border from height and width
img_copy = img_copy[:, crop:-crop, crop:-crop, :]
# Resize (Note: in the lecture, we used scipy's resize which
# could not resize images outside of 0-1 range, and so we had
# to store the image ranges. This is a much simpler resize
# method that allows us to `preserve_range`.)
img_copy = resize(img_copy[0], (height, width), order=3,
clip=False, preserve_range=True
)[np.newaxis].astype(np.float32)
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Create a GIF
gif.build_gif(imgs, saveto='fractal.gif')
ipyd.Image(url='fractal.gif?i=2', height=300, width=300)
# Replace these with your own images!
guide_og = plt.imread(...)[..., :3]
dream_og = plt.imread(...)[..., :3]
assert(guide_og.ndim == 3 and guide_og.shape[-1] == 3)
assert(dream_og.ndim == 3 and dream_og.shape[-1] == 3)
guide_img = net['preprocess'](guide_og)[np.newaxis]
dream_img = net['preprocess'](dream_og)[np.newaxis]
fig, axs = plt.subplots(1, 2, figsize=(7, 4))
axs[0].imshow(guide_og)
axs[1].imshow(dream_og)
x = g.get_tensor_by_name(names[0] + ":0")
# Experiment with the weighting
feature_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
feature_loss = tf.Variable(0.0)
# Explore different layers/subsets of layers. This is just an example.
for feature_i in features[3:5]:
# Get the activation of the feature
layer = g.get_tensor_by_name(feature_i)
# Do the same for our guide image
guide_layer = sess.run(layer, feed_dict={x: guide_img})
# Now we need to measure how similar they are!
# We'll use the dot product, which requires us to first reshape both
# features to a 2D vector. But you should experiment with other ways
# of measuring similarity such as l1 or l2 loss.
# Reshape each layer to 2D vector
layer = tf.reshape(layer, [-1, 1])
guide_layer = guide_layer.reshape(-1, 1)
# Now calculate their dot product
correlation = tf.matmul(guide_layer.T, layer)
# And weight the loss by a factor so we can control its influence
feature_loss += feature_loss_weight * correlation
n_img, height, width, ch = dream_img.shape
# We'll weight the overall contribution of the total variational loss
# Experiment with this weighting
tv_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
# Penalize variations in neighboring pixels, enforcing smoothness
dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
# We will calculate their difference raised to a power to push smaller
# differences closer to 0 and larger differences higher.
    # Experiment w/ the power you raise this to, to see how it affects the result
tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))
# Experiment with the step size!
step = 0.1
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
# Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing!
gradient = tf.gradients(-tv_loss + feature_loss, x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = dream_img.copy()
with tf.Session(graph=g) as sess, g.device(device):
sess.run(tf.global_variables_initializer())
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res) + 1e-8))
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
gif.build_gif(imgs, saveto='guided.gif')
ipyd.Image(url='guided.gif?i=0', height=300, width=300)
sess.close()
tf.reset_default_graph()
# Stick w/ VGG for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
names = [op.name for op in g.get_operations()]
content_og = plt.imread('arles.png')[..., :3]
style_og = plt.imread('clinton.png')[..., :3]
fig, axs = plt.subplots(1, 2)
axs[0].imshow(content_og)
axs[0].set_title('Content Image')
axs[0].grid('off')
axs[1].imshow(style_og)
axs[1].set_title('Style Image')
axs[1].grid('off')
# We'll save these with a specific name to include in your submission
plt.imsave(arr=content_og, fname='content.png')
plt.imsave(arr=style_og, fname='style.png')
content_img = net['preprocess'](content_og)[np.newaxis]
style_img = net['preprocess'](style_og)[np.newaxis]
# Grab the tensor defining the input to the network
x = ...
# And grab the tensor defining the softmax layer of the network
softmax = ...
# Remember from the lecture that we have to set the dropout
# "keep probability" to 1.0.
keep_probability = np.ones([1, 4096])
for img in [content_img, style_img]:
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
res = softmax.eval(feed_dict={x: img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})[0]
print([(res[idx], net['labels'][idx])
for idx in res.argsort()[-5:][::-1]])
print(names)
# Experiment w/ different layers here. You'll need to change this if you
# use another network!
content_layer = 'net/conv3_2/conv3_2:0'
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_features = g.get_tensor_by_name(content_layer).eval(
session=sess,
feed_dict={x: content_img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
# Experiment with different layers and layer subsets. You'll need to change these
# if you use a different network!
style_layers = ['net/conv1_1/conv1_1:0',
'net/conv2_1/conv2_1:0',
'net/conv3_1/conv3_1:0',
'net/conv4_1/conv4_1:0',
'net/conv5_1/conv5_1:0']
style_activations = []
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
for style_i in style_layers:
style_activation_i = g.get_tensor_by_name(style_i).eval(
feed_dict={x: style_img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
style_activations.append(style_activation_i)
style_features = []
for style_activation_i in style_activations:
s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]])
gram_matrix = np.matmul(s_i.T, s_i) / s_i.size
style_features.append(gram_matrix.astype(np.float32))
tf.reset_default_graph()
g = tf.Graph()
# Get the network again
net = vgg16.get_vgg_model()
# Load up a session which we'll use to import the graph into.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# We can set the `net_input` to our content image
# or perhaps another image
# or an image of noise
# net_input = tf.Variable(content_img / 255.0)
net_input = tf.get_variable(
name='input',
shape=content_img.shape,
dtype=tf.float32,
initializer=tf.random_normal_initializer(
mean=np.mean(content_img), stddev=np.std(content_img)))
# Now we load the network again, but this time replacing our placeholder
# with the trainable tf.Variable
tf.import_graph_def(
net['graph_def'],
name='net',
input_map={'images:0': net_input})
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) -
content_features) /
content_features.size)
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
style_loss = np.float32(0.0)
for style_layer_i, style_gram_i in zip(style_layers, style_features):
layer_i = g.get_tensor_by_name(style_layer_i)
layer_shape = layer_i.get_shape().as_list()
layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3]
layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]])
gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size
style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size)))
def total_variation_loss(x):
    h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[2]
dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :])
dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :])
return tf.reduce_sum(tf.pow(dx + dy, 1.25))
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
tv_loss = total_variation_loss(net_input)
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Experiment w/ the weighting of these! They produce WILDLY different
# results.
loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss
optimizer = tf.train.AdamOptimizer(0.05).minimize(loss)
imgs = []
n_iterations = 100
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
sess.run(tf.global_variables_initializer())
# map input to noise
og_img = net_input.eval()
for it_i in range(n_iterations):
_, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
print("%d: %f, (%f - %f)" %
(it_i, this_loss, np.min(synth), np.max(synth)))
if it_i % 5 == 0:
m = vgg16.deprocess(synth[0])
imgs.append(m)
plt.imshow(m)
plt.show()
gif.build_gif(imgs, saveto='stylenet.gif')
ipyd.Image(url='stylenet.gif?i=0', height=300, width=300)
utils.build_submission('session-4.zip',
('softmax.gif',
'fractal.gif',
'guided.gif',
'content.png',
'style.png',
'stylenet.gif',
'session-4.ipynb'))
<END_TASK>
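The style-loss section above flattens each activation map to a 2-D matrix and multiplies it by its own transpose to form a Gram matrix (normalized by the element count). As a rough, dependency-free sketch of that one step — using a hypothetical 4-position, 2-channel toy activation rather than real network features — the computation reduces to:

```python
# Toy "activation": 4 spatial positions x 2 channels (hypothetical values).
acts = [[1.0, 2.0],
        [0.0, 1.0],
        [2.0, 0.0],
        [1.0, 1.0]]

n_positions = len(acts)
n_channels = len(acts[0])

# gram[i][j] = sum over positions of channel_i * channel_j, divided by the
# total element count, mirroring np.matmul(s_i.T, s_i) / s_i.size above.
size = n_positions * n_channels
gram = [[sum(acts[p][i] * acts[p][j] for p in range(n_positions)) / size
         for j in range(n_channels)]
        for i in range(n_channels)]

print(gram)  # symmetric 2x2 matrix of channel co-activations
```

The result is symmetric by construction, which is why matching Gram matrices captures *which features co-occur* while discarding *where* they occur — the property that makes it useful as a style measure.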
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Session 4
Step2: <a name="part-1---pretrained-networks"></a>
Step3: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
Step4: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature-only network, i.e. an unsupervised network, and does not have labels.
Step5: <a name="preprocessdeprocessing"></a>
Step6: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.
Step7: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.
Step8: <a name="tensorboard"></a>
Step9: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
Step10: <a name="using-context-managers"></a>
Step11: <a name="part-2---visualizing-gradients"></a>
Step12: Let's also grab the input layer
Step14: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.
Step15: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function.
Step16: <a name="part-3---basic-deep-dream"></a>
Step17: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take a while (about 10 minutes using the CPU on my Macbook Pro).
Step18: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to
Step19: We'll do the same thing as before, now w/ our noise image
Step20: <a name="part-4---deep-dream-extensions"></a>
Step21: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step22: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
Step23: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
Step24: <a name="fractal"></a>
Step25: <a name="guided-hallucinations"></a>
Step26: Preprocess both images
Step27: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!
Step28: We'll now use another measure that we saw when developing Style Net during the lecture. This measures the pixel-to-pixel difference of neighboring pixels. By optimizing a gradient that makes these mean differences small, we are saying that we want the image to be smooth. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image.
Step29: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens?
Step30: <a name="further-explorations"></a>
Step31: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
Step32: Let's then grab the names of every operation in our network
Step33: Now we need an image for our content image and another one for our style image.
Step34: Let's see what the network classifies these images as just for fun
Step35: <a name="content-features"></a>
Step36: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff!
Step37: <a name="style-features"></a>
Step38: Now we find the gram matrix which we'll use to optimize our features.
Step39: <a name="remapping-the-input"></a>
Step40: <a name="content-loss"></a>
Step41: <a name="style-loss"></a>
Step42: <a name="total-variation-loss"></a>
Step43: <a name="training"></a>
Step44: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here.
Step45: <a name="assignment-submission"></a>
|
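The total variation loss described in the steps above penalizes squared differences between each pixel and its right/lower neighbours. A minimal sketch of just that core term — plain Python on a hypothetical 3x3 image, omitting the extra `pow(..., 1.25)` shaping the notebook applies afterwards — looks like:

```python
# Tiny 3x3 grayscale "image" (hypothetical values in 0-1); each column
# is constant, so only horizontal differences contribute.
img = [[0.0, 0.5, 1.0],
       [0.0, 0.5, 1.0],
       [0.0, 0.5, 1.0]]

h, w = len(img), len(img[0])

# Sum of squared neighbour differences, mirroring the dx/dy slicing
# (x[:, :h-1, :w-1] vs shifted copies) used in total_variation_loss.
tv = 0.0
for r in range(h - 1):
    for c in range(w - 1):
        dx = (img[r][c] - img[r][c + 1]) ** 2   # right neighbour
        dy = (img[r][c] - img[r + 1][c]) ** 2   # lower neighbour
        tv += dx + dy

print(tv)  # all dy terms are zero here; four dx terms of 0.25 remain
```

Minimizing this quantity pushes neighbouring pixels toward equal values, which is why adding its negative gradient acts like a smoothing regularizer on the dream image.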
7,929
|
<ASSISTANT_TASK:>
Python Code:
def sat(f):
return f.cumsum(axis=1).cumsum(axis=0)
def satarea(sat,r0_c0,r1_c1):
a,b,c,d = 0,0,0,0
r0,c0 = r0_c0
r1,c1 = r1_c1
if ((r0 - 1 >= 0) and (c0 - 1 >= 0)):
a = sat[r0-1,c0-1]
if (r0 - 1 >= 0):
b = sat[r0-1,c1]
if (c0 - 1 >= 0):
c = sat[r1,c0-1]
d = sat[r1,c1]
return a + d - c - b
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python sat.ipynb
%matplotlib inline
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import matplotlib.image as mpimg
if testing:
f = np.array([[0,1,1,0,0,0,0,0,0],
[1,0,0,0,0,0,0,1,0],
[1,0,0,1,0,0,0,1,0],
[0,0,0,0,0,1,1,0,0]], dtype=np.uint8)
s = ia.sat(f)
print('f (input):\n',f)
print('s (output):\n',s)
a = ia.satarea(s,(0,0),(3,8))
print('area:',a)
if testing:
f = mpimg.imread('../data/lenina.pgm')[::2,::2]
nb = ia.nbshow(2)
nb.nbshow(f, 'Original Image')
nb.nbshow(ia.normalize(ia.sat(f)), 'Integral Image')
nb.nbshow()
if testing:
f = mpimg.imread('../data/lenina.pgm')[::2,::2]
H,W = f.shape
s = ia.sat(f)
a0 = ia.satarea(s,(0,0),(H-1,W-1))
atopleft = ia.satarea(s,( 0,0 ),(H//2-1,W//2-1))
abotleft = ia.satarea(s,(H//2,0 ),(H-1, W//2-1))
atopright = ia.satarea(s,( 0,W//2),(H//2-1,W-1))
abotright = ia.satarea(s,(H//2,W//2),(H-1, W-1))
print('Area Total: ', a0)
print('Area Top-left: ', atopleft)
print('Area Bot-left: ', abotleft)
print('Area Top-right: ', atopright)
print('Area Bot-right: ', abotright)
print('Area Total:', atopleft+abotleft+atopright+abotright)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Numerical example
Step3: Image example
Step4: Calculating a rectangle area with SAT (Summed Area Table)
|
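The rectangle-area trick described in the steps above has a simple 1-D analogue that makes the O(1)-per-query property easy to see: a prefix-sum array gives any interval sum with two lookups. A dependency-free sketch with hypothetical values:

```python
# 1-D analogue of the summed-area table: prefix sums.
f = [3, 1, 4, 1, 5, 9, 2, 6]

prefix = [0]
for v in f:
    prefix.append(prefix[-1] + v)

def interval_sum(i, j):
    # Sum of f[i:j] in O(1), independent of the interval length --
    # the 1-D version of the four-corner SAT lookup.
    return prefix[j] - prefix[i]

print(interval_sum(2, 6))  # 4 + 1 + 5 + 9
```

The 2-D table in the notebook is exactly this idea applied along both axes, which is why building it is one pass over the image while each area query stays constant-time.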
7,930
|
<ASSISTANT_TASK:>
Python Code:
import pyquil.quil as pq
import pyquil.forest as forest
from pyquil.gates import *
qvm = forest.Connection()
p = pq.Program()
p.inst(X(0)).measure(0, 0)
print p
classical_regs = [0] # A list of which classical registers to return the values of.
qvm.run(p, classical_regs)
qvm.run(p, [0, 1, 2])
p = pq.Program() # clear the old program
p.inst(X(0)).measure(0, 1)
qvm.run(p, [0, 1, 2])
coin_flip = pq.Program().inst(H(0)).measure(0, 0)
num_flips = 5
qvm.run(coin_flip, [0], num_flips)
coin_flip = pq.Program().inst(H(0))
qvm.wavefunction(coin_flip)
print "Multiple inst arguments with final measurement:"
print pq.Program().inst(X(0), Y(1), Z(0)).measure(0, 1)
print "Chained inst with explicit MEASURE instruction:"
print pq.Program().inst(X(0)).inst(Y(1)).measure(0, 1).inst(MEASURE(1, 2))
print "A mix of chained inst and measures:"
print pq.Program().inst(X(0)).measure(0, 1).inst(Y(1), X(0)).measure(0, 0)
print "A composition of two programs:"
print pq.Program(X(0)) + pq.Program(Y(0))
p = pq.Program().inst(X(0))
p.inst(Y(1))
print "Oops! We have added Y 1 by accident:"
print p
print "We can fix by popping:"
p.pop()
print p
print "And then add it back:"
p += pq.Program(Y(1))
print p
import numpy as np
# First we define the new gate from a matrix
x_gate_matrix = np.array(([0.0, 1.0], [1.0, 0.0]))
sqrt_x = np.array([[ 0.5+0.5j, 0.5-0.5j],
[ 0.5-0.5j, 0.5+0.5j]])
p = pq.Program().defgate("SQRT-X", sqrt_x)
# Then we can use the new gate,
p.inst(("SQRT-X", 0))
print p
qvm.wavefunction(p)
# A multi-qubit defgate example
x_gate_matrix = np.array(([0.0, 1.0], [1.0, 0.0]))
sqrt_x = np.array([[ 0.5+0.5j, 0.5-0.5j],
[ 0.5-0.5j, 0.5+0.5j]])
x_sqrt_x = np.kron(x_gate_matrix, sqrt_x)
p = pq.Program().defgate("X-SQRT-X", x_sqrt_x)
# Then we can use the new gate
p.inst(("X-SQRT-X", 0, 1))
qvm.wavefunction(p)
from math import pi
def qft3(q0, q1, q2):
p = pq.Program()
p.inst( H(q2),
CPHASE(pi/2.0, q1, q2),
H(q1),
CPHASE(pi/4.0, q0, q2),
CPHASE(pi/2.0, q0, q1),
H(q0),
SWAP(q0, q2) )
return p
print qft3(0, 1, 2)
state_prep = pq.Program().inst(X(0))
add_dummy_qubits = pq.Program().inst(I(2))
qvm.wavefunction(state_prep + add_dummy_qubits)
qvm.wavefunction(state_prep + qft3(0, 1, 2))
from numpy.fft import ifft
ifft([0,1,0,0,0,0,0,0], norm="ortho")
# Name our classical registers:
classical_flag_register = 2
# Write out the loop initialization and body programs:
init_register = pq.Program(TRUE([classical_flag_register]))
loop_body = pq.Program(X(0), H(0)).measure(0, classical_flag_register)
# Put it all together in a loop program:
loop_prog = init_register.while_do(classical_flag_register, loop_body)
print loop_prog
# Name our classical registers:
test_register = 1
answer_register = 0
# Construct each branch of our if-statement. We can have empty branches
# simply by having empty programs.
then_branch = pq.Program(X(0))
else_branch = pq.Program()
# Make a program that will put a 0 or 1 in test_register with 50% probability:
branching_prog = pq.Program(H(1)).measure(1, test_register)
# Add the conditional branching:
branching_prog.if_then(test_register, then_branch, else_branch)
# Measure qubit 0 into our answer register:
branching_prog.measure(0, answer_register)
print branching_prog
qvm.run(branching_prog, [answer_register], 10)
# 20% chance of a X gate being applied after gate applications and before measurements.
gate_noise_probs = [0.2, 0.0, 0.0]
meas_noise_probs = [0.2, 0.0, 0.0]
noisy_qvm = forest.Connection(gate_noise=gate_noise_probs, measurement_noise=meas_noise_probs)
p = pq.Program().inst(X(0)).measure(0, 0)
print "Without Noise:", qvm.run(p, [0], 10)
print "With Noise :", noisy_qvm.run(p, [0], 10)
# This function returns a quantum circuit with different rotation angles on a gate on qubit 0
def rotator(angle):
return pq.Program(RX(angle, 0))
from pyquil.parametric import ParametricProgram
par_p = ParametricProgram(rotator) # This produces a new type of parameterized program object
print par_p(0.5)
from pyquil.paulis import ID, sX, sY, sZ
# Pauli term takes an operator "X", "Y", "Z", or "I"; a qubit to act on, and
# an optional coefficient.
a = 0.5 * ID
b = -0.75 * sX(0) * sY(1) * sZ(3)
c = (5-2j) * sZ(1) * sX(2)
# Construct a sum of Pauli terms.
sigma = a + b + c
print("sigma =", sigma)
import pyquil.paulis as pl
# Simplification
sigma_cubed = sigma * sigma * sigma
print("Simplified :", sigma_cubed)
print()
#Produce Quil code to compute exp[iX]
H = -1.0 * sX(0)
print("Quil to compute exp[iX] on qubit 0:")
print(pl.exponential_map(H)(1.0))
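The Quil above is meant to implement exp[iX]. The underlying identity exp(i*theta*X) = cos(theta) I + i sin(theta) X can be checked numerically without pyQuil — a hedged, NumPy-only sketch using a truncated matrix Taylor series (not part of the pyQuil API; names are illustrative):

```python
import numpy as np

pauli_x = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X matrix
theta = 1.0

# Truncated Taylor series for the matrix exponential exp(i*theta*X)
term = np.eye(2, dtype=complex)
exp_ix = np.zeros((2, 2), dtype=complex)
for k in range(30):
    exp_ix += term
    term = term @ (1j * theta * pauli_x) / (k + 1)

# Closed form for an involutory matrix (pauli_x @ pauli_x = I):
# exp(i*theta*X) = cos(theta) I + i sin(theta) X
closed_form = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * pauli_x
print(np.allclose(exp_ix, closed_form))  # True
```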
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we want to open a connection to the QVM.
Step2: Now we can make a program by adding some Quil instructions using the inst method on a Program object.
Step3: This program simply applies the $X$-gate to the zeroth qubit, measures that qubit, and stores the measurement result in the zeroth classical register. We can look at the Quil code that makes up this program simply by printing it.
Step4: Most importantly, of course, we can see what happens if we run this program on the QVM
Step5: We see that the result of this program is that the classical register [0] now stores the state of qubit 0, which should be $\left\vert 1\right\rangle$ after an $X$-gate. We can of course ask for more classical registers
Step6: The classical registers are initialized to zero, so registers [1] and [2] come out as zero. If we stored the measurement in a different classical register we would obtain
Step7: We can also run programs multiple times and accumulate all the results in a single list
Step8: Try running the above code several times. You will, with very high probability, get different results each time.
Step9: It is important to remember that this wavefunction method is just a useful debugging tool for small quantum systems, and it cannot be feasibly obtained on a quantum processor.
Step10: Fixing a Mistaken Instruction
Step11: The Standard Gate Set
Step12: Quil in general supports defining parametric gates, though right now only static gates are supported by pyQuil. Below we show how we can define $X_1\otimes \sqrt{X_0} $ as a single gate.
Step13: Advanced Usage
Step14: There is a very important detail to recognize here
Step15: Next, we want to prepare a state that corresponds to the sequence we want to compute the discrete Fourier transform of. Fortunately, this is easy, we just apply an $X$-gate to the zeroth qubit.
Step16: We can verify that this works by computing its wavefunction. However, we need to add some "dummy" qubits, because otherwise wavefunction would return a two-element vector.
Step17: If we have two quantum programs a and b, we can concatenate them by doing a + b. Using this, all we need to do is compute the QFT after state preparation to get our final result.
Step18: We can verify this works by computing the (inverse) FFT from NumPy.
Step19: Classical Control Flow
Step20: Notice that the init_register program applied a Quil instruction directly to a classical register. There are several classical commands that can be used in this fashion
Step21: We can run this program a few times to see what we get in the answer_register.
Step22: Parametric Depolarizing Noise
Step23: We can test this by applying an $X$ gate and measuring. Nominally, we should always measure 1.
Step24: Parametric Programs
Step25: The parametric program par_p now takes the same arguments as rotator
Step26: We can think of ParametricPrograms as a sort of template for Quil programs. They cache computations
Step27: There are two primary things one can do with Pauli terms and sums
|
7,931
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import numpy
# Path for TubeTK libs and bin
#Values takend from TubeTK launcher
#sys.path.append("C:/src/TubeTK_Python_ITK/SlicerExecutionModel-build/GenerateCLP/")
#sys.path.append("C:/src/TubeTK_Python_ITK/SlicerExecutionModel-build/GenerateCLP/Release")
#sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/bin/")
#sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/bin/Release")
#sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/bin/")
#sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/bin/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/")
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/Release")
#sys.path.append("C:/src/TubeTK_Python_ITK/VTK-build/bin/")
#sys.path.append("C:/src/TubeTK_Python_ITK/VTK-build/bin/Release")
# Setting TubeTK Build Directory
TubeTK_BUILD_DIR=None
if 'TubeTK_BUILD_DIR' in os.environ:
TubeTK_BUILD_DIR = os.environ['TubeTK_BUILD_DIR']
else:
print('TubeTK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["TubeTK_BUILD_DIR"] = "C:/src/TubeTK_Python_ITK/TubeTK-build"
TubeTK_BUILD_DIR = os.environ["TubeTK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(TubeTK_BUILD_DIR):
print('TubeTK_BUILD_DIR set by directory not found!')
print(' TubeTK_BUILD_DIR = ' + TubeTK_BUILD_DIR )
sys.exit(1)
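The two environment-variable lookups in this notebook (TubeTK_BUILD_DIR here, ITK_BUILD_DIR below) share the same pattern: read the variable, fall back to a default, and validate the path. A small helper could factor that out — a sketch only, with illustrative names, not part of TubeTK or ITK:

```python
import os
import sys

def resolve_build_dir(var_name, default_path):
    # Read the build directory from the environment, falling back to a
    # default, and fail fast when the resulting path does not exist.
    path = os.environ.get(var_name, default_path)
    if not os.path.exists(path):
        print(var_name + ' points to a missing directory: ' + path)
        sys.exit(1)
    return path
```

Both build directories could then be resolved with one call each, e.g. `resolve_build_dir('TubeTK_BUILD_DIR', 'C:/src/TubeTK_Python_ITK/TubeTK-build')`.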
# Setting ITK Build Directory and importing ITK
try:
import itk
except:
ITK_BUILD_DIR = None
if 'ITK_BUILD_DIR' in os.environ:
ITK_BUILD_DIR = os.environ['ITK_BUILD_DIR']
else:
print('ITK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["ITK_BUILD_DIR"] = "C:/src/TubeTK_Python_R/ITK-build"
ITK_BUILD_DIR = os.environ["ITK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(ITK_BUILD_DIR):
print('ITK_BUILD_DIR set by directory not found!')
print(' ITK_BUIDL_DIR = ' + ITK_BUILD_DIR )
sys.exit(1)
# Append ITK libs
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/Wrapping/Generators/Python/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib")
# Append TubeTK libs
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/ITKModules/TubeTKITK-build/Wrapping/Generators/Python/Release")
import itk
from itk import TubeTKITK as itktube
Dimension = 3
PixelType = itk.F
CTImageFileName = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data\inputCTA.mha')
ExpertMaskImageFileName = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data\inputExpertMask.mha')
MaskImageFileName = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data\inputMask.mha')
SpatialObjectType = itk.SpatialObject[Dimension]
ImageType = itk.Image[PixelType, Dimension]
ImageReaderType = itk.ImageFileReader[ImageType]
imageReader1 = ImageReaderType.New()
imageReader1.SetFileName(CTImageFileName)
imageReader1.Update()
CTImage = imageReader1.GetOutput()
imageReader2 = ImageReaderType.New()
imageReader2.SetFileName(ExpertMaskImageFileName)
imageReader2.Update()
ExpertMaskImage = imageReader2.GetOutput()
imageReader3 = ImageReaderType.New()
imageReader3.SetFileName(MaskImageFileName)
imageReader3.Update()
MaskImage = imageReader3.GetOutput()
boundary = itk.Index[3]()
boundary.Fill(10)
#Create the crop image filter
CropImageFilterType = itktube.CropImage[ImageType, ImageType]
cropImageFilter = CropImageFilterType.New()
cropImageFilter.SetBoundary(boundary)
#cropImageFilter.SetMatchVolume(MaskImage) #Giving error
#Crop Input CTA
cropImageFilter.SetInput(CTImage)
cropImageFilter.Update()
croppedCTImage = cropImageFilter.GetOutput()
#Crop Expert Mask
cropImageFilter.SetInput(ExpertMaskImage)
cropImageFilter.Update()
croppedExpertMaskImage = cropImageFilter.GetOutput()
#Crop Mask
cropImageFilter.SetInput(MaskImage)
cropImageFilter.Update()
croppedMaskImage = cropImageFilter.GetOutput()
boundary = itk.Index[3]()
boundary.Fill(10)
#Create the resample image filter
ResampleImageFilterType = itktube.ResampleImage[ImageType, ImageType]
#Resample Input CTA
resampleImageFilter1 = ResampleImageFilterType.New()
resampleImageFilter1.SetInput(croppedCTImage)
resampleImageFilter1.SetMakeIsotropic(True)
resampleImageFilter1.SetInterpolator("Sinc")
resampleCTImage = resampleImageFilter1.GetOutput()
#Resample Expert Mask
resampleImageFilter2 = ResampleImageFilterType.New()
resampleImageFilter2.SetInput(croppedExpertMaskImage)
resampleImageFilter2.SetMakeIsotropic(True)
resampleImageFilter2.SetInterpolator("NearestNeighbor")
resampleExpertMaskImage = resampleImageFilter2.GetOutput()
#Resample Mask
resampleImageFilter3 = ResampleImageFilterType.New()
resampleImageFilter3.SetInput(croppedMaskImage)
resampleImageFilter3.SetMakeIsotropic(True)
resampleImageFilter3.SetInterpolator("NearestNeighbor")
resampleMaskImage = resampleImageFilter3.GetOutput()
# Intermediate steps (erosion/masking not implemented in this excerpt):
# resampleMaskImage -> erodedResampleMaskImage
# resampleExpertMaskImage -> erodedResampleExpertMaskImage
# resampleCTImage -> maskedResampleCTImage
# Create image to save not-vessel mask.
ShortImageType = itk.Image[itk.S, Dimension]
notVesselMaskImage = ShortImageType.New()
#Create Compute Training Mask Filter
ComputeTrainingMaskFilterType = itktube.ComputeTrainingMask[ImageType]
computeTrainingMaskFilter = ComputeTrainingMaskFilterType.New()
computeTrainingMaskFilter.SetInput(erodedResampleExpertMaskImage)
computeTrainingMaskFilter.SetNotVesselMask(notVesselMaskImage)
computeTrainingMaskFilter.SetGap(0.5)
computeTrainingMaskFilter.SetNotVesselWidth(2)
computeTrainingMaskFilter.Update()
expertTrainMaskImage = computeTrainingMaskFilter.GetOutput()
DiscriminantInfoFileName = os.path.join(TubeTK_BUILD_DIR, 'Temporary\\vascularModel.mrs')
enhancedScalesExpertMaskImage = ImageType.New()
# Create EnhanceTubesUsingDiscriminantAnalysis Filter
EnhanceTubesUsingDiscriminantAnalysisFilterType = itktube.EnhanceTubesUsingDiscriminantAnalysis[ImageType, ImageType]
ETUDAFilter = EnhanceTubesUsingDiscriminantAnalysisFilterType.New()
ETUDAFilter.SetInput(maskedResampleCTImage)
ETUDAFilter.SetLabelMap(expertTrainMaskImage)
ETUDAFilter.SetTubeId(255)
ETUDAFilter.SetBackgroundId(127)
ETUDAFilter.SetSaveDiscriminantInfo(DiscriminantInfoFileName)
ETUDAFilter.SetOutputSeedScaleImage(enhancedScalesExpertMaskImage)
ETUDAFilter.SetTubeScales(0.4,0.8,1.2,1.6)
enhancedExpertMaskImage = ETUDAFilter.GetOutput()
vasculaModelParameterFileName = os.path.join(TubeTK_BUILD_DIR, 'Temporary\\vascularModel.mtp')
# Create SegmentTubesParameters Filter
ComputeSegmentTubesParametersFilterType = itktube.ComputeSegmentTubesParameters[ImageType]
CSTPFilter = ComputeSegmentTubesParametersFilterType.New()
CSTPFilter.SetInput(maskedResampleCTImage)
CSTPFilter.SetMaskImage(expertTrainMaskImage)
CSTPFilter.SetScaleImage(enhancedScalesExpertMaskImage)
CSTPFilter.SetParametersFileName(vasculaModelParameterFileName)
CSTPFilter.Update()
# enhancedExpertMaskImage -> vesselEnhancedExpertMaskImage
# Create SegmentBinaryImageSkeleton Filter
SegmentBinaryImageSkeletonFilterType = itktube.SegmentBinaryImageSkeleton[ImageType]
SBISFilter = SegmentBinaryImageSkeletonFilterType.New()
SBISFilter.SetInput(vesselEnhancedExpertMaskImage)
SBISFilter.Update()
seedsVesselEnhancedExpertMaskImage = SBISFilter.GetOutput()
outputVesselsFileName = os.path.join(TubeTK_BUILD_DIR, 'Temporary\\outputVessels.tre')
# Create SegmentTubes Filter
SegmentTubesFilterType = itktube.SegmentTubes[ImageType]
SegmenttubesFilter = SegmentTubesFilterType.New()
SegmenttubesFilter.SetInput(maskedResampleCTImage)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialization
Step2: Read the input images
Step3: STEP 1
Step4: STEP 2
Step5: STEP 3
Step6: STEP 4
Step7: STEP 5
Step8: STEP 6
Step9: STEP 7
Step10: STEP 8
Step11: STEP 9
|
7,932
|
<ASSISTANT_TASK:>
Python Code:
%load_ext Cython
%%cython
import math
def erathostene_sieve(int n):
cdef list primes = [False, False] + [True] * (n - 1) # from 0 to n included
cdef int max_divisor = math.floor(math.sqrt(n))
cdef int divisor, number
for divisor in range(2, max_divisor + 1):
if primes[divisor]:
number = 2*divisor
while number <= n:
primes[number] = False
number += divisor
return primes
sieve10million = erathostene_sieve(int(1e7))
primes_upto_10million = [p for p,b in enumerate(sieve10million) if b]
print(f"There are {len(primes_upto_10million)} prime numbers smaller than 10 million")
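As a cross-check that does not require a Cython build, a pure-Python sieve reproduces the standard prime counts pi(100) = 25 and pi(1000) = 168. This is a standalone sketch mirroring the Cython version above:

```python
import math

def sieve_primes(n):
    # Plain-Python sieve of Eratosthenes returning the list of primes <= n
    is_prime = [False, False] + [True] * (n - 1)
    for d in range(2, math.isqrt(n) + 1):
        if is_prime[d]:
            for m in range(d * d, n + 1, d):
                is_prime[m] = False
    return [p for p, b in enumerate(is_prime) if b]

print(len(sieve_primes(100)))   # 25
print(len(sieve_primes(1000)))  # 168
```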
import itertools
prime = 56003
nb_digit_prime = len(str(prime))
nb_replacements = 2
for c in itertools.combinations(range(nb_digit_prime), nb_replacements):
print(c)
from typing import List
def find_prime_digit_replacements(max_size_family: int=6, primes: List[int]=primes_upto_10million) -> int:
set_primes = set(primes)
# we explore this list of primes in ascending order,
# so we'll find the smallest that satisfy the property
# for prime in primes:
for prime in range(10, max(primes) + 1):
str_prime = str(prime)
# for this prime, try all the possibilities
nb_digit_prime = len(str_prime)
for nb_replacements in range(1, nb_digit_prime + 1): # try replacing anywhere from one digit up to all of them
# now try to replace nb_replacements digits (not necessarily adjacent)
for positions in itertools.combinations(range(nb_digit_prime), nb_replacements):
size_family = 0
good_digits = []
good_primes = []
for new_digit in range(0, 9 + 1):
if positions[0] == 0 and new_digit == 0:
continue
new_prime = int(''.join(
(c if i not in positions else str(new_digit))
for i,c in enumerate(str_prime)
))
if new_prime in set_primes:
size_family += 1
good_digits.append(new_digit)
good_primes.append(new_prime)
if size_family >= max_size_family:
print(f"For p = {prime} with {nb_digit_prime} digits, and {nb_replacements} replacement(s), we found")
print(f"a family of {size_family} prime(s) when replacing digit(s) at position(s) {positions}")
for new_digit, new_prime in zip(good_digits, good_primes):
print(f" {new_prime} obtained by replacing with digit {new_digit}")
return prime
%%time
find_prime_digit_replacements(max_size_family=6)
%%time
find_prime_digit_replacements(max_size_family=7)
%%time
find_prime_digit_replacements(max_size_family=8)
def x_to_kx_contain_same_digits(x: int, kmax: int) -> bool:
digits_x = sorted(list(str(x)))
for k in range(2, kmax+1):
digits_kx = sorted(list(str(k*x)))
if digits_x != digits_kx:
return False
return True
assert not x_to_kx_contain_same_digits(125873, 2)
assert x_to_kx_contain_same_digits(125874, 2)
assert not x_to_kx_contain_same_digits(125875, 2)
assert not x_to_kx_contain_same_digits(125874, 3)
def find_smallest_x_such_that_x_to_6x_contain_same_digits(kmax: int=6) -> int:
x = 1
while True:
if x_to_kx_contain_same_digits(x, kmax):
print(f"Found a solution x = {x}, proof:")
for k in range(1, kmax + 1):
print(f" k x = {k}*{x}={k*x}")
return x
x += 1
%%time
find_smallest_x_such_that_x_to_6x_contain_same_digits()
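The search should land on 142857, the repeating block of the decimal expansion of 1/7, which is the classic number with this permuted-multiples property. A quick standalone confirmation (names are illustrative):

```python
def same_digits(a, b):
    # True when a and b are digit permutations of each other
    return sorted(str(a)) == sorted(str(b))

x_example = 142857  # repeating block of 1/7
results = {k: same_digits(x_example, k * x_example) for k in range(2, 7)}
print(results)  # 2x through 6x are all digit permutations of x
```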
%load_ext Cython
%%cython
def choose_kn(int k, int n):
# {k choose n} = {n-k choose n} so first let's keep the minimum
if k < 0 or k > n:
return 0
elif k > n-k:
k = n-k
# instead of computing with factorials (that blow up VERY fast),
# we can compute with product
product = 1
for p in range(k+1, n+1):
product *= p
for p in range(2, n-k+1):
product //= p
return product
choose_kn(10, 23)
def how_many_choose_kn_are_greater_than_x(max_n: int, x: int) -> int:
count = 0
for n in range(1, max_n + 1):
for k in range(1, n//2 + 1):
c_kn = choose_kn(k, n)
if c_kn > x:
count += 1
if n-k != k:
# we count twice for (n choose k) and (n choose n-k)
# only if n-k != k
count += 1
return count
how_many_choose_kn_are_greater_than_x(100, 1e6)
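The product-based binomial above can be cross-checked against the standard library's math.comb (Python 3.8+). This pure-Python mirror of the Cython choose_kn lets the comparison run without a Cython build:

```python
import math

def choose_kn_py(k, n):
    # Pure-Python mirror of choose_kn: {n choose k} via running products
    if k < 0 or k > n:
        return 0
    if k > n - k:
        k = n - k
    product = 1
    for p in range(k + 1, n + 1):
        product *= p
    for p in range(2, n - k + 1):
        product //= p
    return product

# Spot-check against math.comb on a grid of (k, n) pairs
ok = all(choose_kn_py(k, n) == math.comb(n, k)
         for n in range(0, 40) for k in range(0, n + 1))
print(ok)  # True
```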
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 51
Step2: Let's try to obtain the examples given in the problem statement, with the smallest prime giving a 6-sized family being 13 and the smallest prime giving a 7-sized family being 56003.
Step3: The code seems to work pretty well. It's not that fast... but let's try to obtain the smallest prime giving a 8-sized family.
Step4: Done!
Step5: Done, it was quick.
|
7,933
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
def param_plot():
"""This function creates the graph on page 189 of Sargent's Macroeconomic Theory, second edition, 1987."""
fig, ax = plt.subplots(figsize=(12, 8))
ax.set_aspect('equal')
# Set axis
xmin, ymin = -3, -2
xmax, ymax = -xmin, -ymin
plt.axis([xmin, xmax, ymin, ymax])
# Set axis labels
ax.set(xticks=[], yticks=[])
ax.set_xlabel(r'$\rho_2$', fontsize=16)
ax.xaxis.set_label_position('top')
ax.set_ylabel(r'$\rho_1$', rotation=0, fontsize=16)
ax.yaxis.set_label_position('right')
# Draw (t1, t2) points
rho1 = np.linspace(-2, 2, 100)
ax.plot(rho1, -abs(rho1) + 1, c='black')
ax.plot(rho1, np.ones_like(rho1) * -1, c='black')
ax.plot(rho1, -(rho1**2 / 4), c='black')
# Turn normal axes off
for spine in ['left', 'bottom', 'top', 'right']:
ax.spines[spine].set_visible(False)
# Add arrows to represent axes
axes_arrows = {'arrowstyle': '<|-|>', 'lw': 1.3}
ax.annotate('', xy=(xmin, 0), xytext=(xmax, 0), arrowprops=axes_arrows)
ax.annotate('', xy=(0, ymin), xytext=(0, ymax), arrowprops=axes_arrows)
# Annotate the plot with equations
plot_arrowsl = {'arrowstyle': '-|>', 'connectionstyle': "arc3, rad=-0.2"}
plot_arrowsr = {'arrowstyle': '-|>', 'connectionstyle': "arc3, rad=0.2"}
ax.annotate(r'$\rho_1 + \rho_2 < 1$', xy=(0.5, 0.3), xytext=(0.8, 0.6),
arrowprops=plot_arrowsr, fontsize='12')
ax.annotate(r'$\rho_1 + \rho_2 = 1$', xy=(0.38, 0.6), xytext=(0.6, 0.8),
arrowprops=plot_arrowsr, fontsize='12')
ax.annotate(r'$\rho_2 < 1 + \rho_1$', xy=(-0.5, 0.3), xytext=(-1.3, 0.6),
arrowprops=plot_arrowsl, fontsize='12')
ax.annotate(r'$\rho_2 = 1 + \rho_1$', xy=(-0.38, 0.6), xytext=(-1, 0.8),
arrowprops=plot_arrowsl, fontsize='12')
ax.annotate(r'$\rho_2 = -1$', xy=(1.5, -1), xytext=(1.8, -1.3),
arrowprops=plot_arrowsl, fontsize='12')
ax.annotate(r'${\rho_1}^2 + 4\rho_2 = 0$', xy=(1.15, -0.35),
xytext=(1.5, -0.3), arrowprops=plot_arrowsr, fontsize='12')
ax.annotate(r'${\rho_1}^2 + 4\rho_2 < 0$', xy=(1.4, -0.7),
xytext=(1.8, -0.6), arrowprops=plot_arrowsr, fontsize='12')
# Label categories of solutions
ax.text(1.5, 1, 'Explosive\n growth', ha='center', fontsize=16)
ax.text(-1.5, 1, 'Explosive\n oscillations', ha='center', fontsize=16)
ax.text(0.05, -1.5, 'Explosive oscillations', ha='center', fontsize=16)
ax.text(0.09, -0.5, 'Damped oscillations', ha='center', fontsize=16)
# Add small marker to y-axis
ax.axhline(y=1.005, xmin=0.495, xmax=0.505, c='black')
ax.text(-0.12, -1.12, '-1', fontsize=10)
ax.text(-0.12, 0.98, '1', fontsize=10)
return fig
param_plot()
plt.show()
def categorize_solution(rho1, rho2):
"""This function takes values of rho1 and rho2 and uses them to classify the type of solution."""
discriminant = rho1 ** 2 + 4 * rho2
if rho2 > 1 + rho1 or rho2 < -1:
print('Explosive oscillations')
elif rho1 + rho2 > 1:
print('Explosive growth')
elif discriminant < 0:
print('Roots are complex with modulus less than one; therefore damped oscillations')
else:
print('Roots are real and absolute values are less than one; therefore we get smooth convergence to a steady state')
### Test the categorize_solution function
categorize_solution(1.3, -.4)
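The classification in the test above can be confirmed numerically: with rho1 = 1.3 and rho2 = -0.4 the discriminant rho1**2 + 4*rho2 is approximately 0.09 > 0, and the characteristic roots of z**2 - rho1*z - rho2 are 0.8 and 0.5, both inside the unit circle. A standalone sketch (the `_ex` names are illustrative):

```python
import numpy as np

rho1_ex, rho2_ex = 1.3, -0.4
# Characteristic polynomial of Y_t = rho1*Y_{t-1} + rho2*Y_{t-2} is z^2 - rho1*z - rho2
roots_ex = np.roots([1, -rho1_ex, -rho2_ex])
print(sorted(roots_ex))            # roots 0.5 and 0.8: real, inside the unit circle
print(rho1_ex ** 2 + 4 * rho2_ex)  # discriminant approximately 0.09 > 0
```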
def plot_y(function=None):
"""Plots the path of Y_t."""
plt.subplots(figsize=(12, 8))
plt.plot(function)
plt.xlabel('Time $t$')
plt.ylabel('$Y_t$', rotation=0)
plt.grid()
plt.show()
from cmath import sqrt
##=== This is a 'manual' method ===#
def y_nonstochastic(y_0=100, y_1=80, alpha=.92, beta=.5, gamma=10, n=80):
"""
Takes values of parameters and computes roots of the characteristic polynomial.
It tells whether they are real or complex and whether they are less than unity in absolute value.
It also computes a simulation of length n starting from the two given initial conditions for national income.
"""
roots = []
rho1 = alpha + beta
rho2 = -beta
print('rho_1 is ', rho1)
print('rho_2 is ', rho2)
discriminant = rho1 ** 2 + 4 * rho2
if discriminant == 0:
roots.append(rho1 / 2)
print('Single real root: ')
print(''.join(str(roots)))
elif discriminant > 0:
roots.append((rho1 + sqrt(discriminant).real) / 2)
roots.append((rho1 - sqrt(discriminant).real) / 2)
print('Two real roots: ')
print(''.join(str(roots)))
else:
roots.append((rho1 + sqrt(discriminant)) / 2)
roots.append((rho1 - sqrt(discriminant)) / 2)
print('Two complex roots: ')
print(''.join(str(roots)))
if all(abs(root) < 1 for root in roots):
print('Absolute values of roots are less than one')
else:
print('Absolute values of roots are not less than one')
def transition(x, t): return rho1 * x[t - 1] + rho2 * x[t - 2] + gamma
y_t = [y_0, y_1]
for t in range(2, n):
y_t.append(transition(y_t, t))
return y_t
plot_y(y_nonstochastic())
### code to reverse engineer a cycle
### y_t = r^t (c_1 cos(phi t) + c2 sin(phi t))
###
import cmath
import math
def f(r, phi):
"""
Takes modulus r and angle phi of the complex number r exp(j phi)
and creates rho1 and rho2 of a characteristic polynomial for which
r exp(j phi) and r exp(-j phi) are complex roots.
Returns the multiplier coefficient a and the accelerator coefficient b
that validate those roots.
"""
g1 = cmath.rect(r, phi) # Generate two complex roots
g2 = cmath.rect(r, -phi)
rho1 = g1 + g2 # Implied rho1, rho2
rho2 = -g1 * g2
b = -rho2 # Reverse engineer a and b that validate these
a = rho1 - b
return rho1, rho2, a, b
## Now let's use the function in an example
## Here are the example paramters
r = .95
period = 10 # Length of cycle in units of time
phi = 2 * math.pi/period
## Apply the function
rho1, rho2, a, b = f(r, phi)
print("a, b = ", a, b)
print("rho1, rho2 =", rho1, rho2)
## Print the real components of rho1 and rho2
rho1 = rho1.real
rho2 = rho2.real
rho1, rho2
r1, r2 = np.roots([1, -rho1, -rho2])
p1 = cmath.polar(r1)
p2 = cmath.polar(r2)
print("r, phi =", r, phi)
print("p1, p2 = ", p1, p2)
# print("g1, g2 = ", g1, g2)
print("a, b =", a, b)
print("rho1, rho2 =", rho1, rho2)
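The round trip can also be checked without the helper f: choose a modulus and period, form rho1 = 2r cos(phi) and rho2 = -r**2 directly (the sum and negated product of the conjugate roots), and confirm that NumPy recovers roots with exactly that modulus and angle. A standalone sketch (`_chk` names are illustrative):

```python
import cmath
import math
import numpy as np

r_chk, period_chk = 0.95, 10
phi_chk = 2 * math.pi / period_chk

# For conjugate roots r*exp(+/- i*phi): rho1 = 2r*cos(phi), rho2 = -r**2
rho1_chk = 2 * r_chk * math.cos(phi_chk)
rho2_chk = -r_chk ** 2

roots_chk = np.roots([1, -rho1_chk, -rho2_chk])
moduli_chk = [abs(z) for z in roots_chk]
angles_chk = sorted(abs(cmath.phase(z)) for z in roots_chk)
print(moduli_chk)  # both approximately 0.95
print(angles_chk)  # both approximately 2*pi/10
```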
##=== This method uses numpy to calculate roots ===#
def y_nonstochastic(y_0=100, y_1=80, alpha=.9, beta=.8, gamma=10, n=80):
"""
Rather than computing the roots of the characteristic polynomial by hand as we did earlier,
this function enlists NumPy to do the work for us.
"""
# Useful constants
rho1 = alpha + beta
rho2 = -beta
categorize_solution(rho1, rho2)
# Find roots of polynomial
roots = np.roots([1, -rho1, -rho2])
print('Roots are', roots)
# Check if real or complex
if all(isinstance(root, complex) for root in roots):
print('Roots are complex')
else:
print('Roots are real')
# Check if roots are less than one
if all(abs(root) < 1 for root in roots):
print('Roots are less than one')
else:
print('Roots are not less than one')
# Define transition equation
def transition(x, t): return rho1 * x[t - 1] + rho2 * x[t - 2] + gamma
# Set initial conditions
y_t = [y_0, y_1]
# Generate y_t series
for t in range(2, n):
y_t.append(transition(y_t, t))
return y_t
plot_y(y_nonstochastic())
r = 1 # generates undamped, nonexplosive cycles
period = 10 # length of cycle in units of time
phi = 2 * math.pi/period
## Apply the reverse engineering function f
rho1, rho2, a, b = f(r, phi)
a = a.real # drop the imaginary part so that it is a valid input into y_nonstochastic
b = b.real
print("a, b =", a, b)
ytemp = y_nonstochastic(alpha=a, beta=b, y_0=20, y_1=30)
plot_y(ytemp)
from ipywidgets import interact
def choose_r_phi(r, phi):
rho1, rho2, a, b = f(r, phi)
a = a.real # drop the imaginary part so that it is a valid input into y_nonstochastic
b = b.real
ytemp = y_nonstochastic(alpha=a, beta=b, y_0=20, y_1=30)
return plot_y(ytemp)
periodhighest = 20
interact(choose_r_phi, r=(0, 1.5), phi=(math.pi/periodhighest, 10 * math.pi/periodhighest))
import sympy
from sympy import Symbol, init_printing
init_printing()
r1 = Symbol("rho_1")
r2 = Symbol("rho_2")
z = Symbol("z")
sympy.solve(z**2 - r1*z - r2, z)
a = Symbol("alpha")
b = Symbol("beta")
r1 = a + b
r2 = -b
sympy.solve(z**2 - r1*z - r2, z)
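The symbolic roots above can be spot-checked numerically: for any alpha, beta the quadratic-formula values rho1/2 +/- sqrt(rho1**2 + 4*rho2)/2 must agree with NumPy's root finder. A standalone sketch with illustrative parameter values:

```python
import numpy as np

alpha_chk, beta_chk = 0.7, 0.4
rho1_q, rho2_q = alpha_chk + beta_chk, -beta_chk

# Quadratic formula for z^2 - rho1*z - rho2 = 0, matching the symbolic output above
disc_q = rho1_q ** 2 + 4 * rho2_q
formula_roots = sorted([(rho1_q - np.sqrt(complex(disc_q))) / 2,
                        (rho1_q + np.sqrt(complex(disc_q))) / 2],
                       key=lambda z: z.imag)
numeric_roots = sorted(np.roots([1, -rho1_q, -rho2_q]), key=lambda z: z.imag)
match = np.allclose(formula_roots, numeric_roots)
print(match)  # True
```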
def y_stochastic(y_0=0, y_1=0, alpha=0.8, beta=0.2, gamma=10, n=100, sigma=5):
"""
This function takes parameters of a stochastic version of the model, analyzes
the roots of the characteristic polynomial, and generates a simulation.
"""
# Useful constants
rho1 = alpha + beta
rho2 = -beta
# Categorize solution
categorize_solution(rho1, rho2)
# Find roots of polynomial
roots = np.roots([1, -rho1, -rho2])
print(roots)
# Check if real or complex
if all(isinstance(root, complex) for root in roots):
print('Roots are complex')
else:
print('Roots are real')
# Check if roots are less than one
if all(abs(root) < 1 for root in roots):
print('Roots are less than one')
else:
print('Roots are not less than one')
# Generate shocks
epsilon = np.random.normal(0, 1, n)
# Define transition equation
def transition(x, t): return rho1 * \
x[t - 1] + rho2 * x[t - 2] + gamma + sigma * epsilon[t]
# Set initial conditions
y_t = [y_0, y_1]
# Generate y_t series
for t in range(2, n):
y_t.append(transition(y_t, t))
return y_t
plot_y(y_stochastic())
r = .97
period = 10 # length of cycle in units of time
phi = 2 * math.pi/period
### apply the reverse engineering function f
rho1, rho2, a, b = f(r, phi)
a = a.real # drop the imaginary part so that it is a valid input into y_nonstochastic
b = b.real
print("a, b = ", a, b)
plot_y(y_stochastic(y_0=40, y_1 = 42, alpha=a, beta=b, sigma=2, n=100))
def y_stochastic_g(y_0=20,
y_1=20,
alpha=0.8,
beta=0.2,
gamma=10,
n=100,
sigma=2,
g=0,
g_t=0,
duration='permanent'):
"""
This program computes a response to an increase of size g in government expenditures
that occurs at time g_t; the shock can be permanent or one-off.
"""
# Useful constants
rho1 = alpha + beta
rho2 = -beta
# Categorize solution
categorize_solution(rho1, rho2)
# Find roots of polynomial
roots = np.roots([1, -rho1, -rho2])
print(roots)
# Check if real or complex
if all(isinstance(root, complex) for root in roots):
print('Roots are complex')
else:
print('Roots are real')
# Check if roots are less than one
if all(abs(root) < 1 for root in roots):
print('Roots are less than one')
else:
print('Roots are not less than one')
# Generate shocks
epsilon = np.random.normal(0, 1, n)
def transition(x, t, g=0):
# Non-stochastic - separated to avoid generating random series when not needed
if sigma == 0:
return rho1 * x[t - 1] + rho2 * x[t - 2] + gamma + g
# Stochastic
else:
epsilon = np.random.normal(0, 1, n)
return rho1 * x[t - 1] + rho2 * x[t - 2] + gamma + g + sigma * epsilon[t]
# Create list and set initial conditions
y_t = [y_0, y_1]
# Generate y_t series
for t in range(2, n):
# No government spending
if g == 0:
y_t.append(transition(y_t, t))
# Government spending (no shock)
elif g != 0 and duration == None:
y_t.append(transition(y_t, t))
# Permanent government spending shock
elif duration == 'permanent':
if t < g_t:
y_t.append(transition(y_t, t, g=0))
else:
y_t.append(transition(y_t, t, g=g))
# One-off government spending shock
elif duration == 'one-off':
if t == g_t:
y_t.append(transition(y_t, t, g=g))
else:
y_t.append(transition(y_t, t, g=0))
return y_t
plot_y(y_stochastic_g(g=10, g_t=20, duration='permanent'))
plot_y(y_stochastic_g(g=500, g_t=50, duration='one-off'))
class Samuelson():
r"""This class represents the Samuelson model, otherwise known as the
multiplier-accelerator model. The model combines the Keynesian multiplier
with the accelerator theory of investment.
The path of output is governed by a linear second-order difference equation
.. math::
Y_t = \gamma + (\alpha + \beta) Y_{t-1} - \beta Y_{t-2}
Parameters
----------
y_0 : scalar
Initial condition for Y_0
y_1 : scalar
Initial condition for Y_1
alpha : scalar
Marginal propensity to consume
beta : scalar
Accelerator coefficient
n : int
Number of iterations
sigma : scalar
Volatility parameter. Must be greater than or equal to 0. Set
equal to 0 for a non-stochastic model.
g : scalar
Government spending shock
g_t : int
Time at which government spending shock occurs. Must be specified
when duration != None.
duration : {None, 'permanent', 'one-off'}
Specifies type of government spending shock. If None, government
spending is equal to g for all t.
"""
def __init__(self,
y_0=100,
y_1=50,
alpha=1.3,
beta=0.2,
gamma=10,
n=100,
sigma=0,
g=0,
g_t=0,
duration=None):
self.y_0, self.y_1, self.alpha, self.beta = y_0, y_1, alpha, beta
self.n, self.g, self.g_t, self.duration = n, g, g_t, duration
self.gamma, self.sigma = gamma, sigma
self.rho1 = alpha + beta
self.rho2 = -beta
self.roots = np.roots([1, -self.rho1, -self.rho2])
def root_type(self):
if all(isinstance(root, complex) for root in self.roots):
return 'Complex conjugate'
elif len(self.roots) > 1:
return 'Double real'
else:
return 'Single real'
def root_less_than_one(self):
if all(abs(root) < 1 for root in self.roots):
return True
def solution_type(self):
rho1, rho2 = self.rho1, self.rho2
discriminant = rho1 ** 2 + 4 * rho2
if rho2 >= 1 + rho1 or rho2 <= -1:
return 'Explosive oscillations'
elif rho1 + rho2 >= 1:
return 'Explosive growth'
elif discriminant < 0:
return 'Damped oscillations'
else:
return 'Steady state'
def _transition(self, x, t, g=0):
# Non-stochastic - separated to avoid generating random series when not needed
if self.sigma == 0:
return self.rho1 * x[t - 1] + self.rho2 * x[t - 2] + self.gamma + g
# Stochastic
else:
epsilon = np.random.normal(0, 1, self.n)
return self.rho1 * x[t - 1] + self.rho2 * x[t - 2] + self.gamma + g + self.sigma * epsilon[t]
def generate_series(self):
# Create list and set initial conditions
y_t = [self.y_0, self.y_1]
# Generate y_t series
for t in range(2, self.n):
# No government spending
if self.g == 0:
y_t.append(self._transition(y_t, t))
# Government spending (no shock)
elif self.g != 0 and self.duration == None:
y_t.append(self._transition(y_t, t))
# Permanent government spending shock
elif self.duration == 'permanent':
if t < self.g_t:
y_t.append(self._transition(y_t, t, g=0))
else:
y_t.append(self._transition(y_t, t, g=self.g))
# One-off government spending shock
elif self.duration == 'one-off':
if t == self.g_t:
y_t.append(self._transition(y_t, t, g=self.g))
else:
y_t.append(self._transition(y_t, t, g=0))
return y_t
def summary(self):
print('Summary\n' + '-' * 50)
print('Root type: ' + self.root_type())
print('Solution type: ' + self.solution_type())
print('Roots: ' + str(self.roots))
if self.root_less_than_one() == True:
print('Absolute value of roots is less than one')
else:
print('Absolute value of roots is not less than one')
if self.sigma > 0:
print('Stochastic series with sigma = ' + str(self.sigma))
else:
print('Non-stochastic series')
if self.g != 0:
print('Government spending equal to ' + str(self.g))
if self.duration != None:
print(self.duration.capitalize() +
' government spending shock at t = ' + str(self.g_t))
def plot(self):
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(self.generate_series())
ax.set(xlabel='Iteration', xlim=(0, self.n))
ax.set_ylabel('$Y_t$', rotation=0)
ax.grid()
# Add parameter values to plot
paramstr = '$\\alpha=%.2f$\n$\\beta=%.2f$\n$\\gamma=%.2f$\n$\\sigma=%.2f$\n$\\rho_1=%.2f$\n$\\rho_2=%.2f$'%(
self.alpha, self.beta, self.gamma, self.sigma, self.rho1, self.rho2)
props = dict(fc='white', pad=10, alpha=0.5)
ax.text(0.87, 0.05, paramstr, transform=ax.transAxes,
fontsize=12, bbox=props, va='bottom')
return fig
def param_plot(self):
# Uses the param_plot() function defined earlier (it is then able
# to be used standalone or as part of the model)
fig = param_plot()
ax = fig.gca()
# Add lambda values to legend
for i, root in enumerate(self.roots):
if isinstance(root, complex):
operator = ['+', ''] # Need to fill operator for positive as string is split apart
label = r'$\lambda_{0} = {1.real:.2f} {2} {1.imag:.2f}i$'.format(i+1, self.roots[i], operator[i])
else:
label = r'$\lambda_{0} = {1.real:.2f}$'.format(i+1, self.roots[i])
ax.scatter(0, 0, 0, label=label) # dummy to add to legend
# Add rho pair to plot
ax.scatter(self.rho1, self.rho2, 100, 'red', '+', label=r'$(\ \rho_1, \ \rho_2 \ )$', zorder=5)
plt.legend(fontsize=12, loc=3)
return fig
sam = Samuelson(alpha=0.8, beta=0.5, sigma=2, g=10, g_t=20, duration='permanent')
sam.summary()
sam.plot()
plt.show()
sam.param_plot()
plt.show()
from quantecon import LinearStateSpace
"""This script maps the Samuelson model into the LinearStateSpace class."""
alpha = 0.8
beta = 0.9
rho1 = alpha + beta
rho2 = -beta
gamma = 10
sigma = 1
g = 10
n = 100
A = [[1, 0, 0],
[gamma + g, rho1, rho2],
[0, 1, 0]]
G = [[gamma + g, rho1, rho2], # this is Y_{t+1}
[gamma, alpha, 0], # this is C_{t+1}
[0, beta, -beta]] # this is I_{t+1}
mu_0 = [1, 100, 100]
C = np.zeros((3,1))
C[1] = sigma # stochastic
sam_t = LinearStateSpace(A, C, G, mu_0=mu_0)
x, y = sam_t.simulate(ts_length=n)
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(15, 8))
titles = ['Output ($Y_t$)', 'Consumption ($C_t$)', 'Investment ($I_t$)']
colors = ['darkblue', 'red', 'purple']
for ax, series, title, color in zip(axes, y, titles, colors):
ax.plot(series, color=color)
ax.set(title=title, xlim=(0, n))
ax.grid()
axes[-1].set_xlabel('Iteration')
plt.show()
imres = sam_t.impulse_response()
imres = np.asarray(imres)
y1 = imres[:, :, 0]
y2 = imres[:, :, 1]
y1.shape
A = np.asarray(A)
w, v = np.linalg.eig(A)
print(w)
class SamuelsonLSS(LinearStateSpace):
"""
This subclass creates a Samuelson multiplier-accelerator model
as a linear state space system.
"""
def __init__(self,
y_0=100,
y_1=100,
alpha=0.8,
beta=0.9,
gamma=10,
sigma=1,
g=10):
self.alpha, self.beta = alpha, beta
self.y_0, self.y_1, self.g = y_0, y_1, g
self.gamma, self.sigma = gamma, sigma
# Define initial conditions
self.mu_0 = [1, y_0, y_1]
self.rho1 = alpha + beta
self.rho2 = -beta
# Define transition matrix
self.A = [[1, 0, 0],
[gamma + g, self.rho1, self.rho2],
[0, 1, 0]]
# Define output matrix
self.G = [[gamma + g, self.rho1, self.rho2], # this is Y_{t+1}
[gamma, alpha, 0], # this is C_{t+1}
[0, beta, -beta]] # this is I_{t+1}
self.C = np.zeros((3, 1))
self.C[1] = sigma # stochastic
# Initialize LSS with parameters from Samuelson model
LinearStateSpace.__init__(self, self.A, self.C, self.G, mu_0=self.mu_0)
def plot_simulation(self, ts_length=100, stationary=True):
# Temporarily store original parameters
temp_mu = self.mu_0
temp_Sigma = self.Sigma_0
# Set distribution parameters equal to their stationary values for simulation
if stationary == True:
try:
self.mu_x, self.mu_y, self.sigma_x, self.sigma_y = self.stationary_distributions()
self.mu_0 = self.mu_y
self.Sigma_0 = self.sigma_y
# Exception where no convergence achieved when calculating stationary distributions
except ValueError:
print('Stationary distribution does not exist')
x, y = self.simulate(ts_length)
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(15, 8))
titles = ['Output ($Y_t$)', 'Consumption ($C_t$)', 'Investment ($I_t$)']
colors = ['darkblue', 'red', 'purple']
for ax, series, title, color in zip(axes, y, titles, colors):
ax.plot(series, color=color)
ax.set(title=title, xlim=(0, ts_length))  # use the method's ts_length, not the global n
ax.grid()
axes[-1].set_xlabel('Iteration')
# Reset distribution parameters to their initial values
self.mu_0 = temp_mu
self.Sigma_0 = temp_Sigma
return fig
def plot_irf(self, j=5):
x, y = self.impulse_response(j)
# Reshape into 3 x j matrix for plotting purposes
yimf = np.array(y).flatten().reshape(j+1, 3).T
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(15, 8))
labels = ['$Y_t$', '$C_t$', '$I_t$']
colors = ['darkblue', 'red', 'purple']
for ax, series, label, color in zip(axes, yimf, labels, colors):
ax.plot(series, color=color)
ax.set(xlim=(0, j))
ax.set_ylabel(label, rotation=0, fontsize=14, labelpad=10)
ax.grid()
axes[0].set_title('Impulse Response Functions')
axes[-1].set_xlabel('Iteration')
return fig
def multipliers(self, j=5):
x, y = self.impulse_response(j)
return np.sum(np.array(y).flatten().reshape(j+1, 3), axis=0)
samlss = SamuelsonLSS()
samlss.plot_simulation(100, stationary=False)
plt.show()
samlss.plot_simulation(100, stationary=True)
plt.show()
samlss.plot_irf(100)
plt.show()
samlss.multipliers()
pure_multiplier = SamuelsonLSS(alpha=0.95, beta=0)
pure_multiplier.plot_simulation()
pure_multiplier = SamuelsonLSS(alpha=0.8, beta=0)
pure_multiplier.plot_simulation()
pure_multiplier.plot_irf(100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OOP in Action
Step3: Explanation of the graph
Step5: none
Step7: Manual or “by hand” root calculations
Step9: none
Step10: none
Step11: none
Step13: none
Step14: none
Step15: none
Step16: Digression
Step17: <table width=100%><tr style='background-color
Step19: <table width=100%><tr style='background-color
Step20: none
Step22: none
Step23: A permanent government spending shock can be simulated as follows
Step24: none
Step26: none
Step27: Illustration of Samuelson class
Step28: ```none
Step29: Using the graph
Step31: Using the LinearStateSpace class
Step32: Other methods in the LinearStateSpace class
Step33: <table width=100%><tr style='background-color
Step35: none
Step36: Illustrations
Step37:
Step38:
Step39:
Step40: none
Step41: none
Step42:
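The "by hand" root calculations mentioned above can be sketched numerically: with $\rho_1 = \alpha + \beta$ and $\rho_2 = -\beta$, the law of motion has characteristic polynomial $\lambda^2 - \rho_1 \lambda - \rho_2$, and the moduli of its roots govern whether oscillations damp out. A minimal check with the $\alpha=0.8$, $\beta=0.9$ values used in the notebook:

```python
import numpy as np

alpha, beta = 0.8, 0.9              # parameter values used in the notebook
rho1, rho2 = alpha + beta, -beta

# Roots of the characteristic polynomial lambda^2 - rho1*lambda - rho2 = 0
roots = np.roots([1, -rho1, -rho2])
print(roots)

# For a complex pair the product of the roots equals beta, so the common
# modulus is sqrt(beta); |lambda| < 1 means the oscillations die out.
print(np.abs(roots))
```

With these parameters the roots are a complex conjugate pair of modulus $\sqrt{0.9} \approx 0.95$, i.e. damped oscillations.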
|
7,934
|
<ASSISTANT_TASK:>
Python Code:
from nltk.corpus import propbank
pb_instances = propbank.instances()
print(pb_instances)
inst = pb_instances[103]
print("File ID:", inst.fileid)
print("Sentence Number:", inst.sentnum)
print("Word Number:", inst.wordnum)
inst.tagger
inst.inflection
infl = inst.inflection
infl.form, infl.tense, infl.aspect, infl.person, infl.voice
inst.roleset
inst.predicate
inst.arguments
print(inst.predicate.wordnum, inst.predicate.height)
tree = inst.tree
from nltk.corpus import treebank
assert tree == treebank.parsed_sents(inst.fileid)[inst.sentnum]
inst.predicate.select(tree)
for (argloc, argid) in inst.arguments:
print('%-10s %s' % (argid, argloc.select(tree).pformat(500)[:50]))
treepos = inst.predicate.treepos(tree)
print (treepos, tree[treepos])
inst = pb_instances[6]
inst.roleset
argloc, argid = inst.arguments[2]
argloc
argloc.pieces
argloc.pieces[0].pieces
print(argloc.select(inst.tree))
expose_01 = propbank.roleset('expose.01')
turn_01 = propbank.roleset('turn.01')
print(turn_01)
for role in turn_01.findall("roles/role"):
print(role.attrib['n'], role.attrib['descr'])
from xml.etree import ElementTree
print(ElementTree.tostring(turn_01.find('example')).decode('utf8').strip())
inst = pb_instances[9352]
inst.fileid
print(inst.tree)
print(inst.predicate.select(inst.tree))
inst = pb_instances[9353]
inst.fileid
print(inst.tree)
print(inst.predicate.select(inst.tree))
propbank.verbs()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each propbank instance defines the following member variables
Step2: The location of the predicate and of the arguments are encoded using PropbankTreePointer objects, as well as PropbankChainTreePointer objects and PropbankSplitTreePointer objects. A PropbankTreePointer consists of a wordnum and a height
Step3: This identifies the tree constituent that is headed by the word that is the wordnum'th token in the sentence, and whose span is found by going height nodes up in the tree. This type of pointer is only useful if we also have the corresponding tree structure, since it includes empty elements such as traces in the word number count. The trees for 10% of the standard PropBank Corpus are contained in the treebank corpus
Step4: Propbank tree pointers can be converted to standard tree locations, which are usually easier to work with, using the treepos() method
Step5: In some cases, argument locations will be encoded using PropbankChainTreePointers (for trace chains) or PropbankSplitTreePointers (for discontinuous constituents). Both of these objects contain a single member variable, pieces, containing a list of the constituent pieces. They also define the method select(), which will return a tree containing all the elements of the argument. (A new head node is created, labeled "CHAIN" or "SPLIT", since the argument is not a single constituent in the original tree). Sentence #6 contains an example of an argument that is both discontinuous and contains a chain
Step6: The PropBank Corpus also provides access to the frameset files, which define the argument labels used by the annotations, on a per-verb basis. Each frameset file contains one or more predicates, such as 'turn' or 'turn_on', each of which is divided into coarse-grained word senses called rolesets. For each roleset, the frameset file provides descriptions of the argument roles, along with examples.
Step7: Note that the standard corpus distribution only contains 10% of the treebank, so the parse trees are not available for instances starting at 9353
Step8: However, if you supply your own version of the treebank corpus (by putting it before the nltk-provided version on nltk.data.path, or by creating a ptb directory as described above and using the propbank_ptb module), then you can access the trees for all instances.
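The treepos idea above — a tuple of child indices walked down the parse tree — can be illustrated without the corpus. The toy tree and `select` helper below are made up for illustration and are not nltk's API; they just mimic the semantics:

```python
# Toy parse tree: each node is [label, child, child, ...]
tree = ['S',
        ['NP', ['DT', 'the'], ['NN', 'cat']],
        ['VP', ['VBD', 'sat']]]

def select(tree, treepos):
    """Follow a tuple of child indices down the tree (hypothetical helper)."""
    node = tree
    for i in treepos:
        node = node[i + 1]  # +1 because index 0 holds the node label
    return node

print(select(tree, (0,)))    # the NP subtree
print(select(tree, (0, 1)))  # ['NN', 'cat']
```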
|
7,935
|
<ASSISTANT_TASK:>
Python Code:
%gui
from PyQt5 import QtWidgets
b1 = QtWidgets.QPushButton("Click Me")
%gui qt5
from PyQt5 import QtWidgets
b1 = QtWidgets.QPushButton("Click Me")
b1.show()
def on_click_cb():
print("Clicked")
b1.clicked.connect(on_click_cb)
%connect_info
!jupyter kernel list
%qtconsole
b1.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A pop up will appear saying
Step2:
Step3: Now, if you click the button, the callback will be called.
Step4: Now the Jupyter QtConsole will be started by connecting to the ipython kernel that is already running.
|
7,936
|
<ASSISTANT_TASK:>
Python Code:
import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
print(heapq.nlargest(3, nums)) # Prints [42, 37, 23]
print(heapq.nsmallest(3, nums)) # Prints [-4, 1, 2]
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])
print(cheap)
print(expensive)
import os
file_dir = os.path.dirname(os.path.realpath('__file__'))
filename = os.path.abspath(os.path.join(file_dir, "..", "code/src/1/keeping_the_last_n_items/somefile.txt"))
!head $filename
from collections import deque
def search(lines, pattern, history=5):
previous_lines = deque(maxlen=history)
for line in lines:
if pattern in line:
yield line, previous_lines
previous_lines.append(line)
# Example use on a file
if __name__ == '__main__':
with open(filename) as f:
for line, prevlines in search(f, 'python', 5):
for pline in prevlines:
print(pline, end='')
print(line, end='')
print('-'*20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2
Step2: Example 2
|
7,937
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
def lin_regplot(X, y, model):
plt.scatter(X, y, c='blue')
plt.plot(X, model.predict(X), color='red')
return
X = np.array([ 1, 2, 3, 4, 5])[:, np.newaxis]
y = np.array([ 1, 2, 3, 4, 5])
ne_lr = LinearRegression(solver='normal_equation')
ne_lr.fit(X, y)
print('Intercept: %.2f' % ne_lr.w_[0])
print('Slope: %.2f' % ne_lr.w_[1])
lin_regplot(X, y, ne_lr)
gd_lr = LinearRegression(solver='gd', eta=0.005, epochs=1500, random_seed=0)
gd_lr.fit(X, y)
print('Intercept: %.2f' % gd_lr.w_[0])
print('Slope: %.2f' % gd_lr.w_[1])
plt.plot(range(1, gd_lr.epochs+1), gd_lr.cost_)
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.tight_layout()
plt.show()
lin_regplot(X, y, gd_lr)
X_std = (X - np.mean(X)) / X.std()
y_std = (y - np.mean(y)) / y.std()
gd_lr = LinearRegression(solver='gd', eta=0.1, epochs=10, random_seed=0)
gd_lr.fit(X_std, y_std)
print('Intercept: %.2f' % gd_lr.w_[0])
print('Slope: %.2f' % gd_lr.w_[1])
plt.plot(range(1, gd_lr.epochs+1), gd_lr.cost_)
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.tight_layout()
plt.show()
lin_regplot(X_std, y_std, gd_lr)
sgd_lr = LinearRegression(solver='sgd', eta=0.1, epochs=10, random_seed=0)
sgd_lr.fit(X_std, y_std)
print('Intercept: %.2f' % sgd_lr.w_[0])
print('Slope: %.2f' % sgd_lr.w_[1])
plt.plot(range(1, sgd_lr.epochs+1), sgd_lr.cost_)
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.tight_layout()
plt.show()
lin_regplot(X, y, sgd_lr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br>
Step2: <br>
Step3: <br>
Step4: <br>
|
7,938
|
<ASSISTANT_TASK:>
Python Code:
MU = 3.9
N = int(10E4)
INITIAL = 0.5
MIN_SIZE = 2
MAX_SIZE = 26
BITS_RANGE = array(list(range(MIN_SIZE, MAX_SIZE + 1)))
def generate(x, mu, n):
current = x
for _ in range(n):
yield current
current = mu * current * (1 - current)
def bin_to_dec(sequence, bits):
aligned_sequence = sequence.flatten()[:sequence.size - sequence.size % bits]
binary_matrix = aligned_sequence.reshape((sequence.size // bits, bits))
result_sequence = array([binary_matrix[:,i] * (2**i) for i in range(binary_matrix.shape[1])])
return result_sequence.sum(axis=0)
def R(probabilities, beta):
return -log((probabilities[probabilities>0]**beta).sum()) / (beta - 1)
def H(probabilities):
indices = probabilities>0
return -(log(probabilities[indices]) * probabilities[indices]).sum()
def show_entropies(entropies, errors=None, count=None, labels=None):
for entropy, label in zip(entropies, labels):
if count is None:
plt.plot(BITS_RANGE, entropy, '-o', label=label)
else:
plt.plot(BITS_RANGE[:count], entropy, '-o', label=label)
if errors is not None:
plt.errorbar(**errors)
plt.legend()
plt.show()
source_sequence = array(list(generate(INITIAL, MU, N)))
bin_sequence = (source_sequence > 0.5).astype('i')
sequences = array([bin_to_dec(bin_sequence, bits_count).astype('i') for bits_count in BITS_RANGE])
probabilities = array([bincount(sequence)/sequence.size for sequence in sequences])
probabilities[1]
plt.plot(source_sequence[:100])
plt.show()
shannon_entropies = [H(p) for p in probabilities]
renyi_entropies2 = [R(p, 2) for p in probabilities]
renyi_entropies3 = [R(p, 3) for p in probabilities]
show_entropies([shannon_entropies, renyi_entropies2, renyi_entropies3], labels=['Shannon', 'Renyi (2)', 'Renyi (3)'])
means = array([shannon_entropies, renyi_entropies2, renyi_entropies3]).mean(axis=0)
show_entropies([shannon_entropies, renyi_entropies2, renyi_entropies3, means], labels=['Shannon', 'Renyi (2)', 'Renyi (3)', 'Mean'])
shannon_entropies_chaos = [n*log(2) for n in BITS_RANGE]
renyi_entropies2_chaos = [-(n*log(2)-2*n*log(2)) for n in BITS_RANGE]
renyi_entropies3_chaos = [-(n*log(2)-3*n*log(2))/2 for n in BITS_RANGE]
show_entropies([shannon_entropies, renyi_entropies2, renyi_entropies3, means, shannon_entropies_chaos], labels=['Shannon', 'Renyi (2)', 'Renyi (3)', 'Mean', 'Uniform entropy'])
standard_deviation = array([shannon_entropies, renyi_entropies2, renyi_entropies3]).std(axis=0)
show_entropies([shannon_entropies, renyi_entropies2, renyi_entropies3], labels=['Shannon', 'Renyi (2)', 'Renyi (3)'], errors={'x': BITS_RANGE, 'y': means, 'yerr': standard_deviation, 'label': 'Mean'})
count = standard_deviation.argmax()
a, b = lstsq(vstack([BITS_RANGE[:count], ones(count)]).T, means[:count])[0]
approximation_error = array([means[:count], a*BITS_RANGE[:count]+b]).std(axis=0)
show_entropies([shannon_entropies[:count], renyi_entropies2[:count], renyi_entropies3[:count], means[:count]], labels=['Shannon', 'Renyi (2)', 'Renyi (3)', 'Mean'], errors={'x': BITS_RANGE[:count], 'y': a*BITS_RANGE[:count]+b, 'yerr': approximation_error, 'label': 'Linear approximation'}, count=count)
show_entropies([shannon_entropies_chaos[:count], means[:count]], labels=['Uniform entropy', 'Mean'], errors={'x': BITS_RANGE[:count], 'y': a*BITS_RANGE[:count]+b, 'yerr': approximation_error, 'label': 'Linear approximation'}, count=count)
print('Source entropy estimate is', a)
print('Chaotic entropy is', log(2))
(array(shannon_entropies[1:]) - shannon_entropies[:-1])[:count][:6]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate a sequence
Step2: Entropies
Step3: Now we have mappings from binary block size to entropies of sequences
Step4: Now we will calculate average entropy for each block size
Step5: Get same entropies for chaotic sequence
Step6: To get only needed numbers we need to calculate standard deviations
Step7: Analysis
Step8: Results
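As a sanity check on the `H` and `R` functions defined above: for a uniform distribution both entropies coincide at $\log$ (number of outcomes), which is exactly the "uniform entropy" line ($n \log 2$ for $n$-bit blocks) used for comparison. A hand-computed example over 4 outcomes:

```python
import numpy as np

p = np.full(4, 0.25)                        # uniform distribution over 4 outcomes
shannon = -(p * np.log(p)).sum()            # H(p)
renyi2 = -np.log((p ** 2).sum()) / (2 - 1)  # R(p, beta=2)

print(shannon, renyi2)  # both equal log(4) = 2*log(2)
```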
|
7,939
|
<ASSISTANT_TASK:>
Python Code:
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag', n_iter=100,
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type,
n_iter=n_iter)
mm.fit(X)  # fit on the data passed in, not the global X_pos_qid
Y = mm.predict(X)
plt.subplot(len(models), 1, 1 + i)
for j, color in enumerate(color_iter):  # j avoids shadowing the model index i
plt.scatter(X[Y == j, 0], X[Y == j, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag', n_iter=10**10)
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag', n_iter=10**10)
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', [str(x) for x in pred_cat_uid])
questions.sub_append('cat_qid', [str(x) for x in pred_cat_qid])
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
print(users[1])
print(questions[1])
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
categories = Categories(load_buzz())
for item in X_train:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
X_train
import nltk
def extract_entities(text, all=True, verbose=False):
count = 0
for sent in nltk.sent_tokenize(text):
for chunk in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent))):
if all:
if verbose: print(chunk)
if type(chunk) is nltk.tree.Tree:
count += 1
if verbose: print(chunk.label(), ' '.join(c[0] for c in chunk.leaves()))
elif chunk[1] == 'CD':
count += 1
if verbose: print('CD', chunk[0])
return count
from collections import defaultdict
ne_count = defaultdict(int)
for key in questions:
ne_count[key] = extract_entities(questions[key]['question'], all=False, verbose=False)
import pickle
with open('ne_count01.pkl', 'wb') as f:
pickle.dump(ne_count, f)
def transform(X):
for index, item in enumerate(X):
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['cat_uid'] = users[uid]['cat_uid']
else:
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['cat_qid'] = questions[qid]['cat_qid']
item['ne_count'] = ne_count[qid]
else:
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
item['cat_qid'] = most_pred_cat_qid
item['uid'] = str(uid)
item['qid'] = str(qid)
transform(X_train)
X_train[1]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
import math
from numpy import abs, sqrt
regressor_names = "ElasticNetCV"
#for l1 in [0.5, 0.2, 0.7, 0.9]:
for l1 in [0.5]:
print ("=== ElasticNetCV RMSE", "with", l1)
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(n_jobs=3, normalize=True, l1_ratio = l1),
X_train_dict_vec, y_train,
cv=2,
scoring='mean_squared_error'
)
print (regressor, sqrt(abs(scores)).mean())
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
for item in X_train:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
for item in X_test:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
for l1_ratio in [0.72, 0.7]:
print('=== l1_ratio:', l1_ratio)
regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True, l1_ratio=l1_ratio)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
write_result(load_buzz()['test'], predictions, file_name=str(l1_ratio)+'guess.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Right, now you can use those modules.
Step3: B. Modeling
Step4: n_iter=10
|
7,940
|
<ASSISTANT_TASK:>
Python Code:
couleurs = ["rouge", "orange", "jaune", "vert", "bleu", "indigo", "violet"]
tailles = ["page", "homme", "demi patron", "patron", "grand patron"]
[(couleur, taille) for couleur in couleurs for taille in tailles]
couleurs_et_tailles = ((couleur, taille) for couleur in couleurs for taille in tailles)
couleurs_et_tailles
couleurs_et_tailles = ((couleur, taille) for couleur in couleurs for taille in tailles)
couleurs_et_tailles
for article in couleurs_et_tailles:
print(article)
jours = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"]
all_slice = slice(None, None, None)
print(jours[all_slice])
every_other_slice = slice(0, None, 2)
print(jours[every_other_slice])
reverse_slice = slice(None, None, -1)
print(jours[reverse_slice])
jours = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"] # already ordered
print(sorted(jours))
print(sorted(jours, reverse=True))
print(sorted(jours, key=len, reverse=True)) # by length of string, descend
print(sorted(jours, reverse=True, key=lambda x: jours.index(x))) # by actual order of week, descending
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, it doesn't generate the complete list the way the listcomp above does. Genexps don't build the entire list in memory; you iterate over them and receive the items one by one, so the full list is never stored.
Step2: Next, tuples (which are more than just immutable lists) and slicing. More examples
Step3: Next, the chapter moves on to list.sort() vs. sorted(). I kinda like sorted because it works with everything including generators, and it returns a new list, whereas list.sort() is more limited: it sorts in place and returns None.
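The lazy evaluation described in Step 1 can be seen directly by pulling values with next(); even an enormous range costs nothing until you actually iterate:

```python
squares = (n * n for n in range(10**12))  # nothing is computed yet
print(next(squares))  # 0
print(next(squares))  # 1
print(next(squares))  # 4 -- each value is produced only on demand
```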
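And the sorted() vs. list.sort() contrast from Step 3 in code: sorted() returns a new list (and accepts any iterable, including generators), while list.sort() mutates the list in place and returns None:

```python
jours_fr = ['mardi', 'lundi', 'dimanche']
print(sorted(jours_fr))   # new list: ['dimanche', 'lundi', 'mardi']
print(jours_fr)           # original list unchanged
print(jours_fr.sort())    # None -- the sort happened in place
print(jours_fr)           # ['dimanche', 'lundi', 'mardi']
print(sorted(d for d in ('b', 'a')))  # sorted() also works on generators
```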
|
7,941
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("JXJQYpgFAyc",width=640,height=360) # Numerical integration
# Put your code here
import math
Nstep = 10
begin = 0.0
end = 3.1415926
dx = (end-begin)/Nstep
sum = 0.0
xpos = 0.0
for i in range(Nstep):
thisval = math.sin(xpos)*dx
sum += thisval
xpos += dx
error = abs(sum-2.0)/2.0
print("for dx = {0:3f} we get an answer of {1:3f} and a fractional error of {2:4e}".format(dx,sum,error))
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("b0K8LiHyrBg",width=640,height=360) # Numerical differentiation
# Put your code here
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
def f(x):
return np.exp(-2.0*x)
def dfdx(x):
return -2.0*np.exp(-2.0*x)
x = np.linspace(-3.0,3.0, 100)
dx = 1.0e-2
deriv = (f(x+dx)-f(x-dx))/(2.0*dx)
error = np.abs((deriv-dfdx(x))/dfdx(x))
plt.plot(x,error)
print("the average fractional error is:", error.mean())
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/IMkGVL5XnxqZM8EP2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: Question 2
Step4: Question 3
|
7,942
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.RawFIF(data_path, preload=True, verbose=False)
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
print('Shape of data array:', raw._data.shape)
array_data = raw._data[0, :1000]
_ = plt.plot(array_data)
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
# Pull all MEG gradiometer channels:
# Make sure to use copy==True or it will overwrite the data
meg_only = raw.pick_types(meg=True, copy=True)
eeg_only = raw.pick_types(meg=False, eeg=True, copy=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.pick_types(meg='grad', copy=True)
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.pick_channels(pick_chans, copy=True)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
restricted = raw.crop(5, 7) # in seconds
print('New time range from', restricted.times.min(), 's to',
restricted.times.max(), 's')
restricted = restricted.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', raw.info['nchan'], 'to',
restricted.info['nchan'])
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 100)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Continuous data is stored in objects of type
Step2: Information about the channels contained in the
Step3: You can also pass an index directly to the
Step4: Selecting subsets of channels and samples
Step5: Notice the different scalings of these types
Step6: You can restrict the data to a specific time range
Step7: And drop channels by name
Step8: Concatenating
|
7,943
|
<ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
import requests
r = requests.get('https://en-marche.fr/emmanuel-macron/le-programme')
soup = BeautifulSoup(r.text, 'html.parser')
proposals = soup.find_all(class_='programme__proposal')
proposals = [p for p in proposals if 'programme__proposal--category' not in p.attrs['class']]
len(proposals)
p = proposals[0]
full_url = 'https://en-marche.fr' + p.find('a').attrs['href']
full_url
full_urls = ['https://en-marche.fr' + p.find('a').attrs['href'] for p in proposals]
full_urls[:10]
r = requests.get(full_url)
soup = BeautifulSoup(r.text, 'html.parser')
figure_tag = soup.find('figure', class_='fullscreen')
figure_tag
src_url = 'https://en-marche.fr' + figure_tag('img')[0].attrs['src']
src_url
from IPython.display import Image
Image(url=src_url)
def extract_img_src(url):
"Extracts image src url from linked page."
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
figure_tag = soup.find('figure', class_='fullscreen')
if figure_tag is not None and figure_tag('img') is not None:
src_url = 'https://en-marche.fr' + figure_tag('img')[0].attrs['src']
return src_url
else:
print("no image for url: {}".format(url))
return None
srcs = [extract_img_src(url) for url in full_urls]
srcs = [_ for _ in srcs if _ is not None]
header = """<!doctype html>
<html lang="fr">
<head>
<meta charset="utf-8">
<title>Gallerie des photos du site d'Emmanuel Macron</title>
<style>
img {width: 100%;}
</style>
</head>
"""
def format_as_img_tag(src):
return "<img src={} />".format(src)
format_as_img_tag(srcs[2])
with open('galerie_macron.html', 'w') as f:
body = """<body>
{0}
</body>""".format("\n".join(format_as_img_tag(url) for url in srcs))
html = header + body + "</html>"
f.write(html)
r = requests.get('https://www.fillon2017.fr/projet/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('a', class_='projectItem__inner')
sublinks = [tag.attrs['href'] for tag in tags]
sublinks[0]
r = requests.get(sublinks[0])
soup = BeautifulSoup(r.text, 'html.parser')
src = soup.find('div', class_='singleProject__banner bannerWithMask backgroundCover').attrs['style'].split("background-image: url(")[1][1:-3]
def extract_img_src(url):
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
src = soup.find('div', class_='singleProject__banner bannerWithMask backgroundCover').attrs['style'].split("background-image: url(")[1][1:-3]
return src
srcs = [extract_img_src(url) for url in sublinks]
srcs
with open('galerie_fillon.html', 'w') as f:
body = """<body>
{0}
</body>""".format("\n".join(format_as_img_tag(url) for url in srcs))
html = header + body + "</html>"
f.write(html)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now extract the link to the image.
Step2: We can display this in the notebook.
Step5: We can repeat this process and build a gallery with all these images.
Step6: These are beautiful photos...
Step8: Now we tackle the individual pages.
|
7,944
|
<ASSISTANT_TASK:>
Python Code:
# Importing pandas to read CSV file
import pandas as pd
# Read breast cancer csv file to pandas data frame data
data = pd.read_csv('wisconsin_breast_cancer.csv')
# Display the first 5 rows of the csv file
data.head()
data.shape # It is always a good idea to understand your data
# There are 699 rows and 11 columns in this CSV file
data.isnull().sum() # Find out how many cells have missing values
# Looks like the field nucleoli has 16 missing values
data=data.dropna(how='any') # Dropping any rows that has missing values
# previously we had 699 rows. The above code deleted 16 rows with missing values and now we have 683 rows
#creating feature data set
# Here we are going to select feature fields and assign them to a pandas data frame x. We do not need Id, so it was dropped
x=data[['thickness','size','shape','adhesion','single','nuclei','chromatin','nucleoli','mitosis']]
x.head() # printing the first 5 rows to see whether we got all the features
# Now we are going to extract the 'class' field from 'data' and store it in variable y
# This is the variable that we want to predict 0= no cancer 1 = cancer
y=data['class']
y.head()
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
# train logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(x_train,y_train)
y_pred_class=logreg.predict(x_test) # make predictions based on x_test and store it to y_pred_class
# Now let us see how our model is performing.
# We will start with accuracy
from sklearn import metrics
print metrics.accuracy_score(y_test, y_pred_class)
## Not bad at all. 92.9% of the time our model was able to identify breast cancer based on the training data
# Now let us see how our "intelligent" model compares to a dumb model which predicts "0" or "not cancer" all the time
# because it is the most prevalent class
y_test.value_counts() # as you see "0" is more prevalent
# Now let us see the percentage of "0" in y_test. This will be the accuracy of a dumb model that predicts 0
# all the time
1-y_test.mean()
# looks like we did better than the dumb model!
print metrics.confusion_matrix(y_test, y_pred_class)
# let us see what this means
#
# Predicted 0 Predicted 1 Total
#
#Actual 0 103 4 107
#Actual 1 8 56 64
#Total 111 60
confusion =metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
# Let us see the sensitivity of our logistic regression model
print TP / float(TP+FN)
# Our model's sensitivity is 87.5%
# Let us calculate specificity
print TN / float(TN+FP)
# Looks like our model has pretty good specificity 96.2%
# Calculate false postive rate - predicting cancer when pt does not have cancer
print FP/ float(TN+FP)
# pretty awesome 3.7%
# precison - when it is predicting cancer how precise is it
# positive predictive value
print TP / float(TP+FP)
# 93.3% of the time
# Negative predictive value
print TN / float(TN+ FN)
# Now let us say that you want to figure out the probability of cancer from a set of features
# we can use the predict_proba function
# let us first see the predicted answers
logreg.predict(x_test)[0:10] # predicting cancer based on the data from first 10 rows of x
# Now let us predict the probability of each prediction
logreg.predict_proba(x_test)[0:10, :]
# The first column is the probability of it being benign. Second column is the probability of it being cancerous
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import random
%matplotlib inline
# calculates the probability of predicting "1" (cancer) and stores the output in proba_cancer
proba_cancer=logreg.predict_proba(x_test)[:,1]
# we need the actual values in the cancer column and the predicted probabilities of the positive value "1"
roc_auc_score(y_test, proba_cancer)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, proba_cancer)
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',
label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
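As a self-contained sanity check on the confusion-matrix arithmetic above, a short sketch that recomputes sensitivity, specificity and precision directly from the counts printed earlier (TN=103, FP=4, FN=8, TP=56):

```python
# Recompute the metrics from the confusion-matrix counts shown above
TN, FP, FN, TP = 103, 4, 8, 56
sensitivity = TP / float(TP + FN)   # 0.875  -> matches the 87.5% above
specificity = TN / float(TN + FP)   # ~0.962 -> matches the 96.2% above
precision = TP / float(TP + FP)     # ~0.933 -> matches the 93.3% above
print(sensitivity, specificity, precision)
```

The values agree with the ones computed from the fitted model earlier in the notebook.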
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Delete rows with missing data
Step2: Getting ready to do classification
Step3: Now let us create a confusion matrix to identify sensitivity specificity & all the other good statistical stuff
Step4: Now let us find out the area under the curve.
|
7,945
|
<ASSISTANT_TASK:>
Python Code:
MAX = 10005
MOD = 1000000007
def countNum(idx , sum , tight , num , len1 , k ) :
if(len1 == idx ) :
if(sum == 0 ) :
return 1
else :
return 0
if(dp[idx ][sum ][tight ] != - 1 ) :
return dp[idx ][sum ][tight ]
res = 0
if(tight == 0 ) :
limit = num[idx ]
else :
limit = 9
for i in range(limit + 1 ) :
new_tight = tight
if(tight == 0 and i < limit ) :
new_tight = 1
res += countNum(idx + 1 ,(sum + i ) % k , new_tight , num , len1 , k )
res %= MOD
if(res < 0 ) :
res += MOD
dp[idx ][sum ][tight ] = res
return dp[idx ][sum ][tight ]
def process(s ) :
num =[]
for i in range(len(s ) ) :
num . append(ord(s[i ] ) - ord('0' ) )
return num
if __name__ == '__main__' :
n = "98765432109876543210"
len1 = len(n )
k = 58
dp =[[[- 1 for i in range(2 ) ] for j in range(101 ) ] for k in range(MAX ) ]
num = process(n )
print(countNum(0 , 0 , 0 , num , len1 , k ) )
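The digit DP above counts the numbers in [0, n] whose digit sum is divisible by k. A brute-force version (a hypothetical helper, not part of the original) can cross-check the idea on small inputs:

```python
def brute_count(n, k):
    # Count x in [0, n] whose digit sum is divisible by k
    return sum(1 for x in range(n + 1)
               if sum(int(d) for d in str(x)) % k == 0)

# For k = 3 the digit sum is divisible by 3 exactly when the number is
print(brute_count(100, 3))  # -> 34 (the numbers 0, 3, 6, ..., 99)
```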
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,946
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/Users/c242587/Desktop/projects/git/ngboost')
from ngboost import NGBRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X, Y = load_boston(True)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
ngb = NGBRegressor().fit(X_train, Y_train)
Y_preds = ngb.predict(X_test)
Y_dists = ngb.pred_dist(X_test)
# test Mean Squared Error
test_MSE = mean_squared_error(Y_preds, Y_test)
print('Test MSE', test_MSE)
# test Negative Log Likelihood
test_NLL = -Y_dists.logpdf(Y_test).mean()
print('Test NLL', test_NLL)
Y_dists[0:5].params
from ngboost.distns import Exponential, Normal
X, Y = load_boston(True)
X_reg_train, X_reg_test, Y_reg_train, Y_reg_test = train_test_split(X, Y, test_size=0.2)
ngb_norm = NGBRegressor(Dist=Normal, verbose=False).fit(X_reg_train, Y_reg_train)
ngb_exp = NGBRegressor(Dist=Exponential, verbose=False).fit(X_reg_train, Y_reg_train)
ngb_norm.predict(X_reg_test)[0:5]
ngb_exp.predict(X_reg_test)[0:5]
ngb_exp.pred_dist(X_reg_test)[0:5].params
import numpy as np
from ngboost import NGBSurvival
from ngboost.distns import LogNormal
X, Y = load_boston(True)
X_surv_train, X_surv_test, Y_surv_train, Y_surv_test = train_test_split(X, Y, test_size=0.2)
# introduce administrative censoring to simulate survival data
T_surv_train = np.minimum(Y_surv_train, 30) # time of an event or censoring
E_surv_train = Y_surv_train > 30 # 1 if T[i] is the time of an event, 0 if it's a time of censoring
ngb = NGBSurvival(Dist=LogNormal).fit(X_surv_train, T_surv_train, E_surv_train)
from ngboost import NGBClassifier
from ngboost.distns import k_categorical, Bernoulli
from sklearn.datasets import load_breast_cancer
X, y = load_breast_cancer(True)
y[0:15] = 2 # artificially make this a 3-class problem instead of a 2-class problem
X_cls_train, X_cls_test, Y_cls_train, Y_cls_test = train_test_split(X, y, test_size=0.2)
ngb_cat = NGBClassifier(Dist=k_categorical(3), verbose=False) # tell ngboost that there are 3 possible outcomes
_ = ngb_cat.fit(X_cls_train, Y_cls_train) # Y should have only 3 values: {0,1,2}
ngb_cat.predict(X_cls_test)[0:5]
ngb_cat.predict_proba(X_cls_test)[0:5]
ngb_cat.pred_dist(X_cls_test)[0:5].params
from ngboost.scores import LogScore, CRPScore
NGBRegressor(Dist=Exponential, Score=CRPScore, verbose=False).fit(X_reg_train, Y_reg_train)
NGBClassifier(Dist=k_categorical(3), Score=LogScore, verbose=False).fit(X_cls_train, Y_cls_train)
from sklearn.tree import DecisionTreeRegressor
learner = DecisionTreeRegressor(criterion='friedman_mse', max_depth=5)
NGBSurvival(Dist=Exponential, Score=CRPScore, Base=learner, verbose=False).fit(X_surv_train, T_surv_train, E_surv_train)
ngb = NGBRegressor(n_estimators=100, learning_rate=0.01,
minibatch_frac=0.5, col_sample=0.5)
ngb.fit(X_reg_train, Y_reg_train)
ngb = NGBRegressor(n_estimators=100, learning_rate=0.01,
minibatch_frac=0.5, col_sample=0.5)
weights = np.random.random(Y_reg_train.shape)
ngb.fit(X_reg_train, Y_reg_train, sample_weight=weights)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the estimated distributional parameters at a set of points is easy. This returns the predicted mean and standard deviation of the first five observations in the test set
Step2: Distributions
Step3: There are two prediction methods for NGBRegressor objects
Step4: Survival Regression
Step5: The scores currently implemented assume that the censoring is independent of survival, conditional on the observed predictors.
Step6: When using NGBoost for classification, the outcome vector Y must consist only of integers from 0 to K-1, where K is the total number of classes. This is consistent with the classification standards in sklearn.
Step7: Scores
Step8: Base Learners
Step9: Other Arguments
Step10: Sample weights (for training) are set using the sample_weight argument to fit.
|
7,947
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
def clf_preds(accuracy, truth_vector):
# accuracy of classifier
# truth_vector is the actual value of the target
preds = []
    for i in range(len(truth_vector)):
        pred = np.random.choice([truth_vector[i], 1-truth_vector[i]], p=[accuracy, 1-accuracy])
preds.append(pred)
return preds
# generate num_clfs number of predictors -- an ensemble of predictors
def generate_clfs(num_clfs, accuracy, truth_vector):
clfs = [clf_preds(accuracy, truth_vector) for i in range(num_clfs)]
return clfs
# majority vote result for a prediction from each member of the ensemble
def majority_vote(pred_vector):
vote_result = stats.mode(pred_vector)[0][0]
return vote_result
def ensemble_accuracy(num_clfs, clf_accuracy, truth_vector):
# Generate the ensemble of classifiers and their predictions
clfs = generate_clfs(num_clfs, clf_accuracy, truth_vector)
# Each clf in clfs contains a list of predictions associated with a particular classifier
# Transpose those values to get the first set of predictions
# This set of predictions can then be voted on to find the majority result
preds = np.transpose(clfs)
majority_results = [majority_vote(pred) for pred in preds]
# Compare the majority_results with the truth_vector
correct = 0
for i in range(len(truth_vector)):
if majority_results[i] == truth_vector[i]:
correct += 1
ensemble_accuracy = correct/len(truth_vector)
return ensemble_accuracy
# The truth (training data) is as follows
truth = np.random.choice([0,1], 1000, p=[0.7, 0.3])
#truth
clfs = [clf_preds(0.55, truth) for i in range(3)]
np.transpose(clfs)
%time ensemble_accuracy(100, 0.51, truth)
# 100 classifiers, data has 1000 rows, each classifier's accuracy varies from
# 0 to 1
clf_accs = np.arange(0,1.02,0.02)
%time ensemble_accs = [ensemble_accuracy(100, x, truth) for x in clf_accs]
#ensemble_accs
fig = plt.figure(1, figsize=(12, 8))
# Create an axes instance
ax = fig.add_subplot(111)
plt.title('Ensemble Prediction Accuracy (100 Individual Models/Classifiers)')
plt.xlabel('Individual Classifier Prediction Accuracy')
plt.ylabel('Ensemble Prediction Accuracy')
ax.set_xticks(np.arange(0, 1.1, 0.05))
ax.set_yticks(np.arange(0, 1.1, 0.05))
plt.grid()
# Create the plot
plt.plot(clf_accs, ensemble_accs, marker='o');
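The simulated curve above can be compared against the closed-form binomial probability that a strict majority of n independent voters, each correct with probability p, gets the right answer. This is a sketch (the helper name is ours), and it sidesteps the tie-breaking subtlety that arises for an even number of voters:

```python
from math import comb  # Python 3.8+

def majority_prob(n, p):
    # P(more than half of n independent p-accurate voters are correct)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_prob(101, 0.51))  # noticeably better than 0.51
print(majority_prob(101, 0.60))  # close to 1
```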
# Toss 5 coins, each with 0.51 probability each of heads
outcomes = np.random.choice(['heads', 'tails'], 5, p=[0.51, 1-0.51])
majority_outcome = majority_vote(outcomes)
#print("Tosses: {}".format(outcomes))
print("Majority Value: {}".format(majority_outcome))
# Toss n coins and see if the majority of outcome is heads (1) or tails (0)
clf_accuracy = 0.51
num_clfs = 500
outcomes = np.random.choice([1,0], num_clfs, p=[clf_accuracy, 1-clf_accuracy])
# Check majority value as number of classifiers increases
majority_vals = []
outcome_ratios = []
for i in range(1, num_clfs+1):
#print(outcomes[:i])
outcome_subset = outcomes[:i]
majority = majority_vote(outcome_subset)
majority_vals.append(majority)
# number of 1s (i.e., heads)
num_heads = list(outcome_subset).count(1)
outcome_ratio = num_heads/len(outcome_subset)
outcome_ratios.append(outcome_ratio)
#outcomes, majority_vals, outcome_ratios
fig = plt.figure(1, figsize=(12, 8))
# Create an axes instance
ax = fig.add_subplot(111)
plt.title('Ensemble Majority Value as Number of Classifiers Change')
plt.xlabel('Number of Classifiers')
plt.ylabel('Ensemble Majority Value')
#ax.set_xticks(np.arange(0, 1.1, 0.05))
#ax.set_yticks(np.arange(0, 1.1, 0.05))
plt.ylim([0,1.2])
plt.grid()
# Create the plot
plt.plot(range(1, num_clfs+1), majority_vals);
fig = plt.figure(1, figsize=(12, 8))
# Create an axes instance
ax = fig.add_subplot(111)
plt.title('Ratio of Heads as Number of Classifiers in the Ensemble Increases')
plt.xlabel('Number of Classifiers')
plt.ylabel('Ratio of Heads')
#ax.set_xticks(np.arange(0, 1.1, 0.05))
ax.set_yticks(np.arange(0, 1.1, 0.1))
plt.ylim([0,1.2])
#plt.xlim([0,100])
plt.grid()
# Create the plot
plt.plot(range(1, num_clfs+1), outcome_ratios);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Power of Ensembles
Step2: The accuracy of a majority-vote system from an ensemble of weak classifiers is a big improvement on the accuracy of any individual classifier in the ensemble.
Step3: When individual classifier accuracy is lower than 0.5 -- i.e., when the classifiers are worse than randomly predicting a result, the ensemble accuracy is a lot lower than the accuracy of the individual classifiers.
Step4: The majority value jumps around even when the number of coins tossed is high.
|
7,948
|
<ASSISTANT_TASK:>
Python Code:
def caps(val):
    """caps returns double the value of the provided value"""
    return val*2
a = caps("TEST ")
print(a)
print(caps.__doc__)
a = caps(1234)
print(a)
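Note that caps(1234) also "works": since `*` is defined for both strings and numbers, the function doubles whatever supports it — a small illustration of duck typing:

```python
# `*` means repetition for strings and multiplication for numbers
print("TEST " * 2)   # 'TEST TEST '
print(1234 * 2)      # 2468
```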
def is_valid(data):
if 10 in data:
return True
return False
a = is_valid([10, 200, 33, "asf"])
print(a)
a = is_valid((10,))
print(a)
is_valid((10,))
a = is_valid((110,))
print(a)
def is_valid_new(data):
return 10 in data
print(is_valid_new([10, 200, 33, "asf"]))
a = is_valid_new((110,))
print(a)
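The `in` operator used by is_valid_new works uniformly across the built-in containers, which is why the simplified version behaves the same for lists and tuples:

```python
print(10 in [10, 200])   # True
print(10 in (10,))       # True
print(10 in {10, 20})    # True  (set membership)
print(10 in {10: 'x'})   # True  (dicts test their keys)
print(10 in (110,))      # False
```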
def fatorial(n):#{
n = n if n > 1 else 1
j = 1
for i in range(1, n + 1):
j = j * i
return j
#}
# Testing...
for i in range(1, 6):
print (i, '->', fatorial(i))
def factorial(num):
    """Factorial implemented with recursion."""
    if num <= 1:
return 1
else:
return(num * factorial(num - 1))
# Testing factorial()
print (factorial(5))
# 5 * (4 * (3 * (2) * (1))
def fib(n):
    """Fibonacci:
    fib(n) = fib(n - 1) + fib(n - 2) if n > 1
    fib(n) = 1 if n <= 1
    """
    if n > 1:
        return fib(n - 1) + fib(n - 2)
    else:
        return 1
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
def fib(n):
# the first two values
l = [1, 1]
# Calculating the others
for i in range(2, n + 1):
l.append(l[i -1] + l[i - 2])
return l[n]
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
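A third option keeps the recursive definition but memoizes intermediate results, which removes the exponential blow-up of the plain recursion — a sketch using functools.lru_cache (the name fib_cached is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    return 1 if n <= 1 else fib_cached(n - 1) + fib_cached(n - 2)

for i in [1, 2, 3, 4, 5]:
    print(i, '=>', fib_cached(i))
```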
def test(a, b):
print(a, b)
return a + b
print(test(1, 2))
test(b=1, a=2)
def test_abc(a, b, c):
print(a, b, c)
return a + b + c
test_abc(2, c=3, b=2)
test_abc(2, b=2, c=3)
try:
test_abc(2, a=12, c=3)
except Exception as e:
print(e)
def test_new(a, b, c):
pass
def test(a, b):
print(a, b)
return a*a, b*b
x, a = test(2, 5)
print(x)
print(type(x))
print(a)
print(type(a))
print(type(test(2, 5)))
def test(a, b):
print(a, b)
return a*a, b*b, a*b
x = test(2 , 5)
print(x)
print(type(x))
def test(a, b):
print(a, b)
return a*a, b*b, "asdf"
x = test(2, 5)
print(x)
print(type(x))
def test(a=100, b=1000):
print(a, b)
return a, b
x = test(2, 5)
print(x)
print(test(10))
def test(a=100, b=1000):
print(a, b)
return a, b
print(test(b=10))
print(test(101))
def test(d, c, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
def rgb_html(r=0, g=0, b=0):
    """Converts R, G, B to #RRGGBB"""
    return '#%02x%02x%02x' % (r, g, b)
def html_rgb(color='#000000'):
    """Converts #RRGGBB to R, G, B"""
    if color.startswith('#'): color = color[1:]
r = int(color[:2], 16)
g = int(color[2:4], 16)
b = int(color[4:], 16)
return r, g, b
print (rgb_html(200, 200, 255))
print (rgb_html(b=200, g=200, r=255))
print (html_rgb('#c8c8ff'))
def test(c, d, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
# *args - arguments without name (list)
# **kargs - arguments with name (ditcionary)
def func(*args, **kargs):
print (args)
print (kargs)
func('weigh', 10, unit='k')
def func(*args, **kargs):
    print (args)
    print (kargs)
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
func('weigh', 10, unit='k', val=a)
def func(*args):
print(args)
func('weigh', 10, "test")
data = [(4, 3), (5, 1), (7, 2), (9, 0)]
# Comparing by the last element
def _cmp(x, y):
    # cmp() was removed in Python 3; emulate it
    return (x[-1] > y[-1]) - (x[-1] < y[-1])
print ('List:', data)
print (eval('12. / 2 + 3.3'))
def listing(lst):
for l in lst:
print(l)
d = {"Mayank Johri":40, "Janki Mohan Johri":68}
listing(d)
d = {
"name": "Mohan",
"age": 24
}
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
def process_dict(d=a):
print(d)
process_dict(d)
process_dict()
def test(a=[]):
a.append(1)
print(a)
test()
test()
test()
def test(a=None):
if a == None:
a = []
a.append(1)
print(a)
test()
test()
test()
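The reason the first version accumulates values is that the default list is created once, at definition time, and stored on the function object itself; it can be inspected directly:

```python
def grow(a=[]):
    a.append(1)
    return a

grow()
grow()
print(grow.__defaults__)   # ([1, 1],) -- the single shared list
```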
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions
Step3: In the above example, we have caps as function, which takes val as argument and returns val * 2.
Step4: Functions can return any data type, next example returns a boolean value.
Step5: Example (factorial without recursion)
Step7: Example (factorial with recursion)
Step9: Example (Fibonacci series with recursion)
Step10: Example (Fibonacci series without recursion)
Step11: python
Step12: Functions can also not return anything like in the below example
Step13: Functions can also return multiple values, usually in form of tuple.
Step16: Example (RGB conversion)
Step17: Note
Step18: Observations
Step19: In the example, kargs will receive the named arguments and args will receive the others.
Step20: Python also has a builtin function eval(), which evaluates code (source or object) and returns the value.
|
7,949
|
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.13'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_runner
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
required = True
)
parser.add_argument(
'--output_dir',
help = 'GCS location to write checkpoints and export models',
required = True
)
parser.add_argument(
'--batch_size',
help = 'Number of examples to compute gradient over.',
type = int,
default = 512
)
parser.add_argument(
'--job-dir',
help = 'this model ignores this field, but it is required by gcloud',
default = 'junk'
)
parser.add_argument(
'--nnsize',
help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
nargs = '+',
type = int,
default=[128, 32, 4]
)
parser.add_argument(
'--nembeds',
help = 'Embedding size of a cross of n key real-valued parameters',
type = int,
default = 3
)
## TODOs after this line
################################################################################
    ## TODO 1: add the new arguments here
    parser.add_argument(
        '--train_examples',
        help = 'Number of examples (in thousands) to train on',
        type = int,
        default = 5000
    )
    parser.add_argument(
        '--pattern',
        help = 'Specify a pattern that has to be in input files, e.g. 00001-of to process a single shard',
        default = 'of'
    )
    parser.add_argument(
        '--eval_steps',
        help = 'Number of steps to evaluate for; None evaluates until end-of-input',
        type = int,
        default = None
    )
## parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# unused args provided by service
arguments.pop('job_dir', None)
arguments.pop('job-dir', None)
## assign the arguments to the model variables
output_dir = arguments.pop('output_dir')
model.BUCKET = arguments.pop('bucket')
model.BATCH_SIZE = arguments.pop('batch_size')
model.TRAIN_STEPS = (arguments.pop('train_examples') * 1000) / model.BATCH_SIZE
model.EVAL_STEPS = arguments.pop('eval_steps')
print ("Will train for {} steps using batch_size={}".format(model.TRAIN_STEPS, model.BATCH_SIZE))
model.PATTERN = arguments.pop('pattern')
model.NEMBEDS= arguments.pop('nembeds')
model.NNSIZE = arguments.pop('nnsize')
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
# Run the training job
model.train_and_evaluate(output_dir)
%%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(prefix, mode, batch_size):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Use prefix to create file path
file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, prefix, PATTERN)
# Create list of files that match pattern
file_list = tf.gfile.Glob(file_path)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
# Define feature columns
def get_wide_deep():
# Define column types
is_male,mother_age,plurality,gestation_weeks = \
[\
tf.feature_column.categorical_column_with_vocabulary_list('is_male',
['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
tf.feature_column.categorical_column_with_vocabulary_list('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
# Discretize
age_buckets = tf.feature_column.bucketized_column(mother_age,
boundaries=np.arange(15,45,1).tolist())
gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
boundaries=np.arange(17,47,1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [is_male,
plurality,
age_buckets,
gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
embed = tf.feature_column.embedding_column(crossed, NEMBEDS)
# Continuous columns are deep, have a complex relationship with the output
deep = [mother_age,
gestation_weeks,
embed]
return wide, deep
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None]),
KEY_COLUMN: tf.placeholder_with_default(tf.constant(['nokey']), [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
pred_values = predictions['predictions']
return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
## TODOs after this line
################################################################################
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300 # seconds
## TODO 2a: set the save_checkpoints_secs to the EVAL_INTERVAL
    run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
                                        keep_checkpoint_max = 3)
## TODO 2b: change the dnn_hidden_units to NNSIZE
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
        dnn_hidden_units = NNSIZE,
config = run_config)
# illustrates how to add an extra metric
estimator = tf.contrib.estimator.add_metrics(estimator, my_rmse)
# for batch prediction, you need a key associated with each instance
estimator = tf.contrib.estimator.forward_features(estimator, KEY_COLUMN)
## TODO 2c: Set the third argument of read_dataset to BATCH_SIZE
## TODO 2d: and set max_steps to TRAIN_STEPS
train_spec = tf.estimator.TrainSpec(
        input_fn = read_dataset('train', tf.estimator.ModeKeys.TRAIN, BATCH_SIZE),
        max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn, exports_to_keep=None)
## TODO 2e: Lastly, set steps equal to EVAL_STEPS
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval', tf.estimator.ModeKeys.EVAL, 2**15), # no need to batch in eval
        steps = EVAL_STEPS,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_examples=1 --eval_steps=1
%%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
echo $MODEL_LOCATION
gcloud ai-platform local predict --model-dir=$MODEL_LOCATION --json-instances=inputs.json
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--config=hyperparam.yaml \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--eval_steps=10 \
--train_examples=20000
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Step2: Lab Task 2
Step3: Lab Task 3
Step4: Lab Task 4
Step5: Lab Task 5
Step6: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was
Step7: <h2> Repeat training </h2>
|
7,950
|
<ASSISTANT_TASK:>
Python Code:
import math

def vol(rad):
    """Return the volume of a sphere of radius rad."""
    return (4.0 / 3) * math.pi * rad ** 3
def ran_check(num,low,high):
    if low <= num <= high:
        print('{} is in the range between {} and {}'.format(num, low, high))
    else:
        print('{} is NOT in the range between {} and {}'.format(num, low, high))
def ran_bool(num,low,high):
    return low <= num <= high
ran_bool(3,1,10)
def up_low(s):
    upper = sum(1 for c in s if c.isupper())
    lower = sum(1 for c in s if c.islower())
    print('No. of Upper case characters : {}'.format(upper))
    print('No. of Lower case characters : {}'.format(lower))
def unique_list(l):
    return list(set(l))
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
def multiply(numbers):
    total = 1
    for n in numbers:
        total *= n
    return total
multiply([1,2,3,-4])
def palindrome(s):
    s = s.replace(' ', '')  # ignore spaces
    return s == s[::-1]
palindrome('helleh')
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
    return set(alphabet) <= set(str1.lower())
ispangram("The quick brown fox jumps over the lazy dog")
string.ascii_lowercase
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a function that checks whether a number is in a given range (Inclusive of high and low)
Step2: If you only wanted to return a boolean
Step3: Write a Python function that accepts a string and calculate the number of upper case letters and lower case letters.
Step4: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Step5: Write a Python function to multiply all the numbers in a list.
Step6: Write a Python function that checks whether a passed string is palindrome or not.
Step7: Hard
|
7,951
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['rpole@primary'] = 1.8
b['rpole@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
times = np.linspace(0,1,101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01')
b.set_value('irrad_method', 'none')
print b['boosting_method@compute']
print b['boosting_method@compute'].choices
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')
axs, artists = b['lc01'].plot()
leg = plt.legend()
axs, artists = b['lc01'].plot(ylim=(1.01,1.03))
leg = plt.legend()
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['rv01@boosting_none'].plot(ax=ax1)
axs, artists = b['rv01@boosting_linear'].plot(ax=ax2)
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['mesh@boosting_none'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax1)
axs, artists = b['mesh@boosting_linear'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Let's make our system so that the boosting effects will be quite noticeable.
Step3: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
Step4: Relevant Parameters
Step5: Influence on Light Curves (fluxes)
Step6: Influence on Radial Velocities
Step7: Influence on Meshes
|
7,952
|
<ASSISTANT_TASK:>
Python Code:
probe_x_offset = 5 # i.e. probe is 5 mm to the right of the nozzle
probe_y_offset = -31 # i.e. probe is 31mm "down" from the nozzle
probe_z_offset = -22.5 # i.e. probe clicks with nozzle 22.5mm above the bed
min_y = 41
min_x = 5
max_y = 147 # Moving beyond this value after homing will crash bed
max_x = 130 # moving beyond this value will cause probe to miss bed
pre_travel_z = 27
safe_z = 32.5
x_points = 10 # Number of points in x direction
y_points = 10 # Number of points in y direction
f = open('bedread8.g', 'w')
f.write("G28\n") # Home first
x_step = (max_x - min_x) / (x_points - 1)
y_step = (max_y - min_y) / (y_points - 1)
for y_point in reversed(range(0, y_points)):
for x_point in range(0, x_points):
f.write('G1 X{0:.1f} Y{1:.1f} Z{2:.1f}\n'.format((x_point * x_step + min_x), (y_point * y_step + min_y), (safe_z)))
f.write('M400\n')
f.write('G30\n')
f.write('M400\n')
f.write('G1 Z{0:.1f}\n'.format(pre_travel_z))
f.write('M400\n')
f.write('G1 Z{0:.1f}\n'.format(safe_z))
f.write('M400\n')
f.write('M402\n')
f.close()
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
from numpy import linspace, meshgrid
from matplotlib.mlab import griddata  # available in matplotlib < 3.1
from matplotlib import cm
from numpy.linalg import svd

f = open("bed_level_20160227_6.txt")
printlog = f.read()
# Extract values from logfile (only lines starting with 'Bed' are interesting)
bed_values = []
for line in iter(printlog.splitlines()):
if line[0:3] == "Bed":
x_start = line.index("X")
y_start = line.index("Y")
z_start = line.index("Z")
bed_values.append([float(line[x_start+3:y_start]),
float(line[y_start+3:z_start]),
float(line[z_start+3:].rstrip(' \t\r\n\0\x00\x03'))])
def plot_scatter_from_points(bed_values):
xs = np.array([event[0] for event in bed_values])
ys = np.array([event[1] for event in bed_values])
zs = np.array([event[2] for event in bed_values])
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs, ys, zs, c='r', marker='o')
plt.show()
plot_scatter_from_points(bed_values)
def plot_surface_from_points(bed_values, resX=10, resY=10):
# Convert scatter plot into surface plot
# http://stackoverflow.com/questions/18764814/make-contour-of-scatter
x = [point[0] for point in bed_values]
y = [point[1] for point in bed_values]
z = [point[2] for point in bed_values]
xi = linspace(min(x), max(x), resX)
yi = linspace(min(y), max(y), resY)
Z = griddata(x, y, z, xi, yi, interp='linear')
X, Y = meshgrid(xi, yi)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0.1)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
def plot_contour_from_points(bed_values, resX=10, resY=10):
# Convert scatter plot into surface plot
# http://stackoverflow.com/questions/18764814/make-contour-of-scatter
x = [point[0] for point in bed_values]
y = [point[1] for point in bed_values]
z = [point[2] for point in bed_values]
xi = linspace(min(x), max(x), resX)
yi = linspace(min(y), max(y), resY)
Z = griddata(x, y, z, xi, yi, interp='linear')
X, Y = meshgrid(xi, yi)
plt.figure()
CS = plt.contour(X, Y, Z)
plt.clabel(CS, inline=1, fontsize=10)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0.1)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
plot_surface_from_points(bed_values)
plot_contour_from_points(bed_values)
def planeFit(points):
    """p, n = planeFit(points)
    Given an array, points, of shape (d,...)
    representing points in d-dimensional space,
    fit a d-dimensional plane to the points.
    Return a point, p, on the plane (the point-cloud centroid),
    and the normal, n.
    """
    points = np.reshape(points, (np.shape(points)[0], -1)) # Collapse trailing dimensions
assert points.shape[0] <= points.shape[1], "There are only {} points in {} dimensions.".format(points.shape[1], points.shape[0])
ctr = points.mean(axis=1)
x = points - ctr[:,np.newaxis]
M = np.dot(x, x.T) # Could also use np.cov(x) here.
return ctr, svd(M)[0][:,-1]
centroid, normal = planeFit(np.transpose(np.array(bed_values)))
mag = np.sqrt(normal.dot(normal)) # Should be 1 if this is a unit vector
print(centroid, normal)
# Plot the plane to make sure
def get_height_on_plane(centroid, normal, x, y):
# plane is of form ax + by +cz +d = 0
# Normal vector of plane is [a,b,c]T, so just need to find d
# d = -(ax0 + by0 + cz0)
d = - np.sum((np.array(centroid) * np.array(normal)))
[a, b, c] = normal
z = -(a * x + b * y + d) / c
return z
best_fit_plane = []
for point in bed_values:
best_fit_plane.append([
point[0],
point[1],
get_height_on_plane(centroid, normal, point[0], point[1])
])
plot_surface_from_points(best_fit_plane)
z_unit = np.array([0,0,1])
dot_product = normal.dot(z_unit)
tilt_angle = np.degrees(np.arccos(dot_product))  # arccos returns radians; convert to degrees
print("Bed is tilted by {0:.3f} degrees".format(tilt_angle))
old_x_unit = np.array([1,0,0])
old_y_unit = np.array([0,1,0])
old_z_unit = np.array([0,0,1])
new_z_unit = np.array(normal)
new_y_unit = np.cross(old_x_unit, new_z_unit)
new_y_unit = new_y_unit / np.linalg.norm(new_y_unit)  # normalize by the norm, not the squared norm
new_x_unit = np.cross(new_z_unit, new_y_unit)
new_x_unit = new_x_unit / np.linalg.norm(new_x_unit)
# For each point, create new coords
centroid_vec = np.array(centroid)
rotated_bed_values = []
min_x = None
min_y = None
min_z = None
for point in bed_values:
point_vec = np.array(point)
from_centroid = point_vec - centroid_vec
new_coords = [np.dot(from_centroid, new_x_unit),
np.dot(from_centroid, new_y_unit),
np.dot(from_centroid, new_z_unit)
]
if min_z is None or np.dot(from_centroid, new_z_unit) < min_z:
min_z = np.dot(from_centroid, new_z_unit)
if min_y is None or np.dot(from_centroid, new_y_unit) < min_y:
min_y = np.dot(from_centroid, new_y_unit)
if min_x is None or np.dot(from_centroid, new_x_unit) < min_x:
min_x = np.dot(from_centroid, new_x_unit)
rotated_bed_values.append(new_coords)
# Make minimum point z = 0
for point in rotated_bed_values:
point[0] -= min_x
point[1] -= min_y
point[2] -= min_z
plot_scatter_from_points(rotated_bed_values)
plot_surface_from_points(rotated_bed_values, resX=20, resY=20)
plot_contour_from_points(rotated_bed_values, resX=20, resY=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Display results
Step4: Need to remove the tilt from this. Get the best fit orthogonal distance regression plane using approach here
Step5: What is the angle between the z axis and the normal vector? Need to dot product the normal with z unit vector. http
Step6: Now rotate the original set of points so that the best fit plane has normal equal to z axis unit vector. Use approach here
|
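The SVD-based plane fit above (`planeFit`) can be sanity-checked on synthetic data. This is a minimal sketch assuming NumPy only; the plane z = 0.1x - 0.2y + 5 and the sampled points are hypothetical, chosen purely for illustration:

```python
import numpy as np

# Hypothetical points lying exactly on the plane z = 0.1*x - 0.2*y + 5
# (an assumption for illustration; not taken from the probe log).
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 50)
y = rng.uniform(0, 100, 50)
z = 0.1 * x - 0.2 * y + 5

pts = np.vstack([x, y, z])        # shape (3, N), as planeFit expects
ctr = pts.mean(axis=1)            # centroid lies on the fitted plane
centered = pts - ctr[:, None]
normal = np.linalg.svd(centered @ centered.T)[0][:, -1]

# The recovered normal should be parallel to (-0.1, 0.2, 1), up to sign.
expected = np.array([-0.1, 0.2, 1.0])
expected /= np.linalg.norm(expected)
print(abs(normal @ expected))     # ~1.0
```

Because the synthetic points contain no noise, the smallest-variance singular vector recovers the true normal essentially exactly.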
7,953
|
<ASSISTANT_TASK:>
Python Code:
from random import choices
lnct_few_friends = ["Jyoti Pancholi", "Amit Shrivastava", "Mukesh Bansal", "Preeti Saraswat", "Manish Nandle"]
list_of_prob = [0.2, 0.1, 0.3, 0.2, 0.2]
lnct_few_friends = choices(lnct_few_friends, weights=list_of_prob, k=200)
for name in set(lnct_few_friends):
print(name, lnct_few_friends.count(name))
from random import choices
import matplotlib.pyplot as plt
lnct_few_friends = ["X", "Jyoti Pancholi", "Amit Shrivastava", "Mukesh Bansal", "Preeti Saraswat", "Manish Nandle"]
list_of_prob = [0.05, 0.15, 0.1, 0.3, 0.1, 0.3]
d = {}
for i in range(10):
a = {}
lst = choices(lnct_few_friends, weights=list_of_prob, k=10)
for name in set(lnct_few_friends):
a[name] = lst.count(name)
d[i] = a
# print(d)
import matplotlib.pyplot as plt
from matplotlib import ticker
import math
plt.xticks(rotation=90)
for key, val in d.items():
plt.plot(val.keys() ,val.values(), marker="o")
from random import choices
import matplotlib.pyplot as plt
lnct_few_friends = ["X", "Jyoti Pancholi", "Amit Shrivastava", "Mukesh Bansal", "Preeti Saraswat", "Manish Nandle"]
list_of_prob = [0.05, 0.15, 0.1, 0.3, 0.1, 0.3]
d = {}
for i in range(10):
a = {}
lst = choices(lnct_few_friends, weights=list_of_prob, k=900)
for name in set(lnct_few_friends):
a[name] = lst.count(name)
d[i] = a
import matplotlib.pyplot as plt
from matplotlib import ticker
import math
plt.xticks(rotation=90)
for key, val in d.items():
plt.plot(val.keys() ,val.values(), marker="o")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's try some graphs on them
Step2: In the above graph, you can see that every time, "X" was the lowest and "Mukesh" & "Manish" were the highest names in the created list, and the more I increase the value of k, the more prominent the graph becomes, as shown in the code and graph below
|
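The weighted-sampling behaviour described above can be checked directly: as k grows, observed frequencies approach the supplied weights. A minimal sketch using only the standard library, with hypothetical names and weights:

```python
from random import choices, seed
from collections import Counter

seed(0)  # reproducible draws
names = ["A", "B", "C"]
weights = [0.1, 0.3, 0.6]

# Draw a large sample and compare observed frequencies to the weights.
counts = Counter(choices(names, weights=weights, k=10_000))
freqs = {name: counts[name] / 10_000 for name in names}
print(freqs)  # each frequency lands close to its weight
```

With k = 10,000 the sampling error per category is on the order of half a percent, so the frequencies sit visibly close to 0.1, 0.3, and 0.6.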
7,954
|
<ASSISTANT_TASK:>
Python Code:
# Make sure the base overlay is loaded
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from pynq.lib.arduino import Arduino_Analog
from pynq.lib.arduino import ARDUINO_GROVE_A1
from pynq.lib.arduino import ARDUINO_GROVE_A4
analog1 = Arduino_Analog(base.ARDUINO,ARDUINO_GROVE_A1)
analog1.read()
analog1.read('raw')[0]
from time import sleep
analog1.set_log_interval_ms(100)
analog1.start_log()
log1 = analog1.get_log()
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D
line1, = plt.plot(range(len(log1[0])), log1[0],
'ro', label="X-axis of joystick")
line2, = plt.plot(range(len(log1[1])), log1[1],
'bs', label="Y-axis of joystick")
plt.title('Arduino Analog Voltage Log')
plt.axis([0, len(log1[0]), 0.0, 3.3])
plt.xlabel('Sample number')
plt.ylabel('Voltage')
plt.legend(loc=4,bbox_to_anchor=(1, -0.3),
ncol=2, borderaxespad=0.,
handler_map={line1: HandlerLine2D(numpoints=1),
line2: HandlerLine2D(numpoints=1)})
plt.show()
analog2 = Arduino_Analog(base.ARDUINO,[0,1,4])
analog2.set_log_interval_ms(100)
analog2.start_log()
log2 = analog2.get_log()
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D
line1, = plt.plot(range(len(log2[0])), log2[0],
'ro', label="X-axis of joystick")
line2, = plt.plot(range(len(log2[1])), log2[1],
'bs', label="Y-axis of joystick")
line3, = plt.plot(range(len(log2[2])), log2[2],
'g^', label="potentiometer")
plt.title('Arduino Analog Voltage Log')
plt.axis([0, len(log2[0]), 0.0, 3.3])
plt.xlabel('Sample number')
plt.ylabel('Voltage')
plt.legend(loc=4,bbox_to_anchor=(1, -0.3),
ncol=2, borderaxespad=0.,
handler_map={line1: HandlerLine2D(numpoints=1),
line2: HandlerLine2D(numpoints=1),
line3: HandlerLine2D(numpoints=1)})
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Instantiate individual analog controller
Step2: 2. Read voltage value out
Step3: 3. Read raw value out
Step4: 4. Logging multiple sample values
Step 1
Step5: Step 2
Step6: Step 3. Plot values over time
Step7: 5. Logging multiple devices
Step8: Step 2
Step9: Step 3. Plot values over time
|
7,955
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(url='http://www.phdcomics.com/comics/archive/phd101212s.gif')
%%bash
git status
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If that hasn't convinced you, here are some other benefits
|
7,956
|
<ASSISTANT_TASK:>
Python Code:
from QGL import *
cl = ChannelLibrary("example")
q1 = cl["q1"]
# Repeat similar configuration for q2
q2 = cl.new_qubit("q2")
aps2_3 = cl.new_APS2("BBNAPS3", address="192.168.5.103")
aps2_4 = cl.new_APS2("BBNAPS4", address="192.168.5.104")
dig_2 = cl.new_X6("X6_2", address=0)
cl.set_control(q2, aps2_3)
cl.set_measure(q2, aps2_4, dig_2.ch(1))
RabiPoints = 101
plot_pulse_files(RabiAmp_NQubits((q1,q2),np.linspace(0,1,RabiPoints), measChans=(q1,q2), add_cals=True))
e = cl.new_edge(q1, q2)
# Most calls required label and address. Let's define
# an AWG for control pulse generation
aps2_5 = cl.new_APS2("BBNAPS5", address="192.168.5.106")
cl.set_control(e, aps2_5)
seqs = [[Id(q1), CNOT(q1, q2)]] # use the default CNOT_simple implementation, where the CNOT is represented as an X pulse
mf = compile_to_hardware(seqs,'CNOT_simple')
plot_pulse_files(mf)
seqs = [[Id(q1), CNOT_CR(q1, q2)]] # use the CNOT_CR implementation, where the CNOT is decomposed
# into a sequence of single-qubit gates and a ZX90, as is appropriate for a cross-resonance interaction.
mf = compile_to_hardware(seqs,'CNOT_CR')
plot_pulse_files(mf)
seqs = [[Id(q1), CNOT_CR(q2, q1)]] # use the CNOT_CR implementation, where the CNOT is decomposed
# into a sequence of single-qubit gates and a ZX90, as is appropriate for a cross-resonance interaction.
mf = compile_to_hardware(seqs,'CNOT_CR_inv')
plot_pulse_files(mf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: See Auspex example notebooks on how to configure a channel library.
Step2: One can define simultaneous operations on qubits using the * operator (indicating a tensor product), see ex1_QGL_basics.
Step3: Two-qubit gates
Step4: We can now include CNOT gates in our sequences. In this example you can see the use of the two-qubit primitive CNOT.
Step5: You can also explicitly call CNOT_CR to
Step6: Inverting the order of the CNOT_CR input will also produce a CNOT using the same directed edge (q1->q2), but with added single-qubit gates to invert the CNOT control and target.
|
7,957
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from empiricaldist import Pmf
from utils import decorate
# set the random seed so we get the same results every time
np.random.seed(17)
# make the directory for the figures
import os
if not os.path.exists('inspection'):
!mkdir inspection
# Class size data originally from
# https://www.purdue.edu/datadigest/2013-14/InstrStuLIfe/DistUGClasses.html
# now available from
# https://web.archive.org/web/20160415011613/https://www.purdue.edu/datadigest/2013-14/InstrStuLIfe/DistUGClasses.html
sizes = [(1, 1),
(2, 9),
(10, 19),
(20, 29),
(30, 39),
(40, 49),
(50, 99),
(100, 300)]
counts = [138, 635, 1788, 1979, 796, 354, 487, 333]
def generate_sample(sizes, counts):
    """Generate a sample from a distribution.
    sizes: sequence of (low, high) pairs
    counts: sequence of integers
    returns: NumPy array
    """
t = []
for (low, high), count in zip(sizes, counts):
print(count, low, high)
sample = np.random.randint(low, high+1, count)
t.extend(sample)
return np.array(t)
unbiased = generate_sample(sizes, counts)
def resample_weighted(sample, weights):
    """Resample values from `sample` with the given weights.
    sample: NumPy array
    weights: NumPy array
    returns: NumPy array
    """
n = len(sample)
p = weights / np.sum(weights)
return np.random.choice(sample, n, p=p)
biased = resample_weighted(unbiased, unbiased)
from scipy.stats import gaussian_kde
def kdeplot(sample, xs, label=None, **options):
    """Use KDE to plot the density function.
    sample: NumPy array
    xs: NumPy array
    label: string
    """
density = gaussian_kde(sample, **options).evaluate(xs)
plt.plot(xs, density, label=label)
decorate(ylabel='Relative likelihood')
xs = np.arange(1, 300)
kdeplot(unbiased, xs, 'Reported by the Dean')
kdeplot(biased, xs, 'Reported by students')
decorate(xlabel='Class size',
title='Distribution of class sizes')
plt.savefig('inspection/class_size.png', dpi=150)
np.mean(unbiased)
np.mean(biased)
from empiricaldist import Cdf
def cdfplot(sample, xs, label=None, **options):
    """Plot the CDF of the sample.
    sample: NumPy array
    xs: NumPy array (ignored)
    label: string
    """
cdf = Cdf.from_seq(sample, **options)
cdf.plot(label=label)
decorate(ylabel='CDF')
xs = np.arange(1, 300)
cdfplot(unbiased, xs, 'Reported by the Dean')
cdfplot(biased, xs, 'Reported by students')
decorate(xlabel='Class size',
title='Distribution of class sizes')
plt.savefig('inspection/class_size.png', dpi=150)
unbiased = [
428.0, 705.0, 407.0, 465.0, 433.0, 425.0, 204.0, 506.0, 143.0, 351.0,
450.0, 598.0, 464.0, 749.0, 341.0, 586.0, 754.0, 256.0, 378.0, 435.0,
176.0, 405.0, 360.0, 519.0, 648.0, 374.0, 483.0, 537.0, 578.0, 534.0,
577.0, 619.0, 538.0, 331.0, 186.0, 629.0, 193.0, 360.0, 660.0, 484.0,
512.0, 315.0, 457.0, 404.0, 740.0, 388.0, 357.0, 485.0, 567.0, 160.0,
428.0, 387.0, 901.0, 187.0, 622.0, 616.0, 585.0, 474.0, 442.0, 499.0,
437.0, 620.0, 351.0, 286.0, 373.0, 232.0, 393.0, 745.0, 636.0, 758.0,
]
unbiased = np.array(unbiased) / 60
biased = resample_weighted(unbiased, unbiased)
xs = np.linspace(1, 16.5, 101)
kdeplot(unbiased, xs, 'Seen by MBTA')
kdeplot(biased, xs, 'Seen by passengers')
decorate(xlabel='Time between trains (min)',
title='Distribution of time between trains')
plt.savefig('inspection/red_line.png', dpi=150)
xs = np.linspace(1, 16.5, 101)
cdfplot(unbiased, xs, 'Seen by MBTA')
cdfplot(biased, xs, 'Seen by passengers')
decorate(xlabel='Time between trains (min)',
title='Distribution of time between trains')
plt.savefig('inspection/red_line.png', dpi=150)
np.mean(biased), np.mean(unbiased)
(np.mean(biased) - np.mean(unbiased)) / np.mean(unbiased) * 100
import networkx as nx
def read_graph(filename):
    """Read a graph from a file.
    filename: string
    return: nx.Graph
    """
G = nx.Graph()
array = np.loadtxt(filename, dtype=int)
G.add_edges_from(array)
return G
# https://snap.stanford.edu/data/facebook_combined.txt.gz
fb = read_graph('facebook_combined.txt.gz')
n = len(fb)
m = len(fb.edges())
n, m
unbiased = [fb.degree(node) for node in fb]
len(unbiased)
np.max(unbiased)
biased = resample_weighted(unbiased, unbiased)
xs = np.linspace(0, 300, 101)
kdeplot(unbiased, xs, 'Random sample of people')
kdeplot(biased, xs, 'Random sample of friends')
decorate(xlabel='Number of friends in social network',
title='Distribution of social network size')
plt.savefig('inspection/social.png', dpi=150)
xs = np.linspace(0, 300, 101)
cdfplot(unbiased, xs, 'Random sample of people')
cdfplot(biased, xs, 'Random sample of friends')
decorate(xlabel='Number of friends in social network',
title='Distribution of social network size',
xlim=[-10, 310])
plt.savefig('inspection/social.png', dpi=150)
np.mean(biased), np.mean(unbiased)
np.mean(biased > unbiased)
import relay
results = relay.ReadResults()
unbiased = relay.GetSpeeds(results)
weights = np.abs(np.array(unbiased) - 7)
biased = resample_weighted(unbiased, weights)
xs = np.linspace(3, 11, 101)
kdeplot(unbiased, xs, 'Seen by spectator')
kdeplot(biased, xs, 'Seen by runner at 7 mph', bw_method=0.2)
decorate(xlabel='Running speed (mph)',
title='Distribution of running speed')
plt.savefig('inspection/relay.png', dpi=150)
xs = np.linspace(3, 11, 101)
cdfplot(unbiased, xs, 'Seen by spectator')
cdfplot(biased, xs, 'Seen by runner at 7 mph')
decorate(xlabel='Running speed (mph)',
title='Distribution of running speed')
plt.savefig('inspection/relay.png', dpi=150)
tables = pd.read_html('BOP Statistics_ Sentences Imposed.html')
df = tables[0]
df
sentences = [(0.02, 1),
(1, 3),
(3, 5),
(5, 10),
(10, 15),
(15, 20),
(20, 40),
(40, 60)]
counts = df['# of Inmates']
def generate_sample(sizes, counts):
    """Generate a sample from a distribution.
    sizes: sequence of (low, high) pairs
    counts: sequence of integers
    returns: NumPy array
    """
t = []
for (low, high), count in zip(sizes, counts):
print(count, low, high)
sample = np.random.uniform(low, high, count)
t.extend(sample)
return np.array(t)
biased = generate_sample(sentences, counts)
weights = 1 / (0.85 * np.array(biased))
unbiased = resample_weighted(biased, weights)
xs = np.linspace(0, 60, 101)
kdeplot(unbiased, xs, 'Seen by judge', bw_method=0.5)
kdeplot(biased, xs, 'Seen by prison visitor', bw_method=0.5)
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
xs = np.linspace(0, 60, 101)
cdfplot(unbiased, xs, 'Seen by judge')
cdfplot(biased, xs, 'Seen by prison visitor')
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
x = 0.85 * unbiased
y = 13 / 12
weights = x + y
kerman = resample_weighted(unbiased, weights)
xs = np.linspace(0, 60, 101)
kdeplot(unbiased, xs, 'Seen by judge', bw_method=0.5)
kdeplot(kerman, xs, 'Seen by Kerman', bw_method=0.5)
kdeplot(biased, xs, 'Seen by visitor', bw_method=0.5)
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
xs = np.linspace(0, 60, 101)
cdfplot(unbiased, xs, 'Seen by judge')
cdfplot(kerman, xs, 'Seen by Kerman')
cdfplot(biased, xs, 'Seen by visitor')
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
np.mean(unbiased<1)
np.mean(biased<1)
np.mean(unbiased)
np.mean(biased)
np.mean(kerman)
from matplotlib.patches import Circle
def draw_dartboard():
ax = plt.gca()
c1 = Circle((0, 0), 170, color='C3', alpha=0.3)
c2 = Circle((0, 0), 160, color='white')
c3 = Circle((0, 0), 107, color='C3', alpha=0.3)
c4 = Circle((0, 0), 97, color='white')
c5 = Circle((0, 0), 16, color='C3', alpha=0.3)
c6 = Circle((0, 0), 6, color='white')
for circle in [c1, c2, c3, c4, c5, c6]:
ax.add_patch(circle)
plt.axis('equal')
draw_dartboard()
plt.text(0, 10, '25 ring')
plt.text(0, 110, 'triple ring')
plt.text(0, 170, 'double ring')
plt.savefig('inspection/darts0.png', dpi=150)
sigma = 50
n = 100
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
draw_dartboard()
plt.plot(error_x, error_y, '.')
plt.savefig('inspection/darts1.png', dpi=150)
sigma = 50
n = 10000
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
ax = sns.kdeplot(error_x, error_y, shade=True, cmap="PuBu")
ax.collections[0].set_alpha(0)
plt.axis([-240, 240, -175, 175])
decorate(xlabel='x distance from center (mm)',
ylabel='y distance from center (mm)',
title='Estimated density')
plt.savefig('inspection/darts2.png', dpi=150)
rs = np.hypot(error_x, error_y)
np.random.seed(18)
sigma = 50
n = 10000
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
xs = np.linspace(-200, 200, 101)
#ys = np.exp(-(xs/sigma)**2/2)
#pmf = Pmf(ys, index=xs)
#pmf.normalize()
#pmf.plot(color='gray')
unbiased = error_x
biased = resample_weighted(unbiased, np.abs(unbiased))
kdeplot(unbiased, xs, 'Density at a point')
kdeplot(biased, xs, 'Total density in a ring')
#kdeplot(rs, xs, 'Total density in a ring')
decorate(xlabel='Distance from center (mm)',
ylabel='Density',
xlim=[0, 210])
plt.savefig('inspection/darts3.png', dpi=150)
xs = np.linspace(0, 200, 101)
unbiased = np.abs(error_x)
biased = resample_weighted(unbiased, unbiased)
cdfplot(unbiased, xs, 'Density at a point')
cdfplot(biased, xs, 'Total density in a ring')
decorate(xlabel='Distance from center (mm)',
ylabel='Density')
plt.savefig('inspection/darts4.png', dpi=150)
triple = (biased > 97) & (biased < 107)
triple.mean() * 100
ring50 = (biased > 6) & (biased < 16)
ring50.mean() * 100
double = (biased > 160) & (biased < 170)
double.mean() * 100
bull = (biased < 6)
bull.mean() * 100
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Class size
Step3: I generate a sample from this distribution, assuming a uniform distribution in each range and an upper bound of 300.
Step4: The "unbiased" sample is as seen by the college, with each class equally likely to be in the sample.
Step6: To generate a biased sample, we use the values themselves as weights and resample with replacement.
Step8: To plot the distribution, I use KDE to estimate the density function, then evaluate it over the given sequence of xs.
Step9: The following plot shows the distribution of class size as seen by the Dean, and as seen by a sample of students.
Step11: Here are the means of the unbiased and biased distributions.
Step12: Red Line
Step13: Here's the same data in minutes.
Step14: We can use the same function to generate a biased sample.
Step15: And plot the results.
Step16: Here are the means of the distributions and the percentage difference.
Step18: Social network
Step19: The unbiased sample is the number of friends for each user.
Step20: We can use the same function to generate a biased sample.
Step21: And generate the plot.
Step22: Here are the means of the distributions.
Step23: And the probability that the friend of a user has more friends than the user.
Step24: Relay race
Step25: In this case, the weights are related to the difference between each element of the sample and the hypothetical speed of the observer.
Step26: And here's the plot.
Step27: Prison sentences
Step28: Here are the low and high sentences for each range. I assume that the minimum sentence is about a week, that sentences "less than life" are 40 years, and that a life sentence is between 40 and 60 years.
Step29: We can get the counts from the table.
Step31: Here's a different version of generate_sample for a continuous quantity.
Step32: In this case, the data are biased.
Step33: So we have to unbias them with weights inversely proportional to the values.
Step34: Here's the unbiased sample.
Step35: And the plotted distributions.
Step36: We can also compute the distribution of sentences as seen by someone at the prison for 13 months.
Step37: Here's the sample.
Step38: And here's what it looks like.
Step39: In the unbiased distribution, almost half of prisoners serve less than one year.
Step40: But if we sample the prison population, barely 3% are short timers.
Step41: Here are the means of the distributions.
Step42: The dartboard problem
|
7,958
|
<ASSISTANT_TASK:>
Python Code:
from pyoptools.all import *
from numpy import pi
P1=Plane(shape=Circular(radius=(25)))
Plot3D(P1,center=(0,0,0),size=(60,60),rot=[(0,0,0)],scale=6)
P2=Plane(shape=Rectangular(size=(50,50)))
Plot3D(P2,center=(0,0,0),size=(60,60),rot=[(0,0,0)],scale=6)
P3=Plane(shape=Triangular(coord=((0,25),(25,-25),(-25,-25))))
Plot3D(P3,center=(0,0,0),size=(60,60),scale=6)
S=Spherical(curvature=1/200., shape=Circular(radius=145.),reflectivity=0)
Plot3D(S,center=(0,0,0),size=(400,400),scale=1)
S3=Cylinder(radius=36,length=100,reflectivity=1)
Plot3D(S3,center=(0,0,0),size=(100,100),rot=[(0,pi/32,0)],scale=4)
S1=Cylindrical(shape=Rectangular(size=(50,100)),curvature=1/20.)
Plot3D(S1,center=(0,0,0),size=(150,150),rot=[(pi/4,0,0)],scale=2)
S2=Cylindrical(shape=Circular(radius=(50)),curvature=1/100.)
Plot3D(S2,center=(0,0,0),size=(150,150),rot=[(-pi/4,0,0)],scale=2)
%%latex
$$Z=\frac{Ax*x^2+Ay*y^2}{1+\sqrt{1-(1+Kx)*Ax^2*x^2-(1+Ky)*Ay^2*y^2}}+poly2d()$$
sa=Aspherical(shape=Rectangular(size=(5,5)),Ax=.2,Ay=.2,Kx=.1, Ky=.15, poly=poly2d((0,0,0,.5,0,.5)))
Plot3D(sa,center=(0,0,5),size=(10,10),rot=[(-3*pi/10,pi/4,0)],scale=40)
sa=Aspherical(shape=Circular(radius=2.5),Ax=.2,Ay=.2,Kx=.1, Ky=.15, poly=poly2d((0,0,0,.5,0,.5)))
Plot3D(sa,center=(0,0,5),size=(10,10),rot=[(-3*pi/10,pi/4,0)],scale=40)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plane Surface
Step2: Spherical Surface
Step3: Cylindrical Surface
Step4: The second class is the Cylindrical.
Step5: Aspherical Surface
|
7,959
|
<ASSISTANT_TASK:>
Python Code:
sushi_order = ['unagi', 'hamachi', 'otoro']
prices = [6.50, 5.50, 15.75]
print(sushi_order)
print(prices)
print(sushi_order[0])
print(sushi_order[2])
print(len(sushi_order))
print(sushi_order[-3])
everyones_order = [['california roll'], ['unagi', 'dragon roll'], sushi_order]
print(everyones_order)
sushi_order[0] = 'caterpillar roll'
print(sushi_order)
prices[-1] = 21.00
print(prices)
sushi_order
print(('hamachi' in sushi_order))
if 'otoro' in sushi_order:
print("Big spender!")
print((sushi_order * 3))
exprep = ['rep'+str(i) for i in range(5)]
exprep
print((prices + sushi_order))
newprices = prices.copy()
newprices.append(22)
print(newprices)
prices
inexpensive = sushi_order[:2] #takes only the first two elements from list
print(inexpensive)
for item in sushi_order:
print(("I'd like to order the {}.".format(item)))
print("And hold the wasabi!")
for ind, item in enumerate(sushi_order):
print(("I'd like to order the {0} for {1}.".format(item, prices[ind])))
lots_of_sushi = inexpensive*2
print(lots_of_sushi)
my_sushis = ['maguro', 'rock n roll']
my_sushis.append('avocado roll')
print(my_sushis)
my_sushis.append(['hamachi', 'california roll'])
print(my_sushis)
my_sushis = ['maguro', 'rock n roll']
my_sushis.extend(['hamachi', 'california roll'])
print(my_sushis)
print(my_sushis)
last_sushi = my_sushis.pop(-1)
print(last_sushi)
my_sushis.remove('maguro')
print(my_sushis)
del my_sushis[1:]
print(my_sushis)
numbers = [1, 1, 2, 3, 5, 8]
print((max(numbers)))
print((min(numbers)))
print((sum(numbers)))
print((len(numbers)))
sum(numbers)/len(numbers)
cooked_rolls = ['unagi roll', 'shrimp tempura roll']
my_order = cooked_rolls
my_order.append('hamachi')
print(my_order)
print(cooked_rolls)
print((my_order is cooked_rolls))
cooked_rolls = ['unagi roll', 'shrimp tempura roll']
my_order = list(cooked_rolls)
my_order.append('hamachi')
print(my_order)
print(cooked_rolls)
noodles = ('soba', 'udon', 'ramen', 'lo mein', 'somen', 'rice noodle')
print((type(noodles)))
sushi_tuple = tuple(my_order)
print(sushi_tuple)
# Remember strings are sequences
maguro = tuple('maguro')
print(maguro)
single_element_tuple = (1,)
print(single_element_tuple)
print((type(single_element_tuple)))
print((noodles[0]))
print((noodles[4:]))
# This should throw an error
noodles[0] = 'spaghetti'
print(sushi_tuple)
sushi_tuple = sushi_tuple[1:] + ('california roll',)
print(sushi_tuple)
for noodle in noodles:
print(("Yummy, yummy {0} and {1}".format(noodle, 'sushi')))
print((list(zip([1,2,3], [4,5,6]))))
sushi = ['salmon', 'tuna', 'sea urchin']
prices = [5.5, 6.75, 8]
sushi_and_prices = list(zip(sushi, prices))
sushi_and_prices
for sushi, price in sushi_and_prices:
print(("The {0} costs ${1}".format(sushi, price)))
exotic_sushi = ['tako', 'toro', 'uni', 'hirame']
for index, item in enumerate(exotic_sushi):
print((index, item))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can access a single element in a list by indexing with brackets. List indexing starts at 0, so to get the first element you use 0, the second element is 1, and so on.
Step2: You can find the length of a list using len
Step3: You can use negative indexing to get the last element of a list
Step4: Nested lists
Step5: To access an element in a nested list, first index to the inner list, then index to the item.
Step6: TRY IT
Step7: Operators and Lists
Step8: You can use some arithmetic operators on lists
Step9: Note
Step10: Remember slices from strings? We can also use the slice operator on lists
Step11: Don't forget, you can use the for and in keywords to loop through a list
Step12: TRY IT
Step13: Adding and deleting elements
Step14: You also have several options for removing elements
Step15: TRY IT
Step16: TRY IT
Step17: Aliasing
Step18: To check this, you can use the is operator to see if both variable refer to the same object
Step19: To fix this, you can make a copy of the list using the list function
Step20: Tuples
Step21: You can create a tuple from any sequence using the tuple function
Step22: To create a single element tuple, you need to add a comma to the end of that element (it looks kinda weird)
Step23: You can use the indexing and slicing you learned for lists the same with tuples.
Step24: To change the values in a tuple, you need to create a new tuple (there is nothing stopping you from assigning it to the same variable, though).
Step25: You can loop through tuples the same way you loop through lists, using for in
Step26: TRY IT
Step27: Enumerate
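A minimal sketch of the enumerate-with-zip pattern from the steps above (the lists here are illustrative, not the notebook's final state):

```python
# Pair each item with a 1-based position and its price using enumerate + zip.
sushi = ['salmon', 'tuna', 'sea urchin']
prices = [5.5, 6.75, 8]

lines = []
for index, (item, price) in enumerate(zip(sushi, prices)):
    lines.append("{0}. The {1} costs ${2}".format(index + 1, item, price))

print(lines[0])  # -> 1. The salmon costs $5.5
```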
|
7,960
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append("../..")
from batchflow import Dataset, DatasetIndex, Batch
# number of items in the dataset
NUM_ITEMS = 10
# number of items in a batch when iterating
BATCH_SIZE = 3
dataset = Dataset(index=NUM_ITEMS, batch_class=Batch)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("batch", i, " contains items", batch.indices)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_iters=5)):
print("batch", i, " contains items", batch.indices)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1, drop_last=True)):
print("batch", i, " contains items", batch.indices)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_iters=4, drop_last=True, shuffle=True)):
print("batch", i, " contains items", batch.indices)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1, drop_last=True, shuffle=123)):
print("batch", i, " contains items", batch.indices)
for i in range(NUM_ITEMS * 3):
try:
batch = dataset.next_batch(BATCH_SIZE, shuffle=True, n_epochs=2, drop_last=True)
print("batch", i + 1, "contains items", batch.indices)
except StopIteration:
print("got StopIteration")
break
dataset.reset('iter')
for i in range(int(NUM_ITEMS * 1.3)):
batch = dataset.next_batch(BATCH_SIZE + (-1)**i * i % 3, shuffle=True, n_epochs=None, drop_last=True)
print("batch", i + 1, "contains items", batch.indices)
data = np.arange(NUM_ITEMS).reshape(-1, 1) * 100 + np.arange(3).reshape(1, -1)
data
for batch in dataset.gen_batch(BATCH_SIZE, n_epochs=1):
batch = batch.load(src=data)
print("batch contains items with indices", batch.indices)
print('and batch data is')
print(batch.data)
print()
for batch in dataset.gen_batch(BATCH_SIZE, n_epochs=1):
batch = batch.load(src=data)
print("batch contains")
for item in batch:
print(item)
print()
class MyBatch(Batch):
components = 'features', 'labels'
features_array = np.arange(NUM_ITEMS).reshape(-1, 1) * 100 + np.arange(3).reshape(1, -1)
labels_array = np.random.choice(10, size=NUM_ITEMS)
data = features_array, labels_array
dataset = Dataset(index=NUM_ITEMS, batch_class=MyBatch, preloaded=data)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("batch", i, " contains items", batch.indices)
print("and batch data consists of features:")
print(batch.features)
print("and labels:", batch.labels)
print()
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("Batch", i)
for item in batch:
print("item features:", item.features, " item label:", item.labels)
print()
print("You can change batch data, even scalars.")
for item in batch:
item.features = item.features + 1000
item.labels = item.labels + 100
print("New batch features:\n", batch.features)
print("and labels:", batch.labels)
print()
dataset.split(0.8)
len(dataset.train), len(dataset.test)
dataset.split([.6, .2, .2])
len(dataset.train), len(dataset.test), len(dataset.validation)
dataset.split(0.7, shuffle=True)
dataset.train.indices, dataset.test.indices
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a dataset
Step2: The dataset index
Step3: drop_last=True skips the last batch if it contains fewer than BATCH_SIZE items
Step4: shuffle permutes items across batches
Step5: Run the cell above multiple times to see how batches change.
Step6: Run the cell above multiple times to see that batches stay the same across runs.
Step7: And finally with shuffle=True, n_epochs=None and a variable batch size
Step8: n_epochs=None allows for infinite iterations.
Step9: To get a deeper understanding of drop_last read very important notes in the API.
Step10: Load data into a batch
Step11: You can easily iterate over batch items too
Step12: Data components
Step13: Let's generate some random data
Step14: Now create a dataset (preloaded handles data loading from data stored in memory)
Step15: Since components are defined, you can address them as batch and even item attributes (they are created and loaded automatically).
Step16: You can iterate over batch items and change them on the fly
Step17: Splitting a dataset
Step18: Now the dataset is split into train / test in 80/20 ratio.
Step19: Dataset may be shuffled before splitting.
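Setting the library aside, the core of gen_batch's drop_last behavior can be sketched in plain Python (an illustrative stand-in, not batchflow's actual implementation):

```python
def gen_batches(items, batch_size, drop_last=False):
    # Yield consecutive slices of `items`; optionally skip a short final batch.
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        if drop_last and len(batch) < batch_size:
            return
        yield batch

batches = list(gen_batches(list(range(10)), 3, drop_last=True))
# the 1-item remainder [9] is dropped, leaving three full batches
```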
|
7,961
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neighbors import KNeighborsRegressor
# generate toy dataset:
x = np.linspace(-3, 3, 100)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.normal(size=len(x))
X = x[:, np.newaxis]
cv = KFold(shuffle=True)
# for each parameter setting do cross_validation:
for n_neighbors in [1, 3, 5, 10, 20]:
scores = cross_val_score(KNeighborsRegressor(n_neighbors=n_neighbors), X, y, cv=cv)
print("n_neighbors: %d, average score: %f" % (n_neighbors, np.mean(scores)))
from sklearn.model_selection import validation_curve
n_neighbors = [1, 3, 5, 10, 20, 50]
train_errors, test_errors = validation_curve(KNeighborsRegressor(), X, y, param_name="n_neighbors",
param_range=n_neighbors, cv=cv)
plt.plot(n_neighbors, train_errors.mean(axis=1), label="train error")
plt.plot(n_neighbors, test_errors.mean(axis=1), label="test error")
plt.legend(loc="best")
from sklearn.model_selection import cross_val_score, KFold
from sklearn.svm import SVR
# each parameter setting do cross_validation:
for C in [0.001, 0.01, 0.1, 1, 10]:
for gamma in [0.001, 0.01, 0.1, 1]:
scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=cv)
print("C: %f, gamma: %f, average score: %f" % (C, gamma, np.mean(scores)))
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv, verbose=3)
grid.fit(X, y)
grid.predict(X)
print(grid.best_score_)
print(grid.best_params_)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
cv = KFold(n_splits=10, shuffle=True)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
grid.best_params_
from sklearn.model_selection import train_test_split, ShuffleSplit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
single_split_cv = ShuffleSplit(n_splits=1)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=single_split_cv, verbose=3)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
clf = GridSearchCV(SVR(), param_grid=param_grid)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There is a function in scikit-learn, called validation_curve, to reproduce the cartoon figure above. It plots one parameter, such as the number of neighbors, against training and validation error (using cross-validation)
Step2: Note that many neighbors mean a "smooth" or "simple" model, so the plot is the mirror image of the diagram above.
Step3: As this is such a very common pattern, there is a built-in class for this in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train.
Step4: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR above, and creates a new estimator, that behaves exactly the same - in this case, like a regressor.
Step5: What fit does is a bit more involved than what we did above. First, it runs the same loop with cross-validation, to find the best parameter combination.
Step6: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best score in the best_score_ attribute
Step7: There is a problem with using this score for evaluation, however. You might be making what is called a multiple hypothesis testing error. If you try very many parameter settings, some of them will work better just by chance, and the score that you obtained might not reflect how your model would perform on new unseen data.
Step8: We can also look at the parameters that were selected
Step9: Some practitioners go for an easier scheme, splitting the data simply into three parts, training, validation and testing. This is a possible alternative if your training set is very large, or it is infeasible to train many models using cross-validation because training a model takes very long.
Step10: This is much faster, but might result in worse hyperparameters and therefore worse results.
|
7,962
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing()
x = symbols('x')
x**2
eq = Eq(x + 3, 2, evaluate=False)
eq
Eq(x**2 + 3*x -1, 0, evaluate = False)
solve(eq, [x], dict=True)
eq = Eq(3*x, -2, evaluate=False)
eq
solve(eq, [x], dict=True)
eq = Eq(x**2, 3, evaluate=False)
eq
tentativi = list(map(S, range(-5, 5)))
tentativi
equazioni = [Eq((x/y)**2, S(3), evaluate=False) for x in tentativi for y in tentativi if y]
equazioni
list(map(solve, equazioni))
solve(eq, [x], dict=True)
0.3 == 3/10
#0.47368754355678678678678678678678678...(678)*... is in Q
pi
list(filter(lambda r: r.is_real, solve(Eq(x**5, 10))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $\mathbb{N}$
Step2: When you read the mathematical notation $x + 3 = 2$, it means the following
Step3: you can read the mathematical notation $x^2 + 3x - 1 = 0$ as meaning the following
Step4: When you read the mathematical notation $x + 3 = 2$, it means the following
Step5: This means $\mathcal{X} = \lbrace x \in \mathbb{N}
Step6: where in this case both $y=3$ and $z=-2$ are in $\mathbb{Z}$.
Step7: This means $\mathcal{X} = \lbrace x \in \mathbb{N}
Step8: This means $\mathcal{X} = \lbrace x, y \in \mathbb{Z}
Step9: where in this case both $y=2$ and $z=3$ are in $\mathbb{Z}$.
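The same exact-arithmetic point can be made with the standard library's Fraction type (a sketch independent of sympy):

```python
from fractions import Fraction

# 3x = -2 has no solution in Z, but x = -2/3 solves it exactly in Q.
x = Fraction(-2, 3)
assert 3 * x == -2

# 0.3 happens to compare equal to 3/10 as a float here,
# but Fraction keeps 3/10 exact by construction.
assert Fraction(3, 10) == Fraction('0.3')
```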
|
7,963
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = numpy.loadtxt(filename, delimiter=',')
for m in data.mean(axis=1):
print(m)
import sys
import numpy
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = numpy.loadtxt(filename, delimiter=',')
for m in data.mean(axis=1):
print(m)
main()
from __future__ import division, print_function
import sys
def main():
action = sys.argv[1]
number1 = int(sys.argv[2])
number2 = int(sys.argv[3])
assert action in ['add', 'subtract']
if action == 'add':
print(number1 + number2)
else:
print(number1 - number2)
main()
from __future__ import division, print_function
import sys
import glob
def main():
suffix = sys.argv[1]
files = glob.glob('*.' + suffix)
for file in files:
print(file)
main()
from __future__ import division, print_function
import sys
import numpy
def main():
if len(sys.argv) < 2:
        print("""This program should be called with an action and a filename or list of filenames, like so:
$ python readings.py [action] [filename(s)]
In the above, action is either '--min', '--mean', or '--max'.
filename(s) is a filename or several filenames, or a file in standard input.""")
        return
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], \
'Action is not one of --min, --mean, or --max: ' + action
if len(filenames) == 0:
process(sys.stdin, action)
else:
for f in filenames:
process(f, action)
def process(filename, action):
data = numpy.loadtxt(filename, delimiter=',')
if action == '--min':
values = data.min(axis=1)
elif action == '--mean':
values = data.mean(axis=1)
elif action == '--max':
values = data.max(axis=1)
for m in values:
print(m)
main()
def main():
script = sys.argv[0]
if len(sys.argv) > 1:
action = sys.argv[1]
else:
action = '--mean'
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], \
'Action is not one of --min, --mean, or --max: ' + action
if len(filenames) == 0:
process(sys.stdin, action)
else:
for f in filenames:
process(f, action)
from __future__ import division, print_function
import sys
import numpy as np
def main():
script = sys.argv[0]
filenames = sys.argv[1:]
    print(filenames[0])
shape0 = np.loadtxt(filenames[0]).shape
for file in filenames[1:]:
assert np.loadtxt(file).shape == shape0, 'Shape of {} does not match'.format(file)
main()
from __future__ import division, print_function
import sys
import numpy as np
def main():
script = sys.argv[0]
filenames = sys.argv[1:]
shape0 = np.loadtxt(filenames[0], delimiter=',').shape
for file in filenames[1:]:
assert np.loadtxt(file, delimiter=',').shape == shape0, 'Shape of {} does not match'.format(file)
main()
import sys
import numpy as np
def main():
    if len(sys.argv) < 2:
        count = 0
        for line in sys.stdin:
            count += 1
        print(count, 'lines in standard input')
    else:
        total_lines = 0
        filenames = sys.argv[1:]
        for file in filenames:
            file_contents = np.loadtxt(file)
            print('Lines in', file, ':', len(file_contents))
            total_lines += len(file_contents)
        print('Total number of lines:', total_lines)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This function gets the name of the script from sys.argv[0], because that’s where it’s always put, and the name of the file to process from sys.argv[1]. Here’s a simple test
Step2: and run that
Step3: <section class="challenge panel panel-success">
Step5: <section class="challenge panel panel-success">
Step6: <section class="challenge panel panel-success">
Step7: <section class="challenge panel panel-success">
Step8: <section class="challenge panel panel-success">
|
7,964
|
<ASSISTANT_TASK:>
Python Code:
letters_map = {'2':'ABC', '3':'DEF', '4':'GHI', '5':'JKL',
'6':'MNO', '7':'PQRS', '8':'TUV', '9':'WXYZ'}
def printWords(number):
    # number is the phone number, as a string of digits
    printWordsUtil(number, 0, [''] * len(number), len(number))
def printWordsUtil(numb, curr_digit, output, n):
    if curr_digit == n:
        print(''.join(output))
        return
    # digits 0 and 1 map to no letters; keep the digit itself
    if numb[curr_digit] in ('0', '1'):
        output[curr_digit] = numb[curr_digit]
        printWordsUtil(numb, curr_digit + 1, output, n)
        return
    for letter in letters_map[numb[curr_digit]]:
        output[curr_digit] = letter
        printWordsUtil(numb, curr_digit + 1, output, n)
def gen_phone(digits):
results = []
lookup = {
'0': ' ',
'1': ' ',
'2': 'abc',
'3': 'def',
'4': 'ghi',
'5': 'jkl',
'6': 'mno',
'7': 'pqrs',
'8': 'tuv',
'9': 'wxyz',
}
def decode_next(s, i):
if i == len(digits):
results.append(s)
return
for c in lookup[digits[i]]:
decode_next(s + c, i + 1)
decode_next('', 0)
return results
# Dynamic programming implementation of LCS problem
# Returns length of LCS for X[0..m-1], Y[0..n-1]
def lcs(X, Y, m, n):
L = [[0 for x in xrange(n+1)] for x in xrange(m+1)]
# Following steps build L[m+1][n+1] in bottom up fashion. Note
# that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1]
for i in xrange(m+1):
for j in xrange(n+1):
if i == 0 or j == 0:
L[i][j] = 0
elif X[i-1] == Y[j-1]:
L[i][j] = L[i-1][j-1] + 1
else:
L[i][j] = max(L[i-1][j], L[i][j-1])
# Following code is used to print LCS
index = L[m][n]
# Create a character array to store the lcs string
    lcs = [""] * index
# Start from the right-most-bottom-most corner and
# one by one store characters in lcs[]
i = m
j = n
while i > 0 and j > 0:
# If current character in X[] and Y are same, then
# current character is part of LCS
if X[i-1] == Y[j-1]:
lcs[index-1] = X[i-1]
i-=1
j-=1
index-=1
# If not same, then find the larger of two and
# go in the direction of larger value
elif L[i-1][j] > L[i][j-1]:
i-=1
else:
j-=1
print "LCS of " + X + " and " + Y + " is " + "".join(lcs)
# Driver program
X = "AGGTAB"
Y = "GXTXAYB"
m = len(X)
n = len(Y)
lcs(X, Y, m, n)
# Task description (prose, not code):
# Passed in a list of dictionaries, and also a character.
# Map each single character to an int; if a character does not
# exist in a dict, then the default value is zero.
# Find the highest possible value for a character across the dicts.
# Then generalize it to take an arbitrary comparison operator and
# return the highest value based on that operator,
# and have it return results in ascending and descending order.
import time
import math
class TimeTravelDict:
def __init__(self):
self.dict = {}
    def get(self, key, time):
        # return the value most recently put for `key` strictly before `time`
        if key not in self.dict:
            return -1
        smallest_gap, value = math.inf, None
        for a, b in self.dict[key]:
            if b < time and (time - b) < smallest_gap:
                smallest_gap = time - b
                value = a
        if value is None:
            return -1
        return value
    def put(self, key, value):
        if key not in self.dict:
            self.dict[key] = []
        self.dict[key].append((value, time.time()))
        print(self.dict[key])
tt = TimeTravelDict()
tt.put('a', 11)
tt.put('a', 12)
tt.put('a', 13)
tt.put('a', 14)
tt.get('a', 1513571590.2447577)
#[2::][1::2]
import collections
words = ["baa", "", "abcd", "abca", "cab", "cad"]
def alienOrder(words):
pre, suc = collections.defaultdict(set), collections.defaultdict(set)
for pair in zip(words, words[1:]):
print(pair)
for a, b in zip(*pair):
if a != b:
suc[a].add(b)
pre[b].add(a)
break
print('succ %s' % suc)
print('pred %s' % pre)
chars = set(''.join(words))
print('chars %s' % chars)
print(set(pre))
free = chars - set(pre)
print('free %s' % free)
order = ''
while free:
a = free.pop()
order += a
for b in suc[a]:
pre[b].discard(a)
if not pre[b]:
free.add(b)
if set(order) == chars:
return order
    else:
        return False
# return order * (set(order) == chars)
alienOrder(words)
def binarySearch(alist, value):
    mini = 0
    maxi = len(alist) - 1
    while mini <= maxi:
        pivot = (mini + maxi) // 2
        current_value = alist[pivot]
        if current_value < value:
            mini = pivot + 1
        elif current_value > value:
            maxi = pivot - 1
        else:
            return pivot
    return -1
test1 = [0, 5, 10 , 23, 46, 49, 78]
test2 = [0, 5, 10]
test3 = [0]
print(binarySearch(test1, 49))
print(binarySearch(test2, 10))
binarySearch(test3, 90)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Print Longest Common Subsequence
Step2: Time Travelling dictionary
Step3: Alien Dictionary
Step4: Binary Search
|
7,965
|
<ASSISTANT_TASK:>
Python Code:
import ga4gh.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
dataset = c.search_datasets().next()
print(dataset)
reference_set = c.search_reference_sets().next()
print(reference_set)
references = [r for r in c.search_references(reference_set_id=reference_set.id)]
print(', '.join(sorted([reference.name for reference in references])))
chr1 = filter(lambda x: x.name == "1", references)[0]
bases = c.list_reference_bases(chr1.id, start=10000, end=11000)
print(bases)
print(len(bases))
release = None
functional = None
for variant_set in c.search_variant_sets(dataset_id=dataset.id):
if variant_set.name == "phase3-release":
release = variant_set
else:
functional = variant_set
all_call_sets = list(c.search_call_sets(release.id))
call_set_ids = []
for call_set in all_call_sets:
call_set_ids.append(str(call_set.id))
example_variant = c.search_variants(variant_set_id=release.id, start=10000, end=11000, reference_name=chr1.name, call_set_ids=call_set_ids).next()
print("Variant name: {}".format(example_variant.names[0]))
print("Start: {}, End: {}".format(example_variant.start, example_variant.end))
print("Reference bases: {}".format(example_variant.reference_bases))
print("Alternate bases: {}".format(example_variant.alternate_bases))
print("Number of calls: {}".format(len(example_variant.calls)))
print(example_variant.calls[0])
total = 0
count = 0
for call in example_variant.calls:
total += 1
count += call.genotype[0] or call.genotype[1]
print("{}/{} participants with this variant".format(count, total))
print(float(count) / float(total))
annotation_set = c.search_variant_annotation_sets(variant_set_id=functional.id).next()
annotation = c.search_variant_annotations(
variant_annotation_set_id=annotation_set.id,
start=example_variant.start,
end=example_variant.end,
reference_name=chr1.name).next()
print(annotation.transcript_effects[0].effects[0].term)
gencode = c.search_feature_sets(dataset_id=dataset.id).next()
print(gencode)
gene = c.search_features(
feature_set_id=gencode.id,
start=10000,
end=12000,
reference_name="chr1",
feature_types=['gene']).next()
print("Gene name: {}".format(gene.attributes.vals['gene_name'].values[0]))
print("Start: {}, End: {}".format(gene.start, gene.end))
limit = 2
iterator = 0
read_group_set = c.search_read_group_sets(dataset_id=dataset.id).next()
print("Read group set : {}".format(read_group_set.name))
for read_group in read_group_set.read_groups:
sequence = c.search_reads(read_group_ids=[read_group.id], start=0, end = 1000000, reference_id=references[0].id).next()
print("Read group name: {},\nSequence: {}".format(read_group.name, sequence.aligned_sequence))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will continue to refer to this client object for accessing the remote server.
Step2: Access the reference set
Step3: With the reference set saved to a variable, we will now request the available references. This is the list of contigs for which we can request reference bases.
Step4: Here, we print the names of the available references. These reference names are used in the variants/search API. By selecting one of the references we can craft a ListBases request. Here, we ask for the 1000 bases between 10,000 and 11,000 on the first chromosome.
Step5: List Variant Sets
Step6: There are two variant sets currently being made available by this server instance. release contains the calls for the each participant and functional_annotation provides details of the effects of these variants created using the Variant Effect Predictor.
Step7: Variant calls
Step8: This tells us that for the participant HG00096 the variant in question was observed on the first haplotype ("genotype"
Step9: Variant annotations
Step10: We can now search for the range that includes our example variant to discover relevant annotations.
Step11: Here we have found the annotation for our example variant and have found it has the upstream_gene_variant consequence.
Step12: We can now craft search requests for features to find the nearest gene
Step13: Querying read group set and reads from dataset
|
7,966
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
cpi_apparel = DataReader('CPIAPPNS', 'fred', start='1986')
cpi_apparel.index = pd.DatetimeIndex(cpi_apparel.index, freq='MS')
inf_apparel = np.log(cpi_apparel).diff().iloc[1:] * 1200
inf_apparel.plot(figsize=(15, 5));
# Model that will apply Kalman filter recursions
mod_kf = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12), tolerance=0)
print(mod_kf.k_states)
# Model that will apply Chandrasekhar recursions
mod_ch = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12), tolerance=0)
mod_ch.ssm.filter_chandrasekhar = True
# Model that will apply Kalman filter recursions
mod_kf = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12))
print(mod_kf.k_states)
# Model that will apply Chandrasekhar recursions
mod_ch = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12))
mod_ch.ssm.filter_chandrasekhar = True
res_kf = mod_kf.filter(mod_kf.start_params)
print('Convergence at t=%d, of T=%d total observations' %
(res_kf.filter_results.period_converged, res_kf.nobs))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Although most operations related to state space models rely on the Kalman filtering recursions, in some special cases one can use a separate method often called "Chandrasekhar recursions". These provide an alternative way to iteratively compute the conditional moments of the state vector, and in some cases they can be substantially less computationally intensive than the Kalman filter recursions. For complete details, see the paper "Using the 'Chandrasekhar Recursions' for Likelihood Evaluation of DSGE Models" (Herbst, 2015). Here we just sketch the basic idea.
Step2: We will construct two model instances. The first will be set to use the Kalman filter recursions, while the second will be set to use the Chandrasekhar recursions. This setting is controlled by the ssm.filter_chandrasekhar property, as shown below.
Step3: We time computation of the log-likelihood function, using the following code
Step4: Again, we time computation of the log-likelihood function, using the following code
|
7,967
|
<ASSISTANT_TASK:>
Python Code:
from IPython.lib.display import YouTubeVideo
YouTubeVideo('6O43gOxtaWo', start=14)
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Import Users Data
unames = ['user_id','gender','age','occupation','zip']
users = pd.read_table('data/users.dat', sep = '::', header = None, names = unames, engine='python')
users[:5]
# Import Ratings Data
rnames = ['user_id','movie_id','rating','timestamp']
ratings = pd.read_table('data/ratings.dat', sep = '::', header = None, names = rnames,engine='python')
ratings[:5]
# Import movies Data
mnames = ['movie_id','title','genres']
movies = pd.read_table('data/movies.dat', sep = '::', header = None, names = mnames,engine='python')
movies[:5]
# Merge the data into a single data frame
data = pd.merge(pd.merge(ratings,users),movies)
data[:5]
#Store the data into an HDF5 file
data_hdf = pd.HDFStore('data/movies.h5')
data_hdf['data1'] = data
data_hdf.close()
#check statistics of data
data[['rating','age']].describe()
# Use a pivot table to compute mean ratings by title
mean_ratings = data.pivot_table('rating',index = 'title',aggfunc = 'mean')
# Determine titles with high mean ratings
top_overall_titles = mean_ratings.index[mean_ratings >= 4.5]
#Extract those titles
top_overall_movies = mean_ratings.ix[top_overall_titles]
print 'Total movies with an average ranking of (at least) 4.5 overall:'
print len(top_overall_movies)
print
print 'Examples:'
print top_overall_movies[:5]
# Use a pivot table to compute mean ratings per title, stratified by gender.
mean_ratings = data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
#Determine those title ranked high among females.
top_female_titles = mean_ratings.index[mean_ratings['F'] >= 4.5]
# Extract those titles
top_female_movies = mean_ratings.ix[top_female_titles]
print 'Total movies with an average ranking of (at least) 4.5 among women:'
print len(top_female_movies)
print
print 'Examples (average rankings):'
print top_female_movies[:5]
mean_ratings = data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
# Determine which titles had high average ratings among men
top_male_titles = mean_ratings.index[mean_ratings['M'] >= 4.5]
# Extract those titles
top_male_movies = mean_ratings.ix[top_male_titles]
print 'Total movies with an average ranking of (at least) 4.5 among men:'
print len(top_male_movies)
print
print 'Examples (average rankings):'
print top_male_movies[:5]
# Restrict data to those with raters aged over 30
data_over30 = data.ix[data['age']>30]
# Use a pivot table to compute the median ratings by title on this restricted data
median_ratings = data_over30.pivot_table('rating',index = 'title',columns = ['gender'],aggfunc = 'median')
# Determine which movies had a high median among men and extract those titles
top_male_median_titles = median_ratings.index[median_ratings['M'] >= 4.5]
top_male_median_movies = median_ratings.ix[top_male_median_titles]
print 'Total movies with an median ranking of (at least) 4.5 among men over 30:'
print len(top_male_median_movies)
print
print 'Examples, median scores among people over 30:'
print top_male_median_movies[:5]
# Determine which movies had a high median among men and extract those titles
top_female_median_titles = median_ratings.index[median_ratings['F'] >= 4.5]
top_female_median_movies = median_ratings.ix[top_female_median_titles]
print 'Total movies with an median ranking of (at least) 4.5 among women over 30:'
print len(top_female_median_movies)
print
print 'Examples, median scores among people over 30:'
print top_female_median_movies[:5]
# Determine the overall total ratings and mean ratings per title
popularity_test = data.pivot_table('rating',index = 'title', aggfunc = [len, np.mean])
# Determine the mean ratings per title by gender
gender_popularity_test = data.pivot_table('rating',index = 'title', columns = 'gender', aggfunc = np.mean)
popularity_test[:5]
gender_popularity_test[:5]
# Calculate total number of ratings for each title
ratings_by_title = data.groupby('title').size()
# Determine the average number of total ratings per title
average_total_ratings = float(sum(ratings_by_title))/len(ratings_by_title)  # float to avoid integer truncation
# Determine which titles had above average total ratings and isolate those titles.
high_total_titles = popularity_test.index[popularity_test['len'] >= average_total_ratings]
high_total = popularity_test.ix[high_total_titles]
high_total[:5]
# Determine the average of ALL ratings given by men and by women.
gender_average_ratings = data.pivot_table('rating', index = 'gender',aggfunc = np.mean)
gender_average_ratings
# Determine the titles with above average female ratings and isolate those titles among the movies with above average total ratings.
high_female_titles = gender_popularity_test.index[gender_popularity_test['F'] >= gender_average_ratings['F']]
high_total_female = high_total.ix[high_female_titles]
# Among the above isolated titles, determine those with above average male ratings and isolate those titles.
high_male_titles = gender_popularity_test.index[gender_popularity_test['M'] >= gender_average_ratings['M']]
high_total_female_male = high_total_female.ix[high_male_titles]
# Determine the popular movies, given the definition above.
from numpy import nan as NA
popular_movies = high_total_female_male.dropna(how = 'all')
popular_movies[:5]
# Given the popular movies, determine the 10 most popular.
most_popular_movies = popular_movies.sort_values(by='mean',ascending = False)
most_popular_movies[:10]
# Compute average rating by age group
age_avg_ratings = data.pivot_table('rating', index = 'age',aggfunc = np.mean)
age_avg_ratings
# Compute weighted average by weighting each rating by the total number of ratings that individual submits
avg_by_user = data.pivot_table('rating',index = ['age','user_id'], aggfunc = [ len , np.mean])
avg_by_user[:10]
avg_ratings = np.mean(avg_by_user['len'])
avg_by_user['weight'] = avg_by_user['len']/avg_ratings
avg_by_user['weighted_mean'] = avg_by_user['mean']*avg_by_user['weight']
age_avg_weighted_ratings = avg_by_user.pivot_table('weighted_mean', index = avg_by_user.index.droplevel(1), aggfunc = np.mean)
age_avg_weighted_ratings
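The weighting above scales each user's mean rating by (ratings submitted) / (average ratings per user), so prolific raters count more. With hypothetical numbers, the computation for one age group reduces to:

```python
# (ratings submitted, mean rating) for three hypothetical users in one age group
users = [(10, 4.0), (40, 3.0), (10, 5.0)]
avg_count = sum(n for n, _ in users) / float(len(users))  # 20.0 ratings per user
weighted_means = [(n / avg_count) * m for n, m in users]  # weight = n / avg_count
age_group_score = sum(weighted_means) / len(weighted_means)
print(age_group_score)  # 3.5 -- the heavy rater pulls the score toward 3.0
```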
# Compute average age per rating
avg_age_ratings = data.pivot_table('age', index = 'rating',aggfunc = np.mean)
avg_age_ratings
age_counts = data.pivot_table('title', index='age', columns='rating', aggfunc='count')
age_counts.rename(index={1: 'Under 18', 18: '18-24', 25: '25-34', 35: '35-44',
45: '45-49', 50: '50-55', 56: '56+'}, inplace=True)
print 'Frequency of Age Groups Ratings'
print age_counts
#normalize
age_counts_norm = age_counts.div(age_counts.sum(1).astype(float), axis=0)
age_counts_norm
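The `.div(age_counts.sum(1), axis=0)` idiom divides each row by its row total, so every row becomes a distribution summing to 1. The same normalization in plain NumPy, with made-up counts:

```python
import numpy as np

counts = np.array([[10., 30., 60.],
                   [25., 25., 50.]])  # hypothetical rating counts per age group
proportions = counts / counts.sum(axis=1, keepdims=True)  # row-normalize
print(proportions)  # each row now sums to 1
```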
# plot percentage of each rate from each age group
age_counts_norm.plot(ylim=[0,0.4],kind='bar', color=['yellow','#E50E14','#ec971f','#00b27f','#5898f1'],title = "Percent of Ratings By Age").legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
import time
timestamps = data['timestamp']
# Time stamps are reported in seconds since epoch. Convert these values to local time, and extract the hour.
hour = [time.localtime(stamp).tm_hour for stamp in timestamps.values]
hour_series = pd.DataFrame(hour, index=data.index)
# Append the hour each rating was reported to the data set.
data['hour'] = hour_series
# Use a pivot table to determine the average overall rating by hour.
avg_by_hour = data.pivot_table('rating',index = 'hour', aggfunc = np.mean)
avg_by_hour
wee_hours_data = data[np.logical_or(data['hour']>= 22,data['hour']<=5)]
wee_hours_5 = wee_hours_data[wee_hours_data['rating']==5]
wee_hours_1 = wee_hours_data[wee_hours_data['rating']==1]
total_5 = data[data['rating']==5]
total_1 = data[data['rating']==1]
wee_hours_5_percent = float(len(wee_hours_5))/len(wee_hours_data)
wee_hours_1_percent = float(len(wee_hours_1))/len(wee_hours_data)
total_5_percent = float(len(total_5))/len(data)
total_1_percent = float(len(total_1))/len(data)
compdat = {'Percent Ratings 5':[wee_hours_5_percent , total_5_percent],
           'Percent Ratings 1':[wee_hours_1_percent, total_1_percent]}
comp = pd.DataFrame(compdat, columns=['Percent Ratings 5','Percent Ratings 1'], index=['Wee Hours','Total'])
comp
#Plot a histogram of the ratings of all movies.
Rating_all=data.pivot_table('title',index='rating',aggfunc='count')
Rating_all.plot(kind='bar', color='#FA5744')
plt.title('Histogram of all ratings')
plt.ylabel('Total number')
#Plot a histogram of the number of ratings each movie received.
Rating_each=data.pivot_table('rating',index='title',aggfunc='count')
Rating_each.hist()
plt.title('Histogram of Number of ratings each movie received')
plt.ylabel('Number of Movies')
plt.xlabel('Number of Ratings')
#Plot a histogram of the average rating for each movie.
Avg_rating_each=data.pivot_table('rating',index='title',aggfunc='mean')
Avg_rating_each.hist(color='orange')
plt.title('Histogram of Average rating for each movie')
plt.ylabel('Number of Movies')
plt.xlabel('Average Rating')
#Plot a histogram of the average rating for movies which are rated more than 100 times.
rating_by_title = data.groupby('title').size()
active_titles = rating_by_title.index[rating_by_title > 100]
avg_ratings_each_active = Avg_rating_each.ix[active_titles]
avg_ratings_each_active.hist(color='red')
plt.title('average rating for movies rated more than 100 times')
plt.ylabel('Number of Movies')
plt.xlabel('Average rating')
# Select the movies with less than half the average number of total ratings.
rating_by_title = data.groupby('title').size()
inactive_titles = rating_by_title.index[rating_by_title <= average_total_ratings/2]
data['Inactive'] = data['title'].isin(inactive_titles)
inactive_data = data[data['Inactive']]
inactive_rating_all=inactive_data.pivot_table('title',index='rating',aggfunc='count')
inactive_rating_all.plot(kind='bar', color='blue')
plt.title('Histogram of ratings of movies \n with less than half the average number of ratings')
plt.ylabel('Total number')
# Select the movies with more than twice the average number of total ratings.
rating_by_title = data.groupby('title').size()
wayactive_titles = rating_by_title.index[rating_by_title >= average_total_ratings*2]
data['wayactive'] = data['title'].isin(wayactive_titles)
wayactive_data = data[data['wayactive']]
wayactive_rating_all=wayactive_data.pivot_table('title',index='rating',aggfunc='count')
wayactive_rating_all.plot(kind='bar', color='blue')
plt.title('Histogram of ratings of movies with \n more than twice the average number of ratings')
plt.ylabel('Total number')
# Extract the year from the title
def extract(string, start='(', stop=')'):
while string.index(stop) - (string.index(start)+1)!= 4:
string = string[:string.index(start)] + string[string.index(stop)+1:]
return string[string.index(start)+1:string.index(stop)]
titles = data['title']
year = [int(extract(title)) for title in titles]
year_series = pd.DataFrame(year, index=data.index)
data['year'] = year_series
data[:5]
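The `extract` helper above repeatedly deletes parenthesized groups that are not exactly 4 characters long (alternate titles like "(a.k.a. ...)") until only the year group remains. A quick sanity check with illustrative titles:

```python
def extract(string, start='(', stop=')'):
    # Drop parenthesized groups that are not 4 characters long, then
    # return the contents of the remaining 4-character group (the year).
    while string.index(stop) - (string.index(start) + 1) != 4:
        string = string[:string.index(start)] + string[string.index(stop) + 1:]
    return string[string.index(start) + 1:string.index(stop)]

print(extract('Toy Story (1995)'))             # 1995
print(extract('Seven (a.k.a. Se7en) (1995)'))  # 1995
```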
year_array = list(set(data['year'].values))
# Unweighted mean over the distinct release years (not weighted by number of ratings)
average_year = int(float(sum(year_array))/len(year_array))
average_year
old_data = data[data['year']<= average_year]
old_rating_all=old_data.pivot_table('title',index='rating',aggfunc='count')
old_rating_all.plot(kind='bar', color='#FA5744')
plt.title('Histogram of ratings of early movies')
plt.ylabel('Total number')
newer_data = data[data['year'] > average_year]  # strictly greater, so boundary-year movies are not double counted
new_rating_all=newer_data.pivot_table('title',index='rating',aggfunc='count')
new_rating_all.plot(kind='bar', color='#00b27f')
plt.title('Histogram of ratings of newer movies')
plt.ylabel('Total number')
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# Among the 6040 users in total, how many are female and how many are male?
users.groupby('gender').size()
#Among the 1000209 rating records in total, how many were made by women and how many by men?
data.groupby('gender').size()
# Use a pivot table to compute mean ratings per title by gender
mean_ratings = data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
# Scatter this data.
plt.scatter(mean_ratings['M'], mean_ratings['F'])
plt.title('Average Ratings by Movie')
plt.ylabel('Average female rating')
plt.xlabel('Average male rating')
# Determine titles with more than 200 total ratings.
ratings_by_title = data.groupby('title').size()
active_titles = ratings_by_title.index[ratings_by_title > 200]
# Extract these titles
over_200_mean_ratings = mean_ratings.ix[active_titles]
#Produce scatter plot
plt.scatter(over_200_mean_ratings['M'], over_200_mean_ratings['F'])
plt.title('Average Ratings by Movie, \n Among movies rated more than 200 times')
plt.ylabel('Average female rating')
plt.xlabel('Average male rating')
# Compute the correlation coefficient
print 'correlation coefficient between average male and female ratings: {0}'.format(mean_ratings.M.corr(mean_ratings.F))
# Based on scatter plots above, it is clear that men and women tend to agree more when the movies have a higher total number
# of ratings. Calculate the correlation coefficient in this case.
print 'correlation coefficient between average male and female ratings among movies with over 200 ratings: {0}'.format(
over_200_mean_ratings.M.corr(over_200_mean_ratings.F))
# Given this observed increase in the correlation coefficient, we now compute the correlation coefficient as a function of the
# total number of ratings:
ratings_by_title = data.groupby('title').size()
mean_ratings = data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
i = 1
IND = []
RAT = []
while i < max(ratings_by_title):
    titles = ratings_by_title.index[np.logical_and(ratings_by_title >= i, ratings_by_title < 2*i)]
    subset_mean_ratings = mean_ratings.ix[titles]
    correl = subset_mean_ratings.M.corr(subset_mean_ratings.F)
    IND.append('Total ratings between {0} and {1}'.format(i, 2*i))
    RAT.append(correl)
    i = 2*i
correl_comp = pd.Series(RAT, index=IND)
correl_comp.index.name = 'Total number of Ratings'
correl_comp.name = 'Correlation coefficient between average Male and Female Ratings per Movie'
correl_comp
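The loop above groups titles into doubling intervals [1, 2), [2, 4), [4, 8), ... of total rating counts. The interval lower bounds alone can be generated like so (a sketch of the bucketing scheme):

```python
def doubling_edges(max_count):
    # Lower bounds 1, 2, 4, ... strictly below max_count
    edges = []
    i = 1
    while i < max_count:
        edges.append(i)
        i *= 2
    return edges

print(doubling_edges(100))  # [1, 2, 4, 8, 16, 32, 64]
```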
fives_data = data[data['rating']==5]
five_ratings = fives_data.pivot_table('rating', index = 'title', columns = 'gender', aggfunc = 'count')
total_ratings = data.pivot_table('rating',index ='title',columns = 'gender',aggfunc = 'count')
fives_percent = pd.DataFrame(index = five_ratings.index)
fives_percent['M'] = five_ratings['M']/total_ratings['M']
fives_percent['F'] = five_ratings['F']/total_ratings['F']
print 'correlation coefficient between percent ratings of 5 by male and female per title: {0}'.format(fives_percent.M.corr(fives_percent.F))
over_200_fives_percent = fives_percent.ix[active_titles]
print 'correlation coefficient between percent ratings of 5 by male and female among titles with more than 200 ratings: {0}'.format(over_200_fives_percent.M.corr(over_200_fives_percent.F))
plt.scatter(over_200_fives_percent['M'], over_200_fives_percent['F'])
plt.title('Percent ratings 5 by Movie, \n Among movies rated more than 200 times')
plt.ylabel('Percent 5, female')
plt.xlabel('Percent 5, male')
low_data = data[data['rating']<= 2]
low_ratings = low_data.pivot_table('rating', index = 'title', columns = 'gender', aggfunc = 'count')
total_ratings = data.pivot_table('rating',index ='title',columns = 'gender',aggfunc = 'count')
low_percent = pd.DataFrame(index = low_ratings.index)
low_percent['M'] = low_ratings['M']/total_ratings['M']
low_percent['F'] = low_ratings['F']/total_ratings['F']
print 'correlation coefficient between percent ratings of 1 or 2 by male and female per title: {0}'.format(low_percent.M.corr(low_percent.F))
over_200_low_percent = low_percent.ix[active_titles]
print 'correlation coefficient between percent ratings of 1 or 2 by male and female among titles with more than 200 ratings: {0}'.format(over_200_low_percent.M.corr(over_200_low_percent.F))
plt.scatter(over_200_low_percent['M'], over_200_low_percent['F'])
plt.title('Percent low ratings by Movie, \n Among movies rated more than 200 times')
plt.ylabel('Low percent, female')
plt.xlabel('Low percent, male')
# Frequency of each rating, men vs. women
gender_counts = data.pivot_table('title', index='gender', columns='rating', aggfunc='count')
print 'Frequency of Ratings, Men vs. Women'
print gender_counts
#normalize to sum to 1, giving us the percent of each rating given by men and women.
gender_counts_norm = gender_counts.div(gender_counts.sum(1).astype(float), axis=0)
gender_counts_norm
gender_counts_norm.plot(kind='bar')
print
print 'Percent of each Rating, men vs women'
print gender_counts_norm
# Calculate the correlation coefficient between the male and female rating distributions.
gender_counts_norm.ix['M'].corr(gender_counts_norm.ix['F'])
import time
# Convert time stamps to local time and extract the hour.
timestamps = data['timestamp']
hour = [time.localtime(stamp).tm_hour for stamp in timestamps.values]
hour_series = pd.DataFrame(hour, index=data.index)
data['hour'] = hour_series
# Isolate data for ratings submitted between 10PM and 5AM local time
wee_hours_data = data[np.logical_or(data['hour']>= 22,data['hour']<=5)]
# Determine the average ratings per title by gender during these late-night hours.
wee_hours_mean_ratings = wee_hours_data.pivot_table('rating', index = 'title', columns = 'gender', aggfunc = np.mean)
wee_hours_mean_ratings[:5]
#Calculate the correlation coefficient.
print 'Correlation coefficient between average male and female ratings between 10PM and 5AM: {0}'.format(
wee_hours_mean_ratings.M.corr(wee_hours_mean_ratings.F))
# We already know that men and women tend to disagree on movies with lower total ratings. Segment from the late-night data those with
# high total numbers of ratings.
wee_hours_over_200_mean_ratings = wee_hours_mean_ratings.ix[active_titles]
wee_hours_over_200_mean_ratings[:5]
#Compute Correlation Coefficient
'Correlation coefficient between average male and female ratings between 10PM and 5AM, among movies with at least 200 total ratings: {0}'.format(wee_hours_over_200_mean_ratings.M.corr(wee_hours_over_200_mean_ratings.F))
# Determine which movies have "Comedy" listed within its genres.
genres = data['genres']
all_genres = [string.split('|') for string in genres]
comedy_truth = [ 'Comedy' in genres for genres in all_genres]
comedy_series = pd.DataFrame(comedy_truth, index=data.index)
data['comedy'] = comedy_series
comedy_data = data.ix[data['comedy'] == True]
# Determine comedies with at least 100 ratings
comedy_ratings_by_title = comedy_data.groupby('title').size()
comedy_active_titles = comedy_ratings_by_title.index[comedy_ratings_by_title > 100]
# Extract these titles
comedy_mean_ratings = comedy_data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
active_comedy_mean_ratings = comedy_mean_ratings.ix[comedy_active_titles]
active_comedy_mean_ratings[:10]
# Compute correlation between average men's and women's ratings.
active_comedy_mean_ratings.M.corr(active_comedy_mean_ratings.F)
from sklearn import cross_validation, linear_model, feature_selection, metrics
# Train a linear model to examine predictability
# Can't have any NaN values for linear regression.
active_comedy_mean_ratings = active_comedy_mean_ratings.dropna()
# Select out our predictor columns and our response columns
X = active_comedy_mean_ratings.ix[:,['M']]
y = active_comedy_mean_ratings.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Plot the data and the model
plotX = np.linspace(0,5,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'o', color='#FA5744')
plt.plot(X_test,y_test,'o', color='#00b27f')
plt.plot(plotX,plotY,'-', color='#5898f1')
plt.title('Average Rating of Comedies')
plt.ylabel('Female Average')
plt.xlabel('Male Average')
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
comedy_fives_data = comedy_data[comedy_data['rating']==5]
comedy_gender_fives = comedy_fives_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
comedy_gender_totals = comedy_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
comedy_gender_percents = comedy_gender_fives / comedy_gender_totals
comedy_gender_percents = comedy_gender_percents.ix[active_titles]
comedy_gender_percents.M.corr(comedy_gender_percents.F)
# Train a linear model to examine predictability
# Can't have any NaN values for linear regression.
comedy_gender_percents = comedy_gender_percents.dropna()
# Select out our predictor columns and our response columns
X = comedy_gender_percents.ix[:,['M']]
y = comedy_gender_percents.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Plot the data and the model
plotX = np.linspace(0,1,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'o', color='#FA5744')
plt.plot(X_test,y_test,'o', color='#00b27f')
plt.plot(plotX,plotY,'-', color='#5898f1')
plt.title('Percent 5 Rating of Comedies')
plt.ylabel('Female Average')
plt.xlabel('Male Average')
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
comedy_low_data = comedy_data[comedy_data['rating']<=2]
comedy_low = comedy_low_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
comedy_totals = comedy_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
comedy_low_percents = comedy_low / comedy_totals
comedy_low_percents = comedy_low_percents.ix[active_titles]
comedy_low_percents.M.corr(comedy_low_percents.F)
# Train a linear model to determine predictability
# Can't have any NaN values for linear regression.
comedy_low_percents = comedy_low_percents.dropna()
# Select out our predictor columns and our response columns
X = comedy_low_percents.ix[:,['M']]
y = comedy_low_percents.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Plot the data and the model
plotX = np.linspace(0,1,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'o', color='#FA5744')
plt.plot(X_test,y_test,'o', color='#00b27f')
plt.plot(plotX,plotY,'-', color='#5898f1')
plt.title('Percent 1 or 2 Rating of Comedies')
plt.ylabel('Female Average')
plt.xlabel('Male Average')
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
# Extract those movies made in the last 10 years of those available.
newest_data = data[data['year']>= 1990]
# Use a pivot table to compute mean ratings per title by gender
newest_mean_ratings = newest_data.pivot_table('rating',index = 'title',columns = 'gender',aggfunc = 'mean')
over_200_newest_mean_ratings = newest_mean_ratings.ix[active_titles]
# Scatter this data.
plt.scatter(over_200_newest_mean_ratings['M'], over_200_newest_mean_ratings['F'])
plt.title('Average Ratings by Movie, after 1990')
plt.ylabel('Average female rating')
plt.xlabel('Average male rating')
print 'correlation coefficient between average male and female ratings per title: {0}'.format(
over_200_newest_mean_ratings.M.corr(over_200_newest_mean_ratings.F))
# Compute the percent 5 ratings by males and females for movies released in this 10 year window.
new_fives_data = newest_data[newest_data['rating']==5]
year_gender_fives = new_fives_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
year_gender_totals = newest_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
year_gender_percents = year_gender_fives / year_gender_totals
year_gender_percents = year_gender_percents.ix[active_titles]
# Scatter this data.
plt.scatter(year_gender_percents['M'], year_gender_percents['F'])
plt.title('Percent ratings 5 by Movie, after 1990')
plt.ylabel('Female percent 5')
plt.xlabel('Male percent 5')
print 'Correlation coefficient between percent ratings of 5 by males and females, for movies released after 1990: {0}'.format(
year_gender_percents.M.corr(year_gender_percents.F))
new_low_data = newest_data[newest_data['rating']<=2]
year_low = new_low_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
year_totals = newest_data.pivot_table('rating', index='title', columns='gender', aggfunc='count')
year_low_percents = year_low / year_totals
year_low_percents = year_low_percents.ix[active_titles]
plt.scatter(year_low_percents['M'], year_low_percents['F'])
plt.title('Percent ratings 1 or 2 by Movie, after 1990')
plt.ylabel('Female percent 1 or 2')
plt.xlabel('Male percent 1 or 2')
print 'Correlation between percent ratings of 1 or 2 by males and females, for movies released after 1990: {0}'.format(
year_low_percents.M.corr(year_low_percents.F))
# Train a linear model to predict average ratings between genders.
from sklearn import cross_validation, linear_model, feature_selection, metrics
# Can't have any NaN values for linear regression.
over_200_newest_mean_ratings = over_200_newest_mean_ratings.dropna()
# Select out our predictor columns and our response columns
X = over_200_newest_mean_ratings.ix[:,['M']]
y = over_200_newest_mean_ratings.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Plot the data and the model
plotX = np.linspace(0,5,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'ro')
plt.plot(X_test,y_test,'go')
plt.plot(plotX,plotY,'b-')
plt.title('Average Ratings by Movie, after 1990')
plt.ylabel('Average female rating')
plt.xlabel('Average male rating')
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
# Train a linear model to predict percent of ratings given as 5 per-movie between genders.
# Can't have any NaN values for linear regression.
year_gender_percents = year_gender_percents.dropna()
# Select out our predictor columns and our response columns
X = year_gender_percents.ix[:,['M']]
y = year_gender_percents.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Plot the data and the model
plotX = np.linspace(0,1,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'ro')
plt.plot(X_test,y_test,'go')
plt.plot(plotX,plotY,'b-')
plt.title('Percent 5 Ratings by Movie, after 1990')
plt.ylabel('Female % 5')
plt.xlabel('Male % 5')
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
# Train a linear model to predict percent of ratings given as 1 or 2 per-movie between genders.
# Can't have any NaN values for linear regression.
year_low_percents = year_low_percents.dropna()
# Select out our predictor columns and our response columns
X = year_low_percents.ix[:,['M']]
y = year_low_percents.ix[:,['F']]
# Split the data into training data and testing data
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Plot the data and the model
plotX = np.linspace(0,1,100)
plotY = reg.predict(np.matrix(plotX).T)
plt.plot(X_train,y_train,'ro')
plt.plot(X_test,y_test,'go')
plt.plot(plotX,plotY,'b-')
plt.title('Percent 1 or 2 Ratings by Movie, after 1990')
plt.ylabel('Female % 1 or 2')
plt.xlabel('Male % 1 or 2')
# Compute the slope and intercept of the linear model
print reg.intercept_
# Beta_1
print reg.coef_
# Compute testing and training error.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
#Convert genres into 18 dummies. The new dataset has 1000209 rows x 28 columns in total.
#generate 18 dummy variables for movie genres
genre_iter=(set(x.split('|')) for x in movies.genres)
genres=sorted(set.union(*genre_iter))
dummies=pd.DataFrame(np.zeros((len(movies), len(genres))), columns=genres)
for i, gen in enumerate(movies.genres):
dummies.ix[i,gen.split('|')]=1
movies_windic=movies.join(dummies)
movies_windic.ix[0]
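The dummy construction above is dense; the idea — union all pipe-separated genre strings, then emit one row of 0/1 flags per movie — can be sketched without pandas (the titles and genres here are hypothetical):

```python
genre_strings = ['Animation|Comedy', 'Comedy|Romance', 'Drama']  # hypothetical movies
all_genres = sorted(set.union(*(set(s.split('|')) for s in genre_strings)))
dummies = [[1 if g in s.split('|') else 0 for g in all_genres]
           for s in genre_strings]
print(all_genres)  # ['Animation', 'Comedy', 'Drama', 'Romance']
print(dummies)     # [[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
```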
# newdata has 1000209 rows x 28 columns in total
newdata = pd.merge(pd.merge(pd.merge(ratings,users),movies), movies_windic)
newdata.columns
#How many movies for each genre are in this dataset?
moviegenre=movies_windic.copy()  # copy so the in-place drop below does not mutate movies_windic
moviegenre.drop(moviegenre.columns[[0,1,2]],axis=1,inplace=True)
moviegenre.sum().plot(kind='bar',color='g')
plt.title('Number of Movies in Each Genre')
#Total number of ratings received for each type of movie
genres_rating_received=newdata.copy()  # copy so the in-place drop below does not mutate newdata
genres_rating_received.drop(genres_rating_received.columns[[0,1,2,3,4,5,6,7,8,9]], axis=1, inplace=True)
genres_rating_received.sum().plot(kind='bar')
plt.title('Number of total ratings for Movies in each genre')
# Average number of ratings received per movie in each genre
average_ratings_genre = genres_rating_received.sum()/moviegenre.sum()
average_ratings_genre
# Percent of movies watched by each gender classified as comedies
genres = data['genres']
all_genres = [string.split('|') for string in genres]
comedy_truth = [ 'Comedy' in genres for genres in all_genres]
comedy_series = pd.DataFrame(comedy_truth, index=data.index)
data['comedy'] = comedy_series
comedy_data = data.ix[data['comedy'] == True]
comedy_gender_counts = comedy_data.pivot_table('rating', index = 'gender', aggfunc = 'count')
total_gender_counts = data.pivot_table('rating', index = 'gender', aggfunc = 'count')
gender_counts = pd.concat([total_gender_counts,comedy_gender_counts],axis = 1)
gender_counts.columns = ['total_gender_counts', 'comedy_gender_counts']
gender_counts['comedy_gender_percent'] = gender_counts['comedy_gender_counts']/gender_counts['total_gender_counts']
gender_counts
import time
# convert timestamps to localized hours
timestamps = data['timestamp']
hours = [time.localtime(timestamp).tm_hour for timestamp in timestamps.values]
hour_series = pd.DataFrame(hours, index=data.index)
data['hour'] = hour_series
genres = data['genres']
all_genres = [string.split('|') for string in genres]
drama_truth = [ 'Drama' in genres for genres in all_genres]
drama_series = pd.DataFrame(drama_truth, index=data.index)
data['drama'] = drama_series
drama_data = data.ix[data['drama'] == True]
drama_ratings_per_hours=drama_data.pivot_table('title', index='hour', columns = 'gender', aggfunc='count')
drama_ratings_per_hours.plot(kind='bar', color=['#E50E14','#5898f1']).legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Dramas Rated per Hour')
plt.ylabel('Count')
plt.xlabel('Hour')
movie_ratings_per_hours=data.pivot_table('title', index='hour', columns = 'gender', aggfunc='count')
movie_ratings_per_hours.plot(kind='bar', color=['#E50E14','#5898f1']).legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Total Movies Rated per Hour')
plt.ylabel('Count')
plt.xlabel('Hour')
percent_dramas_per_hours = drama_ratings_per_hours/movie_ratings_per_hours
percent_dramas_per_hours.plot(kind='bar', color=['#E50E14','#5898f1']).legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Percent Dramas Rated Per Hour')
plt.ylabel('Percent')
plt.xlabel('Hour')
# Comedy ratings by occupation: average rating, total count, and percent of all ratings
genres = data['genres']
all_genres = [string.split('|') for string in genres]
comedy_truth = [ 'Comedy' in genres for genres in all_genres]
comedy_series = pd.DataFrame(comedy_truth, index=data.index)
data['comedy'] = comedy_series
comedy_data = data.ix[data['comedy'] == True]
job_avg_ratings = comedy_data.pivot_table('rating', index='occupation', aggfunc=np.mean)
job_avg_ratings.rename(index={0:'other', 1:'academic/educator',2: 'artist',3: 'clerical/admin',4: 'college/grad student',
5: 'customer service',6: 'doctor/health care',7:'executive/managerial',8:'farmer',
9: 'homemaker',10: 'K-12 student',11: 'lawyer',12 :'programmer',13: 'retired',
14:'sales/marketing',15:'scientist',16: 'self-employed',17: 'technician/engineer',
18: 'tradesman/craftsman',19 :'unemployed',20: 'writer'}, inplace=True)
print job_avg_ratings
print "RANGE: %s" % (job_avg_ratings.max() - job_avg_ratings.min())
job_avg_ratings.plot(kind='bar', color='#00b27f')
plt.title('Average Comedy Rating vs. Occupation')
plt.xlabel('Occupation')
plt.ylabel('Average Rating')
job_total_comedy_ratings = comedy_data.pivot_table('title', index='occupation', aggfunc='count')
job_total_comedy_ratings.rename(index={0:'other', 1:'academic/educator',2: 'artist',3: 'clerical/admin',4: 'college/grad student',
5: 'customer service',6: 'doctor/health care',7:'executive/managerial',8:'farmer',
9: 'homemaker',10: 'K-12 student',11: 'lawyer',12 :'programmer',13: 'retired',
14:'sales/marketing',15:'scientist',16: 'self-employed',17: 'technician/engineer',
18: 'tradesman/craftsman',19 :'unemployed',20: 'writer'}, inplace=True)
print job_total_comedy_ratings
job_total_comedy_ratings.plot(kind='bar', color='#ec971f')
plt.title('Comedies Rated vs. Occupation')
plt.xlabel('Occupation')
plt.ylabel('Number of Ratings')
job_total_ratings = data.pivot_table('title', index='occupation', aggfunc='count')
job_total_ratings.rename(index={0:'other', 1:'academic/educator',2: 'artist',3: 'clerical/admin',4: 'college/grad student',
5: 'customer service',6: 'doctor/health care',7:'executive/managerial',8:'farmer',
9: 'homemaker',10: 'K-12 student',11: 'lawyer',12 :'programmer',13: 'retired',
14:'sales/marketing',15:'scientist',16: 'self-employed',17: 'technician/engineer',
18: 'tradesman/craftsman',19 :'unemployed',20: 'writer'}, inplace=True)
job_percent_comedy_ratings = job_total_comedy_ratings / job_total_ratings
job_percent_comedy_ratings.plot(kind='bar', color='#E50E14')
plt.title('Percent Comedies vs. Occupation')
plt.xlabel('Occupation')
plt.ylabel('Percent of all movies watched')
# histogram of age group vs adventure genre
genres = data['genres']
all_genres = [string.split('|') for string in genres]
adventure_truth = [ 'Adventure' in genres for genres in all_genres]
adventure_series = pd.DataFrame(adventure_truth, index=data.index)
data['adventure'] = adventure_series
adventure_data = data.ix[data['adventure'] == True]
adventure_ratings_per_age = adventure_data.pivot_table('title', index='age', columns='gender', aggfunc='count')
adventure_ratings_per_age.rename(index={1: 'Under 18', 18: '18-24', 25: '25-34', 35: '35-44',
45: '45-49', 50: '50-55', 56: '56+'}, inplace=True)
adventure_ratings_per_age.plot(kind='bar', color=['#E50E14','#5898f1'])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Adventure Movies Rated per Age Group')
plt.xlabel('Age Group')
plt.ylabel('Count')
total_per_age = data.pivot_table('title', index='age', columns='gender', aggfunc='count')
total_per_age.rename(index={1: 'Under 18', 18: '18-24', 25: '25-34', 35: '35-44',
45: '45-49', 50: '50-55', 56: '56+'}, inplace=True)
adventure_percent = adventure_ratings_per_age / total_per_age
adventure_percent.plot(kind='bar', color=['#E50E14','#5898f1'])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Percent Adventure Movies per Age Group')
plt.xlabel('Age Group')
plt.ylabel('Percent')
print adventure_ratings_per_age
print total_per_age
print adventure_percent
# avg rating vs. gender
gender_avg_ratings = data.pivot_table('rating', index = 'gender',aggfunc = np.mean)
gender_avg_ratings
gender_avg_ratings.plot(kind='barh', color='yellow')
plt.title('avg rating vs. gender')
# avg rating vs. occupation
job_avg_ratings = data.pivot_table('rating', index = 'occupation',aggfunc = np.mean)
job_avg_ratings
job=job_avg_ratings.rename(index={0:'other',1:'academic/educator',2: 'artist',3: 'clerical/admin',4: 'college/grad student',
5 :'customer service',6: 'doctor/health care',7:'executive/managerial',8:'farmer',
9: 'homemaker',10: 'K-12 student',11: 'lawyer',12 :'programmer',13: 'retired',
14:'sales/marketing',15:'scientist',16: 'self-employed',17: 'technician/engineer',
18: 'tradesman/craftsman',19 :'unemployed',20: 'writer'})
job.plot(kind='bar')
plt.title('avg rating vs. occupation')
plt.ylabel('average rating')
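The step list for this notebook notes that similar *mean* ratings between groups need not imply similar rating *distributions*. A minimal, self-contained sketch of that point on hypothetical data (independent of the MovieLens frame above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ratings for one title: both groups average ~3,
# but one group is uniform while the other is polarized.
men = np.full(200, 3)                 # everyone rates 3
women = rng.choice([1, 5], size=200)  # roughly half rate 1, half rate 5
print(men.mean(), women.mean())       # means are close (3 vs ~3)
print(men.std(), women.std())         # spreads differ sharply (0 vs ~2)
```

So a scatter plot or correlation of per-title means alone cannot distinguish these two cases.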
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TEAM Members
Step2: Compute some Summary Statistics for the data
Step3: How many movies have an average rating over 4.5 overall?
Step4: How many movies have an average rating over 4.5 among men? How about women?
Step5: How many movies have a median rating over 4.5 among men over age 30? How about women over age 30?
Step6: What are the ten most popular movies?
Step7: Make some conjectures about how easy various groups are to please? Support your answers with data!
Step8: Conclusion
Step9: Conclusion
Step10: Plot a histogram of the number of ratings each movie recieved.
Step11: Plot a histogram of the average rating for each movie.
Step12: Plot a histogram of the average rating for movies which are rated more than 100 times.
Step13: Make some conjectures about the distribution of ratings? Support your answers with data!
Step14: Conjecture 2
Step15: *------------------------
Step16: Make a scatter plot of men versus women and their mean rating for every movie.
Step17: Make a scatter plot of men versus women and their mean rating for movies rated more than 200 times.
Step18: Compute the correlation coefficent between the ratings of men and women.
Step19: This data seems to be somewhat misleading. Based on the high correlation values, it seems that the ratings between men and women are similar, especially among movies watched more than 200 times. However, this is the correlation between the MEAN rating per title between men and women. What this is saying is that ON AVERAGE, men and women rate movies similarly. This doesn't indicate that the ratings themselves are actually similar! For example, there could be a movie in which both men and women have an average rating of 3, but women rate it as either a 1 or a 5 and all men rate it as 3. We need to explore the data more to understand if the ratings between men and women are actually similar.
Step20: Similarly, we perform the same analysis for number of ratings of 1 or 2
Step21: This indicates that male and females tend to agree on average and in distribution (especially on movies rated more than 200 times). This does not, however, indicate we can predict a single male rating given female ratings! The average behavior of the two is similar, but not single instances.
Step22: Conjecture 1.) People rate more similarly when they are tired.
Step23: Conclusion
Step24: As before, we consider the percent ratings of 5 (per title) given by each age group.
Step25: And the percent of low (1 or 2) ratings.
Step26: Conjecture 3.) Men and Women rate similarly on highly-watched movies made most recently.
Step27: We see that, for movies made in the 90's, the correlation coefficient for the percent of ratings given as 5 between males and females is
Step28: *------------------------
Step29: What is the best time of day to recommend a drama for each gender?
Step30: The most dramas are rated at 4 pm (the 16th hour) of the day; therefore, the best time to recommend a drama is likely before 4 pm. To make a more precise determination, instead of our current answer
Step31: When comparing the average rating versus occupation for comedy movies, scientists have the highest average rating (3.687170) followed by retired (3.663825) and clerical/admin (3.601516). Does this mean that those occupations are most likely to enjoy comedies? Possibly, but since we are using mean as our comparison metric and the range of the data is 0.285, we also looked at the number of comedy ratings per occupation.
Step32: From the chart, we can see that even though scientist, retired, and clerical/admin have the highest average rating for comedies, they also have low numbers of ratings, 7771, 4340, and 11870 respectively. In contrast, college/grad student, other, and executive/managerial have significantly more ratings (at least triple) 48672, 46500, and 35784 respectively. Although scientist has the highest average rating, it might be better to recommend comedies to students because while they have a lower average rating, they have almost seven times more ratings than scientists.
Step33: What age group watches the most adventure movies?
Step34: Additional Data Exploration (not used in Problems 1-4)
|
7,968
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from bokeh.charts import TimeSeries, output_notebook, show
# Get data
df = pd.read_csv('data/Land_Ocean_Monthly_Anomaly_Average.csv')
# Process data
df['datetime'] = pd.to_datetime(df['datetime'])
df = df[['anomaly','datetime']]
# Output option
output_notebook()
# Create timeseries chart
t = TimeSeries(df, x='datetime')
# Show chart
show(t)
# Style your timeseries chart
t = TimeSeries(df, x='datetime', xlabel='time', ylabel='Anomaly(ºC)',
xgrid = False, ygrid=True, tools=False, width=950, height=300,
title="Temperature Anomaly(ºC) Monthly Average", palette=["grey"])
# Show new chart
show(t)
# Compute moving average
df['moving_average'] = pd.rolling_mean(df['anomaly'], 12)
# Create chart with moving average
t = TimeSeries(df, x='datetime', xlabel='time', ylabel='Anomaly(ºC)',
xgrid = False, ygrid=True, tools=False, width=950, height=300, legend="bottom_right",
title="Temperature Anomaly(ºC) Monthly Average", palette=["grey", "red"])
# Show chart with moving average
show(t)
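Note that `pd.rolling_mean` was deprecated and removed in later pandas releases; a sketch of the modern equivalent (assuming pandas >= 0.18), shown here on a stand-in series rather than the anomaly column:

```python
import pandas as pd

s = pd.Series(range(1, 25))               # stand-in for df['anomaly']
moving_average = s.rolling(window=12).mean()
print(moving_average.iloc[11])            # first full 12-point window: mean of 1..12 = 6.5
```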
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Exercise
|
7,969
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname = data_path
fname += '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname)
print("Method: %s" % inv['methods'])
print("fMRI prior: %s" % inv['fmri_prior'])
print("Number of sources: %s" % inv['nsource'])
print("Number of channels: %s" % inv['nchan'])
lh_points = inv['src'][0]['rr']
lh_faces = inv['src'][0]['use_tris']
rh_points = inv['src'][1]['rr']
rh_faces = inv['src'][1]['use_tris']
from mayavi import mlab # noqa
mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
mesh = mlab.triangular_mesh(lh_points[:, 0], lh_points[:, 1], lh_points[:, 2],
lh_faces, colormap='RdBu')
mesh.module_manager.scalar_lut_manager.reverse_lut = True
mesh = mlab.triangular_mesh(rh_points[:, 0], rh_points[:, 1], rh_points[:, 2],
rh_faces, colormap='RdBu')
mesh.module_manager.scalar_lut_manager.reverse_lut = True
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show result on 3D source space
|
7,970
|
<ASSISTANT_TASK:>
Python Code:
from pextant.mesh.abstractmesh import NpDataset
import numpy as np
xx,yy= np.mgrid[0:5,0:5]
basic_terrain = NpDataset(0.1*(xx**2+yy**2), resolution=1)
basic_terrain
basic_terrain[1,1]
basic_terrain.get_datapoint(np.array(([1,1],[1.5,1.5])))
from pextant.EnvironmentalModel import GridMesh
from pextant.lib.geoshapely import GeoPoint, LAT_LONG
upper_left_corner = GeoPoint(LAT_LONG, 0, 0) # this will be the north-west corner of the dataset
basic_mesh = GridMesh(upper_left_corner, basic_terrain)
print basic_mesh
upper_left_corner, lower_right_corner = basic_mesh.nw_geo_point, basic_mesh.se_geo_point
import matplotlib.pyplot as plt
plt.matshow(basic_mesh.dataset, cmap='gray_r')
plt.show()
point_in_mesh = GeoPoint(basic_mesh.ROW_COL, 1, 1)
from pextant.lib.geoshapely import GeoEnvelope
model_envelope = GeoEnvelope(point_in_mesh, lower_right_corner)
terrain_model = basic_mesh.loadSubSection(model_envelope)
import matplotlib.patches as patches
plt.matshow(basic_mesh.data, cmap='gray_r')
plt.gca().add_patch(patches.Rectangle(point_in_mesh.to(basic_mesh.ROW_COL)-np.array([0.5,0.5]),basic_mesh.y_size,basic_mesh.x_size, fill=False, hatch='/'))
plt.legend(["terrain_model area"])
plt.show()
terrain_model.slopes
plt.matshow(terrain_model.dataset, cmap='gray_r')
plt.imshow(terrain_model.obstacle_mask(), alpha=0.5, cmap='bwr_r')
plt.text(1.2,2.3,"Steep terrain \n in red", size=15, color="white")
plt.show()
from pextant.EnvironmentalModel import GDALMesh
ames_gridmesh = GDALMesh('Ames.tif')
print ames_gridmesh
ames_model = ames_gridmesh.loadSubSection()
plt.matshow(ames_model.data, cmap='gray_r')
obstacle_transparent = np.ma.masked_array(np.ones_like(ames_model.data), ames_model.slopes<15)
plt.imshow(obstacle_transparent, alpha=0.5, cmap='bwr_r')
plt.show()
from pextant.viz.utils import hillshade
hillshade(ames_model, 5) #5 is used to exaggerate the effect of the hillshade
plt.show()
from pextant.EnvironmentalModel import load_legacy
apollo14_model = load_legacy('Apollo14.txt')
print(apollo14_model)
hillshade(apollo14_model, 1)
plt.show()
from pextant.mesh.triangularmesh import grid_to_tri
apollo14_tri = grid_to_tri(apollo14_model, accuracy=3)
tri = apollo14_tri.data
plt.gca().invert_yaxis()
plt.tripcolor(tri.vertices[:,0], tri.vertices[:,1], tri.faces, facecolors=tri.triangles_center[:,2], cmap='gray_r', alpha=1.)
plt.axis('equal')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This dataset is wrapped around numpy so we can access can easily access entries
Step2: Or access several entries, and even interpolate
Step3: 1.1 GridMesh
Step4: We can read out some basic properties of the mesh, and plot it
Step5: The upper left corner can be accessed, and returns our original anchoring point. The lower right corner is also accessible for convenience
Step6: We can also access the original terrain dataset through the dataset keyword
Step7: GridMesh also stores the local coordinate system of the grid, which can then be converted back and forth to other representations. The two coordinate systems are called ROW_COL and COL_ROW, which allows to define a point given the row and the column.
Step8: 1.2 GridMeshModel
Step9: If no envelope is passed as an argument, the entire dataset is processed. Be careful with this, as it might take up significant memory if a very large dataset is being used.
Step10: Before we used dataset to access the underlying heightmap. For the model we will use the data property, which carries the raw representation of the data.
Step11: 1.3 Model info
Step12: The code below demonstrates a more advanced usage of GeoMesh
Step13: We will use a 0.5 resolution DEM of NASA Ames Roverscape site
Step14: What's different in this representation from when we had a GridMesh with a numpy array is that if we access the dataset we won't get an array. This is because the DEM is still encoded, and won't be decoded until loadSubSection has been called; this is done to limit the memory used when larger DEMs (100s of MB or GB in size) are being used.
Step15: We can display it, including the obstacles in red
Step16: We notice that some of the areas are white; these represent masked locations, that are points with a no data value such as -9999.
Step17: 2.2 From text file
Step18: The LOLA (Lunar Orbiter Laser Altimeter) instrument recently generated a 2 m DEM of the lunar terrain. The data has been post-processed into the format of the legacy code and is displayed below as an example.
Step19: 3. Other Representations
|
7,971
|
<ASSISTANT_TASK:>
Python Code:
## Interactive magics
%matplotlib inline
import sys
import warnings
warnings.filterwarnings('ignore')
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
def strip_derived_rvs(rvs):
'''Convenience fn: remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
def plot_traces_pymc(trcs, varnames=None):
''' Convenience fn: plot traces with overlaid means and values '''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
df.groupby(['alcohol','nomeds']).mean().unstack()
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
kind='count', size=4, aspect=1.5)
fml = 'nsneeze ~ alcohol + antihist + alcohol:antihist' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sd=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
with mdl_fish:
trc_fish = pm.sample(2000, tune=1000, njobs=4)[1000:]
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish, varnames=rvs_fish)
np.exp(pm.df_summary(trc_fish, varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
with pm.Model() as mdl_fish_alt:
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
with mdl_fish_alt:
trc_fish_alt = pm.sample(4000, tune=2000)[2000:]
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
np.exp(pm.df_summary(trc_fish_alt, varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
np.percentile(trc_fish_alt['mu'], [25,50,75])
df['nsneeze'].mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Local Functions
Step2: Generate Data
Step3: View means of the various combinations (poisson mean values)
Step4: Briefly Describe Dataset
Step5: Observe
Step6: 1. Manual method, create design matrices and manually specify model
Step7: Create Model
Step8: Sample Model
Step9: View Diagnostics
Step10: Observe
Step11: Observe
Step12: Sample Model
Step13: View Traces
Step14: Transform coeffs
Step15: Observe
Step16: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of
|
7,972
|
<ASSISTANT_TASK:>
Python Code:
count = 1
for elem in range(1, 3 + 1):
count *= elem
print(count)
from math import factorial as f
f(3)
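Relating to the 32-bit question in the exercise text below: Python integers never overflow (they grow arbitrarily large), but a short sketch can find the largest n whose factorial still fits in a signed 32-bit integer:

```python
from math import factorial

n = 1
while factorial(n + 1) <= 2**31 - 1:  # signed 32-bit max value
    n += 1
print(n, factorial(n))                # 12 479001600  (13! already exceeds 2**31 - 1)
```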
def n_max():
inpt = eval(input("Please enter some values: "))
maximum = max_val(inpt)
print("The largest value is", maximum)
def max_val(ints):
    """Input: collection of ints.
    Returns: maximum of the collection
        int - the max integer.
    """
max = ints[0]
for x in ints:
if x > max:
max = x
return max
assert max_val([1, 2, 3]) == 3
assert max_val([1, 1, 1]) == 1
assert max_val([1, 2, 2]) == 2
n_max()
inpt = eval(input("Please enter three values: "))
list(inpt)
assert compress('AAAADDBBBBBCCEAA') == 'A4D2B5C2E1A2'
# %load ../scripts/compress/compressor.py
def groupby_char(lst):
    """Returns a list of strings containing identical characters.
    Takes a list of characters produced by running split on a string.
    Groups runs (in order sequences) of identical characters into string elements in the list.
    Parameters
    ----------
    Input:
        lst: list
            A list of single character strings.
    Output:
        grouped: list
            A list of strings containing grouped characters.
    """
new_lst = []
count = 1
for i in range(len(lst) - 1): # we range to the second to last index since we're checking if lst[i] == lst[i + 1].
if lst[i] == lst[i + 1]:
count += 1
else:
new_lst.append([lst[i],count]) # Create a lst of lists. Each list contains a character and the count of adjacent identical characters.
count = 1
new_lst.append((lst[-1],count)) # Return the last character (we didn't reach it with our for loop since indexing until second to last).
grouped = [char*count for [char, count] in new_lst]
return grouped
def compress_group(string):
    """Returns a compressed two character string containing a character and a number.
    Takes in a string of identical characters and returns the compressed string
    consisting of the character and the length of the original string.
    Example
    -------
    "AAA"-->"A3"
    Parameters
    ----------
    Input:
        string: str
            A string of identical characters.
    Output:
        compressed_str: str
            A compressed string of length two containing a character and a number.
    """
return str(string[0]) + str(len(string))
def compress(string):
    """Returns a compressed representation of a string.
    Compresses the string by mapping each run of identical characters to a
    single character and a count.
    Ex.
    --
    compress('AAABBCDDD')--> 'A3B2C1D3'.
    Only compresses string if the compression is shorter than the original string.
    Ex.
    --
    compress('A')--> 'A' # not 'A1'.
    Parameters
    ----------
    Input:
        string: str
            The string to compress
    Output:
        compressed: str
            The compressed representation of the string.
    """
try:
split_str = [char for char in string] # Create list of single characters.
grouped = groupby_char(split_str) # Group characters if characters are identical.
compressed = ''.join( # Compress each element of the grouped list and join to a string.
[compress_group(elem) for elem in grouped])
if len(compressed) < len(string): # Only return compressed if compressed is actually shorter.
return compressed
else:
return string
except IndexError: # If our input string is empty, return an empty string.
return ""
except TypeError: # If we get something that's not compressible (including NoneType) return None.
return None
# %load ../scripts/compress/compress_tests.py
# This will fail to run because in wrong directory
from compress.compressor import *
def compress_test():
assert compress('AAABBCDDD') == 'A3B2C1D3'
assert compress('A') == 'A'
assert compress('') == ''
assert compress('AABBCC') == 'AABBCC' # compressing doesn't shorten string so just return string.
assert compress(None) == None
def groupby_char_test():
assert groupby_char(["A", "A", "A", "B", "B"]) == ["AAA", "BB"]
def compress_group_test():
assert compress_group("AAA") == "A3"
assert compress_group("A") == "A1"
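As a point of comparison (not part of the notebook's implementation), the same run-length compression can be sketched with `itertools.groupby` from the standard library:

```python
from itertools import groupby

def compress_gb(s):
    # Group consecutive identical characters and emit char + run length,
    # keeping the original string when compression would not shorten it.
    out = ''.join(ch + str(len(list(run))) for ch, run in groupby(s))
    return out if len(out) < len(s) else s

print(compress_gb('AAABBCDDD'))  # A3B2C1D3
print(compress_gb('A'))          # A  (compression would not shorten it)
```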
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Extend your program to n objects. How many different combinations do I have for 5 objects? How about 15? What is the max number of objects I could calculate for if I was storing the result in a 32 bit integer? What happens if the combinations exceed 32 bits?
Step3: 4. What will the following code yield? Was it what you expected? What's going on here?
Step7: Strategy 1
|
7,973
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.nn import Parameter
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
import pyro.ops.stats as stats
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(1)
# license: Copyright (c) 2014, the Open Data Science Initiative
# license: https://www.elsevier.com/legal/elsevier-website-terms-and-conditions
URL = "https://raw.githubusercontent.com/sods/ods/master/datasets/guo_qpcr.csv"
df = pd.read_csv(URL, index_col=0)
print("Data shape: {}\n{}\n".format(df.shape, "-" * 21))
print("Data labels: {}\n{}\n".format(df.index.unique().tolist(), "-" * 86))
print("Show a small subset of the data:")
df.head()
data = torch.tensor(df.values, dtype=torch.get_default_dtype())
# we need to transpose data to correct its shape
y = data.t()
capture_time = y.new_tensor([int(cell_name.split(" ")[0]) for cell_name in df.index.values])
# we scale the time into the interval [0, 1]
time = capture_time.log2() / 6
# we setup the mean of our prior over X
X_prior_mean = torch.zeros(y.size(1), 2) # shape: 437 x 2
X_prior_mean[:, 0] = time
kernel = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2))
# we clone here so that we don't change our prior during the course of training
X = Parameter(X_prior_mean.clone())
# we will use SparseGPRegression model with num_inducing=32;
# initial values for Xu are sampled randomly from X_prior_mean
Xu = stats.resample(X_prior_mean.clone(), 32)
gplvm = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)
# we use `.to_event()` to tell Pyro that the prior distribution for X has no batch_shape
gplvm.X = pyro.nn.PyroSample(dist.Normal(X_prior_mean, 0.1).to_event())
gplvm.autoguide("X", dist.Normal)
# note that training is expected to take a minute or so
losses = gp.util.train(gplvm, num_steps=4000)
# let's plot the loss curve after 4000 steps of training
plt.plot(losses)
plt.show()
gplvm.mode = "guide"
X = gplvm.X # draw a sample from the guide of the variable X
plt.figure(figsize=(8, 6))
colors = plt.get_cmap("tab10").colors[::-1]
labels = df.index.unique()
X = gplvm.X_loc.detach().numpy()
for i, label in enumerate(labels):
X_i = X[df.index == label]
plt.scatter(X_i[:, 0], X_i[:, 1], c=[colors[i]], label=label)
plt.legend()
plt.xlabel("pseudotime", fontsize=14)
plt.ylabel("branching", fontsize=14)
plt.title("GPLVM on Single-Cell qPCR data", fontsize=16)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Modelling
Step3: Now comes the most interesting part. We know that the observed data $y$ has latent structure
Step4: We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a Parameter so that we can set a prior and guide (variational distribution) for it.
Step5: We will use the autoguide() method from the Parameterized class to set an auto Normal guide for $X$.
Step6: Inference
Step7: After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters X_loc and X_scale. To get a sample from $q(X)$, we need to set the mode of gplvm to "guide".
Step8: Visualizing the result
|
7,974
|
<ASSISTANT_TASK:>
Python Code:
import sys
print('Python version:', sys.version)
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import pandas
print('pandas:', pandas.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
2+2
(50-5*6)/4
sqrt(81)
from math import sqrt
sqrt(81)
import math
math.sqrt(81)
radius = 20
pi = math.pi
area = pi * radius ** 2
area
return = 0
'Hello, World!'
"Hello, World!"
greeting = "Hello, World!"
print(greeting)
print("The area is " + area)
print("The area is " + str(area))
statement = "Hello, " + "World!"
print(statement)
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
days_of_the_week[2]
days_of_the_week[-1]
languages = ["Fortran","C","C++"]
languages.append("Python")
print(languages)
list(range(10))
list(range(2,8))
evens = list(range(0,20,2))
evens
evens[3]
["Today",7,99.3,""]
help(len)
len(evens)
for day in days_of_the_week:
print(day)
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
for i in range(20):
print("The square of ",i," is ",i*i)
for letter in "Sunday":
print(letter)
days_of_the_week[0]
days_of_the_week[0:2]
days_of_the_week[:2]
days_of_the_week[-2:]
workdays = days_of_the_week[1:6]
print(workdays)
day = "Sunday"
abbreviation = day[:3]
print(abbreviation)
numbers = list(range(0,40))
evens = numbers[2::2]
evens
if day == "Sunday":
print("Sleep in")
else:
print("Go to work")
day == "Sunday"
1 == 2
50 == 2*25
3 < 3.14159
1 == 1.0
1 != 0
1 <= 2
1 >= 1
1 is 1.0
[1,2,3] == [1,2,4]
[1,2,3] < [1,2,4]
hours = 5
0 < hours < 24
if day == "Sunday":
print("Sleep in")
elif day == "Saturday":
print("Do chores")
else:
print("Go to work")
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
if day == "Sunday":
print(" Sleep in")
elif day == "Saturday":
print(" Do chores")
else:
print(" Go to work")
bool(1)
bool(0)
bool(["This "," is "," a "," list"])
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
sequence.append(sequence[i-1]+sequence[i-2])
print(sequence)
def fibonacci(sequence_length):
"Return the Fibonacci sequence of length *sequence_length*"
sequence = [0,1]
if sequence_length < 1:
print("Fibonacci sequence only defined for length 1 or greater")
return
if 0 < sequence_length < 3:
return sequence[:sequence_length]
for i in range(2,sequence_length):
sequence.append(sequence[i-1]+sequence[i-2])
return sequence
fibonacci(2)
fibonacci(12)
help(fibonacci)
t = (1,2,'hi',9.0)
t
t[1]
t.append(7)
t[1]=77
('Bob',0.0,21.0)
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
def minmax(objects):
minx = 1e20 # These are set to really big numbers
miny = 1e20
for obj in objects:
name,x,y = obj
if x < minx:
minx = x
if y < miny:
miny = y
return minx,miny
x,y = minmax(positions)
print(x,y)
mylist = [1,2,9,21]
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages["Rick"])
dict(Rick=46,Bob=86,Fred=20)
len(t)
len(ages)
import this
import numpy as np
import scipy as sp
array = np.array([1,2,3,4,5,6])
array
array.shape
mat = np.array([[0,1],[1,0]])
mat
mat2 = np.c_[mat, np.ones(2)]
mat2
mat2.shape
np.zeros((3,3))
np.identity(4)
np.linspace(0,1)
np.linspace(0,1,11)
x = np.linspace(0,2*np.pi)
np.sin(x)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.plot(x,np.sin(x))
0.125*np.identity(3)
np.identity(2) + np.array([[1,1],[1,2]])
np.identity(2)*np.ones((2,2))
np.dot(np.identity(2),np.ones((2,2)))
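Since Python 3.5 the same matrix product can also be written with the `@` operator; a quick sketch, equivalent to the `np.dot` call above:

```python
import numpy as np

a = np.identity(2)
b = np.ones((2, 2))
print(a @ b)                             # matrix product, same result as np.dot(a, b)
print(np.allclose(a @ b, np.dot(a, b)))  # True
```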
v = np.array([3,4])
np.sqrt(np.dot(v,v))
m = np.array([[1,2],[3,4]])
m.T
np.linalg.inv(m)
np.diag([1,2,3,4,5])
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18
"""
data = []
for line in raw_data.splitlines():
words = line.split(',')
data.append(words)
data = np.array(data, dtype=np.float)
data
data[:, 0]
plt.title("Raw Data")
plt.xlabel("Distance")
plt.plot(data[:,0],data[:,1],'bo')
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
params = sp.polyfit(data[:,0],np.log(data[:,1]),1)
a = params[0]
A = np.exp(params[1])
x = np.linspace(1,45)
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
plt.semilogy(x,A*np.exp(a*x),'b-')
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646
"""
data = []
for line in gauss_data.splitlines():
words = line.split(',')
data.append(words)
data = np.array(data, dtype=np.float)
plt.plot(data[:,0],data[:,1],'bo')
def gauss(x,A,a):
return A*np.exp(a*x**2)
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = np.linspace(-1,1)
plt.plot(data[:,0],data[:,1],'bo')
A,a = params
plt.plot(x,gauss(x,A,a),'g-')
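The second value returned by `curve_fit` (named `conv` above) is the parameter covariance matrix; a sketch on synthetic data of turning its diagonal into one-standard-deviation parameter uncertainties:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_model(x, A, a):
    return A * np.exp(a * x**2)

x = np.linspace(-1, 1, 50)
# synthetic data: true A=30, a=-20, plus small Gaussian noise
y = gauss_model(x, 30.0, -20.0) + np.random.default_rng(0).normal(0, 0.1, x.size)
params, cov = curve_fit(gauss_model, x, y, p0=[25.0, -15.0])
perr = np.sqrt(np.diag(cov))  # std-dev estimate for each fitted parameter
print(params, perr)
```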
from random import random
rands = []
for i in range(100):
rands.append(random())
plt.plot(rands)
from random import gauss
grands = []
for i in range(100):
grands.append(gauss(0,1))
plt.plot(grands)
plt.plot(np.random.rand(100))
import pandas as pd
import numpy as np
ser_1 = pd.Series([1, 1, 2, -3, -5, 8, 13])
ser_1
ser_1.values
ser_1.index
ser_2 = pd.Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
ser_2[4] == ser_2['e']
ser_2[['c', 'a', 'b']]
ser_2[ser_2 > 0]
ser_2 * 2
np.exp(ser_2)
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = pd.Series(dict_1)
ser_3
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = pd.Series(dict_1, index=index)
ser_4
pd.isnull(ser_4)
ser_4.isnull()
ser_3 + ser_4
ser_4.name = 'foobarbazqux'
ser_4.index.name = 'label'
ser_4
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = pd.DataFrame(data_1)
df_1
df_2 = pd.DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
df_3 = pd.DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
df_3['state']
df_3.year
df_3.iloc[0]
df_3['unempl'] = np.arange(5)
df_3
unempl = pd.Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
df_3['state_dup'] = df_3['state']
df_3
del df_3['state_dup']
df_3
df_3.T
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = pd.DataFrame(pop)
df_4
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = pd.DataFrame(data_2)
df_5
df_5.index.name = 'year'
df_5
df_5.columns.name = 'state'
df_5
df_5.values
df_3.values
df_3
df_3.reindex(list(reversed(range(0, 6))))
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
df_7 = df_3.drop([0, 1])
df_7
df_7 = df_7.drop('unempl', axis=1)
df_7
df_3
df_3[['pop', 'unempl']]
df_3[:2]
df_3.iloc[1:3]
df_3[df_3['pop'] > 5]
df_3.loc[0:2, 'pop']
df_3
np.random.seed(0)
df_8 = pd.DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = pd.DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
df_10 = df_8.add(df_9, fill_value=0)
df_10
ser_8 = df_10.iloc[0]
df_11 = df_10 - ser_8
df_11
ser_9 = pd.Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
df_11 = np.abs(df_11)
df_11
df_11.apply(sum)
df_11.apply(sum, axis=1)
def func_3(x):
return '%.2f' %x
df_11.applymap(func_3)
df_12 = pd.DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
df_12.sort_index()
df_12.sort_index(axis=1, ascending=False)
df_12.sort_values(by=['d', 'c'])
df_15 = pd.DataFrame(np.random.randn(10, 3),
columns=['a', 'b', 'c'])
df_15['cat1'] = (np.random.rand(10) * 3).round(0)
df_15['cat2'] = (np.random.rand(10)).round(0)
df_15
df_15.sum()
df_15.sum(axis=1)
df_15.mean(axis=0)
df_15['a'].describe()
df_15['cat1'].value_counts()
pd.pivot_table(df_15, index='cat1', aggfunc=np.mean)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I. Python Overview
Step2: (If you're typing this into an IPython notebook, or otherwise using notebook file, you hit shift-Enter to evaluate a cell.)
Step3: or you can simply import the math library itself
Step4: You can define variables using the equals (=) sign
Step5: You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", can contain alphanumeric characters plus underscores ("_"). Certain words, however, are reserved for the language
Step6: The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated interactive shell.
Step7: or double quotes
Step8: Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable
Step9: The print statement is often used for printing character strings
Step10: But it can also print data types other than strings
Step11: In the above snipped, the number 600 (stored in the variable "area") is converted into a string before being printed out.
Step12: If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
Step13: You can access members of the list using the index of that item
Step14: Python lists, like C, but unlike Fortran, use 0 as the index of the first element of a list. Thus, in this example, the 0 element is "Sunday", 1 is "Monday", and so on. If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element
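As a standalone illustration of positive and negative indexing (the list literal below mirrors the one described in the notebook):

```python
days_of_the_week = ["Sunday", "Monday", "Tuesday", "Wednesday",
                    "Thursday", "Friday", "Saturday"]
first = days_of_the_week[0]    # index 0 is the first element: "Sunday"
last = days_of_the_week[-1]    # index -1 counts from the end: "Saturday"
```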
Step15: You can add additional items to the list using the .append() command
Step16: The range() command is a convenient way to make sequential lists of numbers
Step17: Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop)
Step18: The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third command
Step19: Lists do not have to hold the same data type. For example,
Step20: However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below.
Step21: Iteration, Indentation, and Blocks
Step22: This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exits the block.
Step23: The range() command is particularly useful with the for statement to execute loops of a specified length
Step24: Slicing
Step25: This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. We already know that we can use indexing to get the first element of a list
Step26: If we want the list containing the first two elements of a list, we can do this via
Step27: or simply
Step28: If we want the last items of the list, we can do this with negative slicing
Step29: which is somewhat logically consistent with negative indices accessing the last elements of the list.
Step30: Since strings are sequences, you can also do this to them
Step31: If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the range() function specifies the step)
Step32: Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less than 40.
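A minimal, self-contained sketch of that stepped slice (the variable names are illustrative):

```python
numbers = list(range(40))
evens = numbers[2::2]   # start at index 2, go to the end, take every 2nd item
```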
Step33: (Quick quiz
Step34: If we evaluate it by itself, as we just did, we see that it returns a boolean value, False. The "==" operator performs equality testing. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing two variables, the string "Sunday", and whatever is stored in the variable "day", which, in this case, is the other string "Saturday". Since the two strings are not equal to each other, the truth test has the false value.
Step35: We see a few other boolean operators here, all of which which should be self-explanatory. Less than, equality, non-equality, and so on.
Step36: We can do boolean tests on lists as well
Step37: Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests
Step38: If statements can have elif parts ("else if"), in addition to if/else parts. For example
Step39: Of course we can combine if statements with for loops, to make a snippet that is almost interesting
Step40: This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was true. You can see the boolean value of any data object using the bool() function.
Step41: Code Example
Step42: Let's go through this line by line. First, we define the variable n, and set it to the integer 20. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
Step43: We can now call fibonacci() for different sequence_lengths
Step44: We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the python command line
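A hedged sketch of such a function with a docstring — the notebook's actual fibonacci() definition may differ in details such as edge-case handling:

```python
def fibonacci(sequence_length):
    """Return the Fibonacci sequence of length *sequence_length*."""
    sequence = [0, 1]
    if sequence_length < 1:
        return []
    while len(sequence) < sequence_length:
        sequence.append(sequence[-1] + sequence[-2])
    return sequence[:sequence_length]

print(fibonacci.__doc__)   # the docstring is attached to the function object
```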
Step45: If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
Step46: Tuples are like lists, in that you can access the elements using indices
Step47: However, tuples are immutable, you can't append to them or change the elements of them
Step48: Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this
Step49: Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates
Step50: Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write
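One way to sketch this (the object names and coordinates below are invented for illustration):

```python
# Each tuple groups a name with x and y coordinates.
objects = [("obj1", 1.0, 5.0), ("obj2", 3.0, 2.0), ("obj3", -1.0, 4.0)]

def min_coords(objs):
    """Return the smallest x and y coordinates as a (xmin, ymin) tuple."""
    xmin = min(x for _, x, _ in objs)
    ymin = min(y for _, _, y in objs)
    return xmin, ymin

xmin, ymin = min_coords(objects)   # tuple unpacking of the return value
```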
Step51: Dictionaries are an object called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects
Step52: The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}
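For example (the names and ages here are invented):

```python
ages = {"Rick": 46, "Bob": 86, "Fred": 21}   # curly brackets build a dict
rick_age = ages["Rick"]                      # look a value up by its key
ages["Joe"] = 30                             # assignment adds a new entry
```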
Step53: There's also a convenient way to create dictionaries without having to quote the keys.
Step54: The len() command works on both tuples and dictionaries
Step55: Conclusion of the Python Overview
Step56: No matter how experienced a programmer you are, these are words to meditate on.
Step57: size of the array
Step58: To build matrices, you can either use the array command with lists of lists
Step59: Add a column of ones to mat
Step60: size of a matrix
Step61: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as vectors with one row), using the zeros command
Step62: There's also an identity command that behaves as you'd expect
Step63: as well as a ones command.
Step64: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
Step65: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
Step66: In conjunction with matplotlib, this is a nice way to plot things
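A minimal sketch of the vectorized evaluation (plotting is omitted so the snippet stays self-contained):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)   # 100 evenly spaced coordinates
y = np.sin(x)                        # sin() acts on the whole vector at once
```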
Step67: Matrix operations
Step68: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
Step69: Something that confuses Matlab users is that the times (*) operator give element-wise multiplication rather than matrix multiplication
Step70: To get matrix multiplication, you need the dot command
Step71: dot can also do dot products (duh!)
Step72: as well as matrix-vector products.
Step73: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
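For instance:

```python
import numpy as np

d = np.diag([1, 2, 3])   # 3x3 matrix with 1, 2, 3 along the diagonal
```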
Step75: We'll find this useful later on.
Step76: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
Step77: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
Step78: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
Step79: Let's see whether this curve fits the data.
Step81: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data
Step82: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
Step83: Now fit to it using curve_fit
Step84: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Step85: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution
Step86: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
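A hedged sketch using NumPy's generator API (newer than the module-level np.random functions, but equivalent for this purpose):

```python
import numpy as np

rng = np.random.default_rng(0)        # seeded generator for reproducibility
grands = rng.normal(0, 1, size=100)   # 100 draws from the standard normal
```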
Step87: III. Introduction to Pandas
Step88: Series
Step89: Get the array representation of a Series
Step90: Index objects are immutable and hold the axis labels and metadata such as names and axis names.
Step91: Create a Series with a custom index
Step92: Get a value from a Series
Step93: Get a set of values from a Series by passing in a list
Step94: Get values greater than 0
Step95: Scalar multiply
Step96: Apply a numpy math function
Step97: A Series is like a fixed-length, ordered dict.
Step98: Re-order a Series by passing in an index (indices not found are NaN)
Step99: Check for NaN with the pandas method
Step100: Check for NaN with the Series method
Step101: Series automatically aligns differently indexed data in arithmetic operations
Step102: Name a Series
Step103: Name a Series index
Step104: Rename a Series' index in place
Step105: DataFrame
Step106: Like Series, columns that are not present in the data are NaN
Step107: Retrieve a column by key, returning a Series
Step108: Retrive a column by attribute, returning a Series
Step109: Retrieve a row by position
Step110: Update a column by assignment
Step111: Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series)
Step112: Assign a new column that doesn't exist to create a new column
Step113: Delete a column
Step114: Transpose the DataFrame
Step115: Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified)
Step116: Create a DataFrame from a dict of Series
Step117: Set the DataFrame index name
Step118: Set the DataFrame columns name
Step119: Return the data contained in a DataFrame as a 2D ndarray
Step120: If the columns are different dtypes, the 2D ndarray's dtype will accommodate all of the columns
Step121: Reindexing
Step122: Reindexing rows returns a new frame with the specified index
Step123: Reindex columns
Step124: Dropping Entries
Step125: Indexing, Selecting, Filtering
Step126: Select specified columns from a DataFrame
Step127: Select a slice from a DataFrame
Step128: Select from a DataFrame based on a filter
Step129: Select a slice of rows from a specific column of a DataFrame
Step130: Arithmetic and Data Alignment
Step131: Set a fill value instead of NaN for indices that do not overlap
Step132: Like NumPy, pandas supports arithmetic operations between DataFrames and Series.
Step133: Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match
Step134: Function Application and Mapping
Step135: Apply a function on 1D arrays to each column
Step136: Apply a function on 1D arrays to each row
Step137: Apply an element-wise Python function to a DataFrame
Step138: Sorting
Step139: Sort a DataFrame by its index
Step140: Sort a DataFrame by columns in descending order
Step141: Sort a DataFrame's values by column
Step142: Summarizing and Computing Descriptive Statistics
Step143: Sum and Mean
Step144: Descriptive analysis
Step145: Pivot tables
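A minimal pivot-table example (the column names are illustrative, not the notebook's df_15):

```python
import pandas as pd

frame = pd.DataFrame({'cat': ['x', 'x', 'y'], 'val': [1.0, 3.0, 5.0]})
pivot = pd.pivot_table(frame, index='cat', values='val', aggfunc='mean')
```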
|
7,975
|
<ASSISTANT_TASK:>
Python Code:
!pip install meterstick
!git clone https://github.com/google/meterstick.git
import sys, os
sys.path.append(os.getcwd())
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from meterstick import *
np.random.seed(42)
platform = ('Desktop', 'Mobile', 'Tablet')
exprs = ('ctrl', 'expr')
country = ('US', 'non-US')
size = 1000
impressions = np.random.randint(10, 20, size)
clicks = impressions * 0.1 * np.random.random(size)
df = pd.DataFrame({'impressions': impressions, 'clicks': clicks})
df['platform'] = np.random.choice(platform, size=size)
df['expr_id'] = np.random.choice(exprs, size=size)
df['country'] = np.random.choice(country, size=size)
df['cookie'] = np.random.choice(range(5), size=size)
df.loc[df.country == 'US', 'clicks'] *= 2
df.loc[(df.country == 'US') & (df.platform == 'Desktop'), 'impressions'] *= 4
df.head()
Sum('clicks').compute_on(df)
Count('country').compute_on(df)
Count('country', distinct=True).compute_on(df)
Dot('clicks', 'impressions').compute_on(df)
Dot('clicks', 'clicks', True).compute_on(df)
Max('clicks').compute_on(df)
Min('clicks').compute_on(df)
Mean('clicks').compute_on(df)
Mean('clicks', 'impressions').compute_on(df)
Quantile('clicks').compute_on(df) # Default is median.
Quantile('clicks', 0.2).compute_on(df)
Quantile('clicks', (0.2, 0.5)).compute_on(df) # Quantile can take multiple quantiles.
Quantile('clicks', 0.5, interpolation='higher').compute_on(df)
Quantile('clicks', weight='impressions').compute_on(df)
Variance('clicks').compute_on(df)
Variance('clicks', unbiased=False).compute_on(df)
Variance('clicks', weight='impressions').compute_on(df)
StandardDeviation('clicks').compute_on(df)
StandardDeviation('clicks', False).compute_on(df)
StandardDeviation('clicks', weight='impressions').compute_on(df)
CV('clicks').compute_on(df)
Correlation('clicks', 'impressions').compute_on(df)
Correlation('clicks', 'impressions', weight='impressions').compute_on(df)
Cov('clicks', 'impressions').compute_on(df)
Cov('clicks', 'impressions', weight='impressions').compute_on(df)
Sum('clicks').compute_on(df, 'country')
Mean('clicks').compute_on(df, ['platform', 'country'])
MetricList((Sum('clicks'), Count('clicks'))).compute_on(df)
(Sum('clicks') + 1).compute_on(df)
sum((Sum('clicks'), Sum('impressions'), 1)).compute_on(df)
sum((Sum('clicks'), Sum('impressions'), 1)).set_name('meaningless sum').compute_on(df)
(Sum('clicks') / Sum('impressions')).compute_on(df)
Ratio('clicks', 'impressions', 'ctr').compute_on(df)
MetricList(
(Sum('clicks') - 1,
-Sum('clicks'),
2 * Sum('clicks'),
Sum('clicks')**2,
2**Mean('clicks'),
(Mean('impressions')**Mean('clicks')).set_name('meaningless power'))
).compute_on(df, melted=True)
Sum('clicks').compute_on(df, return_dataframe=False)
Count('clicks').compute_on(df, ['platform', 'country'], return_dataframe=False)
Mean('clicks').compute_on(df, melted=True)
MetricList((Sum('clicks'), Count('clicks'))).compute_on(df, 'country')
Quantile('clicks', [0.2, 0.7]).compute_on(df, 'country', melted=True)
# Don't worry. We will talk more about the pipeline operator "|" later.
(MetricList((Sum('clicks'), Count('clicks')))
| Jackknife('cookie')
| compute_on(df, 'country'))
(MetricList((Sum('clicks'), Count('clicks')))
| Bootstrap(n_replicates=100)
| compute_on(df, 'country', melted=True))
Distribution('country', Sum('clicks')).compute_on(df)
(Sum('clicks').compute_on(df, 'country') /
Sum('clicks').compute_on(df, return_dataframe=False))
Normalize('country', Sum('clicks')).compute_on(df)
CumulativeDistribution('country', MetricList(
(Sum('clicks'), Sum('impressions')))).compute_on(df)
CumulativeDistribution(
'country', Sum('clicks'), order=('non-US', 'US')).compute_on(df, 'platform')
CumulativeDistribution(
'country', MetricList((Sum('clicks'), Sum('impressions')))
).compute_on(df, melted=True)
PercentChange('country', 'US', Mean('clicks')).compute_on(df)
mean = Mean('clicks').compute_on(df, 'country')
(mean.loc['non-US'] / mean.loc['US'] - 1) * 100
PercentChange(
'country',
'US',
MetricList((Count('clicks'), Count('impressions'))),
include_base=True).compute_on(df, 'platform')
PercentChange(
['country', 'platform'],
('US', 'Desktop'),
MetricList((Count('clicks'), Count('impressions'))),
include_base=True).compute_on(df)
AbsoluteChange('country', 'US', Mean('clicks')).compute_on(df)
AbsoluteChange(
'country', 'US', Count('clicks'), include_base=True).compute_on(
df, 'platform', melted=True)
ctr = Ratio('clicks', 'impressions')
MH('country', 'US', 'platform', ctr).compute_on(df) # stratified by platform
np.random.seed(42)
exprs = ('ctrl', 'expr')
n = 10000
df_prepost = pd.DataFrame({'impressions': np.random.randint(10, 30, n)})
df_prepost['expr_id'] = np.random.choice(exprs, size=n)
df_prepost['cookie'] = np.random.choice(range(20), size=n)
# Preperiod correlates with postperiod.
df_prepost['pre_impressions'] = np.random.normal(df_prepost.impressions, 3)
# Add small and noisy improvments.
df_prepost.loc[df_prepost.expr_id == 'expr', 'impressions'] += np.random.randint(-2, 4, size=len(df_prepost.loc[df_prepost.expr_id == 'expr', 'impressions']))
abs = AbsoluteChange('expr_id', 'ctrl', Mean('impressions'))
cuped = CUPED('expr_id', 'ctrl', Mean('impressions'), Mean('pre_impressions'), 'cookie')
MetricList((abs, cuped)).compute_on(df_prepost)
from sklearn import linear_model
df_agg = MetricList((Mean('impressions'), Mean('pre_impressions'))).compute_on(df_prepost, ['expr_id', 'cookie'])
lm = linear_model.LinearRegression()
lm.fit(df_agg[['mean(pre_impressions)']], df_agg['mean(impressions)'])
theta = lm.coef_[0]
df_agg['adjusted'] = df_agg['mean(impressions)'] - theta * df_agg['mean(pre_impressions)']
adjusted = df_agg.groupby('expr_id').adjusted.mean()
adjusted['expr'] - adjusted['ctrl']
from plotnine import ggplot, aes, geom_density, after_stat, facet_grid
data_to_plot = pd.concat([df_agg['mean(impressions)'], df_agg.adjusted], keys=['Raw', 'CUPED'], names=['Adjusted'])
data_to_plot = pd.DataFrame(data_to_plot, columns=['Value']).reset_index()
(
ggplot(data_to_plot)
+ aes(x="Value", y=after_stat('density'), color='expr_id')
+ geom_density()
+ facet_grid('Adjusted ~ .')
)
# Jackknife is explained in the 'Standard Errors' section.
Jackknife('cookie', MetricList((abs, cuped))).compute_on(df_prepost)
# It's possible to control for multiple metrics.
CUPED('expr_id', 'ctrl', Mean('impressions'),
[Mean('pre_impressions'), Mean('pre_impressions')**2],
'cookie').compute_on(df_prepost)
pct = PercentChange('expr_id', 'ctrl', Mean('impressions'))
prepost = PrePostChange('expr_id', 'ctrl', Mean('impressions'), Mean('pre_impressions'), 'cookie')
MetricList((pct, prepost)).compute_on(df_prepost)
df_agg = MetricList((Mean('impressions'), Mean('pre_impressions'))).compute_on(
df_prepost, ['expr_id', 'cookie']).reset_index()
df_agg['mean(pre_impressions)'] -= df_agg['mean(pre_impressions)'].mean()
df_agg['is_treated'] = df_agg.expr_id == 'expr'
df_agg['interaction'] = df_agg.is_treated * df_agg['mean(pre_impressions)']
lm = linear_model.LinearRegression()
lm.fit(df_agg[['is_treated', 'mean(pre_impressions)', 'interaction']],
df_agg['mean(impressions)'])
beta1 = lm.intercept_
beta2 = lm.coef_[0]
beta2 / beta1 * 100
# Jackknife is explained in the 'Standard Errors' section.
Jackknife('cookie', MetricList((pct, prepost))).compute_on(df_prepost)
Jackknife('cookie', MetricList((Sum('clicks'), Sum('impressions')))).compute_on(df)
metrics = MetricList((Sum('clicks'), Sum('impressions')))
Jackknife('cookie', metrics).compute_on(df, 'country', True)
Jackknife('cookie', metrics, 0.9).compute_on(df)
res = (
MetricList((Ratio('clicks', 'impressions', 'ctr'), Sum('clicks')))
| PercentChange('country', 'US')
| Jackknife('cookie', confidence=0.9)
| compute_on(df, 'platform'))
res.display()
np.random.seed(42)
Bootstrap(None, Sum('clicks'), 100).compute_on(df)
np.random.seed(42)
Bootstrap('cookie', Sum('clicks'), 100).compute_on(df, 'country')
np.random.seed(42)
Bootstrap('cookie', Sum('clicks'), 100, 0.95).compute_on(df, 'country')
np.random.seed(42)
res = (
MetricList((Ratio('clicks', 'impressions', 'ctr'), Sum('impressions')))
| AbsoluteChange('country', 'US')
| Bootstrap(None, n_replicates=100, confidence=0.9)
| compute_on(df, 'platform'))
res.display()
from meterstick.models import *
m = LinearRegression(Mean('clicks'), Mean('impressions'), 'platform')
m.compute_on(df)
from sklearn import linear_model
x = Mean('impressions').compute_on(df, 'platform')
y = Mean('clicks').compute_on(df, 'platform')
m = linear_model.LinearRegression().fit(x, y)
print(m.coef_, m.intercept_)
# x can also be a list of Metrics or a MetricList.
m = Ridge(
Mean('clicks'),
[Mean('impressions'), Variance('clicks')],
'platform',
alpha=2)
m.compute_on(df, melted=True)
m = Lasso(
Mean('clicks'),
Mean('impressions'),
'platform',
fit_intercept=False,
alpha=5)
m.compute_on(df, 'country')
m = LogisticRegression(Count('clicks'), Mean('impressions'), 'country')
m.compute_on(df, melted=True)
m = LogisticRegression(Count('clicks'), Mean('impressions'), 'platform', name='LR')
m.compute_on(df, melted=True)
Count('clicks').compute_on(df, 'platform')
(LinearRegression(
Mean('clicks'),
[Mean('impressions'), Variance('impressions')],
'country',
name='lm')
| AbsoluteChange('platform', 'Desktop')
| Jackknife('cookie', confidence=0.9)
| compute_on(df)).display()
clicks_us = Sum('clicks', where='country == "US"')
clicks_not_us = Sum('clicks', where='country != "US"')
(clicks_not_us - clicks_us).compute_on(df)
Sum('clicks') | AbsoluteChange('country', 'US') | compute_on(df)
MetricList((Sum('X', where='Y > 0'), Sum('X'))).to_sql('T', 'grp')
m = MetricList((Sum('clicks'), Mean('impressions')))
m = AbsoluteChange('country', 'US', m)
m.compute_on(df, 'platform')
from sqlalchemy import create_engine
engine = create_engine('sqlite://', echo=False)
df.to_sql('T', con=engine)
# Meterstick uses a different SQL dialect from SQLAlchemy, so this doesn't
# always work.
m.compute_on_sql('T', 'platform', execute=lambda sql: pd.read_sql(sql, engine))
class CustomSum(Metric):
def __init__(self, var):
name = 'custom sum(%s)' % var
super(CustomSum, self).__init__(name)
self.var = var
def compute(self, df):
return df[self.var].sum()
CustomSum('clicks').compute_on(df, 'country')
Sum('clicks').compute_on(df, 'country')
%%timeit
CustomSum('clicks').compute_on(df, 'country')
%%timeit
Sum('clicks').compute_on(df, 'country')
%%timeit
df.groupby('country')['clicks'].sum()
class VectorizedSum(Metric):
def __init__(self, var):
name = 'vectorized sum(%s)' % var
super(VectorizedSum, self).__init__(name = name)
self.var = var
def compute_slices(self, df, split_by):
if split_by:
return df.groupby(split_by)[self.var].sum()
return df[self.var].sum()
VectorizedSum('clicks').compute_on(df, 'country')
%%timeit
VectorizedSum('clicks').compute_on(df, 'country')
class USOnlySum(Sum):
def precompute(self, df, split_by):
return df[df.country == 'US']
def postcompute(self, data, split_by):
print('Inside postcompute():')
print('Input data: ', data)
print('Input split_by: ', split_by)
print('\n')
return data
def final_compute(self, res, melted, return_dataframe, split_by, df):
# res is the result processed by the dataflow till now. df is the original
# DataFrme passed to compute_on().
print('Inside final_compute():')
for country in df.country.unique():
if country not in res.index:
print('Country "%s" is missing!' % country)
return res
USOnlySum('clicks').compute_on(df, 'country')
normalize = metrics.Sum('clicks', postcompute=lambda res, split_by: res / res.sum())
normalize.compute_on(df, 'country')
# The above is equivalent to Normalize by 'country'.
Normalize('country', Sum('clicks')).compute_on(df)
class Distribution(Operation):
  """Computes the normalized values of a Metric over column(s).

  Attributes:
    extra_index: A list of column(s) to normalize over.
    children: A tuple of a Metric whose result we normalize on. And all other
      attributes inherited from Operation.
  """
def __init__(self,
over: Union[Text, List[Text]],
child: Optional[Metric] = None,
**kwargs):
self.over = over
# The 3rd argument is the extra column that will be added to split_by. It'll
# be converted to a list then assigned to self.extra_index.
super(Distribution, self).__init__(child, 'Distribution of {}', over,
**kwargs)
def compute_slices(self, df, split_by=None):
# extra_index is after the split_by.
lvls = split_by + self.extra_index if split_by else self.extra_index
res = self.compute_child(df, lvls)
total = res.groupby(level=split_by).sum() if split_by else res.sum()
return res / total
class SumWithSQL(SimpleMetric):
def __init__(self,
var: Text,
name: Optional[Text] = None,
where: Optional[Text] = None,
**kwargs):
super(SumWithSQL, self).__init__(var, name, 'sum({})', where, **kwargs)
self._sum = Sum(var, name, where, **kwargs)
def compute_slices(self, df, split_by):
return self._sum.compute_slices(df, split_by)
# All the SQL-related classes, like Datasource, Filters, Columns, and so on,
# are defined in sql.py.
def get_sql_and_with_clause(self, table: Datasource, split_by: Columns,
global_filter: Filters, indexes: Columns,
local_filter: Filters, with_data: Datasources):
del indexes # unused
# Always starts with this line unless you know what you are doing.
local_filter = Filters([self.where, local_filter]).remove(global_filter)
columns = Column(self.var, 'SUM({})', self.name, local_filter)
# Returns a Sql instance and the WITH clause it needs.
return Sql(columns, table, global_filter, split_by), with_data
m = Sum('clicks') - SumWithSQL('clicks', 'custom_sum')
m.compute_on_sql('T', 'platform', execute=lambda sql: pd.read_sql(sql, engine))
class DistributionWithSQL(Operation):
def __init__(self,
over: Union[Text, List[Text]],
child: Optional[Metric] = None,
**kwargs):
super(DistributionWithSQL, self).__init__(child, 'Distribution of {}', over,
**kwargs)
def compute_slices(self, df, split_by=None):
lvls = split_by + self.extra_index if split_by else self.extra_index
res = self.compute_child(df, lvls)
total = res.groupby(level=split_by).sum() if split_by else res.sum()
return res / total
def get_sql_and_with_clause(self,
table: Datasource,
split_by: Columns,
global_filter: Filters,
indexes: Columns,
local_filter: Filters,
with_data: Datasources):
    """Gets the SQL query and WITH clause.

    The query is constructed by
    1. Get the query for the child metric.
    2. Keep all indexing/groupby columns unchanged.
    3. For all value columns, get
       value / SUM(value) OVER (PARTITION BY split_by).

    Args:
      table: The table we want to query from.
      split_by: The columns that we use to split the data.
      global_filter: The Filters that can be applied to the whole Metric tree.
      indexes: The columns that we shouldn't apply any arithmetic operation.
      local_filter: The Filters that have been accumulated so far.
      with_data: A global variable that contains all the WITH clauses we need.

    Returns:
      The SQL instance for metric, without the WITH clause component.
      The global with_data which holds all datasources we need in the WITH
        clause.
    """
# Always starts with this line unless you know what you are doing.
local_filter = Filters([self.where, local_filter]).remove(global_filter)
# The intermediate tables needed by child metrics will be added to with_data
# in-place.
child_sql, with_data = self.children[0].get_sql_and_with_clause(
table, indexes, global_filter, indexes, local_filter, with_data)
child_table = sql.Datasource(child_sql, 'DistributionRaw')
# Always use the alias returned by with_data.add(), because if the with_data
# already holds a different table that also has 'DistributionRaw' as its
# alias, we'll use a different alias for the child_table, which is returned
# by with_data.add().
child_table_alias = with_data.add(child_table)
groupby = sql.Columns(indexes.aliases, distinct=True)
columns = sql.Columns()
for c in child_sql.columns:
if c.alias in groupby:
continue
col = sql.Column(c.alias) / sql.Column(
c.alias, 'SUM({})', partition=split_by.aliases)
col.set_alias('Distribution of %s' % c.alias_raw)
columns.add(col)
return sql.Sql(groupby.add(columns), child_table_alias), with_data
m = DistributionWithSQL('country', Sum('clicks'))
m.to_sql('T')
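The windowed division in step 3 of the docstring — value / SUM(value) OVER (PARTITION BY split_by) — boils down to SQL of roughly this shape. A hand-rolled sketch for illustration only; distribution_sql is a made-up helper, not Meterstick's actual generator:

```python
def distribution_sql(value_col, split_by, table):
    # Each value column becomes value / SUM(value) OVER (PARTITION BY split_by).
    over = 'PARTITION BY %s' % ', '.join(split_by) if split_by else ''
    col = '{v} / SUM({v}) OVER ({o}) AS `Distribution of {v}`'.format(
        v=value_col, o=over)
    return 'SELECT %s, %s FROM %s' % (', '.join(split_by), col, table)

sql_str = distribution_sql('clicks', ['country'], 'T')
print(sql_str)
```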
class SumWithTrace(Sum):
def compute_through(self, data, split_by):
print('Computing %s...' % self.name)
return super(SumWithTrace, self).compute_through(data, split_by)
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = SumWithTrace('clicks') / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df)
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df)
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
jk, s = MetricList(
[Jackknife('cookie', sum_clicks), sum_clicks],
children_return_dataframe=False).compute_on(
df, return_dataframe=False)
print(s)
jk
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
sum_clicks.compute_on(df, 'country', cache_key='foo')
ctr.compute_on(df, 'country', cache_key='foo')
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df, cache_key='foo')
print('sum_clicks cached: ', sum_clicks.get_cached('foo'))
print('ctr cached: ', ctr.get_cached('foo'))
ctr.compute_on(None, cache_key='foo')
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df, cache_key='foo')
ctr.flush_cache('foo', recursive=False)
sum_clicks.compute_on(None, cache_key='foo') # sum is not flushed.
ctr.in_cache('foo')
ctr.compute_on(None, cache_key='foo')
ctr.in_cache('foo')
ctr.flush_cache('foo')
sum_clicks.compute_on(None, cache_key='foo') # sum is flushed too.
sum_clicks = SumWithTrace('clicks')
PercentChange('country', 'US', sum_clicks).compute_on(df, cache_key=42)
sum_clicks.compute_on(None, 'country', cache_key=42)
sum_clicks.compute_on(df, cache_key=42)
sum_clicks.compute_on(df, 'country', cache_key=42)
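The caching behavior exercised above — a per-key cache on each Metric, with non-recursive vs. recursive flushing and pruning — can be sketched as a toy cache tree. Illustrative only; CachedMetric is an invented class, not Meterstick's implementation:

```python
class CachedMetric:
    # Toy sketch of per-key caching with recursive flushing; not Meterstick's code.
    def __init__(self, name, children=()):
        self.name, self.children, self.cache = name, list(children), {}

    def in_cache(self, key):
        return key in self.cache

    def flush_cache(self, key, recursive=True, prune=True):
        found = self.cache.pop(key, None) is not None
        if not found and prune:
            return  # Prune: key absent here, so skip the whole subtree.
        if recursive:
            for child in self.children:
                child.flush_cache(key, recursive, prune)

s = CachedMetric('sum(clicks)')
ctr = CachedMetric('ctr', children=[s])
s.cache['foo'], ctr.cache['foo'] = 1.0, 0.5
ctr.flush_cache('foo', recursive=False)
print(ctr.in_cache('foo'), s.in_cache('foo'))  # → False True
```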
np.random.seed(42)
df['duration'] = np.random.random(len(df)) * 200
long_clicks = Sum('clicks', where='duration > 60')
short_clicks = Sum('clicks', where='duration < 30')
click_split = (long_clicks / short_clicks).set_name('click split')
click_split | Jackknife('cookie') | compute_on(df, 'country')
np.random.seed(42)
df['period'] = np.random.choice(('preperiod', 'postperiod'), size=size)
sum_clicks = Sum('clicks')
ctr = sum_clicks / Sum('impressions')
metrics = (sum_clicks, ctr)
preperiod_clicks = MetricList(metrics, where='period == "preperiod"')
postperiod_clicks = MetricList(metrics, where='period == "postperiod"')
pct = PercentChange('platform', 'Desktop')
did = (pct(postperiod_clicks) - pct(preperiod_clicks)).rename_columns(
['clicks% DID', 'ctr% DID'])
Jackknife('cookie', did).compute_on(df)
np.random.seed(42)
sum_clicks = Sum('clicks')
ctr = sum_clicks / Sum('impressions')
metrics = MetricList((sum_clicks, ctr))
(Jackknife('cookie', metrics) /
Bootstrap('cookie', metrics, 100)).rename_columns(
pd.MultiIndex.from_product(
(('sum(clicks)', 'ctr'), ('Value', 'SE')))).compute_on(df, 'country')
np.random.seed(42)
size = 1000000
df_lin = pd.DataFrame({'grp': np.random.choice(range(10), size=size)})
df_lin['x'] = df_lin.grp + np.random.random(size=size)
df_lin['y'] = 2 * df_lin.x + np.random.random(size=size)
df_lin['cookie'] = np.random.choice(range(20), size=size)
df_lin_mean = df_lin.groupby('grp').mean()
plt.scatter(df_lin_mean.x, df_lin_mean.y)
plt.show()
from sklearn import linear_model
class LinearReg(Operation):
def __init__(self, x, y, grp):
self.lm = linear_model.LinearRegression()
# Delegate most of the computations to Mean Metrics.
child = MetricList((Mean(x), Mean(y)))
self.grp = grp
# Register grp as the extra_index.
super(LinearReg, self).__init__(child, '%s ~ %s' % (y, x), grp)
def split_data(self, df, split_by=None):
"""The 1st element in yield will be passed to compute()."""
if not split_by:
yield self.compute_child(df, self.grp), None
else:
# grp needs to come after split_by.
child = self.compute_child(df, split_by + [self.grp])
keys, indices = list(zip(*child.groupby(split_by).groups.items()))
for i, idx in enumerate(indices):
yield child.loc[idx.unique()].droplevel(split_by), keys[i]
def compute(self, df):
self.lm.fit(df.iloc[:, [0]], df.iloc[:, 1])
return pd.Series((self.lm.coef_[0], self.lm.intercept_))
lr = LinearReg('x', 'y', 'grp')
Jackknife('cookie', lr, 0.95).compute_on(df_lin)
class LinearRegSlow(Metric):
def __init__(self, x, y, grp):
self.lm = linear_model.LinearRegression()
# Doesn't delegate.
self.x = x
self.y = y
self.grp = grp
super(LinearRegSlow, self).__init__('%s ~ %s' % (y, x))
def split_data(self, df, split_by=None):
"""The 1st element in yield will be passed to compute()."""
idx = split_by + [self.grp] if split_by else self.grp
mean = df.groupby(idx).mean()
if not split_by:
yield mean, None
else:
keys, indices = list(zip(*mean.groupby(split_by).groups.items()))
for i, idx in enumerate(indices):
yield mean.loc[idx.unique()].droplevel(split_by), keys[i]
def compute(self, df):
self.lm.fit(df.iloc[:, [0]], df.iloc[:, 1])
return pd.Series((self.lm.coef_[0], self.lm.intercept_))
lr_slow = LinearRegSlow('x', 'y', 'grp')
Jackknife('cookie', lr_slow, 0.95).compute_on(df_lin)
%%timeit
Jackknife('cookie', lr, 0.95).compute_on(df_lin)
%%timeit
Jackknife('cookie', lr_slow, 0.95).compute_on(df_lin)
# Mimics that measurements, y, are taken repeatedly at a fixed grid, x.
np.random.seed(42)
size = 10
x = list(range(5))
df_sin = pd.DataFrame({'x': x * size, 'cookie': np.repeat(range(size), len(x))})
df_sin['y'] = np.sin(df_sin.x) + np.random.normal(scale=0.5, size=len(df_sin.x))
df_sin.head(10)
import statsmodels.api as sm
lowess = sm.nonparametric.lowess
class Lowess(Metric):
def __init__(self, x, y, name=None, where=None):
self.x = x
self.y = y
name = name or 'LOWESS(%s ~ %s)' % (y, x)
super(Lowess, self).__init__(name, where=where)
def compute(self, data):
lowess_fit = pd.DataFrame(
lowess(data[self.y], data[self.x]), columns=[self.x, self.y])
return lowess_fit.drop_duplicates().reset_index(drop=True)
Lowess('x', 'y') | compute_on(df_sin)
jk = Lowess('x', 'y') | Jackknife('cookie', confidence=0.9) | compute_on(df_sin)
point_est = jk[('y', 'Value')]
ci_lower = jk[('y', 'Jackknife CI-lower')]
ci_upper = jk[('y', 'Jackknife CI-upper')]
plt.scatter(df_sin.x, df_sin.y)
plt.plot(x, point_est, c='g')
plt.fill_between(
x, ci_lower,
ci_upper,
color='g',
alpha=0.5)
plt.show()
from plotnine import ggplot, geom_point, geom_ribbon, aes, ylab
y = Mean('clicks')
x = [Mean('impressions'), Variance('impressions')]
grpby = 'platform'
baseline = LinearRegression(y, x, grpby, fit_intercept=False)
shrinkage = [(Ridge(y, x, grpby, a, False) / baseline).rename_columns(
('%s::mean(impressions)' % a, '%s::var(impressions)' % a))
for a in range(10)]
jk = (MetricList(shrinkage)
| Jackknife('cookie', confidence=0.95)
| compute_on(df, melted=True)).reset_index()
jk[['penalty', 'X']] = jk.Metric.str.split('::', expand=True)
jk.penalty = jk.penalty.astype(int)
(ggplot(jk, aes('penalty', 'Value', color='X'))
+ ylab('Shrinkage')
+ geom_point()
+ geom_ribbon(
aes(ymin='Jackknife CI-lower', ymax='Jackknife CI-upper', fill='X'),
alpha=0.1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: or from GitHub for the latest version.
Step2: Demo Starts
Step3: Simple Metrics
Step4: Count
Step5: Dot (inner product)
Step6: It can also be normalized.
Step7: Max
Step8: Min
Step9: Mean
Step10: Weighted Mean
Step11: Quantile
Step12: Interpolation
Step13: Weighted Quantile
Step14: Variance
Step15: Biased Variance
Step16: Weighted Variance
Step17: Standard Deviation
Step18: Biased Standard Deviation
Step19: Weighted Standard Deviation
Step20: Coefficient of Variation
Step21: Correlation
Step22: Weighted Correlation
Step23: Covariance
Step24: Weighted Covariance
Step25: Slicing
Step26: Multiple Metrics
Step27: Arithmetic of Metrics
Step28: Divide
Step29: Ratio
Step30: We also support many other common arithmetic operations.
Step31: Output Format
Step32: Operations
Step33: It's equal to
Step34: Distribution has an alias Normalize.
Step35: Cumulative Distribution
Step36: PercentChange
Step37: You can include the base group in your result.
Step38: You can also specify multiple columns as the condition columns, then your base value should be a tuple.
Step39: Absolute Change
Step40: You can also include the base group in your result.
Step41: Cochran-Mantel-Haenszel statistics
Step42: CUPED
Step43: CUPED essentially fits a linear model of Postperiod metric ~ 1 + preperiod metric and uses it to control for the variance in the preperiod.
Step44: We can see that CUPED's result is similar to the absolute change but has smaller variance.
Step45: PrePostChange
Step46: Standard Errors
Step47: You can also specify a confidence level, then we'll return the confidence interval. The returned DataFrame also comes with a display() method for visualization which will highlight significant changes. To customize the display(), please take a look at confidence_interval_display_demo.ipynb.
Step48: Bootstrap
Step49: Models
Step50: Linear Regression
Step51: What Model(y, x, groupby).compute_on(data) does is
Step52: Ridge Regression
Step53: Lasso Regression
Step54: Logistic Regression
Step55: If y is not binary, by default a multinomial model is fitted. The behavior can be controlled via the 'multinomial' arg.
Step56: Classes are the unique values of y.
Step57: Wrapping sklearn models into Meterstick provides the ability to combine Models with other built-in Metrics and Operations. For example, you can Jackknife the Model to get the uncertainty of coefficients.
Step58: Pipeline
Step59: It's equivalent to
Step60: SQL
Step61: Custom Metric
Step62: CustomSum doesn't have vectorization. It loops through the DataFrame and sum on every slice. As the result, it's slower than vectorized summation.
Step63: With Vectorization
Step64: Precompute, postcompute and final_compute
Step65: Overwrite using Lambda Functions
Step67: Custom Operation
Step68: SQL Generation
Step70: For an Operation, you usually call the child metrics' get_sql_and_with_clause() to get the subquery you need.
Step71: Caching
Step72: Now let's see what happens if we reuse sum_clicks.
Step73: Then sum_clicks only gets computed once. For Metrics that are not quite compatible, you can still put them in a MetricList and set return_dataframe to False to maximize the caching.
Step74: If you really cannot compute Metrics together, you can use a cache_key.
Step75: The results are cached in ctr, a composite Metric, as well as its children, the Sum Metrics.
Step76: You can flush the cache by calling flush_cache(key, split_by=None, recursive=True, prune=True), where "recursive" controls whether the caches of the children Metrics are flushed as well, and "prune" controls whether, when the key is not found in the current Metric, we still flush the children Metrics or stop early. Pruning is useful when a high-level Metric appears in several places: during flushing we would otherwise hit it multiple times, so stopping early saves time.
Step77: Though ctr's cache has been flushed, we can still compute ctr from cache because all its children are cached.
Step78: We won't be able to re-compute ctr if we recursively flush its cache.
Step79: However, the behavior becomes subtle when Operation is involved.
Step80: Note that it's sum_clicks.compute_on(df, 'country') instead of sum_clicks.compute_on(df) that got saved in the cache. The reason is that we need the former, not the latter, to compute the PercentChange. Using sum_clicks.compute_on(df, cache_key=42) will always give you the right result, so it's not a big issue; it just might confuse you sometimes.
Step81: Advanced Examples
Step82: Difference in differences
Step83: Compare the standard errors between Jackknife and Bootstrap
Step86: Linear Regression
Step87: LOWESS
Step88: Coefficient Shrinkage
|
7,976
|
<ASSISTANT_TASK:>
Python Code:
# Import data
import math
# Create list of values
data = [3,2,3,4,2,3,5,2,2,33,3,5,2,2,5,6,62,2,2,3,6,6,2,23,3,2,3]
# Calculate n
n = len(data)
# Calculate the mean
mean = sum(data)/len(data)
# Create a list of all deviations from the mean
all_deviations_from_mean_squared = []
# For each observation in the data
for observation in data:
# Calculate the deviation from the mean
deviation_from_mean = (observation - mean)
# Square it
deviation_from_mean_squared = deviation_from_mean**2
# Add the result to our list
all_deviations_from_mean_squared.append(deviation_from_mean_squared)
# Sum all the squared deviations in our list
sum_of_deviations_from_mean_squared = sum(all_deviations_from_mean_squared)
# Divide by n
population_variance = sum_of_deviations_from_mean_squared/n
# Show variance
population_variance
# Find the square root of the population variance
population_standard_deviation = math.sqrt(population_variance)
# Print the population standard deviation
population_standard_deviation
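As a cross-check, the same population statistics are available from the standard library's statistics module (Python 3; a sketch reusing the same data list):

```python
import math
import statistics

data = [3, 2, 3, 4, 2, 3, 5, 2, 2, 33, 3, 5, 2, 2, 5, 6, 62, 2, 2, 3, 6, 6, 2, 23, 3, 2, 3]
mean = sum(data) / len(data)
pop_var = sum((x - mean) ** 2 for x in data) / len(data)

# statistics.pvariance/pstdev implement exactly the population formulas above.
assert math.isclose(pop_var, statistics.pvariance(data))
assert math.isclose(math.sqrt(pop_var), statistics.pstdev(data))

# The sample (Bessel-corrected) variance divides by n - 1 instead of n.
sample_var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
assert math.isclose(sample_var, statistics.variance(data))
```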
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Calculate Population Variance
Step3: Calculate Population Standard Deviation
|
7,977
|
<ASSISTANT_TASK:>
Python Code:
import seaborn
import pandas as pd
import pylab as pl
import yaml
%pylab inline
df = pd.read_pickle("../yelp-challenge/data_urbana_champaign/business_urbana_champaign.p")
df.reset_index(drop=True, inplace=True)
print df.shape
print df.columns.values
len(df.business_id.unique())
df.head(2)
print df.city.unique(), '\n', len(df.city.unique())
city_count = df.city.groupby(df.city).apply(lambda x:x.count())
uc_sum = city_count[city_count.keys().isin(['Champaign', 'Urbana'])].sum()
100.0 * uc_sum / city_count.sum()
df_uc = df[df.city.isin(['Champaign', 'Urbana'])]
df_uc.shape
print "{}% ({}/{}) business_id(s) are open".format(100.0 * sum(df_uc.is_open == 1) / len(df_uc), sum(df_uc.is_open == 1), len(df_uc))
df_uc_open = df_uc[df_uc.is_open == 1]
df_uc_open.reset_index(drop=True, inplace=True)
df_uc_open.shape
100.0 * df_uc_open.hours.dropna().shape[0] / df_uc_open.shape[0]
df_uc_open = df_uc_open.dropna(subset=['hours'])
df_uc_open.reset_index(drop=True, inplace=True)
df_uc_open.shape
len(df_uc_open.postal_code.unique())
df.review_count.describe()
df_uc_open.review_count.describe()
df_uc_open.loc[df_uc_open.review_count.idxmax()]
cat = {}
none_num = 0
for i in df_uc_open.categories:
if not i:
none_num += 1
continue
for j in i:
cat[j] = cat.get(j,0) + 1
none_num
sorted(cat.iteritems(), key=lambda x: x[1], reverse=True)
df_uc_open.neighborhood.unique()
df_uc_open = df_uc_open.dropna(subset=['attributes'])
df_uc_open.reset_index(drop=True, inplace=True)
df_uc_open.shape
# 814 -> ?
# df_uc_open.attributes
# import yaml
att_count = {}
for i in df_uc_open.attributes:
data = yaml.load('['+','.join(i)+']')
for d in data:
att_count[d.keys()[0]] = att_count.get(d.keys()[0], 0) + 1
sorted(att_count.iteritems(), key=lambda x: x[1], reverse=True)
df_uc_open.shape[0]
data = yaml.load('['+','.join(df_uc_open.attributes[452])+']')
data
pl.hist(df_uc_open.stars)
df_uc_open.stars.describe()
s = 3.9
1.0 * sum(df_uc_open.stars > s) / 729
df_uc_open.stars.value_counts() /729
pl.hist?
c = ['Champaign', 'Urbana']
print c[0], '\n',df_uc_open[df_uc_open.city == c[0]].stars.describe()
pl.hist(df_uc_open[df_uc_open.city == c[0]].stars, normed=True)
print c[1], '\n',df_uc_open[df_uc_open.city == c[1]].stars.describe()
pl.hist(df_uc_open[df_uc_open.city == c[1]].stars, normed=True)
print df.shape, df_uc.shape, df_uc_open.shape
df_uc_open.to_pickle("UC01_df_uc_open.p")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: geo
Step2: 1. 'city'
Step3: we only consider Champaign and Urbana as our target in this dataset
Step4: 2. 'is_open'
Step5: we only consider those with is_open == 1
Step6: 3. 'Hour'
Step7: Drop all records with missing 'Hour'
Step8: 4. 'postal_code'
Step9: 5. 'review_count'
Step10: 6. 'categories'
Step11: Here we should set the list of categories we are interested.
Step12: All 'Neighborhood' == None
Step13: We need a threshold to decide which attributes we use to build model.
Step14: There are 2 kinds of attributes
Step15: If we would like to build a binary classification model, here we could set threshold 'star' as '<4.0'
Step16: output df_uc_open
|
7,978
|
<ASSISTANT_TASK:>
Python Code:
from indicnlp.morph import unsupervised_morph
morph = unsupervised_morph.UnsupervisedMorphAnalyzer("bn")
text = u"""\
করা করেছিলাম করেছি করতে করেছিল হয়েছে হয়েছিল হয় হওয়ার হবে আবিষ্কৃত আবিষ্কার অভিষিক্ত অভিষেক অভিষেকের আমি আমার আমাদের তুমি তোমার তোমাদের বসা বসেছিল বসে বসি বসেছিলাম বস বসার"""
word_token = text.split(" ")
word_morph = []
for i in word_token:
word_morph.append(morph.morph_analyze(i))
import pandas as pd
indic = pd.DataFrame({"1_Word": word_token, "2_Morpheme": word_morph})
indic
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
bangla_text = "ami apni tumi tomar tomader amar apnar apnader akash"
text_trans = ItransTransliterator.from_itrans(bangla_text, "bn")
print repr(text_trans).decode("unicode_escape")
from transliteration import getInstance
trans = getInstance()
text_trans = trans.transliterate(bangla_text, "bn_IN")
print repr(text_trans).decode("unicode_escape")
import rbs
word_stem1 = []
for i in word_token:
word_stem1.append(rbs.stemWord(i, True))
bs1 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem1})
bs1
import jnius_config
jnius_config.set_classpath(".", "path to class")
from jnius import autoclass
cls = autoclass("RuleFileParser")
stemmer = cls()
word_stem2 = []
for i in word_token:
word_stem2.append(stemmer.stemOfWord(i))
bs2 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem2})
bs2
from pyavrophonetic import avro
trans_text = avro.parse(bangla_text)
print repr(trans_text).decode("unicode_escape")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Indic NLP Library
Step2: Transliteration
Step3: Using Silpa
Step4: Using BengaliStemmer
Step5: Using BanglaStemmer
Step6: Using Avro
|
7,979
|
<ASSISTANT_TASK:>
Python Code:
import tataaq
YOUR_API_KEY_HERE = ""
api = tataaq.TataAQ(apikey=YOUR_API_KEY_HERE)
# Ping the server to see if we have valid auth credentials
resp = api.ping()
print (resp.status_code)
import pandas as pd
import feather
# Request decice information for EBAM001
resp = api.device("EBAM001")
resp.status_code
resp.headers
resp.json()
# Request the data
resp = api.data("EBAM001")
# Print the meta information
resp.json()['meta']
# print the 0 row
resp.json()['data'][0]
# return data after 2017-01-01
resp = api.data("EBAM001", per_page=100, filter="timestamp,gt,2017-01-01")
resp.json()['meta']
meta, df = api.data("EBAM001", dataframe=True)
meta
df.info()
meta, df = api.data("EBAM001", per_page=10000, filter="timestamp,gt,2017-01-01", dataframe=True)
df.index = df['timestamp_local']
df.info()
# Delete a couple of columns so we can easily peak at the data
del df['instrument']
del df['timestamp']
df.head()
%time feather.write_dataframe(df, "EBAM001_2017_data.feather")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Things
Step2: Retrieve Information about a Device
Step3: Access the status of the previous request
Step4: Access the header information
Step5: Access the json information (data)
Step6: Retrieve the Actual Data
Step7: We can get the actual data by accesing the "data" key in the resp.json() dictionary
Step8: We can also add keywords to our request. The most useful ones are the following
Step9: Utilizing the Magical DataFrame
Step10: Let's take a look at our data now
Step11: Let's get all data from the EBAM for the year 2017
Step12: Export Data
|
7,980
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# from fastai.core
def even_mults(start:float, stop:float, n:int)->np.ndarray:
"Build evenly stepped schedule from `start` to `stop` in `n` steps."
mult = stop/start
step = mult**(1/(n-1))
return np.array([start*(step**i) for i in range(n)])
layer_groups = ['conv_group_1', 'conv_group_2', 'linear_group']
def lr_range(lr:[float,slice])->np.ndarray:
if not isinstance(lr, slice): return lr
if lr.start: res = even_mults(lr.start, lr.stop, len(layer_groups))
else: res = [lr.stop/3]*(len(layer_groups)-1)+[lr.stop]
return np.array(res)
lr = slice(1e-3)
lr_range(lr)
lr = 1e-3
lr_range(lr)
# 10 layer groups
layer_groups = [i for i in range(10)]
lr = slice(1e-3)
lr_range(lr)
lr = slice(1e-6, 1e-3)
lr_range(lr)
1e-3/30
1e-6*30
(1e-3/30 + 1e-6/30)*2
even_mults(1e-6, 1e-3, 3)
even_mults(1e-6, 1e-3, 10)
lr_stop = 1e-3
lr_start= lr_stop / 3**2
even_mults(lr_start, lr_stop, 3)
1e-3/9
(1/9 + 1)/2
5/9
even_mults(1/9, 1, 3)
lr_range(3)
even_mults(1e-10, 1, 11)
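The log-even spacing can be verified independently: even_mults is equivalent to numpy's geomspace (a quick sketch, assuming a numpy version where geomspace exists):

```python
import numpy as np

def even_mults(start, stop, n):
    # Geometrically (log-evenly) spaced schedule from start to stop in n steps.
    mult = stop / start
    step = mult ** (1 / (n - 1))
    return np.array([start * step ** i for i in range(n)])

lrs = even_mults(1e-6, 1e-3, 3)
ratios = lrs[1:] / lrs[:-1]
assert np.allclose(ratios, ratios[0])            # constant ratio, not constant difference
assert np.allclose(lrs, np.geomspace(1e-6, 1e-3, 3))
```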
from fastai import *
from fastai.vision import *
__version__
import torchvision
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms()
data = (ImageItemList.from_folder(path).split_by_folder()
.label_from_folder().transform(tfms).databunch())
learn = create_cnn(data, torchvision.models.inception_v3)
??models.resnet18
??torchvision.models.inception_v3
def inception_v3_2(pretrained=False, **kwargs):
r"""Inception v3 model architecture from
`"Rethinking the Inception Architecture for Computer Vision" <http://arxiv.org/abs/1512.00567>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = torchvision.models.Inception3(**kwargs)
# if pretrained:
# if 'transform_input' not in kwargs:
# kwargs['transform_input'] = True
# model.load_state_dict(model_zoo.load_url(model_urls['inception_v3_google']))
return model
create_cnn(data, inception_v3_2)
??learn.fit_one_cycle
??learn.lr_range
??even_mults
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: let's say for a hypothetical network with 3 layer groups (conv_group_1, conv_group_2, linear_group).
Step2: Interesting, so if you have multiple trainable layer groups, and pass in a slice with only a stop element, you'll get the lr for the last group, and the lr / 3 for all preceeding groups.
Step3: Now what happens when I pass in a start and stop value
Step4: This is so cool. Fastai finds the order / magnitude / exponential / logarithmic mean, not the absolute mean. This is why the step multiplier is (stop/start)**(1/(n-1)) where n is the number of layer groups.
Step5: So the question I have, and why I'm here, is
Step6: This is very exciting.
Step8:
|
7,981
|
<ASSISTANT_TASK:>
Python Code:
psource(Chart)
chart = Chart(E0)
print(chart.parses('the stench is in 2 2'))
chart_trace = Chart(nlp.E0, trace=True)
chart_trace.parses('the stench is in 2 2')
print(chart.parses('the stench 2 2'))
import os, sys
sys.path = [os.path.abspath("../../")] + sys.path
from nlp4e import *
from notebook4e import psource
psource(CYK_parse)
E_Prob_Chomsky = ProbGrammar("E_Prob_Chomsky", # A Probabilistic Grammar in CNF
ProbRules(
S = "NP VP [1]",
NP = "Article Noun [0.6] | Adjective Noun [0.4]",
VP = "Verb NP [0.5] | Verb Adjective [0.5]",
),
ProbLexicon(
Article = "the [0.5] | a [0.25] | an [0.25]",
Noun = "robot [0.4] | sheep [0.4] | fence [0.2]",
Adjective = "good [0.5] | new [0.2] | sad [0.3]",
Verb = "is [0.5] | say [0.3] | are [0.2]"
))
words = ['the', 'robot', 'is', 'good']
grammar = E_Prob_Chomsky
P = CYK_parse(words, grammar)
print(P)
parses = {k: p.leaves for k, p in P.items()}
print(parses)
for subtree in P['VP', 2, 3].leaves:
print(subtree.leaves)
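The probability update in CYK_parse — take the max over every split point k — can be reproduced on the E_Prob_Chomsky grammar with a standalone toy implementation (a simplification for illustration, not the library's code):

```python
from collections import defaultdict

# E_Prob_Chomsky rules relevant to this sentence, as (B, C) -> [(A, prob)].
binary = {('NP', 'VP'): [('S', 1.0)],
          ('Article', 'Noun'): [('NP', 0.6)],
          ('Verb', 'Adjective'): [('VP', 0.5)]}
lexicon = {'the': [('Article', 0.5)], 'robot': [('Noun', 0.4)],
           'is': [('Verb', 0.5)], 'good': [('Adjective', 0.5)]}

def cyk(words):
    P = defaultdict(float)  # P[X, i, j]: best probability of X spanning words i..j
    for i, w in enumerate(words):
        for X, p in lexicon[w]:
            P[X, i, i] = p
    n = len(words)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):  # max over every split point k
                for (B, C), rules in binary.items():
                    for A, p in rules:
                        P[A, i, j] = max(P[A, i, j], p * P[B, i, k] * P[C, k + 1, j])
    return P

P = cyk(['the', 'robot', 'is', 'good'])
assert abs(P['S', 0, 3] - 0.015) < 1e-12  # 1.0 * (0.6*0.5*0.4) * (0.5*0.5*0.5)
```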
psource(astar_search_parsing)
grammar = E0
words = ['the', 'wumpus', 'is', 'dead']
astar_search_parsing(words, grammar)
words_swaped = ["the", "is", "wumpus", "dead"]
astar_search_parsing(words_swaped, grammar)
psource(beam_search_parsing)
beam_search_parsing(words, grammar)
beam_search_parsing(words_swaped, grammar)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: And then we simply call the parses function
Step3: You can see which edges get added by setting the optional initialization argument trace to true.
Step4: Let's try and parse a sentence that is not recognized by the grammar
Step5: An empty list was returned.
Step6: When updating the probability of a substring, we pick the max of its current one and the probability of the substring broken into two parts
Step7: Now let's see the probabilities table for the sentence "the robot is good"
Step8: A defaultdict object is returned (defaultdict is basically a dictionary but with a default value/type). Keys are tuples in the form mentioned above and the values are the corresponding parse trees which demonstrates how the sentence will be parsed. Let's check the details of each parsing
Step9: Please note that each item in the returned dict represents a parsing strategy. For instance, ('Article', 0, 0)
Step10: So we can interpret this step as parsing the words at index 2 and 3 together ('is' and 'good') as a verb phrase.
Step11: Example
Step12: The algorithm returns a 'S' which means it treats the inputs as a sentence. If we change the order of words to make it unreadable
Step13: Then the algorithm asserts that our words cannot be a sentence.
Step14: Example
|
7,982
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
%precision 4
Zline = 38.2 + 140.0j # [Ohm]
Zeq = 0.10 + 0.4j # [Ohm]
V_high = 14e3 # [V]
V_low = 2.4e3 # [V]
Pout = 90e3 # [W] load
PF = 0.8 # lagging
VS = 2.3e3 # [V] secondary voltage
a = V_high / V_low
a
Z_line = (1/a)**2 * Zline
print('Z_line = {:.2f} Ω'.format(Z_line))
Is = Pout / (VS*PF)
print('Is = {:.2f} A'.format(Is))
IS_angle = - arccos(PF) # negative because lagging PF
print('θ = {:.2f}°'.format(degrees(IS_angle)))
IS = Is * (cos(IS_angle) + sin(IS_angle)*1j)
print('IS = {:.2f} A ∠{:.2f}°'.format(abs(IS), degrees(IS_angle)))
V_source = VS + IS*Z_line + IS*Zeq
V_source_angle = arctan(V_source.imag/V_source.real) # angle of V_source [rad]
print('V_source = {:.1f} V ∠{:.1f}°'.format(
abs(V_source), degrees(V_source_angle)))
Vsource = V_source * a # [V]
Vsource_angle = arctan(Vsource.imag/Vsource.real) # angle of Vsource [rad]
print('Vsource = {:.1f} kV ∠{:.1f}°'.format(
abs(Vsource)/1000, # display in kV
degrees(Vsource_angle)))
VP = VS + IS*Zeq
VP_angle = arctan(VP.imag/VP.real) # angle of VP [rad]
print('VP = {:.1f} V ∠{:.1f}°'.format(abs(VP), degrees(VP_angle)))
VR = (abs(VP)-VS) / VS * 100 # [%]
print('VR = {:.2f} %'.format(VR))
R = Z_line.real + Zeq.real
Pin = Pout + abs(IS)**2 * R # [W]
print('Pin = {:.2f} kW'.format(Pin/1000)) # [kW]
eta = Pout/Pin * 100 # [%]
print('η = {:.1f} %'.format(eta))
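The referred-impedance step generalizes: an impedance on the primary appears from the secondary side scaled by 1/a². A compact recomputation of parts (a)–(b) with the same numbers as above, using complex phasors from the standard library (a sketch, not a replacement for the worked solution):

```python
import cmath
import math

def refer_impedance(Z, a):
    # An impedance Z on the primary side appears as Z / a**2 from the secondary side.
    return Z / a ** 2

a = 14e3 / 2.4e3                      # turns ratio of the step-up/step-down pair
Z_line_sec = refer_impedance(38.2 + 140.0j, a)

# Load current: 90 kW at 0.8 PF lagging on a 2.3 kV secondary.
I = 90e3 / (2.3e3 * 0.8) * cmath.exp(-1j * math.acos(0.8))
VP = 2.3e3 + I * (0.10 + 0.4j)        # add the drop across the transformer impedance
VR = (abs(VP) - 2.3e3) / 2.3e3 * 100  # voltage regulation in percent

assert abs(Z_line_sec - (38.2 + 140.0j) / a ** 2) < 1e-12
assert 0.5 < VR < 1.0                 # consistent with the ~15 V drop found above
```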
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: (a)
Step3: The feeder’s impedance referred to the secondary side is
Step4: The secondary current $I_S$ is given by
Step5: The power factor is 0.80 lagging, so the impedance angle $\theta = \arccos(PF)$ is
Step6: The phasor current is
Step7: (a)
Step8: Therefore, the voltage at the power source is
Step9: (b)
Step10: There is a voltage drop of 15 V under these load conditions.
Step11: (c)
Step12: Therefore, the efficiency of the power system is
|
7,983
|
<ASSISTANT_TASK:>
Python Code:
# Loading data, dividing, modeling and EDA below
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
data = pd.read_csv('../input/new-york-city-taxi-fare-prediction/train.csv', nrows=50000)
# Remove data with extreme outlier coordinates or negative fares
data = data.query('pickup_latitude > 40.7 and pickup_latitude < 40.8 and ' +
'dropoff_latitude > 40.7 and dropoff_latitude < 40.8 and ' +
'pickup_longitude > -74 and pickup_longitude < -73.9 and ' +
'dropoff_longitude > -74 and dropoff_longitude < -73.9 and ' +
'fare_amount > 0'
)
y = data.fare_amount
base_features = ['pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count']
X = data[base_features]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
first_model = RandomForestRegressor(n_estimators=50, random_state=1).fit(train_X, train_y)
# Environment Set-Up for feedback system.
from learntools.core import binder
binder.bind(globals())
from learntools.ml_explainability.ex2 import *
print("Setup Complete")
# show data
print("Data sample:")
data.head()
train_X.describe()
train_y.describe()
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
import eli5
from eli5.sklearn import PermutationImportance
# Make a small change to the code below to use in this problem.
# perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y)
# Check your answer
q_2.check()
# uncomment the following line to visualize your results
# eli5.show_weights(perm, feature_names = val_X.columns.tolist())
#%%RM_IF(PROD)%%
import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(first_model, random_state=1).fit(val_X, val_y)
eli5.show_weights(perm, feature_names = base_features)
q_2.check()
# q_2.hint()
# q_2.solution()
# Check your answer (Run this code cell to receive credit!)
q_3.solution()
# create new features
data['abs_lon_change'] = abs(data.dropoff_longitude - data.pickup_longitude)
data['abs_lat_change'] = abs(data.dropoff_latitude - data.pickup_latitude)
features_2 = ['pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'abs_lat_change',
'abs_lon_change']
X = data[features_2]
new_train_X, new_val_X, new_train_y, new_val_y = train_test_split(X, y, random_state=1)
second_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(new_train_X, new_train_y)
# Create a PermutationImportance object on second_model and fit it to new_val_X and new_val_y
# Use a random_state of 1 for reproducible results that match the expected solution.
perm2 = ____
# show the weights for the permutation importance you just calculated
____
# Check your answer
q_4.check()
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
# Check your answer (Run this code cell to receive credit!)
q_6.solution()
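Permutation importance itself is simple to implement by hand: shuffle one column at a time and measure the drop in validation score. A standalone sketch on synthetic data — not the taxi dataset; the coefficients and column roles here are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(1)
X = rng.normal(size=(2000, 3))
# y depends strongly on column 0, moderately on column 1, not at all on column 2.
y = 3 * X[:, 0] + 1 * X[:, 1] + rng.normal(scale=0.1, size=2000)
model = RandomForestRegressor(n_estimators=30, random_state=1).fit(X, y)

def manual_permutation_importance(model, X, y, rng):
    base = model.score(X, y)          # R^2 before any shuffling
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # destroy only column j's relationship to y
        drops.append(base - model.score(Xp, y))
    return np.array(drops)

imp = manual_permutation_importance(model, X, y, rng)
assert imp[0] > imp[1] > imp[2]       # ordering matches the data-generating process
```

Recent scikit-learn versions ship an equivalent utility as sklearn.inspection.permutation_importance.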
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following two cells may also be useful to understand the values in the training data
Step2: Question 1
Step3: Question 2
Step4: Uncomment the lines below for a hint or to see the solution.
Step5: Question 3
Step6: Question 4
Step7: How would you interpret these importance scores? Distance traveled seems far more important than any location effects.
Step8: Question 5
Step9: Question 6
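The permutation importance exercised in the questions above can be sketched by hand: shuffle one validation column at a time and measure how much the error grows. This is a minimal illustration on synthetic data, not the eli5 implementation, and every name below is illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic data: 3 informative features plus 1 pure-noise feature
X, y = make_regression(n_samples=300, n_features=4, n_informative=3,
                       noise=1.0, random_state=1)
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=30, random_state=1).fit(train_X, train_y)

base_error = mean_absolute_error(val_y, model.predict(val_X))
rng = np.random.default_rng(1)
importances = []
for col in range(X.shape[1]):
    shuffled = val_X.copy()
    rng.shuffle(shuffled[:, col])          # break this feature's link to y
    err = mean_absolute_error(val_y, model.predict(shuffled))
    importances.append(err - base_error)   # error increase = importance
```

Shuffling an informative feature should raise the validation error noticeably; shuffling the noise column should barely move it.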
|
7,984
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
# lb.fit(labels)
labels_vecs = lb.fit_transform(labels) # Your one-hot encoded labels array here
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
for train_index, test_index in ss.split(codes, labels_vecs):
# print("TRAIN:", train_index, "TEST:", test_index)
train_x, test_x_full = codes[train_index], codes[test_index]
train_y, test_y_full = labels_vecs[train_index], labels_vecs[test_index]
# train_x, train_y =
splitting_idx = int(len(test_x_full)/2)
val_x, val_y = test_x_full[:splitting_idx],test_y_full[:splitting_idx]
test_x, test_y = test_x_full[splitting_idx:],test_y_full[splitting_idx:]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
nn = tf.contrib.layers.fully_connected(inputs_, 512)
logits =tf.contrib.layers.fully_connected(nn, labels_vecs.shape[1] , activation_fn = None)# output layer logits
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)) # cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
saver = tf.train.Saver()
epochs = 50
batch_size = 10
iteration=0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
for batch_features, batch_labels in get_batches(train_x, train_y, batch_size):
sess.run(optimizer, feed_dict={
inputs_: batch_features,
labels_: batch_labels})
loss= sess.run(cost, feed_dict={
inputs_: batch_features,
labels_: batch_labels})
print("Epoch: {}/{}".format(epoch+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(epoch, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
# TODO: Your training code here
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
7,985
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# read in the iris data
iris = load_iris()
X = iris.data
y = iris.target
for i in xrange(1,5):
print "random_state is ", i,", and accuracy score is:"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=i)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
# The code below demonstrates how K-fold cross-validation splits the data
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.cross_validation import KFold
kf = KFold(25, n_folds=5, shuffle=False)
# print the contents of each training and testing set
print '{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations')
for iteration, data in enumerate(kf, start=1):
print '{:^9} {} {:^25}'.format(iteration, data[0], data[1])
from sklearn.cross_validation import cross_val_score
knn = KNeighborsClassifier(n_neighbors=5)
# cross_val_score chains the whole cross-validation process together, so no manual data splitting is needed
# the cv parameter specifies how many folds to split the original data into
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print scores
# use average accuracy as an estimate of out-of-sample accuracy
# average the test accuracy over the ten iterations
print scores.mean()
# search for an optimal value of K for KNN model
k_range = range(1,31)
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print k_scores
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(k_range, k_scores)
plt.xlabel("Value of K for KNN")
plt.ylabel("Cross validated accuracy")
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean()
# 10-fold cross-validation with logistic regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
print cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
# read in the advertising dataset
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# create a Python list of three feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the DataFrame (X)
X = data[feature_cols]
# select the Sales column as the response (y)
y = data.Sales
# 10-fold cv with all features
lm = LinearRegression()
scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')
print scores
# fix the sign of MSE scores
mse_scores = -scores
print mse_scores
# convert from MSE to RMSE
rmse_scores = np.sqrt(mse_scores)
print rmse_scores
# calculate the average RMSE
print rmse_scores.mean()
# 10-fold cross-validation with two features (excluding Newspaper)
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
print np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The test accuracies above show that different train/test splits lead to different accuracies. The basic idea of cross-validation is to perform a series of splits of the dataset, producing a set of different train/test sets, then train the model on each, compute the test accuracy, and finally average the results. This effectively reduces the variation in test accuracy.
Step2: 3. Recommendations for using cross-validation
Step3: The example above illustrates the bias-variance trade-off: small K gives low bias and high variance, while large K gives high bias and low variance. The best model parameter lies in between, where bias and variance are balanced and the model generalizes best to out-of-sample data.
Step4: 4.3 Using cross-validation for feature selection
Step5: Note that the scores above are all negative. Why would mean squared error come out negative? Because mean_squared_error here is a loss function, which the optimization aims to minimize, whereas classification accuracy is a reward function, which the optimization aims to maximize.
|
7,986
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from pylab import *
N = 10**5
lambda_ = 2.0
########################################
# Supply the missing coefficient herein below
V1 = -1.0/lambda_
data = V1*log(rand(N))
########################################
m = mean(data)
v = var(data)
print("\u03BB={0}: m={1:1.2f}, \u03C3\u00B2={2:1.2f}"
.format(lambda_,m,v)) #\u... for unicode caracters
from scipy.stats import poisson
lambda_ = 20
N = 10**5
####################################
# Give parameters mu and size in function poisson.rvs
# (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html)
sample = poisson.rvs(mu= lambda_, size= N)
####################################
# mean and variance of sample vector
mean_sample = mean(sample)
var_sample = var(sample)
print(("\u03BB = {0}\nestimated mean = {1:1.2f}\n"
+"estimated var = {2:1.2f}")
.format(lambda_,mean_sample, var_sample))
#------------------------
V2 = mean_sample
print("---------------------------\n"
+"RESULTS SUPPLIED FOR LAB 1:\n"
+"---------------------------")
results = ("V"+str(k) for k in range(1,3))
for x in results:
try:
print(x+" = {0:.2f}".format(eval(x)))
except:
print(x+": variable is undefined")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) The discrete valued random variable $X$ follows a Poisson distribution if its probabilities depend on a parameter $\lambda$ and are such that
Step2: Your answers for the exercise
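A quick empirical check of the Poisson property used in this lab, namely that mean and variance both equal λ. The exercise draws samples with scipy.stats.poisson.rvs; this sketch uses numpy's generator as an equivalent alternative, with an arbitrary λ and sample size.

```python
import numpy as np

lam = 20
rng = np.random.default_rng(0)
sample = rng.poisson(lam, size=100_000)
# For Poisson(λ), E[X] = Var[X] = λ, so both estimates should sit near 20
print(sample.mean(), sample.var())
```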
|
7,987
|
<ASSISTANT_TASK:>
Python Code:
X = np.array([[7, 5],[5, 7],[7, 7],[4, 4],[4, 6],[1, 4],[0, 0],[2, 2],[8, 7],[6, 8],[5, 5],[3, 7]], dtype=float)
plt.scatter(X[:,0], X[:,1], s=100)
plt.show()
from sklearn.cluster import KMeans
model = KMeans(n_clusters=2, init="random", n_init=1, max_iter=1, random_state=1).fit(X)
c0, c1 = model.cluster_centers_
print(c0, c1)
plt.scatter(X[model.labels_==0,0], X[model.labels_==0,1], s=100, marker='v', c='r')
plt.scatter(X[model.labels_==1,0], X[model.labels_==1,1], s=100, marker='^', c='b')
plt.scatter(c0[0], c0[1], s=100, c="r")
plt.scatter(c1[0], c1[1], s=100, c="b")
plt.show()
def kmeans_df(c0, c1):
df = pd.DataFrame(np.hstack([X,
np.linalg.norm(X - c0, axis=1)[:, np.newaxis],
np.linalg.norm(X - c1, axis=1)[:, np.newaxis],
model.labels_[:, np.newaxis]]),
columns=["x0", "x1", "d0", "d1", "c"])
return df
kmeans_df(c0, c1)
print(X[model.labels_==0,0].mean(), X[model.labels_==0,1].mean())
print(X[model.labels_==1,0].mean(), X[model.labels_==1,1].mean())
model.score(X)
model = KMeans(n_clusters=2, init="random", n_init=1, max_iter=2, random_state=0).fit(X)
c0, c1 = model.cluster_centers_
print(c0, c1)
plt.scatter(X[model.labels_==0,0], X[model.labels_==0,1], s=100, marker='v', c='r')
plt.scatter(X[model.labels_==1,0], X[model.labels_==1,1], s=100, marker='^', c='b')
plt.scatter(c0[0], c0[1], s=100, c="r")
plt.scatter(c1[0], c1[1], s=100, c="b")
kmeans_df(c0, c1)
model.score(X)
(np.linalg.norm(X[model.labels_==0] - c0, axis=1)**2).sum() + \
(np.linalg.norm(X[model.labels_==1] - c1, axis=1)**2).sum()
model = KMeans(n_clusters=2, init="random", n_init=1, max_iter=100, random_state=0).fit(X)
c0, c1 = model.cluster_centers_
print(c0, c1)
plt.scatter(X[model.labels_==0,0], X[model.labels_==0,1], s=100, marker='v', c='r')
plt.scatter(X[model.labels_==1,0], X[model.labels_==1,1], s=100, marker='^', c='b')
plt.scatter(c0[0], c0[1], s=100, c="r")
plt.scatter(c1[0], c1[1], s=100, c="b")
kmeans_df(c0, c1)
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
centers = [[1, 1], [-1, -1], [1, -1]]
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8)}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum)
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
est.fit(X)
labels = est.labels_
    ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(float), s=100, cmap=mpl.cm.jet)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
plt.title(name)
fignum = fignum + 1
plt.show()
from sklearn.datasets import load_digits
from sklearn.preprocessing import scale
digits = load_digits()
data = scale(digits.data)
def print_digits(images, labels):
f = plt.figure(figsize=(10,2))
plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)
i = 0
while (i < 10 and i < images.shape[0]):
ax = f.add_subplot(1, 10, i + 1)
ax.imshow(images[i], cmap=plt.cm.bone)
ax.grid(False)
ax.table
ax.set_title(labels[i])
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
plt.tight_layout()
i += 1
print_digits(digits.images, range(10))
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test, images_train, images_test = \
train_test_split(data, digits.target, digits.images, test_size=0.25, random_state=42)
from sklearn.cluster import KMeans
clf = KMeans(init="k-means++", n_clusters=10, random_state=42)
clf.fit(X_train)
print_digits(images_train, clf.labels_)
y_pred = clf.predict(X_test)
def print_cluster(images, y_pred, cluster_number):
images = images[y_pred == cluster_number]
y_pred = y_pred[y_pred == cluster_number]
print_digits(images, y_pred)
for i in range(10):
print_cluster(images_test, y_pred, i)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
from sklearn.decomposition import PCA
pca = PCA(n_components=2).fit(X_train)
reduced_X_train = pca.transform(X_train)
clf = KMeans(init="k-means++", n_clusters=10, random_state=42)
clf.fit(reduced_X_train)
centroids = clf.cluster_centers_
xmin, xmax = reduced_X_train[:,0].min(), reduced_X_train[:,0].max()
ymin, ymax = reduced_X_train[:,1].min(), reduced_X_train[:,1].max()
XGrid, YGrid = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000),
np.arange(ymin, ymax, (ymax-ymin)/1000))
ZGrid = np.reshape(clf.predict(np.array([XGrid.ravel(), YGrid.ravel()]).T), XGrid.shape)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set1"))
plt.contourf(XGrid, YGrid, ZGrid, cmap=mpl.cm.Accent)
plt.scatter(reduced_X_train[:, 0], reduced_X_train[:, 1], c=y_train, cmap=mpl.cm.Accent)
plt.scatter(centroids[:, 0], centroids[:, 1], marker='.', s=1000, color='k', alpha=0.6, zorder=10)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: K-Means++
Step2: Example
|
7,988
|
<ASSISTANT_TASK:>
Python Code:
def printArr(arr, n):
    arr.sort()
    if arr[0] == arr[n - 1]:
        print("No")
    else:
        print("Yes")
    for i in range(n):
        print(arr[i], end=" ")
    print()

if __name__ == '__main__':
    arr = [1, 2, 2, 1, 3, 1]
    N = len(arr)
    printArr(arr, N)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,989
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
import math
import os.path
import random
import re
import sys
import tarfile
import numpy as np
import librosa as rosa
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, Dropout, Flatten, Lambda
#from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio
#from tensorflow.python.ops import io_ops
#from tensorflow.python.platform import gfile
#from tensorflow.python.util import compat
number_mfcc=128
sample_rate=16000
#for mac
#speech_data_dir="/Users/hermitwang/Downloads/speech_dataset"
#for ubuntu
speech_data_dir="/home/hermitwang/TrainingData/datasets/speech_dataset"
def load_wav_mfcc(filename):
wav_loader, sample_rate = rosa.load(filename, sr=16000)
#print(rosa.get_duration(wav_loader, sample_rate))
wav_mfcc = rosa.feature.mfcc(y=wav_loader, sr=16000, n_mfcc=128)
return wav_mfcc
class WavMFCCLoader(object):
def __init__(self, data_dir, wanted, validation_percentage=0, testing_percentage=0):
self.data_dir = data_dir
self.wanted = wanted
self.wav_files = dict()
self.wav_file_index()
def wav_file_index(self):
for dirpath, dirnames, files in os.walk(self.data_dir):
for name in files:
if name.lower().endswith('.wav'):
word_name = dirpath.rsplit('/', 1)[1];
if word_name in self.wanted:
file_name = os.path.join(dirpath, name)
#print(file_name, dirpath, word_name)
if word_name in self.wav_files.keys():
self.wav_files[word_name].append(file_name)
else:
self.wav_files[word_name] = [file_name]
return self.wav_files
def wavs_to_mfcc_pair(self):
how_many_words = len(self.wanted)
a_index = random.randint(0, how_many_words - 1)
b_index = random.randint(0, how_many_words - 1)
a_wav_index = b_wav_index = -1
mfcc_pair = np.array([3, 1])
if (a_index != b_index):
a_wav_index = random.randint(0, len(self.wav_files[self.wanted[a_index]]) - 1)
b_wav_index = random.randint(0, len(self.wav_files[self.wanted[b_index]]) - 1)
mfcc_1 = load_wav_mfcc(self.wav_files[self.wanted[a_index]][a_wav_index])
mfcc_2 = load_wav_mfcc(self.wav_files[self.wanted[b_index]][b_wav_index])
mfcc_pair = 0
else:
a_wav_index = random.randint(0, len(self.wav_files[self.wanted[a_index]]) - 1)
b_wav_index = random.randint(0, len(self.wav_files[self.wanted[a_index]]) - 1)
mfcc_1 = load_wav_mfcc(self.wav_files[self.wanted[a_index]][a_wav_index])
mfcc_2 = load_wav_mfcc(self.wav_files[self.wanted[a_index]][b_wav_index])
mfcc_pair = 1
#print("aaa", mfcc_1.shape, mfcc_2.shape)
return mfcc_1, mfcc_2, mfcc_pair
def get_mfcc_pairs(self, how_many):
mfcc1_data = np.zeros((how_many, 128, 32))
mfcc2_data = np.zeros((how_many, 128, 32))
same_data = np.zeros(how_many)
for i in range(0, how_many - 1):
mfcc1_data_, mfcc2_data_, same_data[i] = self.wavs_to_mfcc_pair()
mfcc1_data[i, :, 0:mfcc1_data_.shape[1]] = mfcc1_data_
mfcc2_data[i, :, 0:mfcc2_data_.shape[1]] = mfcc2_data_
#np.append(mfcc1_data, mfcc1_)
#np.append(mfcc2_data, mfcc2_)
#np.append(same_data, same_)
#print(mfcc_pairs)
return mfcc1_data, mfcc2_data, same_data
loader = WavMFCCLoader(speech_data_dir, wanted=["one", "two", "bed"])
#wav_list = loader.wav_file_index()
mfcc1_data, mfcc2_data, same_pair = loader.get_mfcc_pairs(100)
print(same_pair)
def create_keras_model(fingerprint_shape, is_training=True):
model = Sequential()
model.add(Conv2D(input_shape=fingerprint_shape, filters=64, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
#if (is_training):
# model.add(Dropout(0.5))
model.add(Conv2D(filters=64, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
#if (is_training):
# model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(1024))
if (is_training):
model.add(Dropout(0.5))
#model.add(Dense(labels_count, activation="softmax"))
return model
def model_train(labels_count=10, epochs=10, batch_size=32):
x_train = np.random.random((1000, 98, 40, 1))
y_train = keras.utils.to_categorical(np.random.randint(labels_count, size=(1000, 1)), num_classes=labels_count)
x_test = np.random.random((100, 98, 40, 1))
y_test = keras.utils.to_categorical(np.random.randint(labels_count, size=(100, 1)), num_classes=labels_count)
keras_model = create_keras_model((98, 40, 1), True)
keras_model.add(Dense(labels_count, activation='softmax'))
keras_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
keras_model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)
score = keras_model.evaluate(x_test, y_test, batch_size)
return score
def create_siamese_model(input_shape, siamese_model = 'concat'):
right_input = Input(input_shape)
left_input = Input(input_shape)
keras_model = create_keras_model(input_shape)
right_encoder = keras_model(right_input)
left_encoder = keras_model(left_input)
if (siamese_model == 'concat'):
concatenated_layer = keras.layers.concatenate([right_encoder, left_encoder])
elif (siamese_model == 'abs'):
        concatenated_layer = Lambda(lambda x: tf.sqrt(tf.reduce_sum(tf.square(x[0] - x[1]), axis=1, keepdims=True)), output_shape=lambda x: (x[0][0], 1))([right_encoder, left_encoder])
else:
raise ValueError("unknown siamese_model")
output_layer = Dense(1, activation='sigmoid')(concatenated_layer)
siamese_model = Model([right_input, left_input], output_layer)
return siamese_model
def siamese_train(siamese_model='abs'):
    siamese_model = create_siamese_model((128, 32, 1), siamese_model=siamese_model)
siamese_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
loader = WavMFCCLoader(speech_data_dir, wanted=["one", "two", "bed"])
mfcc1_data, mfcc2_data, pairs = loader.get_mfcc_pairs(1000)
x1_train = mfcc1_data.reshape((1000, 128, 32, 1)) #np.random.random((1000, 98, 40, 1))
x2_train = mfcc2_data.reshape((1000, 128, 32, 1)) #np.random.random((1000, 98, 40, 1))
y_train = pairs #keras.utils.to_categorical(pairs, num_classes=1)
siamese_model.fit([x1_train, x2_train], y_train, epochs=10, batch_size=32)
mfcc1_test, mfcc2_test, pairs_test = loader.get_mfcc_pairs(100)
x1_test = mfcc1_test.reshape((100, 128,32, 1))
x2_test = mfcc2_test.reshape((100, 128,32, 1))
y_test = pairs_test
loss, accuracy = siamese_model.evaluate([x1_test, x2_test], y_test)
print(loss)
return accuracy
#wav_mfcc = load_wav_mfcc("/Users/hermitwang/Downloads/speech_dataset/backward/0a2b400e_nohash_0.wav")
#print(wav_mfcc.shape)
score = siamese_train()
print(score)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One shot keyword trigger
Step2: Wav MFCC loader
Step3: Conv Network
Step4: Siamese Network
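The 'abs' merge in the siamese network reduces a pair of embedding vectors to a single Euclidean distance per example. A numpy analogue of that Lambda-layer reduction, with made-up toy embeddings:

```python
import numpy as np

# Two batches of 2-D "embeddings" standing in for the two encoder outputs
right = np.array([[1.0, 2.0], [0.0, 0.0]])
left  = np.array([[1.0, 0.0], [3.0, 4.0]])

# Same shape-reducing step as tf.sqrt(tf.reduce_sum(tf.square(r - l), axis))
dist = np.sqrt(np.sum((right - left) ** 2, axis=1))   # one distance per pair
```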
|
7,990
|
<ASSISTANT_TASK:>
Python Code:
import bqplot.pyplot as plt
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the figure object
fig = plt.figure(title="Simple Line Chart")
# 2. By default, axes are created with basic defaults. If you want to customize the axes,
# create a dict and pass it as the `axes_options` argument to the marks
axes_opts = {"x": {"label": "X"}, "y": {"label": "Y"}}
# 3. Create a Lines mark by calling plt.plot function
line = plt.plot(
x=x, y=y, axes_options=axes_opts
) # note that custom axes options are passed here
# 4. Render the figure using plt.show()
plt.show()
# first, let's create two vectors x and y to plot a bar chart
x = list("ABCDE")
y = np.random.rand(5)
# 1. Create the figure object
fig = plt.figure(title="Simple Bar Chart")
# 2. Customize the axes options
axes_opts = {
"x": {"label": "X", "grid_lines": "none"},
"y": {"label": "Y", "tick_format": ".0%"},
}
# 3. Create a Bars mark by calling plt.bar function
bar = plt.bar(x=x, y=y, padding=0.2, axes_options=axes_opts)
# 4. directly display the figure object created in step 1 (note that the toolbar no longer shows up)
fig
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the figure object
fig = plt.figure(title="Scatter and Line")
# 3. Create line and scatter marks
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = plt.plot(x=x, y=y, colors=["green"], stroke_width=3)
scatter = plt.scatter(x=x, y=y_noise, colors=["red"], stroke="black")
# setting x and y axis labels using pyplot functions. Note that these functions
# should be called only after creating the marks
plt.xlabel("X")
plt.ylabel("Y")
# 4. render the figure
fig
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps for building plots in pyplot
Step1: For creating other marks (like scatter, pie, bars, etc.), only step 2 needs to be changed. Lets look a simple example to create a bar chart
Step2: Multiple marks can be rendered in a figure. It's as easy as creating marks one after another. They'll all be added to the same figure!
|
7,991
|
<ASSISTANT_TASK:>
Python Code:
summaryDf = pd.DataFrame([extractSummaryLine(l) for l in open('../../data/learnedModel/anto/summary.txt').readlines()],
columns=['bidirectional', 'strict', 'clf', 'feature', 'post', 'precision', 'recall', 'f1'])
summaryDf.sort_values('f1', ascending=False)[:10]
!python ../../toolbox/script/detailConceptPairClfError.py ../../data/voc/npy/wikiEn-skipgram.npy ../../data/learnedModel/anto/bidi__RandomForestClassifier_pCosSim_postNormalize.dill ../../data/wordPair/wordnetAnto.txt anto ../../data/wordPair/wordnetAnto_fake.txt notAnto
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can observe quite good f1-score on RandomForest with normalised projected cosine similarity.
|
7,992
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from astropy.time import Time
import astropy.units as u
from astroplan import Observer, FixedTarget
# Observe from Keck
obs = Observer.at_site("Keck")
# Observe these three stars
name_list = ['vega', 'rigel', 'sirius']
target_list = [FixedTarget.from_name(name) for name in name_list]
# Check constraints once an hour for one day
times = Time.now() + u.Quantity(range(24), unit=u.hour)
from astroplan import (AltitudeConstraint, MoonSeparationConstraint,
AtNightConstraint)
from astropy.time import Time
# Make an instance of each constraint
constraint1 = AltitudeConstraint(min=25*u.deg)
constraint2 = AtNightConstraint.twilight_astronomical()
constraint3 = MoonSeparationConstraint(min=30*u.deg)
# Apply each constraint, store each result separately
c1, c2, c3 = [c(obs, target_list, times)
for c in [constraint1, constraint2, constraint3]]
# Show one result
print("AltitudeConstraint result: (col=time, row=target)\n{0}".format(c1))
combination = (c1 & c2) | c3
combination
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 3))
ax.imshow(combination, interpolation='nearest', origin='lower', cmap=plt.cm.binary_r)
ax.set_yticks(range(3))
ax.set_yticklabels([t.name for t in target_list])
time_inds = range(24)[::2]
ax.set_xticks(time_inds)
ax.set_xticklabels([t.datetime.strftime("%H:%M") for t in times[time_inds]])
ax.set_title("(AltitudeConstraint & AtNightConstraint) | MoonSeparationConstraint");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make a list of constraints to compute
Step2: Now combine those constraint results with non-trivial boolean logic
Step3: Simple visualization of this combined constraint result, white pixels = constraints are met, black pixels = constraints are not met.
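The non-trivial boolean logic in Step 2 is plain element-wise algebra on boolean constraint grids (rows = targets, columns = times). A sketch with made-up grids in place of the astroplan results:

```python
import numpy as np

c1 = np.array([[True, True, False], [False, True, True]])
c2 = np.array([[True, False, False], [True, True, False]])
c3 = np.array([[False, False, True], [False, False, False]])

# Meets (constraint 1 AND constraint 2) OR constraint 3, per target/time cell
combined = (c1 & c2) | c3
```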
|
7,993
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
import matplotlib.pyplot as plt
iris = datasets.load_iris()
def plot(dataset, ax, i, j):
ax.scatter(dataset.data[:,i], dataset.data[:,j], c=dataset.target, s=50)
ax.set_xlabel(dataset.feature_names[i], fontsize=20)
ax.set_ylabel(dataset.feature_names[j], fontsize=20)
# row and column sharing
f, ((ax1, ax2), (ax3, ax4), (ax5,ax6)) = plt.subplots(3, 2, figsize=(16,8))
plot(iris, ax1, 0, 1)
plot(iris, ax2, 0, 2)
plot(iris, ax3, 0, 3)
plot(iris, ax4, 1, 2)
plot(iris, ax5, 1, 3)
plot(iris, ax6, 2, 3)
f.tight_layout()
plt.show()
from mat281_code import iplot
iplot.kmeans(N_points=100, n_clusters=4)
import numpy as np
from scipy.linalg import norm
def find_centers(X, k, seed=None):
if seed is None:
seed = np.random.randint(10000000)
np.random.seed(seed)
# Initialize to K random centers
old_centroids = random_centers(X, k)
new_centroids = random_centers(X, k)
while not has_converged(new_centroids, old_centroids):
old_centroids = new_centroids
# Assign all points in X to clusters
clusters = cluster_points(X, old_centroids)
# Reevaluate centers
new_centroids = reevaluate_centers(X, clusters, k)
return (new_centroids, clusters)
def random_centers(X, k):
index = np.random.randint(0, X.shape[0], k)
return X[index, :]
def has_converged(new_mu, old_mu, tol=1E-6):
num = norm(np.array(new_mu)-np.array(old_mu))
den = norm(new_mu)
rel_error= num/den
return rel_error < tol
def cluster_points(X, centroids):
clusters = []
for i, x in enumerate(X):
distances = np.array([norm(x-cj) for cj in centroids])
clusters.append( distances.argmin())
return np.array(clusters)
def reevaluate_centers(X, clusters, k):
centroids = []
for j in range(k):
cj = X[clusters==j,:].mean(axis=0)
centroids.append(cj)
return centroids
from mat281_code import gendata
from mat281_code import plot
from mat281_code import kmeans
X = gendata.init_blobs(1000, 4, seed=40)
ax = plot.data(X)
centroids, clusters = kmeans.find_centers(X, k=4)
plot.clusters(X, centroids, clusters)
from mat281_code import gendata
from mat281_code import plot
from sklearn.cluster import KMeans
X = gendata.init_blobs(10000, 6, seed=43)
plot.data(X)
kmeans = KMeans(n_clusters=6)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
clusters = kmeans.labels_
plot.clusters(X, centroids, clusters)
import numpy as np
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
# Parameters
n_clusters = 8
# Loading the data
iris = datasets.load_iris()
X = iris.data
y_true = iris.target
# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X)
y_pred = kmeans.labels_
# Show the classificacion report
cm = confusion_matrix(y_true, y_pred)
print cm
print (cm.sum() - np.diag(cm).sum() ) / float(cm.sum()) # 16/100
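Because k-means cluster labels are arbitrary integers (and here `n_clusters = 8` does not match the 3 iris classes), the raw confusion matrix mixes label permutations into the error count. A common remedy is to map each cluster to the majority true class among its members before scoring. A minimal numpy sketch with toy labels (not the iris data):

```python
import numpy as np

# Toy stand-in for the notebook's y_true / y_pred.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([3, 3, 3, 0, 0, 1, 1, 1, 1])  # cluster ids from k-means

# Map each cluster id to the majority true class among its members.
mapping = {}
for c in np.unique(y_pred):
    members = y_true[y_pred == c]
    mapping[c] = np.bincount(members).argmax()

y_mapped = np.array([mapping[c] for c in y_pred])
accuracy = (y_mapped == y_true).mean()
print(mapping, accuracy)
```

With this mapping, the diagonal of a confusion matrix built from `y_mapped` measures genuine agreement rather than label-permutation artifacts.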
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Clustering
Step2: k-means
Step3: Application to data
Step4: Is it necessary to reinvent the wheel?
Step5: How do we select k?
|
7,994
|
<ASSISTANT_TASK:>
Python Code:
import random, datetime
import numpy as np
import pylab as plt
import statsmodels.api as sm
from scipy.stats import norm
from scipy.stats.stats import pearsonr
# str, int, float
str(3)
# int
int('5')
# float
float('7.1')
range(10)
range(1, 10)
dir
dir(str)[-5:]
help(str)
x = ' Hello WorlD '
dir(x)[-10:]
# lower
x.lower()
# upper
x.upper()
# rstrip
x.rstrip()
# strip
x.strip()
# replace
x.replace('lo', '')
# split
x.split('lo')
# join
','.join(['a', 'b'])
x = 'hello world'
type(x)
l = [1,2,3,3] # list
t = (1, 2, 3, 3) # tuple
s = set([1,2,3,3]) # set
d = {'a':1,'b':2,'c':3} # dict
a = np.array(l) # array
print l, t, s, d, a
l = [1,2,3,3] # list
l.append(4)
l
d = {'a':1,'b':2,'c':3} # dict
d.keys()
d = {'a':1,'b':2,'c':3} # dict
d.values()
d = {'a':1,'b':2,'c':3} # dict
d['b']
d = {'a':1,'b':2,'c':3} # dict
d.items()
def devidePlus(m, n): # the header ends with a colon
    y = float(m)/n + 1 # note the indentation
    return y # note the return statement
range(10)
range(1, 10)
for i in range(10):
print i, i*10, i**2
for i in range(10):
print i*10
for i in range(10):
print devidePlus(i, 2)
# for loop inside a list comprehension
r = [devidePlus(i, 2) for i in range(10)]
r
map(devidePlus, [4,3,2], [2, 1, 5])
# Note: (4, 2) is evaluated as one pair, (3, 1) as another pair
map(lambda x, y: x + y, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
map(lambda x, y, z: x + y - z, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [3, 3, 2, 2, 5])
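The pairwise behaviour of `map` can also be written as a `zip`-based list comprehension, which is often more readable. A small sketch (wrapped in `list()` so it also works on Python 3, where `map` is lazy, unlike the Python 2 code in this notebook):

```python
nums_a = [1, 3, 5]
nums_b = [2, 4, 6]

# map pairs elements positionally; the comprehension below is equivalent.
mapped = list(map(lambda x, y: x + y, nums_a, nums_b))
comprehended = [x + y for x, y in zip(nums_a, nums_b)]
print(mapped, comprehended)
```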
j = 3
if j%3 == 1:
    print 'the remainder is 1'
elif j%3 == 2:
    print 'the remainder is 2'
else:
    print 'the remainder is neither 1 nor 2'
x = 5
if x < 5:
y = -1
z = 5
elif x > 5:
y = 1
z = 11
else:
y = 0
z = 10
print(x, y, z)
j = 0
while j <10:
print j
    j+=1 # avoid an infinite loop
j = 0
while j <10:
if j%2 != 0:
print j**2
    j+=1 # avoid an infinite loop
j = 0
while j <50:
if j == 30:
break
if j%2 != 0:
print j**2
    j+=1 # avoid an infinite loop
a = 4
while a:
print a
a -= 1
if a < 0:
a = None # []
for i in [2, 0, 5]:
try:
print devidePlus(4, i)
except Exception, e:
print e
pass
data =[[i, i**2, i**3] for i in range(10)]
data
for i in data:
print '\t'.join(map(str, i))
type(data)
len(data)
data[0]
# Save the data
data =[[i, i**2, i**3] for i in range(10000)]
f = open("/Users/chengjun/github/cjc2016/data/data_write_to_file.txt", "wb")
for i in data:
f.write('\t'.join(map(str,i)) + '\n')
f.close()
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
data = f.readlines()
data[:5]
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
data = f.readlines(1000)
len(data)
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
print f.readline()
f = [1, 2, 3, 4, 5]
for k, i in enumerate(f):
print k, i
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for i in f:
print i
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for k, i in enumerate(f):
if k%2000 ==0:
print i
data = []
line = '0\t0\t0\n'
line = line.replace('\n', '')
line = line.split('\t')
line = [int(i) for i in line] # convert str to int
data.append(line)
data
# Read the data
data = []
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for line in f:
#line = line.replace('\n', '').split('\t')
#line = [int(i) for i in line]
data.append(line)
data
# Read the data
data = []
with open('/Users/chengjun/github/cjc2016/data/data_write_to_file.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
import json
data_dict = {'a':1, 'b':2, 'c':3}
with open('/Users/chengjun/github/cjc2016/save_dict.json', 'w') as f:
json.dump(data_dict, f)
dd = json.load(open("/Users/chengjun/github/cjc2016/save_dict.json"))
dd
data_list = range(10)
with open('/Users/chengjun/github/cjc2016/save_list.json', 'w') as f:
json.dump(data_list, f)
dl = json.load(open("/Users/chengjun/github/cjc2016/save_list.json"))
dl
import dill # pip install dill
# http://trac.mystic.cacr.caltech.edu/project/pathos/wiki/dill
def myFunction(num):
return num,num
with open('/Users/chengjun/github/cjc2016/data.pkl', 'wb') as f:
dill.dump(myFunction, f)
with open('/Users/chengjun/github/cjc2016/data.pkl', 'r') as f:
    newFunction = dill.load(f)
newFunction('hello')
%matplotlib inline
import matplotlib.pyplot as plt
x = range(1, 100)
y = [i**-3 for i in x]
plt.plot(x, y, 'b-s')
plt.ylabel('$p(k)$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('Degree Distribution')
plt.show()
import numpy as np
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--')
plt.plot(t, t**2, 'bs')
plt.plot(t, t**3, 'g^')
plt.show()
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t**2, 'b-s', label = '1')
plt.plot(t, t**2.5, 'r-o', label = '2')
plt.plot(t, t**3, 'g-^', label = '3')
plt.annotate(r'$\alpha = 3$', xy=(3.5, 40), xytext=(2, 80),
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize = 20)
plt.ylabel('$f(t)$', fontsize = 20)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc=2,numpoints=1,fontsize=10)
plt.show()
# plt.savefig('/Users/chengjun/GitHub/cjc2016/figure/save_figure.png',
# dpi = 300, bbox_inches="tight",transparent = True)
plt.figure(1)
plt.subplot(221)
plt.plot(t, t, 'r--')
plt.text(2, 0.8*np.max(t), r'$\alpha = 1$', fontsize = 20)
plt.subplot(222)
plt.plot(t, t**2, 'bs')
plt.text(2, 0.8*np.max(t**2), r'$\alpha = 2$', fontsize = 20)
plt.subplot(223)
plt.plot(t, t**3, 'g^')
plt.text(2, 0.8*np.max(t**3), r'$\alpha = 3$', fontsize = 20)
plt.subplot(224)
plt.plot(t, t**4, 'r-o')
plt.text(2, 0.8*np.max(t**4), r'$\alpha = 4$', fontsize = 20)
plt.show()
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo')
plt.plot(t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
import matplotlib.gridspec as gridspec
t = np.arange(0., 5., 0.2)
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
plt.plot(t, t**2, 'b-s')
ax2 = plt.subplot(gs[1,:-1])
plt.plot(t, t**2, 'g-s')
ax3 = plt.subplot(gs[1:, -1])
plt.plot(t, t**2, 'r-o')
ax4 = plt.subplot(gs[-1,0])
plt.plot(t, t**2, 'g-^')
ax5 = plt.subplot(gs[-1,-2])
plt.plot(t, t**2, 'b-<')
plt.tight_layout()
def OLSRegressPlot(x,y,col,xlab,ylab):
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant, beta = res.params
r2 = res.rsquared
lab = r'$\beta = %.2f, \,R^2 = %.2f$' %(beta,r2)
plt.scatter(x,y,s=60,facecolors='none', edgecolors=col)
plt.plot(x,constant + x*beta,"red",label=lab)
plt.legend(loc = 'upper left',fontsize=16)
plt.xlabel(xlab,fontsize=26)
plt.ylabel(ylab,fontsize=26)
x = np.random.randn(50)
y = np.random.randn(50) + 3*x
pearsonr(x, y)
fig = plt.figure(figsize=(10, 4),facecolor='white')
OLSRegressPlot(x,y,'RoyalBlue',r'$x$',r'$y$')
plt.show()
fig = plt.figure(figsize=(7, 4),facecolor='white')
data = norm.rvs(10.0, 2.5, size=5000)
mu, std = norm.fit(data)
plt.hist(data, bins=25, normed=True, alpha=0.6, color='g')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = r"$\mu = %.2f, \, \sigma = %.2f$" % (mu, std)
plt.title(title,size=16)
plt.show()
from matplotlib.dates import WeekdayLocator, DayLocator, MONDAY, DateFormatter
from matplotlib.finance import quotes_historical_yahoo_ochl, candlestick_ochl
date1 = (2014, 2, 1)
date2 = (2014, 5, 1)
quotes = quotes_historical_yahoo_ochl('INTC', date1, date2)
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(1,1,1)
candlestick_ochl(ax, quotes, width=0.8, colorup='green', colordown='r', alpha=0.8)
mondays = WeekdayLocator(MONDAY) # major ticks on the mondays
alldays = DayLocator() # minor ticks on the days
weekFormatter = DateFormatter('%b %d') # e.g., Jan 12
ax.xaxis.set_major_locator(mondays)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(weekFormatter)
ax.autoscale_view()
plt.setp( plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
plt.title(r'$Intel \,Corporation \,Stock \,Price$',size=16)
fig.subplots_adjust(bottom=0.2)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variable Type
Step2: dir & help
Step3: type
Step4: Data Structure
Step5: 定义函数
Step6: For 循环
Step7: map
Step8: if elif else
Step9: while循环
Step10: try except
Step11: Write and Read data
Step12: 保存中间步骤产生的字典数据
Step13: 重新读入json
Step14: use dill to save data
Step15: http
|
7,995
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv("../kyphosis.csv")
df.head()
# TODO 1
sns.pairplot(df, hue="Kyphosis", palette="Set1")
from sklearn.model_selection import train_test_split
X = df.drop("Kyphosis", axis=1)
y = df["Kyphosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
# TODO 2
dtree.fit(X_train, y_train)
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
# TODO 3a
print(classification_report(y_test, predictions))
# TODO 3b
print(confusion_matrix(y_test, predictions))
import pydot
from IPython.display import Image
from six import StringIO
from sklearn.tree import export_graphviz
features = list(df.columns[1:])
features
dot_data = StringIO()
export_graphviz(
dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True
)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
# TODO 4a
print(confusion_matrix(y_test, rfc_pred))
# TODO 4b
print(classification_report(y_test, rfc_pred))
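To make the numbers in `classification_report` concrete, the per-class metrics can be derived directly from a 2×2 confusion matrix. A sketch with a hypothetical matrix (the numbers are made up, not this model's results):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix (rows = true class [neg, pos],
# cols = predicted), standing in for confusion_matrix(y_test, rfc_pred).
cm = np.array([[18, 2],
               [3,  2]])

tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()
precision = tp / (tp + fp)   # of predicted positives, how many are real
recall = tp / (tp + fn)      # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```

These are exactly the quantities `classification_report` tabulates per class, plus the support counts.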
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the Data
Step2: Exploratory Data Analysis
Step3: Train Test Split
Step4: Decision Trees
Step5: Prediction and Evaluation
Step6: Tree Visualization
Step7: Random Forests
|
7,996
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import sqlalchemy
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('hw4.ok')
my_URI = "postgres://sam:@localhost:5432/fec"
%load_ext sql
%sql $my_URI
engine = sqlalchemy.create_engine(my_URI)
connection = engine.connect()
# We use `LIMIT 5` to avoid displaying a huge table.
# Although our tables shouldn't get too large to display,
# this is generally good practice when working in the
# notebook environment. Jupyter notebooks don't handle
# very large outputs well.
%sql SELECT * from cand LIMIT 5
query = '''
SELECT cand_id, cand_name
FROM cand
WHERE cand_pty_affiliation = 'REP'
LIMIT 5
'''
%sql $query
res = %sql select * from cand limit 5
res_df = res.DataFrame()
res_df['cand_id']
# SOLUTION
query_q1a = '''
SELECT count(*)
FROM inter_comm
WHERE transaction_amt > 5000
'''
q1a = %sql $query_q1a
q1a
_ = ok.grade('q01a')
_ = ok.backup()
# SOLUTION
query_q1b = '''
SELECT
cmte_id AS donor_cmte_id,
name AS recipient_name,
transaction_amt AS transaction_amt
FROM inter_comm
WHERE transaction_amt > 5000
ORDER BY transaction_amt DESC
'''
q1b = %sql $query_q1b
q1b
_ = ok.grade('q01b')
_ = ok.backup()
# SOLUTION
query_q1c = '''
SELECT
cmte_id AS donor_cmte_id,
name AS recipient_name,
sum(transaction_amt) AS total_transaction_amt
FROM inter_comm
GROUP BY cmte_id, name
ORDER BY sum(transaction_amt) DESC
LIMIT 20
'''
q1c = %sql $query_q1c
q1c
ok.grade('q01c')
_ = ok.backup()
# SOLUTION
query_q1d = '''
SELECT cmte_id AS donor_id,
       other_id AS recipient_id,
       sum(transaction_amt) AS total_transaction_amt
FROM inter_comm
GROUP BY cmte_id, other_id
ORDER BY sum(transaction_amt) DESC
LIMIT 20
'''
q1d = %sql $query_q1d
q1d
_ = ok.grade('q01d')
_ = ok.backup()
# SOLUTION
query_q1e = '''
SELECT
donor.cmte_nm as donor_name,
recipient.cmte_nm as recipient_name,
sum(transaction_amt) AS total_transaction_amt
FROM inter_comm, comm AS donor, comm AS recipient
WHERE inter_comm.cmte_id = donor.cmte_id AND inter_comm.other_id = recipient.cmte_id
GROUP BY donor.cmte_nm, recipient.cmte_nm
ORDER BY sum(transaction_amt) DESC
LIMIT 20
'''
q1e = %sql $query_q1e
q1e
_ = ok.grade('q01e')
_ = ok.backup()
# SOLUTION
query_q2 = '''
SELECT
cmte_st AS state,
count(*) AS count
FROM comm
GROUP BY cmte_st
ORDER BY count(*) DESC
'''
q2 = %sql $query_q2
q2
_ = ok.grade('q02')
_ = ok.backup()
# SOLUTION
query_q3 = '''
WITH pac_donations(cmte_id, pac_donations) AS
(
SELECT i.other_id, SUM(i.transaction_amt)
FROM inter_comm i
WHERE i.entity_tp = 'PAC'
GROUP BY i.other_id
)
SELECT
c.cmte_nm AS cmte_name,
i.pac_donations AS pac_donations
FROM comm c LEFT OUTER JOIN pac_donations i ON c.cmte_id = i.cmte_id
ORDER BY pac_donations, cmte_nm
LIMIT 20
'''
q3 = %sql $query_q3
q3
_ = ok.grade('q03')
_ = ok.backup()
# SOLUTION
query_q4 = '''
SELECT
cfrom.cmte_nm AS from_cmte_name,
cto.cmte_nm AS to_cmte_name
FROM comm cfrom, comm cto, inter_comm ict
WHERE cfrom.cmte_id = ict.cmte_id
AND cfrom.cmte_pty_affiliation = 'REP'
AND cto.cmte_pty_affiliation = 'REP'
AND cto.cmte_id = ict.other_id
GROUP BY cfrom.cmte_id, cfrom.cmte_nm, cto.cmte_id, cto.cmte_nm
ORDER BY count(*) DESC
LIMIT 10
'''
q4 = %sql $query_q4
q4
_ = ok.grade('q04')
_ = ok.backup()
# SOLUTION
query_q5 = '''
SELECT DISTINCT
c1.cand_name AS cand_1,
c2.cand_name AS cand_2
FROM inter_comm i1, inter_comm i2, cand c1, cand c2
WHERE i1.cand_id <> i2.cand_id
AND i1.cmte_id = i2.cmte_id
AND i2.state='FL' AND i1.state='FL'
AND c1.cand_id = i1.cand_id
AND c2.cand_id = i2.cand_id
ORDER BY cand_1
'''
q5 = %sql $query_q5
q5
_ = ok.grade('q05')
_ = ok.backup()
query_q7 = '''
SELECT
    comm.cmte_nm AS cmte_nm,
    sum(indiv_sample.transaction_amt) AS total_transaction_amt
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
GROUP BY comm.cmte_id, comm.cmte_nm
HAVING
    cmte_nm = 'HILLARY FOR AMERICA' OR
    cmte_nm = 'BERNIE 2016'
'''
# Do not change anything below this line
res = %sql $query_q7
q7 = res.DataFrame().set_index("cmte_nm")
q7 # q7 will be graded
_ = ok.grade('q07')
_ = ok.backup()
# SOLUTION
query_q8 = '''
SELECT
comm.cmte_id AS cmte_id,
comm.cmte_nm AS cmte_name,
SUM (
CASE WHEN transaction_amt::float <= 200.0 THEN transaction_amt::float
ELSE 0.0 END
) / SUM(transaction_amt::float) AS prop_funds
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
GROUP BY comm.cmte_id, comm.cmte_nm
HAVING
cmte_nm = 'HILLARY FOR AMERICA' OR
cmte_nm = 'BERNIE 2016'
'''
# Do not change anything below this line
res = %sql $query_q8
small_donor_funds_prop = res.DataFrame()
small_donor_funds_prop
_ = ok.grade('q08')
_ = ok.backup()
# SOLUTION
query_q9a = '''
SELECT transaction_amt
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND cmte_nm = 'HILLARY FOR AMERICA'
'''
# Do not change anything below this line
res = %sql $query_q9a
hillary_contributions = res.DataFrame()
print(hillary_contributions.head())
# Make the Plot
sns.distplot(hillary_contributions)
plt.title('Distribution of Contribution Amounts to Hillary')
plt.xlim((-50, 3000))
plt.ylim((0, 0.02))
# SOLUTION
query_q9b = '''
SELECT transaction_amt
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND cmte_nm = 'BERNIE 2016'
'''
# Do not change anything below this line
res = %sql $query_q9b
bernie_contributions = res.DataFrame()
print(bernie_contributions.head())
sns.distplot(bernie_contributions)
plt.title('Distribution of Contribution Amounts to Bernie')
plt.xlim((-50, 3000))
plt.ylim((0, 0.02))
_ = ok.grade('q09')
_ = ok.backup()
%%sql
DROP VIEW IF EXISTS hillary CASCADE;
DROP VIEW IF EXISTS bernie CASCADE;
CREATE VIEW hillary AS
SELECT row_number() over () AS row_id, indiv_sample.*
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND comm.cmte_nm = 'HILLARY FOR AMERICA';
CREATE VIEW bernie AS
SELECT row_number() over () AS row_id, indiv_sample.*
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND comm.cmte_nm = 'BERNIE 2016';
SELECT * FROM hillary LIMIT 5
# SOLUTION
n_hillary_rows = 1524
n_trials = 500
seed = 0.42
query_q11 = '''
CREATE VIEW hillary_design AS
SELECT trial_id, (FLOOR( $n_hillary_rows * RANDOM() ) + 1) :: INTEGER AS row_id
FROM
GENERATE_SERIES(1, $n_trials) as a(trial_id),
GENERATE_SERIES(1, $n_hillary_rows) as sample_size;
'''
# Fill in the $ variables set in the above string
import string
query_q11 = string.Template(query_q11).substitute(locals())
%sql drop view if exists hillary_design cascade
%sql SET SEED TO $seed
%sql $query_q11
%sql select * from hillary_design limit 5
_ = ok.grade('q11')
_ = ok.backup()
# SOLUTION
query_q12 = '''
CREATE VIEW hillary_trials as
SELECT d.trial_id,
SUM(CASE WHEN transaction_amt <= 200
THEN transaction_amt
ELSE 0
END) AS small_donor_sum,
SUM(transaction_amt) AS total
FROM hillary_design d, hillary h
WHERE d.row_id = h.row_id
GROUP BY d.trial_id
'''
# Do not change anything below this line
%sql drop view if exists hillary_trials cascade
%sql SET SEED TO $seed
%sql $query_q12
%sql select * from hillary_trials limit 5
_ = ok.grade('q12')
_ = ok.backup()
# SOLUTION
query_q13 = '''
CREATE VIEW hillary_props as
SELECT
trial_id,
small_donor_sum::float / total::float AS small_donor_prop
FROM hillary_trials
'''
%sql drop view if exists hillary_props cascade
%sql SET SEED TO $seed
%sql $query_q13
%sql select * from hillary_props limit 5
_ = ok.grade('q13')
_ = ok.backup()
# SOLUTION
n_bernie_rows = 1173
n_trials = 500
create_bernie_design = '''
CREATE VIEW bernie_design AS
SELECT trial_id, (FLOOR( $n_bernie_rows * RANDOM() ) + 1) :: INTEGER AS row_id
FROM
GENERATE_SERIES(1, $n_trials) as a(trial_id),
GENERATE_SERIES(1, $n_bernie_rows) as sample_size;
'''
create_bernie_trials = '''
CREATE VIEW bernie_trials as
SELECT d.trial_id,
SUM(CASE WHEN transaction_amt <= 200
THEN transaction_amt
ELSE 0
END) AS small_donor_sum,
SUM(transaction_amt) AS total
FROM bernie_design d, bernie h
WHERE d.row_id = h.row_id
GROUP BY d.trial_id
'''
create_bernie_props = '''
CREATE VIEW bernie_props as
SELECT trial_id, small_donor_sum::float / total::float AS small_donor_prop
FROM bernie_trials
'''
# Do not change anything below this line
# Fill in the $ variables set in the above string
import string
create_bernie_design = (string.Template(create_bernie_design)
.substitute(locals()))
%sql drop view if exists bernie_design cascade
%sql $create_bernie_design
%sql drop view if exists bernie_trials cascade
%sql $create_bernie_trials
%sql drop view if exists bernie_props
%sql $create_bernie_props
%sql SET SEED TO $seed
%sql select * from bernie_props limit 5
_ = ok.grade('q14')
_ = ok.backup()
res = %sql select * from hillary_props
hillary_trials_df = res.DataFrame()
res = %sql select * from bernie_props
bernie_trials_df = res.DataFrame()
ax = plt.subplot(1,2,1)
sns.distplot(hillary_trials_df['small_donor_prop'], ax=ax)
plt.title('Hillary Bootstrap Prop')
plt.xlim(0.1, 0.9)
plt.ylim(0, 25)
ax = plt.subplot(1,2,2)
sns.distplot(bernie_trials_df['small_donor_prop'], ax=ax)
plt.title('Bernie Bootstrap Prop')
plt.xlim(0.1, 0.9)
plt.ylim(0, 25)
compute_hillary_ci = '''
SELECT percentile_disc(0.025) WITHIN GROUP (ORDER BY small_donor_prop) as ci_lower,
percentile_disc(0.975) WITHIN GROUP (ORDER BY small_donor_prop) as ci_upper
FROM hillary_props
'''
compute_bernie_ci = '''
SELECT percentile_disc(0.025) WITHIN GROUP (ORDER BY small_donor_prop) as ci_lower,
percentile_disc(0.975) WITHIN GROUP (ORDER BY small_donor_prop) as ci_upper
FROM bernie_props
'''
%sql SET SEED TO $seed
hillary_ci = %sql $compute_hillary_ci
bernie_ci = %sql $compute_bernie_ci
print(hillary_ci)
print(bernie_ci)
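The same 95% percentile interval can be cross-checked in Python once the trials are pulled into a DataFrame, mirroring `percentile_disc(0.025)`/`percentile_disc(0.975)` with `np.percentile`. A sketch with simulated replicates (the numbers are illustrative, not the FEC result):

```python
import numpy as np

rng = np.random.RandomState(42)
# Hypothetical bootstrap replicates of the small-donor proportion,
# standing in for the small_donor_prop column of hillary_props.
props = rng.normal(loc=0.25, scale=0.03, size=500)

# 95% percentile interval, analogous to the SQL ordered-set aggregates.
ci_lower, ci_upper = np.percentile(props, [2.5, 97.5])
print(ci_lower, ci_upper)
```

In practice you would pass `hillary_trials_df['small_donor_prop'].values` in place of the simulated `props`.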
_ = ok.grade_all()
# Now, we'll submit to okpy
_ = ok.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's connect to your Postgres database. On your Heroku Postgres details,
Step2: Table Descriptions
Step3: For longer queries, you can save your query into a string, then use it in the
Step4: In addition, you can assign the SQL statement to a variable and then call .DataFrame() on it to get a Pandas DataFrame.
Step5: Question 1a
Step6: Question 1b
Step7: Question 1c
Step9: If you peruse the results of your last query, you should notice that some names are listed twice with slightly different spellings. Perhaps this causes some contributions to be split extraneously.
Step10: Question 1e
Step11: Question 2
Step12: Question 3
Step13: Question 4
Step14: Question 5
Step16: Part 2
Step17: Question 8
Step20: Question 9
Step21: Question 10
Step23: Question 11
Step24: Question 12
Step25: Question 13
Step27: Question 14
Step28: Plotting the sample distribution
Step29: Computing the Confidence Interval
Step30: Question 15
Step31: Then, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https
|
7,997
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import re
!ls -liLah ../../share/Data
# generate the raw_data from CSV
raw_data = pd.read_csv("../../share/Data/SPON_complete", delimiter=",", skipinitialspace=True)
# Stopwords are downloaded and defined here
try:
with open("../german_stopwords_full.txt") as f:
STOPWORDS = [line.strip() for line in f if not line.startswith(";")]
except FileNotFoundError:
!wget https://raw.githubusercontent.com/solariz/german_stopwords/master/german_stopwords_full.txt
with open("../german_stopwords_full.txt") as f:
STOPWORDS = [line.strip() for line in f if not line.startswith(";")]
dynamic_stopwords = ["dass", "", " ", "worden", "jahren", "jahre", "jahr",
"heißt", "heißen", "müsse", "prozent"]
STOPWORDS += dynamic_stopwords
# Aufbereitung der Daten zur späteren Bearbeitung
## TODO: nicht mit DF sondern als Text bearbeiten -- Zeit- und Memkritisch
data = raw_data
if False:
# Cleaning the strings to be only alphanumeric
data['article'] = data['article'].map(lambda x: re.sub(r'\W+', '', str(x), re.UNICODE))
# Removing the stopwords
data['article'] = data['article'].map(lambda x: [item for item in x.split() if item not in STOPWORDS])
    data.to_csv("./share/Data/SPON_complete_clean.csv", sep=',')
data
def count_words(source):
    """Count the words in the 'article' column of a given DataFrame.
    It is possible to define a word, so only this word will be counted."""
#split column Message to new df, create Serie by stack
s = (source.article.str.split(expand=True).stack().str.lower() )
#remove multiindex
s.index = s.index.droplevel(-1)
s.name= 'words'
#join Serie s to df source
df = (source.join(s))
# Cleaning the strings to be only alphanumeric
df['words'] = df['words'].map(lambda x: re.sub(r'\W+', '', str(x), re.U))
df = df[~df['words'].isin(
STOPWORDS)].groupby(
['words']).size().reset_index(
name='count'
).sort_values(by='count')
return df
def wordcounter(source, word):
df = count_words(source)
return df[df['words'].str.contains(word)]
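The pandas pipeline above (stack → clean → filter → groupby) is equivalent to a plain `Counter` over cleaned tokens. A minimal stdlib sketch, useful for sanity-checking `count_words` on small inputs (the stopword set and texts below are illustrative):

```python
import re
from collections import Counter

STOP = {'the', 'a', 'of'}  # tiny illustrative stopword set

def simple_word_count(texts, stopwords):
    # Lowercase, strip non-word characters, drop stopwords, then count.
    counter = Counter()
    for text in texts:
        words = (re.sub(r'\W+', '', w) for w in text.lower().split())
        counter.update(w for w in words if w and w not in stopwords)
    return counter

counts = simple_word_count(['The fall of the wall', 'Fall, again!'], STOP)
print(counts.most_common(2))
```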
# This will generate a list of Strings, representating the months from .01.2001 to .12.2012
datestrings = [".{:02d}.20{:02d}".format(m,y) for y in range(1, 17) for m in range(1, 13) ]
top25words_ofCategory = lambda m, c : count_words(
raw_data[
(raw_data.day.str.contains(m, na=False)) & (raw_data.cats.str.contains(c, na=False) )]
).nlargest(25, columns=['count', ]).set_index('words')
gen = False
if gen:
_ = [top25words_ofCategory('{}'.format(Y),
'Politik').to_csv("../../share/DATA/politics_top25words_{}.csv".format(Y))
for Y in range(2001, 2017)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raw data
Step2: Stopwords
Step3: Preparing the raw data
Step5: Counting
Step6: Preparing the data and writing it to CSV files
|
7,998
|
<ASSISTANT_TASK:>
Python Code:
%run ../linked_list/linked_list.py
%load ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def kth_to_last_elem(self, k):
# TODO: Implement me
pass
# %load test_kth_to_last_elem.py
from nose.tools import assert_equal
class Test(object):
def test_kth_to_last_elem(self):
print('Test: Empty list')
linked_list = MyLinkedList(None)
assert_equal(linked_list.kth_to_last_elem(0), None)
print('Test: k >= len(list)')
assert_equal(linked_list.kth_to_last_elem(100), None)
print('Test: One element, k = 0')
head = Node(2)
linked_list = MyLinkedList(head)
assert_equal(linked_list.kth_to_last_elem(0), 2)
print('Test: General case')
linked_list.insert_to_front(1)
linked_list.insert_to_front(3)
linked_list.insert_to_front(5)
linked_list.insert_to_front(7)
assert_equal(linked_list.kth_to_last_elem(2), 3)
print('Success: test_kth_to_last_elem')
def main():
test = Test()
test.test_kth_to_last_elem()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
7,999
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
%matplotlib inline
def dd(t1, t2, nu):
return -nu*(3*np.cos(t1)*np.cos(t2) - np.cos(t1-t2))
def anis(t1, t2, sigma):
return sigma*(np.sin(t1)**2 + np.sin(t2)**2)
def tot(t1, t2, nu, sigma):
return dd(t1, t2, nu) + anis(t1, t2, sigma)
def p_unorm(t1,t2,nu,sigma):
return np.sin(t1)*np.sin(t2)*np.exp(-tot(t1,t2,nu,sigma))
from scipy.integrate import dblquad
def boltz_2d(ts, nu, sigma):
e = np.array([[p_unorm(t1,t2,nu,sigma) for t1 in ts] for t2 in ts])
Z = dblquad(lambda t1,t2: p_unorm(t1,t2,nu,sigma),
0, ts[-1], lambda x: 0, lambda x: ts[-1])[0]
return e/Z
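With `nu = 0` the unnormalized density factorizes into two identical one-particle weights, which gives a cheap sanity check on the 2-D normalization: the partition function must equal the square of the 1-D integral. A numpy-only sketch (using σ = 2.0 as an example value):

```python
import numpy as np

sigma = 2.0
ts = np.linspace(0, np.pi, 400)

# 1-D weight of a single non-interacting particle: sin(t) exp(-sigma sin^2 t)
w = np.sin(ts) * np.exp(-sigma * np.sin(ts)**2)
z1 = np.trapz(w, ts)

# With nu = 0 the 2-D weight factorizes, so Z should equal z1**2.
w2 = np.outer(w, w)
Z = np.trapz(np.trapz(w2, ts, axis=1), ts)
print(z1**2, Z)
```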
nus = [0, 0.3, 0.3]
sigmas = [2.0, 2.0, 0.5]
ts = np.linspace(0, np.pi, 100)
fg = plt.figure(figsize=(10,4))
axs = ImageGrid(
fg, 111, nrows_ncols=(1,3), axes_pad=0.15,
share_all=True,cbar_location="right",
cbar_mode="single",cbar_size="7%",
cbar_pad=0.15,
)
for nu, sigma, ax in zip(nus, sigmas, axs):
e = [[tot(t1, t2, nu, sigma) for t1 in ts] for t2 in ts]
cf=ax.contourf(ts, ts, e)
ax.set_xlabel('$\\theta_1$'); ax.set_ylabel('$\\theta_2$')
ax.set_aspect('equal')
axs[0].set_title('No interactions')
axs[1].set_title('Strong interactions')
axs[2].set_title('Weak anisotropy')
ax.cax.colorbar(cf) # fix color bar
ts = np.linspace(0, np.pi, 100)
fg = plt.figure(figsize=(10,4))
axs = ImageGrid(
fg, 111, nrows_ncols=(1,3), axes_pad=0.15,
share_all=True,cbar_location="right",
cbar_mode="single",cbar_size="7%",
cbar_pad=0.15,
)
for nu, sigma, ax in zip(nus, sigmas, axs):
b = boltz_2d(ts, nu, sigma)
cf=ax.contourf(ts, ts, b)
ax.set_xlabel('$\\theta_1$'); ax.set_ylabel('$\\theta_2$')
ax.set_aspect('equal')
axs[0].set_title('No interactions')
axs[1].set_title('Strong interactions')
axs[2].set_title('Weak anisotropy')
ax.cax.colorbar(cf) # fix color bar
import pymc3 as pm
nu = 0.3
sigma = 1.5
with pm.Model() as model:
z1 = pm.Uniform('z1', -1, 1)
theta1 = pm.Deterministic('theta1', np.arccos(z1))
z2 = pm.Uniform('z2', -1, 1)
theta2 = pm.Deterministic('theta2', np.arccos(z2))
energy = tot(theta1, theta2, nu, sigma)
like = pm.Potential('energy', -energy)
with model:
step = pm.NUTS()
trace = pm.sample(100000, step=step)
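If PyMC3 is unavailable, the one-particle marginal (for the non-interacting case) can be sampled with a hand-rolled Metropolis chain. A self-contained numpy sketch, assuming σ = 1.5 and no dipolar term — an illustration of sampling from the Boltzmann density, not the notebook's full two-particle model:

```python
import numpy as np

sigma = 1.5

def log_weight(t):
    # log of sin(t) * exp(-sigma sin^2 t) on (0, pi); -inf outside.
    if t <= 0 or t >= np.pi:
        return -np.inf
    return np.log(np.sin(t)) - sigma * np.sin(t)**2

rng = np.random.RandomState(0)
samples = np.empty(5000)
t = np.pi / 2  # start at the equator
for i in range(samples.size):
    proposal = t + rng.normal(scale=0.5)
    # Metropolis accept/reject on the log-density ratio.
    if np.log(rng.rand()) < log_weight(proposal) - log_weight(t):
        t = proposal
    samples[i] = t
print(samples.mean())
```

A histogram of `samples` should match the marginal Boltzmann curve plotted below.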
b = boltz_2d(ts, nu, sigma)
plt.hist2d(trace['theta1'], trace['theta2'], bins=70, normed=True)
plt.contour(ts, ts, b, cmap='Greys')
plt.gca().set_aspect('equal')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
from scipy.integrate import trapz
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # normalize so the marginal integrates to 1; trapz takes (y, x)
fg, axs = plt.subplots(ncols=2, figsize=(9,3))
for i in range(2):
axs[i].hist(trace['theta{}'.format(i+1)], bins=100, normed=True)
axs[i].plot(ts, b_marginal, lw=2)
axs[i].set_xlabel('$\\theta_{}$'.format(i+1))
axs[i].set_ylabel('$p(\\theta_{})$'.format(i+1))
plt.suptitle('Marginal distributions', fontsize=18)
import magpy as mp
K = 1e5
r = 8e-9
T = 330
Ms=400e3
R=1.5e-3
kdir = [0, 0, 1]
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
direction = np.array([0, 0, 1], dtype=np.float)
alpha = 1.0
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
    anisotropy=np.array([K, K], dtype=float),
    anisotropy_axis=np.array([kdir, kdir], dtype=float),
    damping=alpha,
    location=np.array([location1, location2], dtype=float),
    magnetisation=Ms,
    magnetisation_direction=np.array([direction, direction], dtype=float),
    radius=np.array([r, r], dtype=float),
    temperature=T
)
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=False)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # note the (y, x) argument order
b_noint = b_marginal
plt.hist(theta, bins=50, density=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
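The visual comparison above can be made quantitative with only numpy: compare the empirical CDF of the sampled angles against the CDF of the analytic marginal via a Kolmogorov–Smirnov-style statistic. A sketch on synthetic data — in the notebook, `theta` would come from the simulation and the pdf from `boltz_2d`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: an analytic pdf on a grid and samples drawn from it
ts = np.linspace(0.0, np.pi, 500)
pdf = np.exp(-np.cos(ts) ** 2) * np.sin(ts)
pdf /= np.trapz(pdf, ts)

# Cumulative trapezoid integral of the pdf, normalised to end at 1
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(ts))])
cdf /= cdf[-1]
theta = np.interp(rng.random(5000), cdf, ts)  # inverse-CDF sampling

# Max gap between empirical and analytic CDFs
ecdf = np.searchsorted(np.sort(theta), ts, side='right') / theta.size
ks = np.abs(ecdf - cdf).max()
print('max CDF discrepancy: {:.3f}'.format(ks))
```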
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # note the (y, x) argument order
plt.hist(theta, bins=50, density=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()])
R=1e-7
location1 = np.array([0, 0, 0], dtype=float)
location2 = np.array([0, 0, R], dtype=float)
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
    anisotropy=np.array([K, K], dtype=float),
    anisotropy_axis=np.array([kdir, kdir], dtype=float),
    damping=alpha,
    location=np.array([location1, location2], dtype=float),
    magnetisation=Ms,
    magnetisation_direction=np.array([direction, direction], dtype=float),
    radius=np.array([r, r], dtype=float),
    temperature=T
)
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # note the (y, x) argument order
plt.hist(theta, bins=50, density=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()])
R=0.6e-8
location1 = np.array([0, 0, 0], dtype=float)
location2 = np.array([0, 0, R], dtype=float)
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
    anisotropy=np.array([K, K], dtype=float),
    anisotropy_axis=np.array([kdir, kdir], dtype=float),
    damping=alpha,
    location=np.array([location1, location2], dtype=float),
    magnetisation=Ms,
    magnetisation_direction=np.array([direction, direction], dtype=float),
    radius=np.array([r, r], dtype=float),
    temperature=T
)
res = model.simulate_ensemble(end_time=10e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # note the (y, x) argument order
plt.hist(theta, bins=50, density=True, label='simulation')
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()], label='analytic')
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()], label='analytic ($\\nu=0$)')
plt.legend()
r = 1e-9
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
    anisotropy=np.array([K, K], dtype=float),
    anisotropy_axis=np.array([kdir, kdir], dtype=float),
    damping=alpha,
    location=np.array([location1, location2], dtype=float),
    magnetisation=Ms,
    magnetisation_direction=np.array([direction, direction], dtype=float),
    radius=np.array([r, r], dtype=float),
    temperature=T
)
res = model.simulate(end_time=1e-9, time_step=1e-13, max_samples=1000, seed=1001,
implicit_solve=True, interactions=True)
res.plot();
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-13,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)  # note the (y, x) argument order
plt.hist(theta, bins=50, density=True)
plt.plot(ts, b_marginal)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Individual energy terms
Step2: The unnormalised probability of state $(\theta_1, \theta_2)$
Step3: 2-dimensional Boltzmann distribution
Step4: Analytical results
Step5: Energy landscape
Step6: Probability state-space
Step7: Markov-chain Monte-Carlo (MCMC)
Step8: Setting up the PyMC3 model is simple! Specify the priors and the energy function using pm.Potential
Step9: Choose the NUTS algorithm as our MCMC step and request a large number of random samples. These are returned in the trace variable.
Step10: Compare results
Step11: Marginalisation over one variable
Step12: Langevin dynamics simulations (sLLG)
Step13: Negligible interactions case
Step14: Weakly interacting case
Step15: Strongly interacting case
Step16: High noise
|