| text_prompt (stringlengths 168-30.3k) | code_prompt (stringlengths 67-124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just for display purposes, let's read in the movie names too.
Step2: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
Step3: This is the number of latent factors in each embedding.
Step4: Randomly split into training and validation.
Step5: Create subset for Excel
Step6: Dot product
Step7: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Step8: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!
Step9: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
Step10: Analyze results
Step11: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
Step12: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
Step13: We can now do the same thing for the embeddings.
Step14: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
Step15: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
Step16: The 2nd is 'hollywood blockbuster'.
Step17: The 3rd is 'violent vs happy'.
Step18: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
Step19: Neural net
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports for this Keras 1.x-era notebook; `path` and `model_path` are defined elsewhere.
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from operator import itemgetter
from keras.layers import Input, Embedding, merge, Flatten, Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam
from keras.regularizers import l2
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
users = ratings.userId.unique()
movies = ratings.movieId.unique()
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
n_factors = 50
np.random.seed(42)
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
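# The crosstab above is a small user-by-movie matrix of ratings (roughly 15 x 15, covering only the
# most active users and the most rated movies), which is convenient to inspect or export to Excel.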
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)
x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,
validation_data=([val.userId, val.movieId], val.rating))
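# Note: in equation form, this first model scores each (user, movie) pair as the dot product of their
# embedding vectors, r_hat[u, m] = dot(U[u], M[m]), with 50-dimensional U[u] and M[m]; the bias model
# built next adds per-user and per-movie offsets: r_hat[u, m] = dot(U[u], M[m]) + b_u[u] + b_m[m].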
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = merge([u, m], mode='dot')
x = Flatten()(x)
x = merge([x, ub], mode='sum')
x = merge([x, mb], mode='sum')
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5,
validation_data=([val.userId, val.movieId], val.rating))
model.save_weights(model_path+'bias.h5')
model.load_weights(model_path+'bias.h5')
model.predict([np.array([3]), np.array([6])])
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
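# movie_emb has shape (n_top_movies, 50) -- here (2000, 50) -- so fitting PCA on its transpose
# (50 samples x 2000 features) gives movie_pca shape (3, 2000): one value per movie for each of
# the three principal components used below.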
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
reload(sys)
sys.setdefaultencoding('utf8')
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = merge([u, m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.compile(Adam(0.001), loss='mse')
nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8,
validation_data=([val.userId, val.movieId], val.rating))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub Dataset
Step2: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: We will be using the same 4 categorical features as in the previous assignment
Step4: Subsample dataset to make sure classes are balanced
Step5: Note
Step6: The feature columns now look like this
Step7: Train-Validation split
Step8: Early stopping methods for decision trees
Step9: Quiz question
Step10: Quiz question
Step11: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Step12: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Step13: Incorporating new early stopping conditions in binary decision tree implementation
Step14: Here is a function to count the nodes in your tree
Step15: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step16: Build a tree!
Step17: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
Step18: Making predictions
Step19: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
Step20: Let's add some annotations to our prediction to see the prediction path that led to this predicted class
Step21: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
Step22: Quiz question
Step23: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
Step24: Now, evaluate the validation error using my_decision_tree_old.
Step25: Quiz question
Step26: Evaluating the models
Step27: Now evaluate the classification error on the validation data.
Step28: Quiz Question
Step29: Compute the number of nodes in model_1, model_2, and model_3.
Step30: Quiz question
Step31: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
Step32: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
Step33: Quiz Question
Step34: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
Step35: Using the count_leaves function, compute the number of leaves in each of the models (model_7, model_8, and model_9).
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
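# Side note -- a minimal pure-Python sketch of the dict-based one-hot encoding used above
# (no SFrame needed); the toy values below are made up purely for illustration.
grades = ['A', 'B', 'A']                                         # a toy categorical column
encoded = [{g: 1} for g in grades]                               # [{'A': 1}, {'B': 1}, {'A': 1}]
categories = sorted(set(grades))                                 # ['A', 'B']
one_hot = [[d.get(c, 0) for c in categories] for d in encoded]   # [[1, 0], [0, 1], [1, 0]]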
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
train_data, validation_set = loans_data.random_split(.8, seed=1)
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
return (len(data) <= min_node_size)
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return (error_before_split - error_after_split)
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
safe_loans = len(labels_in_node[labels_in_node == +1])
# Count the number of -1's (risky loans)
risky_loans = len(labels_in_node[labels_in_node == -1])
# Return the number of mistakes that the majority classifier makes.
if safe_loans > risky_loans:
return risky_loans
else:
return safe_loans
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans'])
# Calculate the number of misclassified examples in the right split.
right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans'])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
if error < best_error:
best_feature = feature
best_error = error
return best_feature # Return the best feature we found
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True,
'prediction': None}
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size) == True:
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values) ## YOUR CODE HERE
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target]) ## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target]) ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values) ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 7'
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
### YOUR CODE HERE
return classify(tree['right'], x, annotate)
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
classify(my_decision_tree_new, validation_set[0], annotate = True)
classify(my_decision_tree_old, validation_set[0], annotate = True)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
num_errors = 0
for item in xrange(len(data)):
if data['safe_loans'][item] != prediction[item]:
num_errors += 1
return num_errors / float(len(data))
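# (Equivalently: the classification error is the fraction of mismatches,
# (prediction != data['safe_loans']).sum() / float(len(data)).)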
evaluate_classification_error(my_decision_tree_new, validation_set)
evaluate_classification_error(my_decision_tree_old, validation_set)
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
min_node_size = 0, min_error_reduction=-1)
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
print "Validation data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Validation data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Validation data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
print "Complexity (model 1):", count_leaves(model_1)
print "Complexity (model 2):", count_leaves(model_2)
print "Complexity (model 3):", count_leaves(model_3)
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5)
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
print "Complexity (model 4):", count_leaves(model_4)
print "Complexity (model 5):", count_leaves(model_5)
print "Complexity (model 6):", count_leaves(model_6)
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1)
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
print "Complexity (model 7):", count_leaves(model_7)
print "Complexity (model 8):", count_leaves(model_8)
print "Complexity (model 9):", count_leaves(model_9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see the nested hierarchical structure of the constituents in the preceding output as compared to the flat structure in shallow parsing. Refer to the Penn Treebank reference as needed to look up other tags.
Step2: Dependency Parsing with Spacy
Step3: Dependency Parsing with Stanford NLP
Step4: Dependency Parsing with Stanford Core NLP
|
<ASSISTANT_TASK:>
Python Code:
# set java path
import os
java_path = r'C:\Program Files\Java\jre1.8.0_192\bin\java.exe'
os.environ['JAVAHOME'] = java_path
from nltk.parse.stanford import StanfordParser
scp = StanfordParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
scp
sentence = 'This NLP Workshop is being organized by Analytics Vidhya as part of the DataHack Summit 2018'
sentence
result = list(scp.raw_parse(sentence))[0]
print(result)
os.environ['PATH'] = os.environ['PATH']+r';C:\Program Files\gs\gs9.25\bin'
result
result.pretty_print()
from nltk.parse import CoreNLPParser
cnp = CoreNLPParser()
cnp
result = list(cnp.raw_parse(sentence))[0]
print(result)
result
result.pretty_print()
import spacy
nlp = spacy.load('en', parse=False, tag=False, entity=False)
dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------'
sentence_nlp = nlp(sentence)
for token in sentence_nlp:
print(dependency_pattern.format(word=token.orth_,
w_type=token.dep_,
left=[t.orth_
for t
in token.lefts],
right=[t.orth_
for t
in token.rights]))
from spacy import displacy
displacy.render(sentence_nlp, jupyter=True,
options={'distance': 110,
'arrow_stroke': 2,
'arrow_width': 8})
from nltk.parse.stanford import StanfordDependencyParser
sdp = StanfordDependencyParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
sdp
result = list(sdp.raw_parse(sentence))[0]
# print the dependency tree
print(result.tree())
result.tree()
result
from nltk.parse.corenlp import CoreNLPDependencyParser
dep_parser = CoreNLPDependencyParser()
dep_parser
result = list(dep_parser.raw_parse(sentence))[0]
print(result.tree())
result.tree()
result
list(result.triples())
print(result.to_conll(4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In our experiments we will work with the MNIST dataset
Step2: First, let us define the shape of our model's inputs, the loss function and an optimizer
Step3: Second, we create training and test pipelines for the simple ResNet model
Step4: The same for the stochastic ResNet model
Step5: Let's train our models
Step6: Show test accuracy for all iterations
|
<ASSISTANT_TASK:>
Python Code:
import sys
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqn
%matplotlib inline
sys.path.append('../../..')
sys.path.append('../../utils')
import utils
from resnet_with_stochastic_depth import StochasticResNet
from batchflow import B,V,F
from batchflow.opensets import MNIST
from batchflow.models.tf import ResNet50
dset = MNIST()
ResNet_config = {
'inputs': {'images': {'shape': (28, 28, 1)},
'labels': {'classes': (10),
'transform': 'ohe',
'dtype': 'int64',
'name': 'targets'}},
'input_block/inputs': 'images',
'loss': 'softmax_cross_entropy',
'optimizer': 'Adam',
'output': dict(ops=['accuracy'])
}
Stochastic_config = {**ResNet_config}
res_train_ppl = (dset.train.p
.init_model('dynamic',
ResNet50,
'resnet',
config=ResNet_config)
.train_model('resnet',
feed_dict={'images': B('images'),
'labels': B('labels')}))
res_test_ppl = (dset.test.p
.init_variable('resacc', init_on_each_run=list)
.import_model('resnet', res_train_ppl)
.predict_model('resnet',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('resacc'),
mode='a'))
stochastic_train_ppl = (dset.train.p
.init_model('dynamic',
StochasticResNet,
'stochastic',
config=Stochastic_config)
.init_variable('stochasticacc', init_on_each_run=list)
.train_model('stochastic',
feed_dict={'images': B('images'),
'labels': B('labels')}))
stochastic_test_ppl = (dset.test.p
.init_variable('stochasticacc', init_on_each_run=list)
.import_model('stochastic', stochastic_train_ppl)
.predict_model('stochastic',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('stochasticacc'),
mode='a'))
for i in tqn(range(1000)):
res_train_ppl.next_batch(400, n_epochs=None, shuffle=True)
res_test_ppl.next_batch(400, n_epochs=None, shuffle=True)
stochastic_train_ppl.next_batch(400, n_epochs=None, shuffle=True)
stochastic_test_ppl.next_batch(400, n_epochs=None, shuffle=True)
resnet_loss = res_test_ppl.get_variable('resacc')
stochastic_loss = stochastic_test_ppl.get_variable('stochasticacc')
utils.draw(resnet_loss, 'ResNet', stochastic_loss, 'Stochastic', window=20, type_data='accuracy')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predictions
Step2: steps towards persisting (saving) SVM models
Step3: Submissions 2
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports for this Python 2-era notebook; the SVM_parallel class used below comes from the
# author's own module and is not shown here.
import os
import numpy as np
import pandas as pd
def load_feat_vec(patientid,sub_name="stage1_feat"):
f=file("./2017datascibowl/"+sub_name+"/"+patientid+"feat_vec","rb")
arr = np.load(f)
f.close()
return arr
def prepare_inputX(sub_name="stage1_feat_lowres64", ratio_of_train_to_total = 0.4,
ratio_valid_to_rest = 0.2):
patients_stage1_feat = os.listdir('./2017datascibowl/'+sub_name)
patients_stage1_feat = [id.replace("feat_vec","") for id in patients_stage1_feat] # remove the suffix "feat_vec"
# get y labels
y_ids = pd.read_csv('./2017datascibowl/stage1_labels.csv')
y_ids_found=y_ids.loc[y_ids['id'].isin(patients_stage1_feat)]
m = len(patients_stage1_feat)
found_indices =[]
for i in range(m):
if patients_stage1_feat[i] in y_ids_found['id'].as_matrix():
found_indices.append(i)
patients_stage1_feat_found = [patients_stage1_feat[i] for i in found_indices]
y_found=[]
for i in range(len(patients_stage1_feat_found)):
if (patients_stage1_feat_found[i] in y_ids_found['id'].as_matrix()):
cancer_val = y_ids_found.loc[y_ids_found['id']==patients_stage1_feat_found[i]]['cancer'].as_matrix()
y_found.append( cancer_val )
y_found=np.array(y_found).flatten()
assert (len(y_found)==len(patients_stage1_feat_found))
numberofexamples = len(patients_stage1_feat_found)
numberoftrainingexamples = int(numberofexamples*ratio_of_train_to_total)
numbertovalidate = int((numberofexamples - numberoftrainingexamples)*ratio_valid_to_rest)
numbertotest= numberofexamples - numberoftrainingexamples - numbertovalidate
shuffledindices = np.random.permutation( numberofexamples)
patients_train = [patients_stage1_feat_found[id] for id in shuffledindices[:numberoftrainingexamples]]
patients_valid = [patients_stage1_feat_found[id] for id in shuffledindices[numberoftrainingexamples:numberoftrainingexamples+numbertovalidate]]
patients_test = [patients_stage1_feat_found[id] for id in shuffledindices[numberoftrainingexamples+numbertovalidate:]]
y_train = y_found[shuffledindices[:numberoftrainingexamples]]
y_valid = y_found[shuffledindices[numberoftrainingexamples:numberoftrainingexamples+numbertovalidate]]
y_test = y_found[shuffledindices[numberoftrainingexamples+numbertovalidate:]]
patients_train_vecs = [load_feat_vec(id,sub_name) for id in patients_train]
patients_train_vecs = np.array(patients_train_vecs)
patients_valid_vecs = [load_feat_vec(id,sub_name) for id in patients_valid]
patients_valid_vecs = np.array(patients_valid_vecs)
patients_test_vecs = [load_feat_vec(id,sub_name) for id in patients_test]
patients_test_vecs = np.array(patients_test_vecs)
patient_ids = {"train":patients_train,"valid":patients_valid,"test":patients_test}
ys = {"train":y_train,"valid":y_valid,"test":y_test}
Xs = {"train":patients_train_vecs,"valid":patients_valid_vecs,"test":patients_test_vecs}
return patient_ids, ys, Xs
patient_ids32, ys32,Xs32=prepare_inputX("stage1_HOG32",0.275,0.2)
y_train_rep2 = np.copy(ys32["train"]) # 2nd representation
y_train_rep2[y_train_rep2<=0]=-1
y_valid_rep2 = np.copy(ys32["valid"]) # 2nd representation
y_valid_rep2[y_valid_rep2<=0]=-1
y_test_rep2 = np.copy(ys32["test"]) # 2nd representation
y_test_rep2[y_test_rep2<=0]=-1
C_trial=[0.1,1.0,10.,100.]
sigma_trial=[0.1,1.0,10.]
C_trial[3]
SVM_stage1 = SVM_parallel(Xs32["train"],y_train_rep2,len(y_train_rep2),
C_trial[3],sigma_trial[1],0.005 ) # C=100.,sigma=1.0, alpha=0.001
SVM_stage1.build_W();
SVM_stage1.build_update();
%time SVM_stage1.train_model_full(3) # iterations=3,CPU times: user 3min 50s, sys: 7min 19s, total: 11min 9s
%time SVM_stage1.train_model_full(100)
SVM_stage1.build_b()
yhat32_valid = SVM_stage1.make_predictions_parallel( Xs32["valid"] )
accuracy_score_temp=(np.sign(yhat32_valid[0]) == y_valid_rep2).sum()/float(len(y_valid_rep2))
print(accuracy_score_temp)
y_valid_rep2
stage1_sample_submission_csv = pd.read_csv("./2017datascibowl/stage1_sample_submission.csv")
sub_name="stage1_HOG32"
patients_sample_vecs = np.array( [load_feat_vec(id,sub_name) for id in stage1_sample_submission_csv['id'].as_matrix()] )
print(len(patients_sample_vecs))
%time yhat_sample = SVM_stage1.make_predictions_parallel( patients_sample_vecs[:2] )
f32=open("./2017datascibowl/lambda_multHOG32_C100sigma1","wb")
np.save(f32,SVM_stage1.lambda_mult.get_value())
f32.close()
yhat_sample_rep2 = np.copy(yhat_sample[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1}
yhat_sample_rep2 = np.sign( yhat_sample_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1}
yhat_sample_rep1 = np.copy(yhat_sample_rep2)
np.place(yhat_sample_rep1,yhat_sample_rep1<0.,0.)
f32load=open("./2017datascibowl/lambda_multHOG32_C100sigma1","rb")
testload32=np.load(f32load)
f32load.close()
SVM_stage1_reloaded = SVM_parallel(Xs32["train"],y_train_rep2,len(y_train_rep2),
C_trial[3],sigma_trial[1],0.005 ) # C=100.,sigma=1.0, alpha=0.001
SVM_stage1_reloaded.lambda_mult.get_value()[:20]
testload32[:20]
SVM_stage1_reloaded.lambda_mult.set_value( testload32 )
SVM_stage1_reloaded.lambda_mult.get_value()[:20]
SVM_stage1_reloaded.build_b()
%time yhat_sample = SVM_stage1_reloaded.make_predictions_parallel( patients_sample_vecs )
np.sign(yhat_sample[0])
yhat_sample_rep2 = np.copy(yhat_sample[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1}
yhat_sample_rep2 = np.sign( yhat_sample_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1}
yhat_sample_rep1 = np.copy(yhat_sample_rep2)
np.place(yhat_sample_rep1,yhat_sample_rep1<0.,0.)
Prattscaling_results = SVM_stage1_reloaded.make_prob_Pratt(yhat_sample_rep1)
Prattscaling_results
stage2_sample_submission_csv = pd.read_csv("./2017datascibowl/stage2_sample_submission.csv")
sub_name="stage2_HOG32"
patients_sample2_vecs = np.array( [load_feat_vec(id,sub_name) for id in stage2_sample_submission_csv['id'].as_matrix()] )
print(len(patients_sample2_vecs))
%time yhat_sample2 = SVM_stage1_reloaded.make_predictions_parallel( patients_sample2_vecs )
patients_sample2_vecs.shape
Xs32["train"].shape
np.sign(yhat_sample2[0])
yhat_sample2_rep2 = np.copy(yhat_sample2[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1}
yhat_sample2_rep2 = np.sign( yhat_sample2_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1}
yhat_sample2_rep1 = np.copy(yhat_sample2_rep2)
np.place(yhat_sample2_rep1,yhat_sample2_rep1<0.,0.)
Prattscaling_results2 = SVM_stage1_reloaded.make_prob_Pratt(yhat_sample2_rep1)
Prattscaling_results2
sample2_out = pd.DataFrame(zip(stage2_sample_submission_csv['id'].as_matrix(),Prattscaling_results2[0]))
sample2_out.columns=["id","cancer"]
sample2_out.to_csv("./2017datascibowl/sample2submit00.csv",index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will also need a couple of specific modules and a little "IPython magic" to show the plots
Step2: Back to top
Step3: We will also setup two damping matrices, one proportional to the mass and stiffness matrices (C1) and the other non proportional (C2)
Step4: Back to top
Step5: The angular frequencies are computed as the square root of the eigenvalues
Step6: The modal vectors, the columns of the modal matrix, have unit norm
Step7: Contrary to what is normally done, we will visualize the modal vectors in a polar plot of the corresponding amplitudes and angles of equivalent complex values
Step8: Back to top
Step9: The system and input matrices are the following
Step10: The eigenanalysis yields the eigenvalues and eigenvectors
Step11: As we can see, the eigenvalues come in complex conjugate pairs. Let us take only the ones in the upper half-plane
Step12: These complex eigenvalues can be decomposed into angular frequency and damping coefficient
Step13: The columns of the modal matrix, the modal vectors, also come in conjugate pairs, each vector having unit norm
Step14: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step15: We will visualize again the complex valued modal vectors with a polar plot of the corresponding amplitudes and angles
Step16: Back to top
Step17: The system and input matrices are the following
Step18: The eigenanalysis yields the eigenvalues and eigenvectors of the system matrix
Step19: As we can see, the eigenvalues come in complex conjugate pairs. Again, let us take only the ones in the upper half-plane
Step20: These complex eigenvalues can be decomposed into angular frequency and damping coefficient much like in the proportional damping case
Step21: Again, the columns of the modal matrix, the modal vectors, come in conjugate pairs, and each vector has unit norm
Step22: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step23: Once more we will visualize the complex valued modal vectors through a polar plot of the corresponding amplitudes and angles
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
import scipy as sp
import matplotlib as mpl
print('System: {}'.format(sys.version))
print('numpy version: {}'.format(np.__version__))
print('scipy version: {}'.format(sp.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
from numpy import linalg as LA
import matplotlib.pyplot as plt
%matplotlib inline
MM = np.matrix(np.diag([1., 2.]))
print(MM)
KK = np.matrix([[20., -10.], [-10., 10.]])
print(KK)
C1 = 0.1*MM+0.04*KK
print(C1)
C2 = np.matrix([[0.1, 0.2], [0.2, 0.2]])
print(C2)
W2, F1 = LA.eig(LA.solve(MM,KK)) # eigenanalysis
ix = np.argsort(np.absolute(W2)) # sort eigenvalues in ascending order
W2 = W2[ix] # sorted eigenvalues
F1 = F1[:,ix] # sorted eigenvectors
print(np.round_(W2, 4))
print(np.round_(F1, 4))
print(np.sqrt(W2))
print(LA.norm(F1, axis=0))
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(F1[dof,mode])])
t = np.array([0, np.angle(F1[dof,mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
print(np.round_(F1.T*C1*F1, 4))
A = np.bmat([[np.zeros_like(MM), MM], [MM, C1]])
print(A)
B = np.bmat([[MM, np.zeros_like(MM)], [np.zeros_like(MM), -KK]])
print(B)
w1, v1 = LA.eig(LA.solve(A,B))
ix = np.argsort(np.absolute(w1))
w1 = w1[ix]
v1 = v1[:,ix]
print(np.round_(w1, 4))
print(np.round_(v1, 4))
print(np.round_(w1[::2], 4))
zw = -w1.real # damping coefficient times angular frequency
wD = w1.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
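# Note: each eigenvalue of this state-space problem satisfies det(lam**2*M + lam*C + K) = 0 and can be
# written lam = -zeta*w_n + 1j*w_D with w_D = w_n*sqrt(1 - zeta**2). Hence w_n = sqrt(zw**2 + wD**2)
# and zeta = zw/w_n = 1/sqrt(1 + (wD/zw)**2), which is exactly what the lines above compute.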
print(LA.norm(v1[:,::2], axis=0))
AA = v1[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w1[[0,2]])
BB = BA.conjugate()
v1_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v1_new[:,[0,2,1,3]], 4))
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v1[dof,2*mode])])
t = np.array([0, np.angle(v1[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
print(np.round_(F1.T*C2*F1, 4))
A = np.bmat([[np.zeros_like(MM), MM], [MM, C2]])
print(A)
B = np.bmat([[MM, np.zeros_like(MM)], [np.zeros_like(MM), -KK]])
print(B)
w2, v2 = LA.eig(LA.solve(A,B))
ix = np.argsort(np.absolute(w2))
w2 = w2[ix]
v2 = v2[:,ix]
print(np.round_(w2, 4))
print(np.round_(v2, 4))
print(np.round_(w2[[0,2]], 4))
zw = -w2.real # damping coefficient times angular frequency
wD = w2.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
print(LA.norm(v2[:,[0,2]], axis=0))
AA = v2[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w2[[0,2]])
BB = BA.conjugate()
v2_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v2_new[:,[0,2,1,3]], 4))
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v2[dof,2*mode])])
t = np.array([0, np.angle(v2[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model include getting the names of its input and output variables.
Step3: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
Step4: OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
Step5: Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.
Step6: The main output variable for this model is water depth. In this case, the CSDMS Standard Name is much shorter
Step7: With the grid_id, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
Step8: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include
Step9: Allocate memory for the water depth grid and get the current values from cem.
Step10: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about its internals for this tutorial. It just saves us some typing later on.
Step11: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
Step12: Right now we have waves coming in but no sediment entering the ocean. To add some discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.
Step13: The CSDMS Standard Name for this variable is
Step14: Set the bedload flux and run the model.
Step15: Let's add another sediment source with a different flux and update the model.
Step16: Here we shut off the sediment supply completely.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pymt.models
cem = pymt.models.Cem()
cem.output_var_names
cem.input_var_names
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print("Data type: %s" % cem.get_var_type(angle_name))
print("Units: %s" % cem.get_var_units(angle_name))
print("Grid id: %d" % cem.get_var_grid(angle_name))
print("Number of elements in grid: %d" % cem.get_grid_number_of_nodes(0))
print("Type of grid: %s" % cem.get_grid_type(0))
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
import numpy as np
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
grid_id = cem.get_var_grid('sea_water__depth')
grid_type = cem.get_grid_type(grid_id)
grid_rank = cem.get_grid_ndim(grid_id)
print('Type of grid: %s (%dD)' % (grid_type, grid_rank))
spacing = np.empty((grid_rank, ), dtype=float)
shape = cem.get_grid_shape(grid_id)
cem.get_grid_spacing(grid_id, out=spacing)
print('The grid has %d rows and %d columns' % (shape[0], shape[1]))
print('The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1]))
z = np.empty(shape, dtype=float)
cem.get_value('sea_water__depth', out=z)
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
plot_coast(spacing, z)
qs = np.zeros_like(z)
qs[0, 100] = 1250
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
cem.time_step, cem.time_units, cem.time
for time in range(3000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
cem.time
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_center__x_coordinate", val)
val / 100.
qs[0, 150] = 1500
for time in range(3750):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
qs.fill(0.)
for time in range(4000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll get some compounds. Here we just use PubChem CIDs to retrieve them, but you could also search (e.g. using name, SMILES, SDF, etc.).
Step2: The similarity between two molecules is typically calculated using molecular fingerprints that encode structural information about the molecule as a series of bits (0 or 1). These bits represent the presence or absence of particular patterns or substructures — two molecules that contain more of the same patterns will have more bits in common, indicating that they are more similar.
Step3: We can decode this from hexadecimal and then display as a binary string as follows
Step4: There is more information about the PubChem fingerprints at ftp
Step5: Let's try it out
|
<ASSISTANT_TASK:>
Python Code:
import pubchempy as pcp
from IPython.display import Image
coumarin = pcp.Compound.from_cid(323)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=323&t=l')
coumarin_314 = pcp.Compound.from_cid(72653)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=72653&t=l')
coumarin_343 = pcp.Compound.from_cid(108770)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=108770&t=l')
aspirin = pcp.Compound.from_cid(2244)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=2244&t=l')
coumarin.fingerprint
bin(int(coumarin.fingerprint, 16))
def tanimoto(compound1, compound2):
fp1 = int(compound1.fingerprint, 16)
fp2 = int(compound2.fingerprint, 16)
fp1_count = bin(fp1).count('1')
fp2_count = bin(fp2).count('1')
both_count = bin(fp1 & fp2).count('1')
return float(both_count) / (fp1_count + fp2_count - both_count)
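# The value returned is the Tanimoto (Jaccard) coefficient: with a and b the number of bits set in
# each fingerprint and c the number of bits set in both, T = c / (a + b - c), ranging from 0
# (no substructure bits in common) to 1 (identical fingerprints).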
tanimoto(coumarin, coumarin)
tanimoto(coumarin, coumarin_314)
tanimoto(coumarin, coumarin_343)
tanimoto(coumarin_314, coumarin_343)
tanimoto(coumarin, aspirin)
tanimoto(coumarin_343, aspirin)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get acq stats data and clean
Step4: Model definition
Step5: Plotting and validation
Step6: Color != 1.5 fit (this is MOST acq stars)
Step7: Comparing warm_pix vs. T_ccd parametrization
Step8: Looking to see if repeat observations of particular stars impact the results
Step9: Histogram of warm pixel fraction
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.time import Time
import tables
from scipy import stats
import tables3_api
from scipy.interpolate import CubicSpline
from Chandra.Time import DateTime  # assumed import; DateTime is used further down to mark dates on plots
%matplotlib inline
with tables.open_file('/proj/sot/ska/data/acq_stats/acq_stats.h5', 'r') as h5:
cols = h5.root.data.cols
names = {'tstart': 'guide_tstart',
'obsid': 'obsid',
'obc_id': 'acqid',
'halfwidth': 'halfw',
'warm_pix': 'n100_warm_frac',
'mag_aca': 'mag_aca',
'mag_obs': 'mean_trak_mag',
'known_bad': 'known_bad',
'color': 'color1',
'img_func': 'img_func',
'ion_rad': 'ion_rad',
'sat_pix': 'sat_pix',
'agasc_id': 'agasc_id',
't_ccd': 'ccd_temp',
'slot': 'slot'}
acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
names=list(names.keys()))
year_q0 = 1999.0 + 31. / 365.25 # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')
acqs['color_1p5'] = np.where(acqs['color'] == 1.5, 1, 0)
# Create 'fail' column, rewriting history as if the OBC always
# ignored the MS flag when ID'ing acq stars (the obc_id_no_ms criterion below).
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)
acqs['fail_mask'] = acqs['fail'].astype(bool)
# Define a 'mag' column that is the observed mag if available else the catalog mag
USE_OBSERVED_MAG = False
if USE_OBSERVED_MAG:
acqs['mag'] = np.where(acqs['fail_mask'], acqs['mag_aca'], acqs['mag_obs'])
else:
acqs['mag'] = acqs['mag_aca']
# Filter for year and mag (previously used data through 2007:001)
ok = (acqs['year'] > 2014.0) & (acqs['mag'] > 8.5) & (acqs['mag'] < 10.6)
# Filter known bad obsids
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(ok)))
bad_obsids = [
# Venus
2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
ok = ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(ok)))
data_all = acqs[ok]
del data_all['img_func']
data_all.sort('year')
# Adjust probability (in probit space) for box size. See:
# https://github.com/sot/skanb/blob/master/pea-test-set/fit_box_size_acq_prob.ipynb
b1 = 0.96
b2 = -0.30
box0 = (data_all['halfwidth'] - 120) / 120 # normalized version of box, equal to 0.0 at nominal default
data_all['box_delta'] = b1 * box0 + b2 * box0**2
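# box_delta is an additive probit-space correction for the search-box half width: it is 0 at the
# nominal 120 arcsec half width, positive (higher failure probability) for wider boxes and negative
# for narrower ones.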
data_all = data_all.group_by('quarter')
data_mean = data_all.groups.aggregate(np.mean)
spline_mags = np.array([8.5, 9.25, 10.0, 10.4, 10.6])
def p_fail(pars, mag,
wp, wp2=None,
box_delta=0):
"""Acquisition probability model
:param pars: 15 parameters (5 spline values each for p0, p1, p2)
:param wp: warm fraction
:param box_delta: search box half width (arcsec)
"""
p_bright_fail = 0.03 # For now
p0s, p1s, p2s = pars[0:5], pars[5:10], pars[10:15]
if wp2 is None:
wp2 = wp ** 2
# Make sure box_delta has right dimensions
wp, box_delta = np.broadcast_arrays(wp, box_delta)
p0 = CubicSpline(spline_mags, p0s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, p1s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, p2s, bc_type=((1, 0.0), (2, 0.0)))(mag)
probit_p_fail = p0 + p1 * wp + p2 * wp2 + box_delta
p_fail = stats.norm.cdf(probit_p_fail) # transform from probit to linear probability
return p_fail
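# In equation form the model above is P(fail) = Phi(p0(mag) + p1(mag)*wp + p2(mag)*wp**2 + box_delta),
# where Phi is the standard normal CDF (stats.norm.cdf) and p0, p1, p2 are cubic splines in magnitude
# with knots at spline_mags.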
def p_acq_fail(data=None):
"""Sherpa fit function wrapper to ensure proper use of data in fitting."""
if data is None:
data = data_all
wp = (data['warm_pix'] - 0.13) / 0.1
wp2 = wp ** 2
box_delta = data['box_delta']
mag = data['mag']
def sherpa_func(pars, x=None):
return p_fail(pars, mag, wp, wp2, box_delta)
return sherpa_func
def fit_poly_spline_model(data_mask=None):
from sherpa import ui
data = data_all if data_mask is None else data_all[data_mask]
comp_names = [f'p{i}{j}' for i in range(3) for j in range(5)]
# Approx starting values based on plot of p0, p1, p2 in
# fit_acq_prob_model-2018-04-poly-warmpix
spline_p = {}
spline_p[0] = np.array([-2.6, -2.3, -1.7, -1.0, 0.0])
spline_p[1] = np.array([0.1, 0.1, 0.3, 0.6, 2.4])
spline_p[2] = np.array([0.0, 0.1, 0.5, 0.4, 0.1])
data_id = 1
ui.set_method('simplex')
ui.set_stat('cash')
ui.load_user_model(p_acq_fail(data), 'model')
ui.add_user_pars('model', comp_names)
ui.set_model(data_id, 'model')
ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))
# Initial fit values from fit of all data
fmod = ui.get_model_component('model')
for i in range(3):
for j in range(5):
comp_name = f'p{i}{j}'
setattr(fmod, comp_name, spline_p[i][j])
comp = getattr(fmod, comp_name)
comp.max = 10
comp.min = -4.0 if i == 0 else 0.0
ui.fit(data_id)
# conf = ui.get_confidence_results()
return ui.get_fit_results()
def plot_fit_grouped(pars, group_col, group_bin, mask=None, log=False, colors='br', label=None, probit=False):
data = data_all if mask is None else data_all[mask]
data['model'] = p_acq_fail(data)(pars)
group = np.trunc(data[group_col] / group_bin)
data = data.group_by(group)
data_mean = data.groups.aggregate(np.mean)
len_groups = np.diff(data.groups.indices)
data_fail = data_mean['fail']
model_fail = np.array(data_mean['model'])
fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups
# Possibly plot the data and model probabilities in probit space
if probit:
dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6))
dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6))
data_fail = stats.norm.ppf(data_fail)
model_fail = stats.norm.ppf(model_fail)
fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])
plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
fmt='.' + colors[1:], label=label, markersize=8)
plt.plot(data_mean[group_col], model_fail, '-' + colors[0])
if log:
ax = plt.gca()
ax.set_yscale('log')
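# (In probit mode both the observed and model failure fractions are passed through stats.norm.ppf,
# the inverse of the normal CDF, so they are compared on the same probit scale the model is fit in.)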
def mag_filter(mag0, mag1):
ok = (data_all['mag'] > mag0) & (data_all['mag'] < mag1)
return ok
def t_ccd_filter(t_ccd0, t_ccd1):
ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)
return ok
def wp_filter(wp0, wp1):
ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
return ok
def plot_fit_all(parvals, mask=None, probit=False):
if mask is None:
mask = np.ones(len(data_all), dtype=bool)
plt.figure()
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.20, 0.25) & mask, log=False,
colors='gk', label='0.20 < WP < 0.25')
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.10, 0.20) & mask, log=False,
colors='cm', label='0.10 < WP < 0.20')
plt.legend(loc='upper left');
plt.ylim(0.001, 1.0);
plt.xlim(9, 11)
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.20, 0.25) & mask, probit=True, colors='gk', label='0.20 < WP < 0.25')
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.10, 0.20) & mask, probit=True, colors='cm', label='0.10 < WP < 0.20')
plt.legend(loc='upper left');
# plt.ylim(0.001, 1.0);
plt.xlim(9, 11)
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(10.3, 10.6) & mask, log=False, colors='gk', label='10.3 < mag < 10.6')
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(10, 10.3) & mask, log=False, colors='cm', label='10 < mag < 10.3')
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(9, 10) & mask, log=False, colors='br', label='9 < mag < 10')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10.3, 10.6) & mask, colors='gk', label='10.3 < mag < 10.6')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.3) & mask, colors='cm', label='10 < mag < 10.3')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10.3, 10.6) & mask, colors='gk', label='10.3 < mag < 10.6', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.3) & mask, colors='cm', label='10 < mag < 10.3', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5', probit=True)
plt.legend(loc='best')
plt.grid();
def plot_splines(pars):
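# Evaluate the three 5-knot cubic-spline components (p0, p1, p2) of the model over a fine mag grid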
mag = np.arange(8.5, 10.81, 0.1)
p0 = CubicSpline(spline_mags, pars[0:5], bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, pars[5:10], bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, pars[10:15], bc_type=((1, 0.0), (2, 0.0)))(mag)
plt.plot(mag, p0, label='p0')
plt.plot(mag, p1, label='p1')
plt.plot(mag, p2, label='p2')
plt.grid()
plt.legend();
# fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)
mask_no_1p5 = data_all['color'] != 1.5
fit_no_1p5 = fit_poly_spline_model(mask_no_1p5)
plot_splines(fit_no_1p5.parvals)
plot_fit_all(fit_no_1p5.parvals, mask_no_1p5)
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.3, 10.6) & mask_no_1p5,
colors='gk', label='10.3 < mag < 10.6')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
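# Mark 2017-10-01 with a vertical dashed reference line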
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.0, 10.3) & mask_no_1p5,
colors='gk', label='10.0 < mag < 10.3')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
dat = data_all
ok = (dat['year'] > 2017.75) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['t_ccd'], bins=np.arange(-15, -9, 0.4));
plt.grid();
dat = data_all
ok = (dat['year'] < 2017.75) & (dat['year'] > 2017.0) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['t_ccd'], bins=np.arange(-15, -9, 0.4));
plt.grid()
dat = data_all
ok = (dat['year'] > 2017.75) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['warm_pix'], bins=np.linspace(0.15, 0.30, 30));
plt.grid()
dat = data_all
ok = (dat['year'] < 2017.75) & (dat['year'] > 2017.0) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['warm_pix'], bins=np.linspace(0.15, 0.30, 30));
plt.grid()
np.count_nonzero(ok)
from collections import defaultdict
fails = defaultdict(list)
for row in dok:
fails[row['agasc_id']].append(row['fail'])
fails
np.count_nonzero(dat['fail_mask'][ok])
dok = dat[ok]
plt.hist(data_all['warm_pix'], bins=100)
plt.grid()
plt.xlabel('Warm pixel fraction');
plt.hist(data_all['mag'], bins=np.arange(6, 11.1, 0.1))
plt.grid()
plt.xlabel('Mag_aca')
ok = ~data_all['fail'].astype(bool)
dok = data_all[ok]
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], '.')
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], ',', alpha=0.3)
plt.grid()
plt.plot(data_all['year'], data_all['warm_pix'])
plt.ylim(0, None)
plt.grid();
plt.plot(data_all['year'], data_all['t_ccd'])
# plt.ylim(0, None)
plt.xlim(2017.0, None)
plt.grid();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up logging
Step2: Set up corpus
Step3: Set up two topic models
Step4: Using U_Mass Coherence
Step5: View the pipeline parameters for one coherence model
Step6: Interpreting the topics
Step7: Using C_V coherence
Step8: Pipeline parameters for C_V coherence
Step9: Print coherence values
Step10: Support for wrappers
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import logging
import pyLDAvis.gensim
import json
import warnings
warnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
from gensim.models.wrappers import LdaVowpalWabbit, LdaMallet
from gensim.corpora.dictionary import Dictionary
from numpy import array
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
texts = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
print(goodcm)
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
print(goodcm.get_coherence())
print(badcm.get_coherence())
goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
badcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
print(goodcm)
print(goodcm.get_coherence())
print(badcm.get_coherence())
model1 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=50)
model2 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=1)
cm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')
cm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')
print(cm1.get_coherence())
print(cm2.get_coherence())
model1 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=50)
model2 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=1)
cm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')
cm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')
print(cm1.get_coherence())
print(cm2.get_coherence())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What does the following code do?
Step2: Defines a linear function of x with the slope a and the intercept b.
Step3: Defines a residuals function for linear that compares the output of linear to whatever's in y.
Step4: Uses least_squares regression to find values of a and b that minimize linear_r given d.x and d.y.
Step7: Put together
Step10: For your assigned model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize
d = pd.read_csv("data/dataset_0.csv")
plt.plot(d.x,d.y,'o')
def linear(x,a,b):
return a + b*x
def linear(x,a,b):
return a + b*x
def linear_r(param,x,y):
return linear(x,param[0],param[1]) - y
def linear_r(param,x,y): # copied from previous cell
return linear(x,param[0],param[1]) - y # copied from previous cell
param_guesses = [0,0]
fit = scipy.optimize.least_squares(linear_r,param_guesses,
args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
print(fit.x)
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
plt.plot(d.x,d.y,"o")
plt.plot(x_range,linear(x_range,fit_a,fit_b))
plt.plot(x_range,linear(x_range,0,0))
def linear(x,a,b):
Linear model of x using a (slope) and b (intercept)
return a + b*x
def linear_r(param,x,y):
Residuals function for linear
return linear(x,param[0],param[1]) - y
# Read data
d = pd.read_csv("data/dataset_0.csv")
plt.plot(d.x,d.y,'o')
# Perform regression
param_guesses = [1,1]
fit = scipy.optimize.least_squares(linear_r,param_guesses,args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
# Plot result
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
plt.plot(x_range,linear(x_range,fit_a,fit_b))
print(fit.cost)
def binding(x,a,b):
binding equation with b
return a*(b*x/(1 + b*x))
def binding_r(param,x,y):
Residuals function for binding
return binding(x,param[0],param[1]) - y
# Read data
d = pd.read_csv("data/dataset_0.csv")
plt.plot(d.x,d.y,'o')
# Perform regression
param_guesses = [5,0.3]
fit = scipy.optimize.least_squares(binding_r,param_guesses,args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
# Plot result
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
plt.plot(x_range,binding(x_range,fit_a,fit_b))
print(fit)
def model(a,b,x):
return a*(b*x/(1 + b*x))
#Residuals function
def model_r(param,x,y):
return model(param[0],param[1],x) - y
# Read data
d = pd.read_csv("data/dataset_0.csv")
plt.plot(d.x,d.y,'o')
# Perform regression
param_guesses = [5,0.3]
fit = scipy.optimize.least_squares(model_r,param_guesses,args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
# Plot result
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
plt.plot(x_range,model(fit_a,fit_b,x_range))
print(fit.cost)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: Loading embeddings
Step3: Vocabulary and Coverage functions
Step4: Starting point
Step5: Paragram seems to have a significantly lower coverage.
Step6: Better, but we lost a bit of information on the other embeddings.
Step7: What's wrong?
Step8: First faults appearing are
Step9: FastText does not understand contractions
Step10: Now, let us deal with special characters
Step11: FastText seems to have a better knowledge of special characters
Step12: What's still missing?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import operator
import re
train = pd.read_csv("../input/train.csv").drop('target', axis=1)
test = pd.read_csv("../input/test.csv")
df = pd.concat([train ,test])
print("Number of texts: ", df.shape[0])
def load_embed(file):
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
if file == '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec':
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(file) if len(o)>100)
else:
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(file, encoding='latin'))
return embeddings_index
glove = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
paragram = '../input/embeddings/paragram_300_sl999/paragram_300_sl999.txt'
wiki_news = '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'
print("Extracting GloVe embedding")
embed_glove = load_embed(glove)
print("Extracting Paragram embedding")
embed_paragram = load_embed(paragram)
print("Extracting FastText embedding")
embed_fasttext = load_embed(wiki_news)
def build_vocab(texts):
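# Build a {word: count} dictionary over all words in the given texts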
sentences = texts.apply(lambda x: x.split()).values
vocab = {}
for sentence in sentences:
for word in sentence:
try:
vocab[word] += 1
except KeyError:
vocab[word] = 1
return vocab
def check_coverage(vocab, embeddings_index):
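# Report embedding coverage over unique vocab words and over all word occurrences; return out-of-vocabulary words sorted by frequency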
known_words = {}
unknown_words = {}
nb_known_words = 0
nb_unknown_words = 0
for word in vocab.keys():
try:
known_words[word] = embeddings_index[word]
nb_known_words += vocab[word]
except:
unknown_words[word] = vocab[word]
nb_unknown_words += vocab[word]
pass
print('Found embeddings for {:.2%} of vocab'.format(len(known_words) / len(vocab)))
print('Found embeddings for {:.2%} of all text'.format(nb_known_words / (nb_known_words + nb_unknown_words)))
unknown_words = sorted(unknown_words.items(), key=operator.itemgetter(1))[::-1]
return unknown_words
vocab = build_vocab(df['question_text'])
print("Glove : ")
oov_glove = check_coverage(vocab, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab, embed_fasttext)
df['lowered_question'] = df['question_text'].apply(lambda x: x.lower())
vocab_low = build_vocab(df['lowered_question'])
print("Glove : ")
oov_glove = check_coverage(vocab_low, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab_low, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab_low, embed_fasttext)
def add_lower(embedding, vocab):
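# Copy the embedding of a capitalized word to its lowercase form when the lowercase form is missing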
count = 0
for word in vocab:
if word in embedding and word.lower() not in embedding:
embedding[word.lower()] = embedding[word]
count += 1
print(f"Added {count} words to embedding")
print("Glove : ")
add_lower(embed_glove, vocab)
print("Paragram : ")
add_lower(embed_paragram, vocab)
print("FastText : ")
add_lower(embed_fasttext, vocab)
print("Glove : ")
oov_glove = check_coverage(vocab_low, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab_low, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab_low, embed_fasttext)
oov_glove[:10]
contraction_mapping = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have" }
def known_contractions(embed):
known = []
for contract in contraction_mapping:
if contract in embed:
known.append(contract)
return known
print("- Known Contractions -")
print(" Glove :")
print(known_contractions(embed_glove))
print(" Paragram :")
print(known_contractions(embed_paragram))
print(" FastText :")
print(known_contractions(embed_fasttext))
def clean_contractions(text, mapping):
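# Normalize apostrophe variants, then expand contractions word by word using the mapping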
specials = ["’", "‘", "´", "`"]
for s in specials:
text = text.replace(s, "'")
text = ' '.join([mapping[t] if t in mapping else t for t in text.split(" ")])
return text
df['treated_question'] = df['lowered_question'].apply(lambda x: clean_contractions(x, contraction_mapping))
vocab = build_vocab(df['treated_question'])
print("Glove : ")
oov_glove = check_coverage(vocab, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab, embed_fasttext)
punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&'
def unknown_punct(embed, punct):
unknown = ''
for p in punct:
if p not in embed:
unknown += p
unknown += ' '
return unknown
print("Glove :")
print(unknown_punct(embed_glove, punct))
print("Paragram :")
print(unknown_punct(embed_paragram, punct))
print("FastText :")
print(unknown_punct(embed_fasttext, punct))
punct_mapping = {"‘": "'", "₹": "e", "´": "'", "°": "", "€": "e", "™": "tm", "√": " sqrt ", "×": "x", "²": "2", "—": "-", "–": "-", "’": "'", "_": "-", "`": "'", '“': '"', '”': '"', '“': '"', "£": "e", '∞': 'infinity', 'θ': 'theta', '÷': '/', 'α': 'alpha', '•': '.', 'à': 'a', '−': '-', 'β': 'beta', '∅': '', '³': '3', 'π': 'pi', }
def clean_special_chars(text, punct, mapping):
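# Replace unknown special characters with known equivalents, then pad remaining punctuation with spaces so each mark becomes its own token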
for p in mapping:
text = text.replace(p, mapping[p])
for p in punct:
text = text.replace(p, f' {p} ')
specials = {'\u200b': ' ', '…': ' ... ', '\ufeff': '', 'करना': '', 'है': ''} # Other special characters that I have to deal with in last
for s in specials:
text = text.replace(s, specials[s])
return text
df['treated_question'] = df['treated_question'].apply(lambda x: clean_special_chars(x, punct, punct_mapping))
vocab = build_vocab(df['treated_question'])
print("Glove : ")
oov_glove = check_coverage(vocab, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab, embed_fasttext)
oov_fasttext[:100]
mispell_dict = {'colour': 'color', 'centre': 'center', 'favourite': 'favorite', 'travelling': 'traveling', 'counselling': 'counseling', 'theatre': 'theater', 'cancelled': 'canceled', 'labour': 'labor', 'organisation': 'organization', 'wwii': 'world war 2', 'citicise': 'criticize', 'youtu ': 'youtube ', 'Qoura': 'Quora', 'sallary': 'salary', 'Whta': 'What', 'narcisist': 'narcissist', 'howdo': 'how do', 'whatare': 'what are', 'howcan': 'how can', 'howmuch': 'how much', 'howmany': 'how many', 'whydo': 'why do', 'doI': 'do I', 'theBest': 'the best', 'howdoes': 'how does', 'mastrubation': 'masturbation', 'mastrubate': 'masturbate', "mastrubating": 'masturbating', 'pennis': 'penis', 'Etherium': 'Ethereum', 'narcissit': 'narcissist', 'bigdata': 'big data', '2k17': '2017', '2k18': '2018', 'qouta': 'quota', 'exboyfriend': 'ex boyfriend', 'airhostess': 'air hostess', "whst": 'what', 'watsapp': 'whatsapp', 'demonitisation': 'demonetization', 'demonitization': 'demonetization', 'demonetisation': 'demonetization'}
def correct_spelling(x, dic):
for word in dic.keys():
x = x.replace(word, dic[word])
return x
df['treated_question'] = df['treated_question'].apply(lambda x: correct_spelling(x, mispell_dict))
vocab = build_vocab(df['treated_question'])
print("Glove : ")
oov_glove = check_coverage(vocab, embed_glove)
print("Paragram : ")
oov_paragram = check_coverage(vocab, embed_paragram)
print("FastText : ")
oov_fasttext = check_coverage(vocab, embed_fasttext)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Non-vectorized implementation
Step2: Example 2
Step3: Example 3
|
<ASSISTANT_TASK:>
Python Code:
def derivada(f, x_0, delta_x):
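# Forward-difference approximation of the derivative f'(x_0) with step delta_x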
pendiente = (f(x_0 + delta_x) - f(x_0))/delta_x
return pendiente
def raiz(f, x_0, delta_x):
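# One Newton-style root-finding update using the finite-difference slope (modified secant step)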
x_1 = x_0 - f(x_0)/derivada(f, x_0, delta_x)
return x_1
def secante_modificada(f, x_0, delta_x):
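# Iterate the modified secant update from x_0, printing each step, until the relative error falls below the tolerance or 20 iterations are reached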
print("{0:s} \t {1:15s} \t {2:15s} \t {3:15s}".format('i', 'x anterior', 'x actual', 'error relativo %'))
x_actual = x_0
i = 0
print("{0:d} \t {1:15s} \t {2:.15f} \t {3:15s}".format(i, '???????????????', x_actual, '???????????????'))
error_permitido = 0.000001
while True:
x_anterior = x_actual
x_actual = raiz(f, x_anterior, delta_x)
if x_actual != 0:
error_relativo = abs((x_actual - x_anterior)/x_actual)*100
i = i + 1
print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15.11f}".format(i, x_anterior, x_actual, error_relativo))
if (error_relativo < error_permitido) or (i>=20):
break
print('\nx =', x_actual)
def f(x):
# f(x) = x^5 + x^3 + 3
y = x**5 + x**3 + 3
return y
derivada(f, 0, -0.5)
raiz(f, 0, -0.5)
secante_modificada(f, 0, -0.5)
secante_modificada(f, 0, -1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looking at the top 10 trips for each station, we see some very interesting results. For Capital Bikeshare, the most common trip and the 4th most common trip share the same stations. This could imply some sort of round-trip behavior. For example, "Eastern Market Metro / Pennsylvania Ave & 7th St SE" is right beside a train station, which could indicate that many people are taking a bike from the train station to Capitol Hill for the day, and when it is time to go home, they take a bike back to the train station. For New York, the most frequent trip begins and ends at Central Park, which might indicate that people are taking the bike for a leisurely ride and returning to the station after a stroll around the park. Furthermore, this may indicate that stations are being used for different purposes.
Step2: We'll define a round trip by looking at the start and end station to see if they are the same. One small issue, though--Capital bikeshare's data is formatted slightly differently, as shown below
Step3: Since NYC's data comes in seconds, let's convert DC's data to seconds as well.
Step4: Something isn't quite right. Looking at the data, the shortest trip duration for NYC is 60 seconds, and a staggering 0 seconds for DC. At first glance, this is way too short for a "round trip". One possible explanation is that someone realized a bike they rented was malfunctioning and returned it immediately to the station. We'll need to filter out these low times before continuing so that our results aren't skewed. For now, let's say a round trip has to be at least five minutes to count. Likewise, for trip durations, both the Capital and Citi websites urge members to seek out a local bike rental shop if they need a bike for more than 24 hours. The longest trip in the NYC data is almost a week long, which would skew our results as well, so let's remove the long outliers too.
Step5: Let's look at this visually
Step6: Now that we have a better idea of how round trips work, let's take a look at one way trips. First things first, we need to check our data integrity and remove anything that doesn't make sense.
Step7: Some of these numbers are unbelievable. In some other world, a 27 bike ride <i>might</i> be justifiable, but the max one way trip length for NYC is over 62 days, which is completely unreasonable. Let's filter those out.
Step8: Looking at the frequencies for each of the locations, there seems to be one trip-duration bin that has the highest frequency across all the categories. Aside from that, it is difficult to say that frequency alone, if anything, can indicate whether a trip is one way or a round trip.
Step9: To visualize the heatmap, we plan on using d3js, so we'll output the files to a .tsv format that our other code can read in.
Step10: Thanks to the "%%javascript" cell magic, we can embed JavaScript right into the cells! The first thing we want to do is import d3js into the notebook.
Step11: The "%%html" magic allows us to style the HTML and SVG that we are about to create. Running this cell after the HTML and SVG are created will change the styling to match any changes made to the cell. It can also be run before the HTML and SVG to predefine styles.
Step12: With our styles in place, we have to define our div elements so that d3 can populate them. The code to do that is located below these images since, once again, running a cell in a notebook can apply changes retroactively. Unfortunately, for security reasons, IPython does not allow arbitrary execution of JavaScript code unless it's on your own machine, so the code you see below must be run on your own machine to display the graphs inline. Images of the graphs are embedded after the code for viewing, though!
|
<ASSISTANT_TASK:>
Python Code:
import glob
import csv
from collections import Counter
import numpy as np
from matplotlib import pyplot as plt
import re
%matplotlib inline
def get_top_trips(path,N=10):
#the headers on the CSV are slightly different depending on whether the data is from Citi or Capital
if path=="capital":
start_station = "Start station"
end_station = "End station"
if path=="citi":
start_station = "start station name"
end_station = "end station name"
trips = []
for filename in glob.glob('./'+path+'/*.csv'):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
trips.append((row[start_station],row[end_station]))
return Counter(trips).most_common(N)
get_top_trips("capital")
get_top_trips("citi")
def get_duration(path):
#once again, the files are formatted slightly differently...
if path=="citi":
trip_duration = 'tripduration'
start_station = 'start station id'
end_station = 'end station id'
if path=="capital":
trip_duration = 'Duration'
start_station = 'Start station ID'
end_station = 'End station ID'
duration = []
for filename in glob.glob("./"+path+"/*.csv"):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
duration.append((row[trip_duration],row[start_station],row[end_station]))
return duration
citibike = get_duration('citi')
print len(citibike)
capitalbike = get_duration('capital')
print len(capitalbike)
capitalbike[1][0]
def parse_time(time):
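# Convert a Capital Bikeshare duration string (hours, minutes, seconds fields) into total seconds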
hms = re.sub("\s\s","",re.sub("[a-z]."," ",time)).split(" ")
return int(hms[0])*3600 + 60*int(hms[1]) + int(hms[2])
nyc_round_trip = [int(entry[0]) for entry in citibike if entry[1]==entry[2]]
nyc_one_way = [int(entry[0]) for entry in citibike if entry[1]!=entry[2]]
dc_round_trip = [parse_time(entry[0]) for entry in capitalbike if entry[1]==entry[2]]
dc_one_way = [parse_time(entry[0]) for entry in capitalbike if entry[1]!=entry[2]]
min(nyc_round_trip), min(dc_round_trip)
max(nyc_round_trip), max(dc_round_trip)
nyc_round_trip = [trip for trip in nyc_round_trip if trip>=300 and trip<20000]
dc_round_trip = [trip for trip in dc_round_trip if trip>=300 and trip<20000]
min(nyc_round_trip), min(dc_round_trip)
def plot_freq(trip_data,city,nbins=500):
plt.xlim([0, 15500])
hist = plt.hist(trip_data,bins=np.arange(0,max(trip_data),len(trip_data)*1./nbins))
plt.ylim([0,22500])
plt.title(city + ' Round Trip Frequency')
plt.xlabel('Trip Duration in Seconds')
plt.ylabel('Counts')
plt.show()
plot_freq(nyc_round_trip,"NYC",500)
plot_freq(dc_round_trip,"DC",100)
min(nyc_one_way),min(dc_one_way),max(nyc_one_way),max(dc_one_way)
nyc_one_way = [trip for trip in nyc_one_way if trip>=300 and trip<20000]
dc_one_way = [trip for trip in dc_one_way if trip>=300 and trip<20000]
min(nyc_one_way),min(dc_one_way),max(nyc_one_way),max(dc_one_way)
def plot_freq_one_way(trip_data,city,nbins=500):
plt.xlim([0, 15000])
hist = plt.hist(trip_data,bins=np.arange(0,max(trip_data),len(trip_data)*1./nbins))
plt.ylim([0,1600000])
plt.title(city + ' One Way Trip Frequency')
plt.xlabel('Trip Duration in Seconds')
plt.ylabel('Counts')
plt.show()
plot_freq_one_way(nyc_one_way,"NYC",45000)
plot_freq_one_way(dc_one_way,"DC",3500)
from dateutil import parser
from collections import OrderedDict
'''
set param='depart' or param='arrive'
'''
def count_trips(times,param,param2=None):
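# Tally arrivals/departures into a day-of-week (0-6) by hour-of-day (0-23) grid of counts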
trips = OrderedDict()
for day in range(0,7):
trips[str(day)] = OrderedDict()
for hour in range(0,24):
trips[str(day)][str(hour)] = 0
counter = 0
for time in times[param]:
hour = parser.parse(time).strftime("%-H")
day = parser.parse(time).strftime("%w")
trips[day][hour] += 1
counter += 1
# if (counter % 20000 == 0):
# print "Counted {0} trips".format(counter)
if param2 is not None:
for time in times[param2]:
hour = parser.parse(time).strftime("%-H")
day = parser.parse(time).strftime("%w")
trips[day][hour] += 1
counter += 1
# if (counter % 20000 == 0):
# print "Counted {0} trips".format(counter)
return trips
def write_tsv(file_name, trips_dict):
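# Flatten the nested day/hour dictionary into rows and write them as a day/hour/count TSV for d3 to read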
trips_array = list()
for day in trips_dict:
for hour in trips_dict[day]:
trips_array.append({'day':day,'hour':hour,'count':trips_dict[day][hour]})
with open(file_name, 'w') as f:
dict_writer = csv.DictWriter(f, delimiter='\t',fieldnames=['day','hour','count'])
dict_writer.writeheader()
dict_writer.writerows(trips_array)
def get_one_way_dc():
times = dict()
times['depart'] = []
times['arrive'] = []
for filename in glob.glob("./capital/*.csv"):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
if row['Start station ID'] == row['End station ID']:
times['depart'].append(row['Start date'])
times['arrive'].append(row['End date'])
return times
def get_one_way_nyc():
times = dict()
times['depart'] = []
times['arrive'] = []
for filename in glob.glob("./citi/*.csv"):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
if row['start station id'] != row['end station id']:
times['depart'].append(row['starttime'])
times['arrive'].append(row['stoptime'])
return times
def get_round_trip_dc():
times = dict()
times['depart'] = []
times['arrive'] = []
for filename in glob.glob("./capital/*.csv"):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
if row['Start station ID'] != row['End station ID']:
times['depart'].append(row['Start date'])
times['arrive'].append(row['End date'])
return times
def get_round_trip_nyc():
times = dict()
times['depart'] = []
times['arrive'] = []
for filename in glob.glob("./citi/*.csv"):
with open(filename,'rU') as f:
reader = csv.DictReader(f)
for row in reader:
if row['start station id'] == row['end station id']:
times['depart'].append(row['starttime'])
times['arrive'].append(row['stoptime'])
return times
dc_one_way_data = get_one_way_dc()
dc_one_way_trips = count_trips(dc_one_way_data, 'arrive', 'depart');
write_tsv('./data/dc_one_way.tsv',dc_one_way_trips)
nyc_one_way_data = get_one_way_nyc()
nyc_one_way_trips = count_trips(nyc_one_way_data, 'arrive', 'depart');
write_tsv('./data/nyc_one_way.tsv',nyc_one_way_trips)
dc_round_trip_data = get_round_trip_dc()
dc_round_trips = count_trips(dc_round_trip_data, 'arrive', 'depart');
write_tsv('./data/dc_round_trips.tsv',dc_round_trips)
nyc_round_trip_data = get_round_trip_nyc()
nyc_round_trips = count_trips(nyc_round_trip_data, 'arrive','depart');
write_tsv('./data/nyc_round_trips.tsv',nyc_round_trips)
%%javascript
require.config({
paths: {
d3: "http://d3js.org/d3.v3.min"
}
});
require(["d3"], function(d3) {
console.log(d3.version);
});
%%html
<style>
rect.bordered {
stroke: #E6E6E6;
stroke-width:2px;
}
text.mono {
font-size: 9pt;
font-family: Consolas, courier;
fill: #aaa;
}
text.axis-workweek {
fill: #000;
}
text.axis-worktime {
fill: #000;
}
body {
font-size: 9pt;
font-family: Consolas, courier;
fill: #aaa;
}
</style>
%%html
One-way trip in DC
<div id="dccommuter"></div>
Round trip in DC
<div id="dcleisure"></div>
One-way trip in NYC
<div id="nyccommuter"></div>
Round trip in NYC
<div id="nycleisure"></div>
%%javascript
//$(document).ready(function(){
draw_heatmap("./data/dc_round_trips.tsv", "#dccommuter");
draw_heatmap("./data/dc_one_way.tsv", "#dcleisure");
draw_heatmap("./data/nyc_one_way.tsv", "#nyccommuter");
draw_heatmap("./data/nyc_round_trips.tsv", "#nycleisure");
//});
var margin = { top: 50, right: 0, bottom: 100, left: 30 },
width = 960 - margin.left - margin.right,
height = 430 - margin.top - margin.bottom,
gridSize = Math.floor(width / 24),
legendElementWidth = gridSize*2,
buckets = 9,
colors = ["#ffffd9","#edf8b1","#c7e9b4","#7fcdbb","#41b6c4","#1d91c0","#225ea8","#253494","#081d58"], // alternatively colorbrewer.YlGnBu[9]
// colors = ["#ffffd9","#edf8b1","#c7e9b4","#7fcdbb","#41b6c4","#1d91c0","#225ea8","#253494","#081d58"], // alternatively colorbrewer.YlGnBu[9]
days = ["Su", "Mo", "Tu", "We", "Th", "Fr", "Sa"],
times = ["1a", "2a", "3a", "4a", "5a", "6a", "7a", "8a", "9a", "10a", "11a", "12a", "1p", "2p", "3p", "4p", "5p", "6p", "7p", "8p", "9p", "10p", "11p", "12p"];
function draw_heatmap(source, div) {
d3.tsv(source,
function(d) {
return {
day: +d.day,
hour: +d.hour,
value: +d.count
};
},
function(error, data) {
var maxCount = d3.max(data, function (d) { return d.value; });
var colorScale = d3.scale.quantile()
.domain([0, buckets - 1, 100])
.range(colors);
var svg = d3.select(div).append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
var dayLabels = svg.selectAll(".dayLabel")
.data(days)
.enter().append("text")
.text(function (d) { return d; })
.attr("x", 0)
.attr("y", function (d, i) { return i * gridSize; })
.style("text-anchor", "end")
.attr("transform", "translate(-6," + gridSize / 1.5 + ")")
.attr("class", function (d, i) { return ((i >= 1 && i <= 5) ? "dayLabel mono axis axis-workweek" : "dayLabel mono axis"); });
var timeLabels = svg.selectAll(".timeLabel")
.data(times)
.enter().append("text")
.text(function(d) { return d; })
.attr("x", function(d, i) { return i * gridSize; })
.attr("y", 0)
.style("text-anchor", "middle")
.attr("transform", "translate(" + gridSize / 2 + ", -6)")
.attr("class", function(d, i) { return ((i >= 7 && i <= 16) ? "timeLabel mono axis axis-worktime" : "timeLabel mono axis"); });
var heatMap = svg.selectAll(".hour")
.data(data)
.enter().append("rect")
// .attr("x", function(d) { return (d.hour - 1) * gridSize; })
// .attr("y", function(d) { return (d.day - 1) * gridSize; })
.attr("x", function(d) { return (d.hour) * gridSize; })
.attr("y", function(d) { return (d.day) * gridSize; })
.attr("rx", 4)
.attr("ry", 4)
.attr("class", "hour bordered")
.attr("width", gridSize)
.attr("height", gridSize)
.style("fill", colors[0]);
heatMap.transition().duration(1000)
.style("fill", function(d) { return colorScale(d.value * 100 / maxCount); });
heatMap.append("title").text(function(d) { return Math.round(d.value * 100 / maxCount) + " %"; });
var legend = svg.selectAll(".legend")
.data([0].concat(colorScale.quantiles()), function(d) { return d; })
.enter().append("g")
.attr("class", "legend");
legend.append("rect")
.attr("x", function(d, i) { return legendElementWidth * i; })
.attr("y", height)
.attr("width", legendElementWidth)
.attr("height", gridSize / 2)
.style("fill", function(d, i) { return colors[i]; });
legend.append("text")
.attr("class", "mono")
.text(function(d) { return "≥ " + (Math.round(d)) + " %"; })
.attr("x", function(d, i) { return legendElementWidth * i; })
.attr("y", height + gridSize);
svg.append("text")
.text("percentage of riders")
.attr("x", function(d) { return legendElementWidth * 9; })
.attr("y", height + gridSize)
});
}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To unpickle just do this
Step2: -=-=-= Exploring hourly and weekly consumption patterns (no seasonality) =-=-=-
Step3: Task #2 (10%)
Step4: Task #3 (10%)
Step5: -=-=-= Exploring seasonal effects =-=-=-
Step6: Task #5 (10%)
Step7: Task #7 (10%)
Step8: Task #8 (20%)
Step9: Task #9 (10%)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
import itertools
import pickle
%matplotlib inline
pickle_file = open('../../lectures/data/campusDemand.pkl','rb')
pickled_data = pickle.load(pickle_file)
pickle_file.close()
# Since we pickled them all together as a list, I'm going to assign each element of the list to the same variable
# we had been using before:
data = pickled_data[0]
pointNames = pickled_data[1]
data_by_day = pickled_data[2]
idx = pickled_data[3]
# Your code goes here
# Your code goes here
# Your code goes here
# Your code goes here
# Your code goes here...
# Your code goes here...
# Your code goes here...
# Your code goes here...
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = []
target_id_text = []
for sentence in source_text.split('\n'):
s = []
for word in sentence.split():
s.append(source_vocab_to_int[word])
source_id_text.append(s)
for sentence in target_text.split('\n'):
s = []
for word in sentence.split():
s.append(target_vocab_to_int[word])
s.append(target_vocab_to_int['<EOS>'])
target_id_text.append(s)
return (source_id_text, target_id_text)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learn_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return (inputs, targets, learn_rate, keep_prob, target_sequence_length, \
max_target_sequence_length, source_sequence_length)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
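# Drop the last token of every target sequence and prepend the <GO> id, shifting the decoder input right by one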
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_encoder_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size,
encoding_embedding_size)
def lstm_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=1))
cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell_dropout
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
encoder_output, encoder_state = tf.nn.dynamic_rnn(stacked_lstm,
embed_encoder_input,
sequence_length=source_sequence_length, dtype=tf.float32)
return (encoder_output, encoder_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
train_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, train_helper, encoder_state,
output_layer)
train_decoder_output = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=True,
maximum_iterations=max_summary_length)[0]
return train_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
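# At inference time there is no target input: tile the <GO> id across the batch and let GreedyEmbeddingHelper feed each predicted token back in as the next input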
start_tokens = tf.tile(tf.constant([start_of_sequence_id],
dtype=tf.int32), [batch_size], name='start_tokens')
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens,
end_of_sequence_id)
infer_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, infer_helper, encoder_state,
output_layer)
infer_decoder_output = tf.contrib.seq2seq.dynamic_decode(infer_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
return infer_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=1))
cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell_dropout
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size, use_bias=False)
with tf.variable_scope('decode'):
train_output = decoding_layer_train(encoder_state,
stacked_lstm,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
with tf.variable_scope('decode', reuse=True):
infer_output = decoding_layer_infer(encoder_state,
stacked_lstm,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return (train_output, infer_output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
train_dec_output, infer_dec_output = decoding_layer(dec_input,
enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return (train_dec_output, infer_dec_output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
#Number of Epochs
epochs = 3
#Batch Size
batch_size = 256
#RNN Size
rnn_size = 500
#Number of Layers
num_layers = 2
#Embedding Size
encoding_embedding_size = 250
decoding_embedding_size = 250
#Learning Rate
learning_rate = 0.001
#Dropout Keep Probability
keep_probability = 0.7
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
s = sentence.lower().split()
word_ids = [vocab_to_int.get(w, vocab_to_int['<UNK>']) for w in s]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Different ways of learning from data
Step2: In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right.
Step3: So, in this case we got a classification accuracy of 56.67 %.
Step4: Why Probabilistic Graphical Models
Step5: In this case the parameters of the network would be $ P(L) $, $ P(W) $ and $ P(T | L, W) $. So, we will need to store 5 values for $ L $, 3 values for $ W $ and 45 values for $ P(T | L, W) $, a total of 45 + 5 + 3 = 53 values to completely parameterize the network, which is actually more than the 45 values we need for $ P(T, L, W) $. But in the case of bigger networks, graphical models help in saving space. We can take the example of the student network shown below
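As a quick check on that count (assuming the type variable $ T $ takes 3 distinct values, as the iris species do): the joint table $ P(T, L, W) $ has $ 5 \times 3 \times 3 = 45 $ entries, while the factored form needs $ 5 + 3 + 45 = 53 $ parameters.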
|
<ASSISTANT_TASK:>
Python Code:
%run ../scripts/1/discretize.py
data
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Adding a little bit of noise so that it's easier to visualize
data_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2))
plt.scatter(data_with_noise.length, data_with_noise.width, c=['b', 'g', 'r'], s=200, alpha=0.3)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split  # train_test_split lives in model_selection in current scikit-learn (cross_validation was removed)
X_train, X_test, y_train, y_test = train_test_split(data.ix[:, ['length', 'width']].values, data.type.values, test_size=0.2)
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, y_train)
classifier.predict(X_test)
classifier.score(X_test, y_test)
X_train, X_test = data[:120], data[120:]
X_train
# Computing the joint probability distribution over the training data
joint_prob = data.groupby(['length', 'width', 'type']).size() / 120
joint_prob
# Predicting values
# Selecting just the feature variables.
X_test_features = X_test.iloc[:, :2].values
X_test_actual_results = X_test.iloc[:, 2].values
predicted_values = []
for i in X_test_features:
predicted_values.append(np.argmax(joint_prob[i[0], i[1]]))
predicted_values = np.array(predicted_values)
predicted_values
# Comparing results with the actual data.
predicted_values == X_test_actual_results
score = (predicted_values == X_test_actual_results).sum() / 30
print(score)
Image(filename='../images/1/Iris_BN.png')
Image(filename='../images/1/student.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Let's just compute the mesh at a single time-point that we know should be during egress.
Step3: Native
Step4: Visible Partial
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # DEVELOPER MODE REQUIRED FOR VISIBLE_PARTIAL - DON'T USE FOR SCIENCE
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('mesh', times=[0.05], columns=['visibilities'])
b.run_compute(eclipse_method='native')
afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True)
b.run_compute(eclipse_method='visible_partial')
afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem #1
Step2: Part (a)
Step3: Part (b)
Step4: Part (c)
Step5: Problem #2
Step6: Part (b)
Step7: Problem #3
Step9: Of course, this solution can also be obtained by way of stochastic gradient descent (not implemented here).
Step10: Part (b)
Step11: Problem #7
Step12: Part (b)
Step13: Part (c)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats import norm
from scipy.io import loadmat
# Load the matrix into memory, assuming for now that it is stored in the home directory
A = loadmat("../dataset/hw1/A11F17108.mat")['A']
# Obtain x, where Ax[:,i] = A[i,:].T
x = np.linalg.lstsq(A,A.T)[0]
# Test
for i in range(200):
if not np.allclose(np.dot(A,x[:,i]), A[i,:]):
print("Fails on index %d" % i)
break
if i == 199:
print("Succeeds!")
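# An alternative way to reach the same least-squares solution is plain gradient descent on the
# squared residual. A hedged, full-batch sketch for a single right-hand side b is shown below;
# it is illustrative only and not part of the original solution (iteration count is a rough default).
def lstsq_gd(A, b, iters=2000):
    # step size chosen from the spectral norm so the quadratic objective keeps decreasing
    eta = 0.5 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= eta * 2 * np.dot(A.T, np.dot(A, x) - b)  # gradient of ||A x - b||^2
    return x
# For a reasonably conditioned A this converges to the same solution that np.linalg.lstsq returns.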
norms = np.linalg.norm(x,axis=0)
# Test
for i in range(200):
if not np.allclose(np.sqrt(np.sum(np.power(x[:,i],2))), norms[i]):
print("Fails on index %d" % i)
break
if i == 199:
print("Succeeds!")
np.argmin(norms)
# Test
m = np.inf
ind = 0
for i in range(200):
if norms[i] < m:
m = norms[i]
ind = i
print(ind)
np.argmax(norms)
# Test
m = 0
ind = 0
for i in range(200):
if norms[i] > m:
m = norms[i]
ind = i
print(ind)
np.mean(norms)
# Compute the SVD of A with the last row clipped, and use the last row of V
u,s,v = np.linalg.svd(A[:-1,:])
ns = v[-1,:]
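# Why this works: A[:-1,:] is 199 x 200, so its row space has dimension at most 199. The full
# SVD returns 200 right-singular vectors (the rows of v); the last one lies outside that row
# space and is therefore orthogonal to every row of A[:-1,:], i.e. it spans the null space we want.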
# Test
target = np.zeros((199,1))
for i in range(199):
if not np.allclose(np.dot(A[i,:],ns),target):
print("Fails on index %d" %i)
break
if i == 198:
print("Succeeds!")
np.linalg.norm(ns)
# check orientation
ns[0]
# return relevant indices
(ns[2],ns[11],ns[36])
# Just in case of an oopsie
(ns[3],ns[12],ns[37])
# Test
for i in range(0,199):
if not np.allclose(np.dot(ns,A[i,:]),0):
print("Failed at index %d" %i)
break
if i == 198:
print("Succeeds!")
q,r = np.linalg.qr(A[:-1,:].T, mode="complete")
y = q[:,-1]
y[0]
y = -y
(y[2],y[11],y[36])
(y[3],y[12],y[37])
# Test
for i in range(0,199):
if not np.allclose(np.dot(y,A[i,:]),0):
print("Failed at index %d" %i)
break
if i == 198:
print("Succeeds!")
u,s,v = np.linalg.svd(A)
s[0]/s[-1]
# solve for y
x = 4.92594
y = 1./x
y
(x,y)
# solve for the vector connecting (x,y) to (5,2)
s = np.array([x-5,y-2])
s
# normalize and scale
nm = np.linalg.norm(s)
w = s[0]/nm+5
z = s[1]/nm+2
(w,z)
# Test
(w-5)**2 + (z-2)**2
# Print the distance
np.linalg.norm([x-w,y-z])
def choose(n, k):
A fast way to calculate binomial coefficients by Andrew Dalke (contrib).
if 0 <= k <= n:
ntok = 1
ktok = 1
for t in range(1, min(k, n - k) + 1):
ntok *= n
ktok *= t
n -= 1
return ntok // ktok
else:
return 0
# the total number of possible triples
tot = choose(52,3)*6
# the total number of acceptable triples over the total number of triples
float(choose(13,3)*4)/tot
# Test
# Count all possible acceptable triples and divide by all possible triples
trips = [(i,j,k) for i in range(1,12) for j in range(i+1,13) for k in range(j+1,14)]
float(len(trips)*4)/tot
# Examine the acceptable triples. Bear in mind there are four of each type
trips[:20]
# Much simpler.
26./50
# Create given normal variables
X = norm(loc=2,scale=1)
Y = norm(loc=-1,scale=np.sqrt(2))
X.cdf(2) - X.cdf(0)
Y.cdf(2) - Y.cdf(0)
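# P(X in [0,2] or Y in [0,2]) by inclusion-exclusion, assuming X and Y are independent so that
# the joint probability is the product of the two marginal probabilities (computed on its own below)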
(X.cdf(2) - X.cdf(0)) + (Y.cdf(2) - Y.cdf(0)) - ((X.cdf(2) - X.cdf(0))*(Y.cdf(2) - Y.cdf(0)))
(X.cdf(2) - X.cdf(0))*(Y.cdf(2) - Y.cdf(0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at a few lines of this file
Step2: We can load this file more cleanly with the pandas library (http
Step3: Taking a closer look at raw_clinvar, we see that the clinical significance data is squished together in one field (ClinSig)
Step4: Separating this field
Step5: Notice that some of the variants contain multiple assertions, separated by the ',' or '|' characters. We can extract just these rows as follows
Step6: We can flatten the clinvar data so there is one <b>assertion</b> per line
Step7: Let's deduplicate
Step8: and output to file
Step9: To reduce to Pathogenic SNPs, we can simply run
Step10: Let's output these to file
Step11: 2a. Pulling Allele Frequency Data
Step12: We can use these data to filter clinical variants
Step13: 2a.2. Broad ExAc
Step14: 2a.3. Merging
Step15: 2b. Pulling Pubmed Publications
Step16: Let's test this function on a few examples
Step17: 3. Penetrance
Step18: 4.2 Uncertainty in the disease risk conveyed by pathogenic variation
Step19: Recall
Step20: Let's retrieve all PubMed IDs for these accessions
|
<ASSISTANT_TASK:>
Python Code:
import os
os.system('annotate_variation.pl -buildver hg19 -downdb -webfrom annovar clinvar_20150629 humandb/')
with open('humandb/hg19_clinvar_20150629.txt') as infile:
first_five_lines = [next(infile) for i in range(5)]
print first_five_lines
import pandas as pd
raw_clinvar = pd.read_table('humandb/hg19_clinvar_20150629.txt', header=None)
raw_clinvar.columns = ['Chromosome','Start','Stop','Ref','Alt','ClinSig']
print 'There are %s rows in this version of ClinVar.' %raw_clinvar.shape[0]
distinct_variants = set(zip(raw_clinvar.Chromosome,raw_clinvar.Start,raw_clinvar.Stop,raw_clinvar.Ref,raw_clinvar.Alt))
num_distinct_variants = len(distinct_variants)
print 'There are %s distinct variants in this version of ClinVar.' %num_distinct_variants
raw_clinvar.head(5)
assertion = [x.split(';')[0].split('=')[1] for x in raw_clinvar.ClinSig]
disease = [x.split(';')[1].split('=')[1] for x in raw_clinvar.ClinSig]
accession = [x.split(';')[3].split('=')[1] for x in raw_clinvar.ClinSig]
clinvar = raw_clinvar.drop('ClinSig',1)
clinvar['assertion'], clinvar['disease'], clinvar['accession'] = assertion, disease, accession
clinvar.head(25)
import re
accession = clinvar.accession.map(lambda x: re.split(r'[,|]+',x) if ('|' in x or ',' in x) else [x])
multi_rows = [i for i, e in enumerate(accession) if len(e) > 1]
multi = clinvar.ix[multi_rows,:]
multi.head(5)
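# Illustrative only (made-up accession string): the pattern above splits a multi-assertion
# field on runs of ',' and '|'
re.split(r'[,|]+', 'RCV000000001|RCV000000002,RCV000000003')
# -> ['RCV000000001', 'RCV000000002', 'RCV000000003']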
assertion = clinvar.assertion.map(lambda x: re.split(r'[,|]+',x) if ('|' in x or ',' in x) else [x])
disease = clinvar.disease.map(lambda x: re.split(r'[,|]+',x) if ('|' in x or ',' in x) else [x])
accession = clinvar.accession.map(lambda x: re.split(r'[,|]+',x) if ('|' in x or ',' in x) else [x])
v0 = clinvar.Chromosome.tolist()
v1 = clinvar.Start.tolist()
v2 = clinvar.Stop.tolist()
v3 = clinvar.Ref.tolist()
v4 = clinvar.Alt.tolist()
variants = zip(v0,v1,v2,v3,v4)
flat = []
for i, v in enumerate(variants):
num_assertions = len(accession[i])
v = list(v)
if num_assertions > 1:
flat += [v+[assertion[i][j],disease[i][j],accession[i][j]] for j in range(num_assertions)]
else:
flat += [v+assertion[i]+disease[i]+accession[i]]
flat_clinvar = pd.DataFrame(flat)
flat_clinvar.columns = ['Chromosome','Start','Stop','Ref','Alt','Assertion','Disease','Accession']
flat_clinvar.head(5)
flat_clinvar = flat_clinvar.drop_duplicates()
print "There are %s distinct assertions in this version of Clinvar." %flat_clinvar.shape[0]
flat_clinvar.to_csv(path_or_buf='flat_clinvar.tsv',sep='\t',index=False,
columns=['Chromosome','Start','Stop','Ref','Alt','Assertion',
'Disease','Accession'])
pathogenic_SNPs = flat_clinvar[(flat_clinvar.Start == flat_clinvar.Stop) & (flat_clinvar.Assertion == 'pathogenic') &
(flat_clinvar.Alt.str.len() == 1) & (flat_clinvar.Ref.str.len() == 1)]
print 'There are %s pathogenic SNP assertions in this version of ClinVar.' % pathogenic_SNPs.shape[0]
pathogenic_SNPs.to_csv(path_or_buf='flat_clinvar_pathogenic_SNPs.tsv',sep='\t',index=False,
columns=['Chromosome','Start','Stop','Ref','Alt','Assertion',
'Disease','Accession'])
os.system('annotate_variation.pl -downdb -webfrom annovar -build hg19 esp6500si_ea humandb/')
os.system('annotate_variation.pl -downdb -webfrom annovar -build hg19 esp6500si_aa humandb/')
os.system('annotate_variation.pl -downdb -webfrom annovar -build hg19 esp6500si_all humandb/')
infile_name = 'flat_clinvar_pathogenic_SNPs.tsv'
outfile_name = 'ESP'
os.system('annotate_variation.pl -filter -dbtype esp6500si_all -build hg19 -out %s %s humandb/' % (outfile_name,infile_name))
ESP_data = pd.read_table('ESP.hg19_esp6500si_all_dropped',
header=None)
ESP_data = ESP_data.drop(0,1)
ESP_data.columns = ['ESP_Overall_Frequency','Chromosome','Start','Stop','Ref','Alt',
'Assertion','Disease','Accession']
print "There are %s variant-disease assertions with ESP frequency data available." %ESP_data.shape[0]
ESP_data.head(5)
os.system('annotate_variation.pl -downdb -webfrom annovar -build hg19 exac03 humandb/')
infile_name = 'flat_clinvar_pathogenic_SNPs.tsv'
outfile_name = 'ExAc'
os.system('annotate_variation.pl -filter -build hg19 -dbtype exac03 -out %s %s humandb/' % (outfile_name,infile_name))
ExAc_data = pd.read_table('ExAc.hg19_exac03_dropped',
header=None)
ExAc_data = ExAc_data.drop(0,1)
ExAc_data.columns = ['ExAc_Overall_Frequency','Chromosome','Start','Stop','Ref','Alt',
'Assertion','Disease','Accession']
print "There are %s variant-disease assertions with ExAc frequency data available." %ExAc_data.shape[0]
ExAc_data.head(5)
merged = pd.merge(ESP_data,ExAc_data,how='outer',on=['Chromosome','Start','Stop','Ref','Alt','Disease',
'Assertion','Accession'])
print "There are %s variant-disease assertions with frequency data in either ExAC or ESP." %merged.shape[0]
num_SNPs = merged.shape[0]
print "There are %s SNP variant-disease assertions with frequency data in either ExAc or ESP." %num_SNPs
num_variants = len(set(zip(merged.Chromosome,merged.Start,merged.Stop,merged.Ref,merged.Alt)))
print "There are %s distinct variants in the set of variant-disease assertions with frequency data in either ExAc or ESP." %num_variants
num_diseases = len(set(zip(merged.Disease)))
print "There are %s distinct diseases in the set of variant-disease assertions with frequency data in either ExAc or ESP." %num_diseases
merged.to_csv(path_or_buf='merged_output.csv',sep=',',index=False,
columns=['Chromosome','Start','Stop','Ref','Alt','Assertion',
'Disease','Accession','ExAc_Overall_Frequency','ESP_Overall_Frequency'])
from bs4 import BeautifulSoup
import urllib2
def getPubMedIDs(RCV):
try:
response = urllib2.urlopen("http://www.ncbi.nlm.nih.gov/clinvar/" + RCV)
except urllib2.HTTPError:
print "Page not found"
else:
soup = BeautifulSoup(response.read())
tag = soup.body.find("div", id="clinvar_rec_pubmed_ids1")
if tag:
return tag.text.split(", ")
return []
print "Expecting 5 PubMedIDs for RCV000019428:", getPubMedIDs("RCV000019428")
print "Expecting 2 PubMedIDs for RCV000019429:", getPubMedIDs("RCV000019429")
print "Expecting 0 PubMedIDs for RCV000116253.2:", getPubMedIDs("RCV000116253.2")
import os
os.system('mkdir plots')
a = flat_clinvar.groupby(['Chromosome','Start','Stop','Ref','Alt'])
var_counts = list(a.size().values)
assertions_per_variant = [var_counts.count(i) for i in range(1,20)]
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(figsize = (7,7))
plt.bar(range(1,len(assertions_per_variant)+1),assertions_per_variant,
align='center',color='teal')
plt.xlabel('Assertions per Variant',fontsize=14)
plt.ylabel('Count',fontsize=14)
#plt.title('Pathogenicity Assertions per Variant in ClinVar')
plt.xlim([0,15])
plt.xticks(range(len(assertions_per_variant)),fontsize=12)
plt.savefig('plots/assertions_per_variant.pdf')
assertions_per_variant[0]/(len(a.groups))
len(a.groups)
# parameters above as Python variables
import numpy as np
prev = 1.0/500
het_range = np.linspace((0.001),(0.1),10)
pen = {} # to be computed below
genetic_model = 'AD'
HCM = merged[merged.Disease.str.contains('hypertrophic_cardiomyopathy')]
HCM = HCM.drop_duplicates(subset = ['Chromosome','Start','Stop','Ref','Alt']) # 81 distinct variants
raw_names = zip(HCM.Chromosome,HCM.Start,HCM.Ref,HCM.Alt)
clean_names = [str(chrom)+':'+str(int(pos))+str(ref)+'>'+alt for (chrom,pos,ref,alt) in raw_names]
HCM[['Chromosome','Start','Ref','Alt','ExAc_Overall_Frequency','ESP_Overall_Frequency','Accession','Disease']].head(5)
import numpy as np
max_freqs = [np.nanmax(np.array([x,y])) for (x,y) in zip(HCM.ESP_Overall_Frequency,HCM.ExAc_Overall_Frequency)]
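# Interpretation note (ours): the next line converts each allele frequency q into the frequency of
# genotypes carrying at least one copy, q**2 + 2*q*(1-q), assuming Hardy-Weinberg equilibrium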
max_freqs = [x**2 + 2*x*(1-x) for x in max_freqs]
for i,var in enumerate(clean_names):
these_pens = []
for h in het_range:
f = max_freqs[i]
these_pens += [min(prev*h/f,1)]
pen[var] = these_pens
# now export to R
import csv
outfile = open('sens_analysis','w')
for k,v in pen.items():
outfile.write(k)
for i in v:
outfile.write(','+str(i))
outfile.write('\n')
outfile.close()
HCM_PMIDs = {}
for acc in set(HCM.Accession):
HCM_PMIDs[acc] = getPubMedIDs(acc)
HCM_PMIDs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We open the database and print the keys of the individual tables.
Step2: Task 2
Step3: Next, we examine the attribute composition for two receiver-sender groups as examples.
Step4: For the analysis of the frames we define a few helper functions.
Step5: Now determine the column composition of df_x1_t1_trx_1_4.
Step6: Look at the contents of the "target" column of df_x1_t1_trx_1_4.
Step7: Next we load the frame x3_t2_trx_3_1 and look at its dimensions.
Step8: Followed by an analysis of its column composition and its "target" values.
Step9: Question
Step10: We look at how different color schemes highlight different characteristics of our raw data.
Step11: Task 3
Step12: Check the newly labelled dataframe "/x1/t1/trx_3_1". As results we expect "Empty" (i.e. 0) for measurement 5 at the start of the experiment and "Not Empty" (i.e. 1) for measurement 120 in the middle of the experiment.
Step13: Task 4
Step14: Opening the HDF file with pandas
Step15: Example recognizer
Step16: Closing the HDF store
Step17: Task 5
Step18: Preprocessing
Step19: We see that only the 6 x 2000 measurements for the respective pairs and the 'target' values remain in the resulting frames.
Step20: Task 6
Step21: Starting the online server
|
<ASSISTANT_TASK:>
Python Code:
# imports
import re
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pprint as pp
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')
print(hdf.keys())
df_x1_t1_trx_1_4 = hdf.get('/x1/t1/trx_1_4')
print("Rows:", df_x1_t1_trx_1_4.shape[0])
print("Columns:", df_x1_t1_trx_1_4.shape[1])
# first inspection of columns from df_x1_t1_trx_1_4
df_x1_t1_trx_1_4.head(5)
# Little function to retrieve sender-receiver tuples from df columns
def extract_snd_rcv(df):
regex = r"trx_[1-4]_[1-4]"
# creates a set containing the different pairs
snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)}
return [(x[0],x[-1]) for x in snd_rcv]
# Sums the number of columns for each sender-receiver tuple
def get_column_counts(snd_rcv, df):
col_counts = {}
for snd,rcv in snd_rcv:
col_counts['Columns for pair {} {}:'.format(snd, rcv)] = len([i for i, word in enumerate(list(df.columns)) if word.startswith('trx_{}_{}'.format(snd, rcv))])
return col_counts
# Analyze the column composition of a given measurement.
def analyse_columns(df):
df_snd_rcv = extract_snd_rcv(df)
cc = get_column_counts(df_snd_rcv, df)
for x in cc:
print(x, cc[x])
print("Sum of pair related columns: %i" % sum(cc.values()))
print()
print("Other columns are:")
for att in [col for col in df.columns if 'ifft' not in col and 'ts' not in col]:
print(att)
# Analyze the values of the target column.
def analyze_target(df):
print(df['target'].unique())
print("# Unique values in target: %i" % len(df['target'].unique()))
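# Quick illustration of extract_snd_rcv with made-up column names (not from the measurement data):
dummy = pd.DataFrame(columns=['trx_1_4_ifft_0', 'trx_1_4_ifft_1', 'trx_2_3_ifft_0', 'target'])
print(extract_snd_rcv(dummy))  # -> pairs such as [('1', '4'), ('2', '3')] (set order may vary)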
analyse_columns(df_x1_t1_trx_1_4)
analyze_target(df_x1_t1_trx_1_4)
df_x3_t2_trx_3_1 = hdf.get('/x3/t2/trx_3_1')
print("Rows:", df_x3_t2_trx_3_1.shape[0])
print("Columns:", df_x3_t2_trx_3_1.shape[1])
analyse_columns(df_x3_t2_trx_3_1)
analyze_target(df_x3_t2_trx_3_1)
vals = df_x1_t1_trx_1_4.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values
# one big heatmap
plt.figure(figsize=(14, 12))
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
plt.show()
# compare different heatmaps
plt.figure(1, figsize=(12,10))
# nipy_spectral_r scheme
plt.subplot(221)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
# terrain scheme
plt.subplot(222)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='terrain')
# Vega10 scheme
plt.subplot(223)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Vega10')
# Wistia scheme
plt.subplot(224)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Wistia')
# Adjust the subplot layout, because the logit one may take more space
# than usual, due to y-tick labels like "1 - 10^{-3}"
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,
wspace=0.2)
plt.show()
# Iterating over hdf data and creating interim data presentation stored in data/interim/testmessungen_interim.hdf
# Interim data representation contains aditional binary class (binary_target - encoding 0=empty and 1=not empty)
# and multi class target (multi_target - encoding 0-9 for each possible class)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
interim_path = '../../data/interim/01_testmessungen.hdf'
def binary_mapper(df):
def map_binary(target):
if target.startswith('Empty'):
return 0
else:
return 1
df['binary_target'] = pd.Series(map(map_binary, df['target']))
def multiclass_mapper(df):
le.fit(df['target'])
df['multi_target'] = le.transform(df['target'])
for key in hdf.keys():
df = hdf.get(key)
binary_mapper(df)
multiclass_mapper(df)
df.to_hdf(interim_path, key)
hdf.close()
hdf = pd.HDFStore('../../data/interim/01_testmessungen.hdf')
df_x1_t1_trx_3_1 = hdf.get('/x1/t1/trx_3_1')
print("binary_target for measurement 5:", df_x1_t1_trx_3_1['binary_target'][5])
print("binary_target for measurement 120:", df_x1_t1_trx_3_1['binary_target'][120])
hdf.close()
from evaluation import *
from filters import *
from utility import *
from features import *
# raw data to achieve target values
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')
# generate datasets
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x1/t'+t+'/trx_3_1')
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
#df_tst_cl,_ = distortion_filter(df_tst_cl)
groups = get_trx_groups(df_tst)
df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature
df_all = cf_std_window(df_all, window=4, label='target')
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
print('Columns in Dataset:',t)
print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
# holdout validation
print(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1))
hdf.close()
# Load raw data
hdf = pd.HDFStore("../../data/raw/TestMessungen_NEU.hdf")
# Check available keys in hdf store
print(hdf.keys())
hdf_path = "../../data/interim/02_tesmessungen.hdf"
# Mapping groundtruth to 0-empty and 1-not empty and prepare for further preprocessing by
# removing additional timestamp columns and index column
# Storing cleaned dataframes (no index, removed _ts columns, mapped multi classes to 0-empty, 1-not empty)
# to new hdfstore to `data/interim/02_testmessungen.hdf`
dfs = []
for key in hdf.keys():
df = hdf.get(key)
#df['target'] = df['target'].map(lambda x: 0 if x.startswith("Empty") else 1)
# drop all time stamp columns who endswith _ts
cols = [c for c in df.columns if not c.lower().endswith("ts")]
df = df[cols]
df = df.drop('Timestamp', axis=1)
df = df.drop('index', axis=1)
df.to_hdf(hdf_path, key)
hdf.close()
hdf = pd.HDFStore(hdf_path)
df = hdf.get("/x1/t1/trx_1_2")
df.head()
# Step 1: repeat the previous task 4 to get a comparable baseline result with the _ts and index columns now dropped, and improve from there
# generate datasets
from evaluation import *
from filters import *
from utility import *
from features import *
def prepare_features(c, p):
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
#df_tst_cl,_ = distortion_filter(df_tst_cl)
df_tst,_ = distortion_filter(df_tst)
groups = get_trx_groups(df_tst)
df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)
df_all = pd.concat( [df_std, df_mean, df_p2p, df_kurt], axis=1 ) # added p2p feature
df_all = cf_std_window(df_all, window=4, label='target')
df_all = cf_diff(df_all, label='target')
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
# print('Columns in Dataset:',t)
# print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
return tst_ds
tst_ds = prepare_features(c='1', p='3_1')
# Evaluating different supervised learning methods provided in eval.py
# added a NN evaluator but there are some problems regarding usage and hidden layers
# For the moment only kurtosis and cf_diff are added to the dataset as well as the distortion filter
# Feature selection is needed right now!
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
print(elem, ":", hold_out_val(tst_ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1))
# extra column features generated and reduced with PCA
from evaluation import *
from filters import *
from utility import *
from features import *
from new_features import *
def prepare_features_PCA_cf(c, p):
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
df_tst,_ = distortion_filter(df_tst)
groups = get_trx_groups(df_tst)
df_cf_mean = reduce_dim_PCA(cf_mean_window(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_cf_std = reduce_dim_PCA(cf_std_window(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
df_cf_ptp = reduce_dim_PCA(cf_ptp(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_cf_kurt = reduce_dim_PCA(cf_kurt(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single)
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single, label='target')
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)
df_skew = rf_grouped(df_tst, groups=groups, fn=rf_skew_single)
df_all = pd.concat( [df_mean, df_p2p, df_kurt, df_skew], axis=1 )
df_all = cf_std_window(df_all, window=4, label='target')
df_all = cf_diff(df_all, label='target')
df_all = reduce_dim_PCA(df_all.fillna(0), n_comps=10, label='target')
df_all = pd.concat( [df_all, df_cf_mean, df_cf_ptp], axis=1)
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
#print('Columns in Dataset:',t)
#print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
return tst_ds
tst_ds_PCA = prepare_features_PCA_cf(c='1', p='3_1')
# Evaluating different supervised learning methods provided in eval.py
# We can see that the column features have increased F1 score of the classifiers
# Best score for Naive Bayes
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
print(elem, ":", hold_out_val(tst_ds_PCA, target='target', include_self=False, cl=elem, verbose=False, random_state=1))
def evaluate_models(ds):
res = {}
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
res[elem] = hold_out_val(ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1)
return res
def evaluate_performance(c, p):
# include a prepare data function?
ds = prepare_features(c, p)
return evaluate_models(ds)
def evaluate_performance_PCA_cf(c, p):
# include a prepare data function?
ds = prepare_features_PCA_cf(c, p)
return evaluate_models(ds)
config = ['1','2','3','4']
pairing = ['1_2','1_4','2_3','3_1','3_4','4_2']
tst_ds = []
res_all = []
for c in config:
print("Testing for configuration", c)
for p in pairing:
print("Analyse performance for pairing", p)
res = evaluate_performance(c, p)
res_all.append(res)
# TODO draw graph
for model in res:
print(model, res[model])
all_keys = set().union(*(d.keys() for d in res_all))
print(all_keys)
print("results for prepare_features() function")
for key in all_keys:
print("mean F1 for {}: {}".format(key, sum(item[key][0] for item in res_all)/len(res_all)))
config = ['1','2','3','4']
pairing = ['1_2','1_4','2_3','3_1','3_4','4_2']
tst_ds = []
res_all_PCA = []
for c in config:
print("Testing for configuration", c)
for p in pairing:
print("Analyse performance for pairing", p)
res = evaluate_performance_PCA_cf(c, p)
res_all_PCA.append(res)
# TODO draw graph
for model in res:
print(model, res[model])
all_keys = set().union(*(d.keys() for d in res_all_PCA))
print(all_keys)
print("results for prepare_features_PCA_cf() function")
for key in all_keys:
print("mean F1 for {}: {}".format(key, sum(item[key][0] for item in res_all_PCA)/len(res_all_PCA)))
from sklearn.externals import joblib
joblib.dump(res['dt'], '../../models/solution_ueb02/model.plk')
# Navigate to notebooks/solution_ueb02 and start the server
# with 'python -m online'
# Now requests to the REST API are simulated line by line; every valid JSON request is answered
# with a JSON prediction response
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import necessary libraries.
Step2: Set environment variables.
Step3: Check data exists
Step4: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Step7: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Step16: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Step17: Train locally
Step18: Training on Cloud AI Platform
Step19: The training job should complete within 10 to 15 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but will need a trained model to complete our next lab.
Step20: Build and push container image to repo
Step21: Note
Step22: Kindly ignore the incompatibility errors.
Step23: Train on Cloud AI Platform
Step24: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was
Step25: Repeat training
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip3 install cloudml-hypertune
import os
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
# TODO: Change these to try this notebook out
PROJECT = "your-project-name-here" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
os.environ["PYTHONVERSION"] = "3.7"
%%bash
gcloud config set project ${PROJECT}
gcloud config set compute/region ${REGION}
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
%%bash
mkdir -p babyweight/trainer
touch babyweight/trainer/__init__.py
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes for DNN -- provide space-separated layers",
nargs="+",
type=int,
default=[128, 32, 4]
)
parser.add_argument(
"--nembeds",
help="Embedding size of a cross of n key real-valued parameters",
type=int,
default=3
)
parser.add_argument(
"--num_epochs",
help="Number of epochs to train the model.",
type=int,
default=10
)
parser.add_argument(
"--train_examples",
help=Number of examples (in thousands) to run the training job over.
If this is more than actual # of examples available, it cycles through
them. So specifying 1000 here when you have only 100k examples makes
this 10 epochs.,
type=int,
default=5000
)
parser.add_argument(
"--eval_steps",
help=Positive number of steps for which to evaluate model. Default
to None, which means to evaluate until input_fn raises an end-of-input
exception,
type=int,
default=None
)
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Modify some arguments
arguments["train_examples"] *= 1000
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(arguments)
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
import hypertune
# Determine CSV, label, and key columns
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def features_and_labels(row_data):
Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode='eval'):
Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: 'train' | 'eval' to determine if training or evaluating.
Returns:
`Dataset` object.
print("mode = {}".format(mode))
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
def create_input_layers():
Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]
}
inputs = {**wide_inputs, **deep_inputs}
return inputs
def categorical_fc(name, values):
Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Categorical and indicator column of categorical feature.
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# Cross all the wide columns, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# Linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
def rmse(y_true, y_pred):
Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
# Create input layers
inputs = create_input_layers()
# Create feature columns for both wide and deep
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
'train')
evalds = load_dataset(
args["eval_data_path"], 1000, 'eval')
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
hp_metric = history.history['val_rmse'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=args['num_epochs'])
print("Exported trained model to {}".format(EXPORT_PATH))
%%bash
OUTDIR=babyweight_trained
rm -rf ${OUTDIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python3 -m trainer.task \
--job-dir=./tmp \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--batch_size=10 \
--num_epochs=1 \
--train_examples=1 \
--eval_steps=1
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
gcloud ai-platform jobs submit training ${JOBID} \
--region=${REGION} \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=${OUTDIR} \
--staging-bucket=gs://${BUCKET} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--runtime-version=${TFVERSION} \
--python-version=${PYTHONVERSION} \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=10000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
%%writefile babyweight/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tensorflow==2.1 && \
pip3 install --upgrade --quiet cloudml-hypertune
ENV PYTHONPATH ${PYTHONPATH}:/babyweight
ENTRYPOINT ["python3", "babyweight/trainer/task.py"]
%%writefile babyweight/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t ${IMAGE_URI} ./
echo "Pushing $IMAGE_URI"
docker push ${IMAGE_URI}
%%bash
cd babyweight
bash push_docker.sh
%%bash
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Running $IMAGE_URI"
docker run ${IMAGE_URI} \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=gs://${BUCKET}/babyweight/trained_model \
--batch_size=10 \
--num_epochs=10 \
--train_examples=1 \
--eval_steps=1
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
# gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBID} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--config=hyperparam.yaml \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=5000 \
--eval_steps=100
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The fundamental building block of Python code is an expression. Cells can contain multiple lines with multiple expressions. When you run a cell, the lines of code are executed in the order in which they appear. Every print expression prints a line. Run the next cell and notice the order of the output.
Step2: Writing Jupyter notebooks
Step3: 2. Creating our Dataframe <a id='dataframe'></a>
Step4: 2.1 Adding features from our data
Step5: Then we will take the average across those rows.
Step6: And finally, we will append those values to our dataframe as a column called clo.
Step7: We then repeat this process for all of the other columns that we want to create.
Step8: 3. Exploring the Data <a id='exploring data'></a>
Step9: We can calculate all of the above statistics (except mode) for the entire table with one line.
Step10: 3.2 Data Visualization
Step11: Next, we'll compare the distributions of the voiced and voiceless voice-onset times.
Step12: The distributions of the three voiceless stops are below.
Step13: The distributions of the three voiced stops are below.
Step14: Below, we see the native languages represented in the data.
Step15: Below, we have the distribution of height.
Step16: 4. Relationships between closures <a id='closures'></a>
Step17: 4.1 Using a line where x = y
Step18: 4.1.2 Voiced
Step19: 4.2 Using box-and-whisker plots
Step20: With the above plot, it can be difficult to compare values of the box-and-whisker plots because the outliers require us to zoom out. Below, we will zoom in to the boxes.
Step21: We then recreate those graphs, but using our voiced closures.
Step22: Do our box-whisker plots corroborate the scatter plot data? Are we able to come to the same conclusions as before?
Step23: Compare the distributions. Can you make any meaningful observations?
Step24: In the scatter plot above, each dot represents the average closure and height of an individual.
Step25: 5.3 Visualizing Multiple Features
Step26: What conclusions can you make from the graph above, if any? Is it easy to analyze this plot? Why?
Step27: The data from the previous semester does not have all of the same features (columns) that this semester's data has. So in order to make easy comparisons, we will just select out the columns that are in both dataframes.
Step28: Let's look at the difference between the major statistics of the previous data and this semester's.
Step29: It's a little hard to tell how large those differences are, so let's look at the relative difference to this semester's data.
Step30: Now, let's add some color to help spot the largest relative changes. Run the next two cells.
Step31: Now that we can see where the largest relative differences between this semester's and the prior semester's data are, let's take a look at them with further visualization. We'll start with vot because the column has quite a few rows with dark colors.
Step32: Why is this? The graph below should offer some insight.
Step33: There are some large differences for kvot, so let's take a look at those distributions.
|
<ASSISTANT_TASK:>
Python Code:
print("Hello, World!")
print("First this line is printed,")
print("and then this one.")
# imports -- just run this cell
import scipy
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.stats import mode
from ipywidgets import interact
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import colors
from sklearn.linear_model import LinearRegression
import warnings
warnings.filterwarnings('ignore')
sns.set_style('darkgrid')
%matplotlib inline
file_name = 'data/fall17.csv'
data = pd.read_csv(file_name)
data.head()
subset = data[['pclo', 'tclo', 'kclo', 'bclo', 'dclo', 'gclo']]
subset.head()
clo_avg = subset.mean(axis=1)
clo_avg
data['clo'] = clo_avg
data.head()
data['vot'] = data[['pvot', 'tvot', 'kvot', 'bvot', 'dvot', 'gvot']].mean(axis=1)
data['vclo'] = data[['bclo', 'dclo', 'gclo']].mean(axis=1)
data['vvot'] = data[['bvot', 'dvot', 'gvot']].mean(axis=1)
data['vlclo'] = data[['pclo', 'tclo', 'kclo']].mean(axis=1)
data['vlvot'] = data[['pvot', 'tvot', 'kvot']].mean(axis=1)
data.head()
closure_mode = mode(data['clo'])[0][0]
print('Mode: ', closure_mode)
data['clo'].describe()
data.describe()
sns.distplot(data['vot'], kde_kws={"label": "vot"})
sns.distplot(data['vvot'], kde_kws={"label": "voiced vot"})
sns.distplot(data['vlvot'], kde_kws={"label": "voiceless vot"})
plt.xlabel('ms')
sns.distplot(data['pvot'], kde_kws={"label": "pvot"})
sns.distplot(data['tvot'], kde_kws={"label": "tvot"})
sns.distplot(data['kvot'], kde_kws={"label": "kvot"})
plt.xlabel('ms')
plt.ylabel('proportion per ms')
sns.distplot(data['bvot'], kde_kws={"label": "bvot"})
sns.distplot(data['dvot'], kde_kws={"label": "dvot"})
sns.distplot(data['gvot'], kde_kws={"label": "gvot"})
plt.xlabel('ms')
plt.ylabel('proportion per ms')
sns.countplot(y="language", data=data)
sns.distplot(data['height'])
plt.xlabel('height (cm)')
def plot_with_equality_line(xs, ys, best_fit=False):
fig, ax = plt.subplots()
sns.regplot(xs, ys, fit_reg=best_fit, ax=ax)
lims = [np.min([ax.get_xlim(), ax.get_ylim()]), np.max([ax.get_xlim(), ax.get_ylim()])]
ax.plot(lims, lims, '--', alpha=0.75, zorder=0, c='black')
ax.set_xlim(lims)
ax.set_ylim(lims)
print('Points above line: ' + str(sum(xs < ys)))
print('Points below line: ' + str(sum(xs > ys)))
print('Points on line: ' + str(sum(xs == ys)))
plot_with_equality_line(data['tclo'], data['pclo'])
plt.xlabel('tclo (ms)')
plt.ylabel('pclo (ms)')
plot_with_equality_line(data['kclo'], data['pclo'])
plt.xlabel('kclo (ms)')
plt.ylabel('pclo (ms)')
plot_with_equality_line(data['kclo'], data['tclo'])
plt.xlabel('kclo (ms)')
plt.ylabel('tclo (ms)')
plot_with_equality_line(data['dclo'], data['bclo'])
plt.xlabel('dclo (ms)')
plt.ylabel('bclo (ms)')
plot_with_equality_line(data['gclo'], data['bclo'])
plt.xlabel('gclo (ms)')
plt.ylabel('bclo (ms)')
plot_with_equality_line(data['gclo'], data['dclo'])
plt.ylabel('dclo (ms)')
plt.xlabel('gclo (ms)')
sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette="Set3")
plt.ylabel('duration (ms)')
plt.xlabel('Voiceless Closures')
sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette="Set3")
plt.ylabel('duration (ms)')
plt.xlabel('Voiceless Closures')
plt.ylim(0, 212)
sns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette="Set2")
plt.ylabel('duration (ms)')
plt.xlabel('Voiced Closures')
sns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette="Set2")
plt.ylabel('duration (ms)')
plt.xlabel('Voiced Closures')
plt.ylim(0, 212)
sns.violinplot(x="vot", y="language", data=data)
plt.xlabel('vot (ms)')
trimmed = data[data['clo'] < 250]
sns.lmplot('height', 'clo', data=trimmed, fit_reg=True)
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
sns.regplot('height', 'vclo', data=trimmed, fit_reg=True)
sns.regplot('height', 'vlclo', data=trimmed, fit_reg=True)
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
sns.lmplot('height', 'clo',data=trimmed, fit_reg=False, hue="language")
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
old_file_name = 'data/fall15.csv'
fa15 = pd.read_csv(old_file_name)
fa15.head()
current_subset = data[fa15.columns]
current_subset.head()
difference = fa15.describe() - current_subset.describe()
difference
relative_difference = difference / current_subset.describe()
relative_difference
scale = pd.DataFrame({'scale': np.arange(-3,5,1)*.2}).set_index(relative_difference.index)
def background_gradient(s, df, m=None, M=None, cmap='RdBu_r', low=0, high=0):
# code modified from: https://stackoverflow.com/questions/38931566/pandas-style-background-gradient-both-rows-and-colums
if m is None:
m = df.min().min()
if M is None:
M = df.max().max()
rng = M - m
norm = colors.Normalize(m - (rng * low), M + (rng * high))
normed = norm(s.values)
c = [colors.rgb2hex(x) for x in ListedColormap(sns.color_palette(cmap,8))(normed)]
return ['background-color: %s' % color for color in c]
relative_difference.merge(scale, left_index=True, right_index=True).style.apply(background_gradient,
df=relative_difference, m=-1, M=1)
sns.distplot(data['vot'], kde_kws={"label": "Fall 2017 vot"})
sns.distplot(fa15['vot'], kde_kws={"label": "Fall 2015 vot"})
plt.xlabel('ms')
sns.distplot(data['vlvot'], kde_kws={"label": "Fall 2017 vlvot"}) # notice the call to voiced vot
sns.distplot(fa15['vot'], kde_kws={"label": "Fall 2015 vot"})
plt.xlabel('ms')
sns.distplot(fa15['kvot'], kde_kws={"label": "Fall 2015 kvot"})
sns.distplot(data['kvot'], kde_kws={"label": "Fall 2017 kvot"})
plt.xlabel('kvot (ms)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The output is as follows: (46508, 13)
Step2: Note a problem here. When building decision trees in Python, every feature should be numeric (integer or real valued). But we can see at a glance that columns such as grade, sub_grade and home_ownership take categorical values, so we must first map these categories to numbers before moving on.
Step3: Next we need to split the data into two parts, called the training set and the test set.
Step4: At this point all the data preparation is done. We now call on the scikit-learn package, which already includes a decision tree model; only three statements are needed, which is very convenient.
Step5: Done, the decision tree you asked for has been generated.
Step6: Testing
Step7: Accuracy only went from 61% to 63%, which is not much of an improvement. Try a depth of 5/6/7/8/9/10? Or simply try every depth from 1 to 10.
Step8: Why does accuracy drop as the tree depth increases? I don't know either.
Step9: The KNN classifier only reaches about 55% accuracy; can we find a better value of K?
Step10: The earlier version reached 63% accuracy and the current one reaches 64%; after several days of tinkering, this will have to do for now.
|
<ASSISTANT_TASK:>
Python Code:
# To handle tabular data efficiently in Python, we use the excellent Pandas data framework.
import pandas as pd
# Read the full contents of loans.csv and store them in a variable called df.
df = pd.read_csv('loans.csv')
# Look at the last few rows of df to confirm the data was read correctly. The table has many
# columns and won't fit on screen, so scroll right to check that the rightmost columns loaded properly.
df.tail()
# Count the total number of rows to make sure every record was read in.
df.shape
X = df.drop('safe_loans', axis=1)
y = df.safe_loans
# Look at the shape of the feature matrix X: (46508, 12) — every row, and every column except the last, as expected.
# Then the "target" column Y: (46508,) — no number after the comma means it is a single column.
print(X.shape,y.shape)
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
d = defaultdict(LabelEncoder)
X_trans = X.apply(lambda x: d[x.name].fit_transform(x))
X_trans.tail()
from sklearn.model_selection import train_test_split # note: the original tutorial used sklearn.cross_validation, which was deprecated in 0.18 — its classes and functions moved to model_selection (with a different CV iterator interface) and the old module was removed in 0.20
X_train, X_test, y_train, y_test = train_test_split(X_trans, y, random_state=1)
# Check the shapes of the training and test sets:
print(X_train.shape,X_test.shape)
from sklearn import tree
clf = tree.DecisionTreeClassifier(max_depth=3)
clf = clf.fit(X_train, y_train)
with open("safe-loans.dot", 'w') as f:
f = tree.export_graphviz(clf,
out_file=f,
max_depth = 3,
impurity = True,
feature_names = list(X_train),
class_names = ['not safe', 'safe'],
rounded = True,
filled= True )
from subprocess import check_call
check_call(['dot','-Tpng','safe-loans.dot','-o','safe-loans.png'])
from IPython.display import Image as PImage
from PIL import Image, ImageDraw, ImageFont
img = Image.open("safe-loans.png")
draw = ImageDraw.Draw(img)
img.save('output.png')
PImage("output.png")
test_rec = X_test.iloc[1,:]
print("The model's risk assessment for this loan is:", clf.predict([test_rec]), "Recall that 1 means the loan is safe. How does that compare with reality? Pull the matching label from the test set:", y_test.iloc[1])
# Now let's check how accurately the trained decision tree classifies loan risk.
from sklearn.metrics import accuracy_score
accuracy_score(y_test, clf.predict(X_test))
# Only about 61% accuracy. Let's change max_depth to 4 and see whether a deeper tree helps.
from sklearn import tree
clf2 = tree.DecisionTreeClassifier(max_depth=4)
clf2 = clf2.fit(X_train, y_train)
with open("safe-loans.dot", 'w') as f:
f = tree.export_graphviz(clf2,
out_file=f,
max_depth = 4,
impurity = True,
feature_names = list(X_train),
class_names = ['not safe', 'safe'],
rounded = True,
filled= True )
from subprocess import check_call
check_call(['dot','-Tpng','safe-loans.dot','-o','safe-loans.png'])
from IPython.display import Image as PImage
from PIL import Image, ImageDraw, ImageFont
img = Image.open("safe-loans.png")
draw = ImageDraw.Draw(img)
img.save('output.png')
PImage("output.png")
# Now let's check how accurate the max_depth=4 decision tree model is at classifying loan risk.
from sklearn.metrics import accuracy_score
accuracy_score(y_test, clf2.predict(X_test))
from sklearn import tree
from sklearn.metrics import accuracy_score
k_range = range(1,20)
test_accuracy = []
for i in k_range:
    test_accuracy.append(accuracy_score(y_test, tree.DecisionTreeClassifier(max_depth=i).fit(X_train, y_train).predict(X_test)))
# Then plot the results to see how accuracy changes with depth
import matplotlib.pyplot as plt
plt.plot(k_range, test_accuracy)
plt.xlabel("Decision tree depth")
plt.ylabel("Accuracy")
plt.show()
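# Illustrative follow-up (not in the original): report the depth with the highest test accuracy.
best_acc, best_depth = max(zip(test_accuracy, k_range))
print('Best depth:', best_depth, 'with accuracy:', best_acc)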
from sklearn.neighbors import KNeighborsClassifier
# K=5
knn5 = KNeighborsClassifier(n_neighbors=5)
knn5.fit(X_train, y_train)
y_pred = knn5.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
k_range = range(1, 26)
test_accuracy = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
test_accuracy.append(accuracy_score(y_test, y_pred))
# Then plot the results to see how accuracy changes with K
import matplotlib.pyplot as plt
plt.plot(k_range, test_accuracy)
plt.xlabel("Value of K for KNN")
plt.ylabel("Testing Accuracy")
plt.show()
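# Illustrative follow-up (not in the original): report the best-performing K from the sweep above.
best_acc, best_k = max(zip(test_accuracy, k_range))
print('Best K:', best_k, 'with accuracy:', best_acc)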
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X_trans, y, random_state=1)
y_train=0.5*(y_train+1) # rescale y to the range 0 to 1
y_test=0.5*(y_test+1) # rescale y to the range 0 to 1
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
model = Sequential() # Build the model with the quick-start Sequential API: a linear stack of layers, one straight path from input to output.
model.add(Dense(input_dim = 12, output_dim = 96)) # input layer -> hidden layer
model.add(Activation('tanh')) # tanh activation
model.add(Dense(input_dim = 96, output_dim = 48)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dropout(0.1))
model.add(Dense(input_dim = 48, output_dim = 48)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dense(input_dim = 48, output_dim = 36)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dropout(0.15))
model.add(Dense(input_dim = 36, output_dim = 36)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dense(input_dim = 36, output_dim = 36)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dense(input_dim = 36, output_dim = 12)) # hidden layer -> hidden layer
model.add(Activation('relu')) # ReLU activation
model.add(Dense(input_dim = 12, output_dim = 1)) # hidden layer -> output layer
model.add(Activation('sigmoid')) # sigmoid activation for the binary output
# Compile the model: binary_crossentropy loss (previously mean_squared_error), optimized with Adam
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train.values, y_train.values, epochs=170, batch_size = 2000, verbose=False) # train the model for 170 epochs
r = pd.DataFrame(model.predict_classes(x_test.values))
'''
r = pd.DataFrame(model.predict(x_test.values))
rr=r.values
tr=rr.flatten()
for i in range(tr.shape[0]):
if tr[i]>0.5:
tr[i]=1
else:
tr[i]=0
'''
from sklearn.metrics import accuracy_score
print('Model accuracy:',accuracy_score(y_test,r))
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
# Plot the model architecture and save it as an image
SVG(model_to_dot(model).create(prog='dot', format='svg'))
from keras.utils import plot_model
plot_model(model, to_file='model.png')
y_train.tail()
y_test.tail()
model.evaluate(x=x_test.values, y=y_test.values, batch_size=200, verbose=1, sample_weight=None)
model.metrics_names
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then extract the data from the text files located in the tests folder. They will be stored in RVDataSet objects, which are defined in the dataset module.
Step2: We can visualize the radial velocities by running the function plot() of a given dataset object. For instance
Step3: Now that we have the data, how do we estimate the orbital parameters of the system? We use the methods and functions inside the estimate module. But first, we need to provide an initial guess for the orbital parameters. They are
Step4: Now we need to instantiate a FullOrbit object with the datasets and our guess, as well as the parametrization option we want to use. Then, we plot it.
Step5: We estimate the orbital parameters of the system using the Nelder-Mead optimization algorithm implemented in the lmfit package. This will compute the best solution or, in other words, the one that minimizes the residuals of the fit.
Step6: Now let's plot the solution we obtained.
Step7: If the result looks good, that is great
Step8: With that done, we plot the walkers to see how the simulation went.
Step9: Let's cut the beginning of the simulation (the first 500 steps) because they correspond to the burn-in phase.
Step10: Now we use a corner plot to analyze the posterior distributions of the parameters, as well as the correlations between them.
Step11: And that should be pretty much it. Finally, we compute the orbital parameters in a human-readable fashion.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import astropy.units as u
from radial import estimate, dataset
%matplotlib inline
harps = dataset.RVDataSet(file='../../tests/HIP67620_HARPS.dat', # File name
t_offset=-2.45E6, # Time offset (units of days)
rv_offset='subtract_mean', # RV offset
instrument_name='HARPS',
target_name='HIP 67620',
skiprows=1, # Number of rows to skip in the data file
t_col=5, # Column corresponding to time in the data file
rv_col=6, # Column corresponding to RVs
rv_unc_col=7 # Column corresponding to RV ucnertainties
)
aat = dataset.RVDataSet(file='../../tests/HIP67620_AAT.dat', t_offset=-2.45E6, rv_offset='subtract_mean',
instrument_name='AATPS', target_name='HIP 67620', delimiter=',')
w16 = dataset.RVDataSet(file='../../tests/HIP67620_WF16.dat', t_offset=-5E4, rv_offset='subtract_mean',
instrument_name='W16', target_name='HIP 67620', t_col=1,
rv_col=3, rv_unc_col=4)
w16.plot()
# guess is a dictionary, which is a special type of "list" in python
# Instead of being indexed by a number, the items in a dictionary
# are indexed by a key (which is a string)
guess = {'k': 6000,
'period': 4000,
't0': 5000,
'omega': 180 * np.pi / 180,
'ecc': 0.3,
'gamma_0': 0,
'gamma_1': 0,
'gamma_2': 0}
estim = estimate.FullOrbit(datasets=[w16],
guess=guess,
parametrization='mc10')
plot = estim.plot_rvs(plot_guess=True, fold=False, legend_loc=2)
plt.show()
result = estim.lmfit_orbit(update_guess=True)
pylab.rcParams['font.size'] = 12
fig, gs = estim.plot_rvs(plot_guess=True, fold=False, legend_loc=4)
estim.emcee_orbit(nwalkers=12, nsteps=1000, nthreads=4)
estim.plot_emcee_sampler()
estim.make_chains(500)
fig = estim.plot_corner()
plt.show()
estim.print_emcee_result(main_star_mass=0.954, # in M_sol units
mass_sigma=0.006)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Twitter app and find your consumer token and secret
Step2: Authenticate with the Twitter API
Step3: Collecting tweets from the Streaming API
Step4: Step 2: creating a stream
Step5: Step 3: starting the stream
Step6: Saving the stream to a file
|
<ASSISTANT_TASK:>
Python Code:
# this will install tweepy on your machine
!pip install tweepy
consumer_key = 'xxx'
consumer_secret = 'xxx'
access_token = 'xxx'
access_token_secret = 'xxx'
import tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# create the api object that we will use to interact with Twitter
api = tweepy.API(auth)
# example of:
tweet = api.update_status('Hello Twitter')
# see all the information contained in a tweet:
print(tweet)
#override tweepy.StreamListener to make it print tweet content when new data arrives
class MyStreamListener(tweepy.StreamListener):
def on_status(self, status):
print(status.text)
myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth = api.auth, listener=myStreamListener)
myStream.filter(track=['new york'])
myStream.disconnect()
myStream.filter(track=['realdonaldtrump,trump'], languages=['en'])
myStream.disconnect()
# streaming tweets from a given location
# we need to provide a comma-separated list of longitude,latitude pairs specifying a set of bounding boxes
# for example for New York
myStream.filter(locations=[-74,40,-73,41])
myStream.disconnect()
#override tweepy.StreamListener to make it save data to a file
class StreamSaver(tweepy.StreamListener):
def __init__(self, filename, max_num_tweets=2000, api=None):
self.filename = filename
self.num_tweets = 0
self.max_num_tweets = max_num_tweets
tweepy.StreamListener.__init__(self, api=api)
def on_data(self, data):
#print json directly to file
with open(self.filename,'a') as tf:
tf.write(data)
self.num_tweets += 1
print(self.num_tweets)
if self.num_tweets >= self.max_num_tweets:
return True
def on_error(self, status):
print(status)
# create the new StreamListener and stream object that will save collected tweets to a file
saveStream = StreamSaver(filename='testTweets.txt')
mySaveStream = tweepy.Stream(auth = api.auth, listener=saveStream)
mySaveStream.filter(track=['realdonaldtrump,trump'], languages=['en'])
mySaveStream.disconnect()
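# A minimal sketch (not part of the original tutorial) of reading the collected
# tweets back: each non-blank line of the file is one JSON-encoded tweet.
import json
with open('testTweets.txt') as f:
    saved_tweets = [json.loads(line) for line in f if line.strip()]
print(len(saved_tweets), 'tweets collected')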
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return np.array(x) / 255.0  # pixel values are 0-255, so this rescales them to [0, 1]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn import preprocessing
all_values = [0,1,2,3,4,5,6,7,8,9]
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
lb = preprocessing.LabelBinarizer()
lb.fit(all_values)
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
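# Illustrative sanity check (assumes integer labels 0-9, as in CIFAR-10): each
# encoded row should contain a single 1 at the index of its label.
print(one_hot_encode([0, 3, 9]))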
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], "x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, [None, n_classes], "y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name = "keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_depth = x_tensor.get_shape().as_list()[3]
weight = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs)))
bias = tf.Variable(tf.zeros(conv_num_outputs))
x = tf.nn.conv2d(x_tensor, weight, strides=[1, *conv_strides, 1], padding='SAME')
x = tf.nn.bias_add(x, bias)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
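# Illustrative shape check (a sketch, using the 'SAME' padding implemented above):
# with conv strides (1, 1) and pool ksize/strides (2, 2), a 32x32 input is halved
# to 16x16 and the depth becomes conv_num_outputs.
x_demo = tf.placeholder(tf.float32, [None, 32, 32, 3])
print(conv2d_maxpool(x_demo, 16, (3, 3), (1, 1), (2, 2), (2, 2)).get_shape())  # (?, 16, 16, 16)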
def x_shape(x_tensor):
return x_tensor.get_shape().as_list()
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.reshape(x_tensor, [-1, np.prod(x_shape(x_tensor)[1:])])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
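# Illustrative shape check: a (?, 8, 8, 32) tensor flattens to (?, 2048).
print(flatten(tf.placeholder(tf.float32, [None, 8, 8, 32])).get_shape())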
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_length = x_shape(x_tensor)[1]
weights = tf.Variable(tf.truncated_normal([x_length, num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
x = tf.add(tf.matmul(x_tensor, weights), bias)
x = tf.nn.relu(x)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x_length = x_shape(x_tensor)[1]
weights = tf.Variable(tf.truncated_normal([x_length, num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
conv_num_outputs_1 = 16
conv_num_outputs_2 = 32
x = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, conv_num_outputs_2, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_num_outputs = 784
x = fully_conn(x, fc_num_outputs)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x = output(x, 10)
# TODO: return output
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer,
feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost,
feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_acc = session.run(accuracy,
feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.70
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
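# Illustrative spot check (not in the original notebook): each one-hot row should
# contain a single 1.0 at the index of its class.
print(train_labels[0], '-> class', np.argmax(train_labels[0]))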
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Hidden layer
num_hidden_layer_features = 1024
hidden_weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_layer_features]))
hidden_biases = tf.Variable(tf.zeros([num_hidden_layer_features]))
hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)
# Output layer
weights = tf.Variable(
tf.truncated_normal([num_hidden_layer_features, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(hidden_layer, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_relu = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
valid_output = tf.matmul(valid_relu, weights) + biases
valid_prediction = tf.nn.softmax(valid_output)
test_relu = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights) + hidden_biases)
test_output = tf.matmul(test_relu, weights) + biases
test_prediction = tf.nn.softmax(test_output)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h4>There are 52 complaints that have been mislabelled as unsubstantiated.</h4>
Step2: This dataset contains unsubstantiated complaints, which we don't need. There are three columns that indicate substantiation. A DHS person explained that if any one of them has the word 'substantiated,' then the complaint was substantiated.
Step3: <h3>Now we prepare the five-year, detailed data</h3>
Step4: Drop Adult Foster Homes and select columns.
Step5: No longer need the facility_type field.
Step6: There are thousands of complaints that appear in both datasets. If a complaint is a duplicate, we want to keep the one that is in the five-year set, because that one has richer data. To do this, we will add a 'source' column to each dataframe, value '1' for the five-year data and '2' for the ten-year data. We will then sort based on that column, then de-duplicate on the abuse_number field, telling pandas to keep the first instance of the duplicate that it finds.
Step7: Set abuse_numbers to uppercase (three abuse numbers in ten-year data have lowercase)
Step8: Add a 'year' column based on incident date.
Step9: <h3>Clean the abuse_type column</h3>
Step10: <h2>Join with scraped complaints</h2>
Step11: Set all abuse numbers to upper case.
Step12: Add a column that tells us if the complaint has an equivalent online, based on the present of the online name.
Step13: <h3>Join to a lookup table for the code number</h3>
Step14: <h1>Join with facilities</h1>
Step15: Select the columns we need and drop the one duplicate in here.
Step16: Churchill Estates Residential Care has blank facility_type and capacity fields. The facility is an RCF and has 108 capacity. Info obtained from DHS PIO.
Step17: <h3>Left join facilities to complaints.</h3>
Step18: The analysis is only of complaints in 2005 or later.
Step19: merged_comp_fac has all the complaints we need for the complaints analysis.
Step20: Next, left join the facilities to the pivot table.
Step21: <h2>Add our own outcome code</h2>
Step22: <h2>Export the facility and complaints data for munging</h2>
|
<ASSISTANT_TASK:>
Python Code:
#Five years of detailed complaint data for all four kinds of facilities (Residential Care, Assisted Living, Nursing, and Adult Foster Home)
detailed = pd.read_excel('../../data/raw/Oregonian Abuse records 5 years May 2016.xlsx', header=3)
#Ten years of non-detailed complaints for Nursing Facilities
NF_complaints = pd.read_excel('../../data/raw/Copy of Oregonian Data Request Facility Abuse Records April 2016 Reviewed.xlsx',sheetname='NF Complaints')
#Ten years of non-detailed complaints for Assisted Living Facilities
ALF_complaints = pd.read_excel('../../data/raw/Copy of Oregonian Data Request Facility Abuse Records April 2016 Reviewed.xlsx',sheetname='ALF Complaints')
#Ten years of non-detailed complaints for Residential Care Facilities
RCF_complaints = pd.read_excel('../../data/raw/Copy of Oregonian Data Request Facility Abuse Records April 2016 Reviewed.xlsx',sheetname='RCF Complaints')
#NF has an inconsistently named column
NF_complaints.rename(columns={'Abuse_CbcAbuse': 'CbcAbuse'}, inplace=True)
ten_year_complaints = pd.concat([RCF_complaints,ALF_complaints,NF_complaints], ignore_index=True).reset_index().drop('index',1)
ten_year_complaints.rename(columns={'Abuse_Number':'abuse_number', 'Facility ID':'facility_id','Incident Date':'incident_date','Fac Type': 'facility_type',
'Investigation Results':'results_1','FacilityInvestResultsAbuse':'results_2','FacilityInvestResultsRule':'results_3','OutcomeCode':'outcome_code',
'CbcAbuse':'abuse_type'}, inplace=True)
ten_year_complaints = ten_year_complaints[['abuse_number','facility_id','incident_date','results_1',
'results_2','results_3','outcome_code','abuse_type']][ten_year_complaints['abuse_number'].notnull()]
sub_comps = pd.read_excel('../../data/raw/52 mislabelled as unsubstantiated.xlsx', header=None, names=['abuse_number'])
miss_comps = sub_comps.merge(ten_year_complaints, how = 'left', left_on='abuse_number',right_on='abuse_number')#.count()
ten_year_complaints = ten_year_complaints[(ten_year_complaints['results_1']=='Substantiated')|
(ten_year_complaints['results_2']=='Substantiated')|
(ten_year_complaints['results_3']=='Substantiated')]
ten_year_complaints = pd.concat([ten_year_complaints,miss_comps]).reset_index().drop('index',1)
ten_year_ready = ten_year_complaints[['abuse_number','facility_id','incident_date','outcome_code','abuse_type']].reset_index().drop('index',1)
detailed.rename(columns={'Abuse_Number':'abuse_number','Facility ID':'facility_id',
'Incident Date':'incident_date','Investigation Results':'results_1',
'Facility Invest Results Abuse':'results_2','Facility Invest Results Rule':'results_3',
'Outcome Code':'outcome_code','Action Notes':'action_notes','Outcome Notes':'outcome_notes',
'Cbc Abuse Indicator':'abuse_type', 'Facility Type':'facility_type'}, inplace=True)
five_year_complaints = detailed[['abuse_number','facility_id','facility_type','incident_date','outcome_code',
'action_notes','outcome_notes','abuse_type']][detailed['facility_type']!='AFH']
five_year_ready = five_year_complaints.drop('facility_type',1)
five_year_ready['source']=1
ten_year_ready['source']=2
five_ten_concat = pd.concat([five_year_ready,ten_year_ready])
five_ten_concat['abuse_number'] = five_ten_concat['abuse_number'].apply(lambda x:x.upper())
five_ten_concat = five_ten_concat.sort_values('source')
complaints = five_ten_concat.drop_duplicates(subset='abuse_number', keep='first').reset_index().drop('index',1)
complaints['year']=complaints['incident_date'].dt.year.astype(int)
complaints.count()
complaints['abuse_type'].fillna('',inplace=True)
complaints['abuse_type'] = complaints['abuse_type'].apply(lambda x: x.upper())
complaints["abuse_type"] = complaints["abuse_type"].apply(dict([
('0', ''),
('1', ''),
('2', ''),
('363', ''),
('I', ''),
('A', 'A'),
('L', 'L'),
]).get).fillna('')
scraped_comp = pd.read_csv('../../data/scraped/scraped_complaints_3_25.csv')
scraped_comp['abuse_number'] = scraped_comp['abuse_number'].apply(lambda x: x.upper())
scraped_comp = scraped_comp.drop_duplicates(subset='abuse_number').drop(['fac_type','inv_comp_date','city_name'],1)
merged = complaints.merge(scraped_comp, how = 'left',on = 'abuse_number')
merged['outcome_code'] = merged['outcome_code'].fillna(0)
merged['public'] = np.where(merged['fac_name'].notnull(),'online','offline')
codes = pd.read_excel('../../data/raw/OLRO Outcome Codes.xlsx', header=3)
codes.rename(columns = {'Code':'outcome_code','Display Text':'outcome'}, inplace = True)
codes['outcome_code'] = codes['outcome_code'].astype(str)
codes = codes.drop('Definition',1)
merged['outcome_code'] = merged['outcome_code'].astype(int).astype(str)
merged = merged.merge(codes, how = 'left')
merged.groupby('abuse_type').count()
merged['fac_name'].fillna('',inplace=True)
facilities = pd.read_csv('../../data/raw/APD_FacilityRecords.csv')
facilities.rename(columns={'FACID':'facid','Facility ID':'facility_id','FAC_CCMUNumber':'fac_ccmunumber','FAC_Type':'facility_type',
'FAC_Capacity':'fac_capacity','Facility Name':'facility_name','Facility Address':'street',
'Other Service':'other_service','Owner':'owner','Operator':'operator'}, inplace=True)
facilities = facilities[['facility_id','fac_ccmunumber','facility_type','fac_capacity','facility_name']].drop_duplicates(subset='facility_id', keep='last')
facilities.loc[318,'facility_type']='RCF'
facilities.loc[318,'fac_capacity']=108
merged_comp_fac = facilities.merge(merged, on = 'facility_id',how = 'left')
merged_comp_fac = merged_comp_fac[['abuse_number','facility_id','facility_type','facility_name','abuse_type','action_notes','incident_date','outcome','outcome_notes',
'year','fac_name','public']][merged_comp_fac['year']>2004]
complaint_pivot = merged_comp_fac.pivot_table(values='abuse_number',index='facility_id',columns='public', aggfunc='count').reset_index()
fac_pivot_merge = facilities.merge(complaint_pivot, how='left',on='facility_id')
merged_comp_fac["omg_outcome"] = merged_comp_fac["outcome"].apply(dict([
('No Negative Outcome', 'Potential harm'),
('Exposed to Potential Harm', 'Potential harm'),
('Fall Without Injury', 'Fall, no injury'),
('Left facility without assistance without injury', 'Left facility without attendant, no injury'),
('Loss of Dignity', 'Loss of Dignity'),
('Fall with Injury', 'Fracture or other injury'),
('Injury During Self-Transfer', 'Fracture or other injury'),
('Fall Resulting In Fractured Bone(s)', 'Fracture or other injury'),
('Fall Resulting In Fractured Hip', 'Fracture or other injury'),
('Transfer Resulting In Skin Injury or Bruise', 'Fracture or other injury'),
('Fractured Bone', 'Fracture or other injury'),
('Fractured Hip', 'Fracture or other injury'),
('Burned', 'Fracture or other injury'),
('Transfer Resulting In Fractured Hip', 'Fracture or other injury'),
('Transfer Resulting In Fracture Bone(s)', 'Fracture or other injury'),
('Left Facility Without Assistance With Injury', 'Fracture or other injury'),
('Bruised', 'Fracture or other injury'),
('Skin Injury', 'Fracture or other injury'),
('Negative Behavior Escalated, Affected Other Resident(s)', 'Failure to address resident aggression'),
('Medical Condition Developed or Worsened', 'Medical condition developed or worsened'),
('Decubitus Ulcer(s) Developed', 'Medical condition developed or worsened'),
('Decubitus Ulcer(s) Worsened', 'Medical condition developed or worsened'),
('Urinary Tract Infection Worsened', 'Medical condition developed or worsened'),
('Transfer To Hospital For Treatment', 'Medical condition developed or worsened'),
('Received Incorrect or Wrong Dose of Medication(s)', 'Medication error'),
('The resident did not receive an ordered medication', 'Medication error'),
('Loss of Resident Property', 'Loss of property, theft or financial exploitation'),
('Loss of Medication', 'Loss of property, theft or financial exploitation'),
('Financially Exploited', 'Loss of property, theft or financial exploitation'),
('Unreasonable Discomfort', 'Unreasonable discomfort or continued pain'),
('Pain And Suffering Continued', 'Unreasonable discomfort or continued pain'),
('Undesirable Weight Loss', 'Weight loss'),
('Poor Continuity Of Care', 'Inadequate care'),
('Failed To Have Quality of Life Maintained or Enhanced', 'Inadequate care'),
('Failed to Receive Needed Services', 'Inadequate care'),
('Denied Choice In Treatment', 'Inadequate care'),
('Incontinence', 'Inadequate hygiene'),
('Inadequate Hygiene', 'Inadequate hygiene'),
('Physically Abused', 'Physical abuse'),
('Corporally Punished', 'Physical abuse'),
('Verbally Abused', 'Verbal or emotional abuse'),
('Mentally or Emotionally Abused', 'Verbal or emotional abuse'),
('Involuntarily Secluded', 'Involuntary seclusion'),
('Raped', 'Sexual abuse'),
('Sexually Abused', 'Sexual abuse'),
('Deceased', 'Death'),
('Facility was understaffed with no negative outcome', 'Staffing issues'),
('Unable to timely assess adequacy of staffing', 'Staffing issues'),
('Improperly Transferred Out of Facility, Denied Readmission or Inappropriate Move Within Facility', 'Denied readmission or moved improperly'),
]).get).fillna('')
merged_comp_fac.to_csv('../../data/processed/complaints-3-25-scrape.csv',index=False)
fac_pivot_merge.to_csv('../../data/processed/facilities-3-25-scrape.csv',index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The description above shows:
Step2: Handling non-numeric columns
Step3: Besides the Sex field, Embarked is also non-numeric, so we need to convert it as well
Step4: For string columns we fill missing values with the most frequent value; here $S$ is the most common, so we use it.
Step5: Machine learning with pandas and sklearn is straightforward. The workflow is: process the data -> select features -> choose an algorithm -> cross-validate several times -> finally append the results.
Step6: Model accuracy is the number of correct predictions divided by the total number of predictions. Ours is 0.78, not much better than the 50% you would get by pure guessing. We can use a more sophisticated algorithm: logistic regression directly outputs a survival probability for each passenger.
Step7: Despite its name, logistic regression is generally used for classification. The result is still around 0.78.
Step8: The score is still only about 0.78 because our random forest has too few trees. Let's try more trees, and deeper ones too.
Step9: Now the forest reaches 0.81 accuracy, which shows how much parameter tuning matters.
Step10: The titles contained in passengers' names also carry occupational and social information that we can extract as another feature.
Step11: Feature selection
Step12: We run several algorithms together.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
titanic = pandas.read_csv("titanic_train.csv")
titanic.head()
print(titanic.describe())
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
print(titanic.describe())
print(titanic["Sex"].unique())
# Encode male/female as numbers
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
print(titanic["Embarked"].unique())
titanic["Embarked"] = titanic["Embarked"].fillna('S')
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2
# import the linear regression class
from sklearn.linear_model import LinearRegression
# cross-validation
from sklearn.cross_validation import KFold
# select the features we will use
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
# Initialize our algorithm class
alg = LinearRegression()
# titanic.shape returns [m, n], where m is the number of rows; n_folds sets the number of cross-validation folds; random_state makes every split identical across runs
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []
for train, test in kf:
# first take the feature rows for this fold
train_predictors = (titanic[predictors].iloc[train, :])
# then the true labels
train_target = titanic["Survived"].iloc[train]
# fit the linear regression to this fold's data
alg.fit(train_predictors, train_target)
# once trained we can make predictions on the held-out fold
test_predictions = alg.predict(titanic[predictors].iloc[test,:])
predictions.append(test_predictions)
import numpy as np
# The predictions are in three separate numpy arrays. Concatenate them together.
# We concatenate on axis 0 because each array has only one axis.
predictions = np.concatenate(predictions, axis=0)
predictions[predictions > 0.5] = 1
predictions[predictions <= 0.5] = 0
accuracy = sum(predictions == titanic['Survived']) / len(predictions)
print(accuracy)
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
alg = LogisticRegression(random_state = 1)
# compute the accuracy score for every cross-validation fold
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# take the mean of the fold scores
print(scores.mean())
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
# n_estimators is the number of trees we want to build
# min_samples_split is the minimum number of samples required to split a node
# min_samples_leaf is the minimum number of samples required at a leaf node
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=kf)
print(scores.mean())
alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=4, min_samples_leaf=2)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=kf)
print(scores.mean())
# create a family-size column
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]
# the apply method produces a new Series
titanic["NameLength"] = titanic["Name"].apply(lambda x: len(x))
import re
# extract the title from a passenger's name
def get_title(name):
title_search = re.search('([A-Za-z]+)\.', name)
if title_search:
return title_search.group(1)
return ""
titles = titanic["Name"].apply(get_title)
print(pandas.value_counts(titles))
title_mapping = {"Mr": 1, "Miss": 2, "Mrs":3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Mlle": 8, "Col": 9, "Capt": 10, "Ms": 11,
"Don": 12, "Sir": 13, "Lady": 14, "Countess": 16, "Mme": 17, "Jonkheer": 18}
for k, v in title_mapping.items():
titles[titles==k] = v
print(pandas.value_counts(titles))
titanic["Title"] = titles
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
import matplotlib.pyplot as plt
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked', 'FamilySize', 'Title', 'NameLength']
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])
# Get the raw p-value for each feature and transform it into a score
scores = -np.log10(selector.pvalues_)
# Plot the scores
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
# Keep the four best features
predictors = ['Pclass', 'Sex', 'Fare', 'Title']
alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=8, min_samples_leaf=4)
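# Illustrative follow-up (a sketch, not in the original): score the tuned forest on the
# four selected features with the same 3-fold cross-validation used above.
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
print(scores.mean())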
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np
algorithms = [
[GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), ['Pclass', 'Sex', 'Fare', 'FamilySize', 'Title', 'Age', 'Embarked']],
[LogisticRegression(random_state=1), ['Pclass', 'Sex', 'Fare', 'FamilySize', 'Title', 'Age', 'Embarked']]
]
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []
for train, test in kf:
train_target = titanic['Survived'].iloc[train]
full_test_predictions = []
for alg, predictors in algorithms:
alg.fit(titanic[predictors].iloc[train,:], train_target)
# .astype(float) converts the DataFrame values to floats
test_predictions = alg.predict_proba(titanic[predictors].iloc[test, :].astype(float))[:, 1]
full_test_predictions.append(test_predictions)
# average the two algorithms' probabilities; weights could also be applied
test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
test_predictions[test_predictions <= .5] = 0
test_predictions[test_predictions > .5] = 1
predictions.append(test_predictions)
predictions = np.concatenate(predictions, axis=0)
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset samplers
Step2: Visualize the dataset
Step3: Embedding model
Step4: Similarity loss
Step5: Indexing
Step6: Calibration
Step7: Visualization
Step8: Metrics
Step9: We can also take 100 examples for each class and plot the confusion matrix for
Step10: No Match
Step11: Visualize clusters
|
<ASSISTANT_TASK:>
Python Code:
import random
from matplotlib import pyplot as plt
from mpl_toolkits import axes_grid1
import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow_similarity as tfsim
tfsim.utils.tf_cap_memory()
print("TensorFlow:", tf.__version__)
print("TensorFlow Similarity:", tfsim.__version__)
# This determines the number of classes used during training.
# Here we are using all the classes.
num_known_classes = 10
class_list = random.sample(population=range(10), k=num_known_classes)
classes_per_batch = 10
# Passing multiple examples per class per batch ensures that each example has
# multiple positive pairs. This can be useful when performing triplet mining or
# when using losses like `MultiSimilarityLoss` or `CircleLoss` as these can
# take a weighted mix of all the positive pairs. In general, more examples per
# class will lead to more information for the positive pairs, while more classes
# per batch will provide more varied information in the negative pairs. However,
# the losses compute the pairwise distance between the examples in a batch so
# the upper limit of the batch size is restricted by the memory.
examples_per_class_per_batch = 8
print(
"Batch size is: "
f"{min(classes_per_batch, num_known_classes) * examples_per_class_per_batch}"
)
print(" Create Training Data ".center(34, "#"))
train_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"cifar10",
classes_per_batch=min(classes_per_batch, num_known_classes),
splits="train",
steps_per_epoch=4000,
examples_per_class_per_batch=examples_per_class_per_batch,
class_list=class_list,
)
print("\n" + " Create Validation Data ".center(34, "#"))
val_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"cifar10",
classes_per_batch=classes_per_batch,
splits="test",
total_examples_per_class=100,
)
num_cols = num_rows = 5
# Get the first 25 examples.
x_slice, y_slice = train_ds.get_slice(begin=0, size=num_cols * num_rows)
fig = plt.figure(figsize=(6.0, 6.0))
grid = axes_grid1.ImageGrid(fig, 111, nrows_ncols=(num_cols, num_rows), axes_pad=0.1)
for ax, im, label in zip(grid, x_slice, y_slice):
ax.imshow(im)
ax.axis("off")
embedding_size = 256
inputs = keras.layers.Input((32, 32, 3))
x = keras.layers.Rescaling(scale=1.0 / 255)(inputs)
x = keras.layers.Conv2D(64, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Conv2D(128, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.MaxPool2D((4, 4))(x)
x = keras.layers.Conv2D(256, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Conv2D(256, 3, activation="relu")(x)
x = keras.layers.GlobalMaxPool2D()(x)
outputs = tfsim.layers.MetricEmbedding(embedding_size)(x)
# building model
model = tfsim.models.SimilarityModel(inputs, outputs)
model.summary()
epochs = 3
learning_rate = 0.002
val_steps = 50
# init similarity loss
loss = tfsim.losses.MultiSimilarityLoss()
# compiling and training
model.compile(
optimizer=keras.optimizers.Adam(learning_rate), loss=loss, steps_per_execution=10,
)
history = model.fit(
train_ds, epochs=epochs, validation_data=val_ds, validation_steps=val_steps
)
x_index, y_index = val_ds.get_slice(begin=0, size=200)
model.reset_index()
model.index(x_index, y_index, data=x_index)
x_train, y_train = train_ds.get_slice(begin=0, size=1000)
calibration = model.calibrate(
x_train,
y_train,
calibration_metric="f1",
matcher="match_nearest",
extra_metrics=["precision", "recall", "binary_accuracy"],
verbose=1,
)
num_neighbors = 5
labels = [
"Airplane",
"Automobile",
"Bird",
"Cat",
"Deer",
"Dog",
"Frog",
"Horse",
"Ship",
"Truck",
"Unknown",
]
class_mapping = {c_id: c_lbl for c_id, c_lbl in zip(range(11), labels)}
x_display, y_display = val_ds.get_slice(begin=200, size=10)
# lookup nearest neighbors in the index
nns = model.lookup(x_display, k=num_neighbors)
# display
for idx in np.argsort(y_display):
tfsim.visualization.viz_neigbors_imgs(
x_display[idx],
y_display[idx],
nns[idx],
class_mapping=class_mapping,
fig_size=(16, 2),
)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
x = calibration.thresholds["distance"]
ax1.plot(x, calibration.thresholds["precision"], label="precision")
ax1.plot(x, calibration.thresholds["recall"], label="recall")
ax1.plot(x, calibration.thresholds["f1"], label="f1 score")
ax1.legend()
ax1.set_title("Metric evolution as distance increase")
ax1.set_xlabel("Distance")
ax1.set_ylim((-0.05, 1.05))
ax2.plot(calibration.thresholds["recall"], calibration.thresholds["precision"])
ax2.set_title("Precision recall curve")
ax2.set_xlabel("Recall")
ax2.set_ylabel("Precision")
ax2.set_ylim((-0.05, 1.05))
plt.show()
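# Illustrative (a sketch, assuming the CalibrationResults object returned by
# model.calibrate above exposes a `cutpoints` dict): inspect the distance
# threshold chosen for the 'optimal' cutpoint.
print(calibration.cutpoints["optimal"])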
cutpoint = "optimal"
# This yields 100 examples for each class.
# We defined this when we created the val_ds sampler.
x_confusion, y_confusion = val_ds.get_slice(0, -1)
matches = model.match(x_confusion, cutpoint=cutpoint, no_match_label=10)
cm = tfsim.visualization.confusion_matrix(
matches,
y_confusion,
labels=labels,
title="Confusion matrix for cutpoint:%s" % cutpoint,
normalize=False,
)
idx_no_match = np.where(np.array(matches) == 10)
no_match_queries = x_confusion[idx_no_match]
if len(no_match_queries):
plt.imshow(no_match_queries[0])
else:
print("All queries have a match below the distance threshold.")
# Each class in val_ds was restricted to 100 examples.
num_examples_to_clusters = 1000
thumb_size = 96
plot_size = 800
vx, vy = val_ds.get_slice(0, num_examples_to_clusters)
# Uncomment to run the interactive projector.
# tfsim.visualization.projector(
# model.predict(vx),
# labels=vy,
# images=vx,
# class_mapping=class_mapping,
# image_size=thumb_size,
# plot_size=plot_size,
# )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at the informations contained in a tweet
Step2: you can find a description of the fields in the Twitter API documentation
Step10: Building the network of interactions
Step11: Let's build the network
Step12: Some basic properties of the Network
Step13: Network components
Step15: Exercise
Step16: Random Attack
Step17: High degree attack
Step18: Exercise
|
<ASSISTANT_TASK:>
Python Code:
#load tweets
import json
filename = 'AI2.txt'
tweet_list = []
with open(filename, 'r') as fopen:
# each line correspond to a tweet
for line in fopen:
if line != '\n':
tweet_list.append(json.loads(line))
# take the first tweet of the list
tweet = tweet_list[2]
# each tweet is a python dictionary
type(tweet)
# all the 'entries' of the dictionary
tweet.keys()
#creation time
tweet['created_at']
# text of the tweet
print(tweet['text'])
# user info
tweet['user']
# user is itslef a dict
print(type(tweet['user']))
tweet['user']['name']
# unique id of the user
tweet['user']['id']
#is the tweet a retweet?
'retweeted_status' in tweet
if 'retweeted_status' in tweet:
print(tweet['retweeted_status'])
# the `retweeted_status` is also a tweet dictionary
# user id and name of the retweeted user?
if 'retweeted_status' in tweet:
print(tweet['retweeted_status']['user']['id'])
print(tweet['retweeted_status']['user']['name'])
# is the tweet a reply?
'in_reply_to_user_id' in tweet and tweet['in_reply_to_user_id'] is not None
# 'entities' contains the hashtags, urls and usernames used in the tweet
tweet['entities']
# user id of the mentioned users
for mention in tweet['entities']['user_mentions']:
print(mention['id'])
# is the tweet a quote?
'quoted_status' in tweet
# let's define some functions to extract the interactions from tweets
def getTweetID(tweet):
    """If properly included, get the ID of the tweet"""
return tweet.get('id')
def getUserIDandScreenName(tweet):
    """If properly included, get the tweet
    user ID and Screen Name"""
user = tweet.get('user')
if user is not None:
uid = user.get('id')
screen_name = user.get('screen_name')
return uid, screen_name
else:
return (None, None)
def getRetweetedUserIDandScreenName(tweet):
    """If properly included, get the retweet
    source user ID and Screen Name"""
retweet = tweet.get('retweeted_status')
if retweet is not None:
return getUserIDandScreenName(retweet)
else:
return (None, None)
def getRepliedUserIDandScreenName(tweet):
    """If properly included, get the ID and Screen Name
    of the user the tweet replies to"""
reply_id = tweet.get('in_reply_to_user_id')
reply_screenname = tweet.get('in_reply_to_screen_name')
return reply_id, reply_screenname
def getUserMentionsIDandScreenName(tweet):
    """If properly included, return a list of (ID, Screen Name) tuples
    for all user mentions, including retweeted and replied users"""
mentions = []
entities = tweet.get('entities')
if entities is not None:
user_mentions = entities.get('user_mentions')
for mention in user_mentions:
mention_id = mention.get('id')
screen_name = mention.get('screen_name')
mentions.append((mention_id, screen_name))
return mentions
def getQuotedUserIDandScreenName(tweet):
    """If properly included, get the ID and Screen Name of the user the tweet is quoting"""
quoted_status = tweet.get('quoted_status')
if quoted_status is not None:
return getUserIDandScreenName(quoted_status)
else:
return (None, None)
def getAllInteractions(tweet):
    """Get all the interactions from this tweet
    returns : (tweeter_id, tweeter_screenname), list of (interacting_id, interacting_screenname)"""
# Get the tweeter
tweeter = getUserIDandScreenName(tweet)
# Nothing to do if we couldn't get the tweeter
if tweeter[0] is None:
return (None, None), []
# a python set is a collection of unique items
# we use a set to avoid duplicated ids
interacting_users = set()
# Add person they're replying to
interacting_users.add(getRepliedUserIDandScreenName(tweet))
# Add person they retweeted
    interacting_users.add(getRetweetedUserIDandScreenName(tweet))
# Add person they quoted
interacting_users.add(getQuotedUserIDandScreenName(tweet))
# Add mentions
interacting_users.update(getUserMentionsIDandScreenName(tweet))
# remove the tweeter if he is in the set
interacting_users.discard(tweeter)
# remove the None case
interacting_users.discard((None,None))
# Return our tweeter and their influencers
return tweeter, list(interacting_users)
print(getUserIDandScreenName(tweet))
print(getAllInteractions(tweet))
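# Quick sanity check (illustrative addition): count how many tweets in the dump
# actually contain at least one interaction before we build the graph.
num_with_interactions = sum(1 for t in tweet_list if len(getAllInteractions(t)[1]) > 0)
print(num_with_interactions, 'of', len(tweet_list), 'tweets contain interactions')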
import networkx as nx
# define an empty Directed Graph
# A directed graph is a graph where edges have a direction
# in our case the edges goes from user that sent the tweet to
# the user with whom they interacted (retweeted, mentioned or quoted)
G = nx.DiGraph()
# loop over all the tweets and add edges if the tweet include some interactions
for tweet in tweet_list:
# find all influencers in the tweet
tweeter, interactions = getAllInteractions(tweet)
tweeter_id, tweeter_name = tweeter
# add an edge to the Graph for each influencer
for interaction in interactions:
interact_id, interact_name = interaction
# add edges between the two user ids
# this will create new nodes if the nodes are not already in the network
G.add_edge(tweeter_id, interact_id)
# add name as a property to each node
# with networkX each node is a dictionary
G.node[tweeter_id]['name'] = tweeter_name
G.node[interact_id]['name'] = interact_name
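# Optional variant (not used below): to keep track of how many times a pair of users
# interacted, store a 'weight' attribute on the edge, e.g.
#   if G.has_edge(tweeter_id, interact_id):
#       G.edge[tweeter_id][interact_id]['weight'] += 1
#   else:
#       G.add_edge(tweeter_id, interact_id, weight=1)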
# The graph's nodes are contained in a dictionary
print(type(G.node))
#print(G.node.keys())
# the keys are the user_id
print(G.node[tweeter_id])
# each node is itself a dictionary with node attributes as key,value pairs
print(type(G.node[tweeter_id]))
# edges are also contained in a dictionary
print(type(G.edge))
# we can see all the edges going out of this node
# each edge is a dictionary inside this dictionary with a key
# corresponding to the target user_id
print(G.edge[tweeter_id])
# so we can access the edge using the source user_id and the target user_id
G.edge[tweeter_id][interact_id]
G.number_of_nodes()
G.number_of_edges()
# listing all nodes
node_list = G.nodes()
node_list[:3]
# degree of a node
print(G.degree(node_list[2]))
print(G.in_degree(node_list[2]))
print(G.out_degree(node_list[2]))
# dictionary with the degree of all nodes
all_degrees = G.degree(node_list) # this is the degree for undirected edges
in_degrees = G.in_degree(node_list)
out_degrees = G.in_degree(node_list)
# average degree
2*G.number_of_edges()/G.number_of_nodes()
import numpy as np
np.array(list(all_degrees.values())).mean()
np.array(list(in_degrees.values())).mean()
np.array(list(out_degrees.values())).mean()
# maximum degree
max(all_degrees.values())
# we want to make a list with (user_id, username, degree) for all nodes
degree_node_list = []
for node in G.nodes_iter():
degree_node_list.append((node, G.node[node]['name'], G.degree(node)))
print('Unordered user, degree list')
print(degree_node_list[:10])
# sort the list according to the degree in descending order
degree_node_list = sorted(degree_node_list, key=lambda x:x[2], reverse=True)
print('Ordered user, degree list')
print(degree_node_list[:10])
# we need to import matplolib for making plots
# and numpy for numerical computations
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# this returns a list of set of nodes belonging to the
# different (weakly) connected components
components = list(nx.weakly_connected_components(G))
# sort the component according to their size
components = list(sorted(components, key=lambda x:len(x), reverse=True))
# make a list with the size of each component
comp_sizes = []
for comp in components:
comp_sizes.append(len(comp))
# plot the histogram of component sizes
hist = plt.hist(comp_sizes, bins=100)
# histogram with logarithmic y scale
hist = plt.hist(comp_sizes, bins=100, log=True)
plt.xlabel('component size')
plt.ylabel('number of components')
# sizes of the ten largest components
comp_sizes[:10]
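# Illustrative check: fraction of all nodes that sit in the largest component.
print('Largest component contains {:.1%} of the nodes'.format(comp_sizes[0] / float(G.number_of_nodes())))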
# let's make a new graph which is the subgraph of G corresponding to
# the largest connected component
# let's find the largest component
largest_comp = components[0]
LCC = G.subgraph(largest_comp)
G.number_of_nodes()
LCC.number_of_nodes()
# let's plot the degree distribution inside the LCC
degrees = nx.degree(LCC)
degrees
degree_array = np.array(list(degrees.values()))
hist = plt.hist(degree_array, bins=100)
# using logarithmic scales
hist = plt.hist(degree_array, bins=100, log=True)
plt.xscale('log')
# logarithmic scale with logarithmic bins
N, bins, patches = plt.hist(degree_array, bins=np.logspace(0,np.log10(degree_array.max()+1), 20), log=True)
plt.xscale('log')
plt.xlabel('k - degree')
plt.ylabel('number of nodes')
# Degree probability distribution (P(k))
# since we have logarithmic bins, we need to
# take into account the fact that the bins
# have different lengths when normalizing
bin_lengths = np.diff(bins) # length of each bin
summ = np.sum(N*bin_lengths)
normalized_degree_dist = N/summ
# check normalization:
print(np.sum(normalized_degree_dist*bin_lengths))
hist = plt.bar(bins[:-1], normalized_degree_dist, width=np.diff(bins))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('k (degree)')
plt.ylabel('P(k)')
import random
def getGCsize(G):
    """returns the size of the largest component of G"""
comps = nx.connected_components(G)
return max([len(comp) for comp in comps])
# list that will contain the size of the GC as we remove nodes
rnd_attack_GC_sizes = []
# we will take into account the undirected version of the graph
LCCundirected = nx.Graph(LCC)
nodes_list = LCCundirected.nodes()
while len(nodes_list) > 1:
# add the size of the current GC
rnd_attack_GC_sizes.append(getGCsize(LCCundirected))
# pick a random node
rnd_node = random.choice(nodes_list)
# remove from graph
LCCundirected.remove_node(rnd_node)
# remove from node list
nodes_list.remove(rnd_node)
# convert list to numpy array
rnd_attack_GC_sizes = np.array(rnd_attack_GC_sizes)
# normalize by the initial size of the GC
GC_rnd = rnd_attack_GC_sizes/rnd_attack_GC_sizes[0]
# fraction of removed nodes
q = np.linspace(0,1,num=GC_rnd.size)
plt.plot(q,GC_rnd)
plt.xlabel('q')
plt.ylabel('GC')
# high degree attack
LCCundirected = nx.Graph(LCC)
# list of nodes sorted by degree in ascending order (pop() below removes the highest-degree node first)
node_deg_dict = nx.degree(LCCundirected)
nodes_sorted = sorted(node_deg_dict, key=node_deg_dict.get)
# list that will contain the size of the GC as we remove nodes
hd_attack_GC_sizes = []
while len(nodes_sorted) > 1:
hd_attack_GC_sizes.append(getGCsize(LCCundirected))
#remove node according to their degree
node = nodes_sorted.pop()
LCCundirected.remove_node(node)
hd_attack_GC_sizes = np.array(hd_attack_GC_sizes)
GC_hd = hd_attack_GC_sizes/hd_attack_GC_sizes[0]
q = np.linspace(0,1,num=GC_hd.size)
plt.plot(q,GC_rnd, label='random attack')
plt.plot(q,GC_hd, label='High-Degree attack')
plt.xlabel('q')
plt.ylabel('GC')
plt.legend()
nx.write_graphml(LCC, 'twitter_lcc_AI2.graphml')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example Document
Step2: What does this look like
Step3: Default Parsers
Step4: Defining a New Property Model
Step5: Writing a New Parser
Step6: Running the New Parser
|
<ASSISTANT_TASK:>
Python Code:
from chemdataextractor import Document
from chemdataextractor.model import Compound
from chemdataextractor.doc import Paragraph, Heading
d = Document(
Heading(u'Synthesis of 2,4,6-trinitrotoluene (3a)'),
Paragraph(u'The procedure was followed to yield a pale yellow solid (b.p. 240 °C)')
)
d
d.records.serialize()
from chemdataextractor.model import BaseModel, StringType, ListType, ModelType
class BoilingPoint(BaseModel):
value = StringType()
units = StringType()
Compound.boiling_points = ListType(ModelType(BoilingPoint))
import re
from chemdataextractor.parse import R, I, W, Optional, merge
prefix = (R(u'^b\.?p\.?$', re.I) | I(u'boiling') + I(u'point')).hide()
units = (W(u'°') + Optional(R(u'^[CFK]\.?$')))(u'units').add_action(merge)
value = R(u'^\d+(\.\d+)?$')(u'value')
bp = (prefix + value + units)(u'bp')
from chemdataextractor.parse.base import BaseParser
from chemdataextractor.utils import first
class BpParser(BaseParser):
root = bp
def interpret(self, result, start, end):
compound = Compound(
boiling_points=[
BoilingPoint(
value=first(result.xpath('./value/text()')),
units=first(result.xpath('./units/text()'))
)
]
)
yield compound
Paragraph.parsers = [BpParser()]
d = Document(
Heading(u'Synthesis of 2,4,6-trinitrotoluene (3a)'),
Paragraph(u'The procedure was followed to yield a pale yellow solid (b.p. 240 °C)')
)
d.records.serialize()
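# Extra check (illustrative, not part of the original example): the prefix rule also
# accepts the spelled-out form "boiling point", so the same parser should fire here too.
d2 = Document(
    Heading(u'Synthesis of 2,4,6-trinitrotoluene (3a)'),
    Paragraph(u'The product was obtained as a pale yellow solid, boiling point 240 °C.')
)
d2.records.serialize()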
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set conference schedule here, session_papers has a list of sessions and the number of papers they can hold
Step2: Cluster papers into sessions, this may take some time, so just stop when you're happy with the output in example.xlsx
|
<ASSISTANT_TASK:>
Python Code:
doc_topic = np.genfromtxt('doc_topic.csv',delimiter=',')
topic_word = np.genfromtxt('topic_word.csv',delimiter=',')
with open('vocab.csv') as f:
vocab = f.read().splitlines()
# Show document distributions across topics
plt.imshow(doc_topic.T,interpolation='none')
plt.show()
# Remove topic 2 = catch all prasa-robmech jargon (if your stopwords are set up nicely don't bother)
#doc_topic = np.delete(doc_topic, (3), axis=1)
#doc_topic = (doc_topic.T/np.sum(doc_topic,axis=1)).T
#topic_word = np.delete(topic_word,(3),axis=0)
#topic_word = topic_word/np.sum(topic_word,axis=0)
#plt.imshow(doc_topic.T,interpolation='none')
#plt.show()
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
Y = pdist(doc_topic, 'seuclidean')
D = squareform(Y)
plt.figure(figsize=(15,8))
plt.imshow(D,interpolation='none')
plt.show()
# Number of papers in each session, schedule
session_papers = [4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 4, 3]
print sum(session_papers), len(session_papers)
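# Sanity check (illustrative): the random assignment below only covers every paper exactly once
# if the total number of session slots equals the number of papers.
assert sum(session_papers) == doc_topic.shape[0], 'schedule capacity != number of papers'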
# Makes pretty spreadsheet, requires a csv file with paper details (title, authors, paper id)
def save_schedule():
import xlsxwriter
from matplotlib import cm
from matplotlib import colors
workbook = xlsxwriter.Workbook('example.xlsx')
worksheet = workbook.add_worksheet()
worksheet.set_column(0, 0, 10)
worksheet.set_column(1, 1, 50)
worksheet.set_column(2, 4, 80)
with open('vocab.csv') as f:
vocab = f.read().splitlines()
import csv
paper_details = []
with open('paper_details.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
paper_details.append(row)
worksheet.write(0, 0, 'Session')
worksheet.write(0, 1, 'Topic')
worksheet.write(0, 2, 'Title')
worksheet.write(0, 3, 'Authors')
worksheet.write(0, 4, 'Paper ID')
cmap = cm.get_cmap('hsv', int(np.max(mfinal))) # PiYG
for j,sess in enumerate(sorted(mfinal)):
i = np.argsort(mfinal)[j]
detail = paper_details[int(i)]
Pt = 1.0/session_papers[int(sess)]*np.sum(doc_topic[mfinal==sess,:],axis=0)
Pw = np.sum(np.multiply(topic_word.T,Pt),axis=1)
bins = np.argsort(Pw)[-6:]
sess_topic = ' '.join(np.array(vocab)[bins].tolist())
fmt = workbook.add_format()
fmt.set_border(1)
fmt.set_bg_color(colors.rgb2hex(cmap(int(sess))[:3]))
worksheet.write(j+1, 0, sess,fmt)
worksheet.write(j+1, 1, sess_topic,fmt)
worksheet.write(j+1, 2, detail['title'],fmt)
worksheet.write(j+1, 3, detail['authors'],fmt)
worksheet.write(j+1, 4, detail['paper_id'],fmt)
workbook.close()
N = doc_topic.shape[0]
K = len(session_papers)
Num_Iters = 2500
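# For reference (a minimal sketch, not wired into the loop below): the objective that the
# greedy swap search minimises is the mean within-session pairwise distance.
def session_cost(m, D, K):
    E = 0.0
    for k in range(K):
        i, j = np.meshgrid(np.where(m == k), np.where(m == k))
        E += np.sum(D[i, j]) / (D.shape[0] * D.shape[0])
    return E / K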
# Greedy clustering
EBest = 10000;
plt.figure(figsize=(20,8))
for reseed_iter in range(Num_Iters):
# Randomly allocate papers to sessions
mp = np.arange(N)
np.random.shuffle(mp)
Gcs = np.hstack((0,np.cumsum(np.array(session_papers))))
m = np.zeros((N,))
for j in range(1,Gcs.shape[0]):
m[(mp<Gcs[j])&(mp >= Gcs[j-1])] = j-1
# Calculate cost of session assignment
E = 0
for k in range(K):
i,j = np.meshgrid(np.where(m==k),np.where(m==k))
E = E + np.sum(D[i,j])/(D.shape[0]*D.shape[0])
E = E/K
t = 0
while(1):
E_p = E
rp = np.arange(N)
np.random.shuffle(rp)
for a in rp:
for b in set(range(N)) - set([a]):
temp = m[a]
m[a] = m[b]
m[b] = temp
E_t = 0
for k in range(K):
i,j = np.meshgrid(np.where(m==k),np.where(m==k))
E_t = E_t + np.sum(D[i,j])/(D.shape[0]*D.shape[0])
E_t = E_t/K
if (E_t < E):
E = E_t
#print "Iter:", reseed_iter, t,a,b,E,EBest
#display.clear_output(wait=True)
else:
m[b] = m[a]
m[a] = temp
if (E_p == E):
break
t = t + 1
if (E < EBest):
EBest = E
mfinal = m
save_schedule()
#Show session distribution assignments
Sess_mat = []
for i in range(K):
Sess_mat.append(doc_topic[mfinal==i,:])
Sess_mat.append(np.zeros((1,doc_topic.shape[1])))
#plt.subplot(4,4,i+1)
#plt.imshow(doc_topic[mfinal==i,:],interpolation='none')
#Pt = 1.0/session_papers[i]*np.sum(doc_topic[mfinal==i,:],axis=0)
#Pw = np.sum(np.multiply(topic_word.T,Pt),axis=1)
#bins = np.argsort(Pw)[-4:]
#sess_topic = ' '.join(np.array(vocab)[bins].tolist())
#plt.title(sess_topic)
plt.imshow(np.vstack(Sess_mat).T,interpolation='none')
plt.ylabel('Topic distribution')
display.clear_output(wait=True)
display.display(plt.gcf())
print "Iter:", reseed_iter, t,a,b,E,EBest
#Show session distribution assignments
plt.figure(figsize=(15,5))
for i in range(K):
    plt.subplot(4,4,i+1)
plt.imshow(doc_topic[mfinal==i,:],interpolation='none')
plt.show()
# Save to csv instead of xlsx if you prefer
def save_csv():
with open('vocab.csv') as f:
vocab = f.read().splitlines()
import csv
paper_details = []
with open('paper_details.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
paper_details.append(row)
keys = paper_details[0].keys()
keys.insert(0,'topic')
keys.insert(0,'session')
with open('scheduled_papers.csv', 'wb') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
for j,sess in enumerate(sorted(mfinal)):
i = np.argsort(mfinal)[j]
detail = paper_details[int(i)]
Pt = 1.0/session_papers[int(sess)]*np.sum(doc_topic[mfinal==sess,:],axis=0)
#Pt = doc_topic[int(i),:]
Pw = np.sum(np.multiply(topic_word.T,Pt),axis=1)
bins = np.argsort(Pw)[-6:]
sess_topic = ' '.join(np.array(vocab)[bins].tolist())
print detail['title'][0:40], sess_topic
detail['topic'] = sess_topic
detail['session'] = sess
dict_writer.writerow(detail)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 14
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x.astype(np.float32, copy=False) / float(255.0)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn import preprocessing
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
one_hot_binarizer = preprocessing.LabelBinarizer()
one_hot_binarizer.fit(range(0, 10))
return one_hot_binarizer.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
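# Quick illustration (optional): one-hot encoding a few labels by hand.
print(one_hot_encode([0, 3, 9]))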
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32,
[None, image_shape[0], image_shape[1], image_shape[2]],
name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32,
[None, n_classes],
name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for convolution
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
output_depth = conv_num_outputs
filter_weights = tf.Variable(tf.random_normal([height, width, input_depth, output_depth], mean=0.0, stddev=0.05))
filter_bias = tf.Variable(tf.random_normal([output_depth]))
# the stride for each dimension (batch_size, height, width, depth)
conv_strides_dims = [1, conv_strides[0], conv_strides[1], 1]
padding = 'SAME'
#print("neural net is being created...")
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#conv2d
# `tf.nn.conv2d` does not include the bias computation so we have to add it ourselves after.
convolution = tf.nn.conv2d(x_tensor, filter_weights, conv_strides_dims, padding) + filter_bias
# batch normalization on convolution
convolution = tf.contrib.layers.batch_norm(convolution, center=True, scale=True)
#convolution = tf.nn.batch_normalization(convolution, mean=0.0, variance=1.0, offset=0.0, scale)
# non-linear activation function
convolution = tf.nn.elu(convolution)
# the ksize (filter size) for each dimension (batch_size, height, width, depth)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
# the stride for each dimension (batch_size, height, width, depth)
pool_strides_dims = [1, pool_strides[0], pool_strides[1], 1]
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#max_pool
return tf.nn.max_pool(convolution, ksize, pool_strides_dims, padding)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
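# Optional shape check (illustrative): with SAME padding and (2, 2) pooling strides,
# a 32x32x3 input should come out with halved spatial dimensions.
shape_check_input = tf.placeholder(tf.float32, [None, 32, 32, 3])
print(conv2d_maxpool(shape_check_input, 10, (4, 4), (1, 1), (2, 2), (2, 2)).get_shape())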
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.elu)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
#batch_size = x_tensor.get_shape().as_list()[1]
#weight = tf.Variable(tf.random_normal([batch_size, num_outputs], mean=0.0, stddev=0.03))
#bias = tf.Variable(tf.zeros(num_outputs))
#output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
#return output_layer
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs1 = 32
conv_num_outputs2 = 128
conv_num_outputs3 = 512
conv_ksize = (4, 4)
conv_strides = (1, 1)
pool_ksize = (4, 4)
pool_strides = (2, 2)
conv_layer1 = conv2d_maxpool(x, conv_num_outputs1, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer1 = tf.nn.dropout(conv_layer1, tf.to_float(keep_prob))
conv_layer2 = conv2d_maxpool(conv_layer1, conv_num_outputs2, conv_ksize, conv_strides, pool_ksize, pool_strides)
#conv_layer2 = tf.nn.dropout(conv_layer2, tf.to_float(keep_prob))
conv_layer3 = conv2d_maxpool(conv_layer2, conv_num_outputs3, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (4, 4), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (4, 4), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = tf.nn.dropout(conv_layer3, tf.to_float(keep_prob))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flattened = flatten(conv_layer3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# num_outputs can be arbitrary in size
num_outputs = 1024
fully_conn_layer1 = fully_conn(flattened, 512)
fully_conn_layer1 = tf.nn.dropout(fully_conn_layer1, tf.to_float(keep_prob))
fully_conn_layer2 = fully_conn(fully_conn_layer1, 512)
#fully_conn_layer3 = fully_conn(fully_conn_layer2, 128)
#fully_conn_layer4 = fully_conn(fully_conn_layer3, 64)
#fully_conn_layer3 = tf.nn.dropout(fully_conn_layer3, tf.to_float(keep_prob))
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fully_conn_layer2, 10)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
return session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict =
{x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy,feed_dict =
{x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss {} - Validation Accuracy: {}'.format(
loss,
valid_accuracy))
return float('{}'.format(valid_accuracy))
# TODO: Tune Parameters
epochs = 40
batch_size = 128
keep_probability = 0.7
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
valid_acc = print_stats(sess, batch_features, batch_labels, cost, accuracy)
print('Accuracy: {}'.format(valid_acc))
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
best_valid_accuracy = 0.0
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
valid_acc = print_stats(sess, batch_features, batch_labels, cost, accuracy)
if (valid_acc > best_valid_accuracy):
print('best validation accuracy ({} > {}); saving model'.format(valid_acc, best_valid_accuracy))
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
best_valid_accuracy = valid_acc
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
df = pd.read_csv("hanford.csv")
df
df.describe()
df['Exposure'].max() - df['Exposure'].min()
df['Mortality'].max() - df['Mortality'].min()
df['Exposure'].quantile(q=0.25)
df['Exposure'].quantile(q=0.25)
df['Exposure'].quantile(q=0.5)
df['Exposure'].quantile(q=0.75)
iqr_ex = df['Exposure'].quantile(q=0.75) - df['Exposure'].quantile(q=0.25)
iqr_ex
df['Mortality'].quantile(q=0.25)
df['Mortality'].quantile(q=0.5)
df['Mortality'].quantile(q=0.75)
iqr_mort = df['Mortality'].quantile(q=0.75) - df['Mortality'].quantile(q=0.25)
iqr_mort
df.std()
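# A natural follow-up (illustrative): how strongly are exposure and mortality related?
df.corr()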
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting Waveforms to Display
Step2: If you don't like typing all those quotation marks, you can place multiple, space-separated peeker names inside a string
Step3: Spacing the Waveforms
Step4: Specifying a Time Window
Step5: Showing Cycle Times
Step6: Adding Titles and Captions
Step7: Setting the Display Size
Step8: Sometimes you'll have a long simulation that creates an unreadable display because it's squeezed into too small a space
Step9: Skinning It
Step10: Accessing the WaveJSON Data
Step11: After you manipulate the WaveJSON data, you can display it using the wavejson_to_wavedrom() function
|
<ASSISTANT_TASK:>
Python Code:
from myhdl import *
from myhdlpeek import Peeker
def adder_bit(a, b, c_in, sum_, c_out):
'''Single bit adder.'''
@always_comb
def adder_logic():
sum_.next = a ^ b ^ c_in
c_out.next = (a & b) | (a & c_in) | (b & c_in)
# Add some peekers to monitor the inputs and outputs.
Peeker(a, 'a')
Peeker(b, 'b')
Peeker(c_in, 'c_in')
Peeker(sum_, 'sum')
Peeker(c_out, 'c_out')
return adder_logic
def adder(a, b, sum_):
'''Connect single-bit adders to create a complete adder.'''
c = [Signal(bool(0)) for _ in range(len(a)+1)] # Carry signals between stages.
s = [Signal(bool(0)) for _ in range(len(a))] # Sum bit for each stage.
stages = [] # Storage for adder bit instances.
# Create the adder bits and connect them together.
for i in range(len(a)):
stages.append( adder_bit(a=a(i), b=b(i), sum_=s[i], c_in=c[i], c_out=c[i+1]) )
# Concatenate the sum bits and send them out on the sum_ output.
@always_comb
def make_sum():
sum_.next = ConcatSignal(*reversed(s))
return instances() # Return all the adder stage instances.
# Create signals for interfacing to the adder.
a, b, sum_ = [Signal(intbv(0,0,8)) for _ in range(3)]
# Clear-out any existing peeker stuff before instantiating the adder.
Peeker.clear()
# Instantiate the adder.
add_1 = adder(a=a, b=b, sum_=sum_)
# Create some more peekers to monitor the top-level buses.
Peeker(a, 'a_bus')
Peeker(b, 'b_bus')
Peeker(sum_, 'sum_bus')
# Create a testbench generator that applies random inputs to the adder.
from random import randrange
def test(duration):
for _ in range(duration):
a.next, b.next = randrange(0, a.max), randrange(0, a.max)
yield delay(1)
# Simulate the adder, testbench and peekers.
Simulation(add_1, test(8), *Peeker.instances()).run()
Peeker.show_waveforms('a_bus', 'b_bus', 'sum_bus', 'sum[2]', 'sum[1]', 'sum[0]')
Peeker.show_waveforms('a_bus b_bus sum_bus sum[2] sum[1] sum[0]')
Peeker.show_waveforms('a_bus b_bus | sum_bus sum[2] sum[1] sum[0]')
signals = 'a_bus b_bus | sum_bus sum[2] sum[1] sum[0]'
Peeker.show_waveforms(signals, start_time=5, stop_time=15)
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True)
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True,
title='Multi-Bit, Hierarchical Adder', caption='It really works!')
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True,
title='Multi-Bit, Hierarchical Adder', caption='It really works!', width=8)
Peeker.clear_traces()
Simulation(add_1, test(100), *Peeker.instances()).run()
Peeker.to_wavedrom(signals, width=4000)
# Peeker.clear_traces()
Simulation(add_1, test(8), *Peeker.instances()).run()
Peeker.to_wavedrom(signals, skin='narrow')
wavejson = Peeker.to_wavejson(signals)
wavejson
from myhdlpeek import wavejson_to_wavedrom
wavejson_to_wavedrom(wavejson)
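# Example manipulation (illustrative): add a header to the WaveJSON before re-rendering it.
# The 'head' key follows the WaveDrom JSON spec.
wavejson['head'] = {'text': 'Adder bus activity'}
wavejson_to_wavedrom(wavejson)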
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import deepchem the package to play with.
Step2: Basic Data Handling in DeepChem
Step3: We've given these arrays some evocative names
Step4: In order to be able to work with this data in DeepChem, we need to wrap these arrays so DeepChem knows how to work with them. DeepChem has a Dataset API that it uses to facilitate its handling of datasets. For handling of Numpy datasets, we use DeepChem's NumpyDataset object.
Step5: Ok, now what? We have these arrays in a NumpyDataset object. What can we do with it? Let's try printing out the object.
Step6: Ok, that's not terribly informative. It's telling us that dataset is a Python object that lives somewhere in memory. Can we recover the two datasets that we used to construct this object? Luckily, the DeepChem API allows us to recover the two original datasets by calling the dataset.X and dataset.y attributes of the original object.
Step7: This set of transformations raises a few questions. First, what was the point of it all? Why would we want to wrap objects this way instead of working with the raw Numpy arrays? The simple answer is to have a unified API for working with larger datasets. Suppose that X and y are so large that they can't fit easily into memory. What would we do then? Being able to work with an abstract dataset object proves very convenient then. In fact, you'll have reason to use this feature of Dataset later in the tutorial series.
Step8: There are a couple of other fields that the dataset object tracks. The first is dataset.ids. This is a listing of unique identifiers for the datapoints in the dataset.
Step9: In addition, the dataset object has a field dataset.w. This is the "example weight" associated with each datapoint. Since we haven't explicitly assigned the weights, this is simply going to be all ones.
Step10: What if we want to set nontrivial weights for a dataset? One time we might want to do this is if we have a dataset where there are only a few positive examples to play with. It's pretty straightforward to do this with DeepChem.
Step11: MNIST Example
Step12: Let's take a look at some of the data we've loaded so we can visualize our samples.
Step13: Converting a Numpy Array to tf.data.dataset()
Step14: Extracting the numpy dataset from tf.data
Step15: Now that you have the numpy arrays of data and labels, you can convert it to NumpyDataset.
Step16: Converting NumpyDataset to tf.data
Step17: Using Splitters to split DeepChem Datasets
Step18: We then featurize the data using any one of the featurizers present.
Step19: As we can see, without the user providing any specification on how to split the data, it was split using the default fractions of 80/10/10.
Step20: Specified Splitter
Step21: When we split the data using the specified splitter, it reads the split_field column of each row to decide whether that row should go into the training, validation or testing set. The user has to mark each row as train, valid or test in the split_field column.
Step22: Indice Splitter
Step24: RandomGroupSplitter
Step25: So when the RandomGroupSplitter is given the group assignments, it splits the data accordingly and keeps the members of each group together.
|
<ASSISTANT_TASK:>
Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
# Run this cell to see if things work
import deepchem as dc
import numpy as np
data = np.random.random((4, 4))
labels = np.random.random((4,)) # labels of size 20x1
data, labels
from deepchem.data.datasets import NumpyDataset
dataset = NumpyDataset(data, labels)
dataset
dataset.X, dataset.y
for x, y, _, _ in dataset.itersamples():
print(x, y)
dataset.ids
dataset.w
w = np.random.random((4,)) # initializing weights with random vector of size 4x1
dataset_with_weights = NumpyDataset(data, labels, w) # creates numpy dataset object
dataset_with_weights.w
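# Illustrative variant (not from the original tutorial): up-weight the rarer "positive"
# examples (here, arbitrarily, labels above 0.5) so a learner pays more attention to them.
w_balanced = np.where(labels > 0.5, 2.0, 1.0)
NumpyDataset(data, labels, w_balanced).w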
# Install tensorflow-datasets
## TODO(rbharath): Switch to stable version on release
# TODO(rbharath): This only works on TF2. Uncomment once we've upgraded.
#!pip install -q --upgrade tfds-nightly tf-nightly
# TODO(rbharath): This cell will only work with TF2 installed. Swap to this as default soon.
#import tensorflow_datasets as tfds
#data_dir = '/tmp/tfds'
## Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
#mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
#mnist_data = tfds.as_numpy(mnist_data)
#train_data, test_data = mnist_data['train'], mnist_data['test']
#num_labels = info.features['label'].num_classes
#h, w, c = info.features['image'].shape
#num_pixels = h * w * c
## Full train set
#train_images, train_labels = train_data['image'], train_data['label']
#train_images = np.reshape(train_images, (len(train_images), num_pixels))
#train_labels = one_hot(train_labels, num_labels)
## Full test set
#test_images, test_labels = test_data['image'], test_data['label']
#test_images = np.reshape(test_images, (len(test_images), num_pixels))
#test_labels = one_hot(test_labels, num_labels)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Load the numpy data of MNIST into NumpyDataset
train = NumpyDataset(mnist.train.images, mnist.train.labels)
valid = NumpyDataset(mnist.validation.images, mnist.validation.labels)
import matplotlib.pyplot as plt
# Visualize one sample
sample = np.reshape(train.X[5], (28, 28))
plt.imshow(sample)
plt.show()
import tensorflow as tf
data_small = np.random.random((4,5))
label_small = np.random.random((4,))
dataset = tf.data.Dataset.from_tensor_slices((data_small, label_small))
print ("Data\n")
print (data_small)
print ("\n Labels")
print (label_small)
iterator = dataset.make_one_shot_iterator() # iterator
next_element = iterator.get_next()
numpy_data = np.zeros((4, 5))
numpy_label = np.zeros((4,))
sess = tf.Session() # tensorflow session
for i in range(4):
data_, label_ = sess.run(next_element) # data_ contains the data and label_ contains the labels that we fed in the previous step
numpy_data[i, :] = data_
numpy_label[i] = label_
print ("Numpy Data")
print(numpy_data)
print ("\n Numpy Label")
print(numpy_label)
dataset_ = NumpyDataset(numpy_data, numpy_label) # convert to NumpyDataset
dataset_.X # printing just to check if the data is same!!
iterator_ = dataset_.make_iterator() # Using make_iterator for converting NumpyDataset to tf.data
next_element_ = iterator_.get_next()
sess = tf.Session() # tensorflow session
data_and_labels = sess.run(next_element_) # data_ contains the data and label_ contains the labels that we fed in the previous step
print ("Numpy Data")
print(data_and_labels[0]) # Data in the first index
print ("\n Numpy Label")
print(data_and_labels[1]) # Labels in the second index
!wget https://raw.githubusercontent.com/deepchem/deepchem/master/deepchem/models/tests/example.csv
import os
current_dir=os.path.dirname(os.path.realpath('__file__'))
input_data=os.path.join(current_dir,'example.csv')
import deepchem as dc
tasks=['log-solubility']
featurizer=dc.feat.CircularFingerprint(size=1024)
loader = dc.data.CSVLoader(tasks=tasks, smiles_field="smiles",featurizer=featurizer)
dataset=loader.featurize(input_data)
from deepchem.splits.splitters import IndexSplitter
splitter=IndexSplitter()
train_data,valid_data,test_data=splitter.split(dataset)
train_data=[i for i in train_data]
valid_data=[i for i in valid_data]
test_data=[i for i in test_data]
len(train_data),len(valid_data),len(test_data)
train_data,valid_data,test_data=splitter.split(dataset,frac_train=0.7,frac_valid=0.2,frac_test=0.1)
train_data=[i for i in train_data]
valid_data=[i for i in valid_data]
test_data=[i for i in test_data]
len(train_data),len(valid_data),len(test_data)
!wget https://raw.githubusercontent.com/deepchem/deepchem/master/deepchem/models/tests/user_specified_example.csv
from deepchem.splits.splitters import SpecifiedSplitter
current_dir=os.path.dirname(os.path.realpath('__file__'))
input_file=os.path.join(current_dir, 'user_specified_example.csv')
tasks=['log-solubility']
featurizer=dc.feat.CircularFingerprint(size=1024)
loader = dc.data.CSVLoader(tasks=tasks, smiles_field="smiles",featurizer=featurizer)
dataset=loader.featurize(input_file)
split_field='split'
splitter=SpecifiedSplitter(input_file,split_field)
train_data,valid_data,test_data=splitter.split(dataset)
train_data,valid_data,test_data
from deepchem.splits.splitters import IndiceSplitter
splitter=IndiceSplitter(valid_indices=[7],test_indices=[9])
splitter.split(dataset)
!wget https://raw.githubusercontent.com/deepchem/deepchem/master/deepchem/models/tests/example.csv
# This is a workaround...
def load_solubility_data():
    """Loads solubility dataset"""
featurizer = dc.feat.CircularFingerprint(size=1024)
tasks = ["log-solubility"]
task_type = "regression"
loader = dc.data.CSVLoader(
tasks=tasks, smiles_field="smiles", featurizer=featurizer)
return loader.featurize("example.csv")
from deepchem.splits.splitters import RandomGroupSplitter
groups = [0, 4, 1, 2, 3, 7, 0, 3, 1, 0]
solubility_dataset=load_solubility_data()
splitter=RandomGroupSplitter(groups=groups)
train_idxs, valid_idxs, test_idxs = splitter.split(solubility_dataset)
train_idxs,valid_idxs,test_idxs
train_data=[]
for i in range(len(train_idxs)):
train_data.append(groups[train_idxs[i]])
valid_data=[]
for i in range(len(valid_idxs)):
valid_data.append(groups[valid_idxs[i]])
test_data=[]
for i in range(len(test_idxs)):
test_data.append(groups[test_idxs[i]])
print("Groups present in the training data =",train_data)
print("Groups present in the validation data = ",valid_data)
print("Groups present in the testing data = ", test_data)
from deepchem.splits.splitters import ScaffoldSplitter
splitter=ScaffoldSplitter()
solubility_dataset=load_solubility_data()
train_data,valid_data,test_data = splitter.split(solubility_dataset,frac_train=0.7,frac_valid=0.2,frac_test=0.1)
len(train_data),len(valid_data),len(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear
Step2: Sigmoid
Step3: Tanh
Step4: Rectified Linear Unit (ReLU)
Step5: Leaky ReLU
Step6: Exponential Linear Unit (eLU)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as wg
from ipywidgets import interactive, fixed
%matplotlib inline
def plot_interactive(w, b, func, ylim=fixed((0, 1)), show_der=False):
plt.figure(0)
x = np.linspace(-10, 10, num=1000)
z = w*x + b
y = func(z)
plt.plot(x, y, color='blue')
if show_der:
der = func(z, derivative=True)
y_der_z = der
y_der_x = w*der
plt.plot(x, y_der_z, color='red')
plt.plot(x, y_der_x, color='green')
plt.xlim(-10, 10)
plt.ylim(ylim[0], ylim[1])
plt.show()
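# The activation functions referenced below are not defined anywhere in this notebook,
# so here is a minimal sketch of them (an assumption: each takes z plus an optional
# `derivative` flag, which matches how plot_interactive calls them).
def linear(z, derivative=False):
    return np.ones_like(z) if derivative else z

def sigmoid(z, derivative=False):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s) if derivative else s

def tanh(z, derivative=False):
    t = np.tanh(z)
    return 1.0 - t**2 if derivative else t

def relu(z, derivative=False):
    return np.where(z > 0, 1.0, 0.0) if derivative else np.maximum(0.0, z)

def leaky_relu(z, alpha=0.1, derivative=False):
    return np.where(z > 0, 1.0, alpha) if derivative else np.where(z > 0, z, alpha * z)

def elu(z, alpha=1.0, derivative=False):
    return np.where(z > 0, 1.0, alpha * np.exp(z)) if derivative else np.where(z > 0, z, alpha * (np.exp(z) - 1.0))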
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(linear), ylim=fixed((-10, 10)))
interactive_plot
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(sigmoid))
interactive_plot
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(tanh), ylim=fixed((-2, 2)))
interactive_plot
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(relu), ylim=fixed((-1, 10)))
interactive_plot
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(leaky_relu), ylim=fixed((-1, 10)))
interactive_plot
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(elu), ylim=fixed((-2, 10)))
interactive_plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download Data - MNIST
Step3: SystemML Softmax Model
Step5: 2. Compute Test Accuracy
Step6: 3. Extract Model Into Spark DataFrames For Future Use
|
<ASSISTANT_TASK:>
Python Code:
# Create a SystemML MLContext object
from systemml import MLContext, dml
ml = MLContext(sc)
%%sh
mkdir -p data/mnist/
cd data/mnist/
curl -O http://pjreddie.com/media/files/mnist_train.csv
curl -O http://pjreddie.com/media/files/mnist_test.csv
training = """
source("mnist_softmax.dml") as mnist_softmax
# Read training data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
images = data[,2:ncol(data)]
labels = data[,1]
# Scale images to [0,1], and one-hot encode the labels
images = images / 255.0
labels = table(seq(1, n), labels+1, n, 10)
# Split into training (55,000 examples) and validation (5,000 examples)
X = images[5001:nrow(images),]
X_val = images[1:5000,]
y = labels[5001:nrow(images),]
y_val = labels[1:5000,]
# Train
[W, b] = mnist_softmax::train(X, y, X_val, y_val)
"""
script = dml(training).input("$data", "data/mnist/mnist_train.csv").output("W", "b")
W, b = ml.execute(script).get("W", "b")
testing = """
source("mnist_softmax.dml") as mnist_softmax
# Read test data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
X_test = data[,2:ncol(data)]
y_test = data[,1]
# Scale images to [0,1], and one-hot encode the labels
X_test = X_test / 255.0
y_test = table(seq(1, n), y_test+1, n, 10)
# Eval on test set
probs = mnist_softmax::predict(X_test, W, b)
[loss, accuracy] = mnist_softmax::eval(probs, y_test)
print("Test Accuracy: " + accuracy)
script = dml(testing).input("$data", "data/mnist/mnist_test.csv", W=W, b=b)
ml.execute(script)
W_df = W.toDF()
b_df = b.toDF()
W_df, b_df
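# The extracted coefficients can now be persisted for later scoring runs, e.g. (illustrative):
# W_df.write.csv("mnist_softmax_W", mode="overwrite")
# b_df.write.csv("mnist_softmax_b", mode="overwrite")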
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just check that the analytical solution coincides with the solution of the ODE for the variance
Step2: Test of different SME solvers
Step3: Deterministic part depends on time
Step4: Both d1 and d2 time-dependent
Step5: Multiple sc_ops with time dependence
Step6: Versions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_formats = ['svg']
from qutip import *
from qutip.ui.progressbar import BaseProgressBar
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
y_sse = None
import time
def arccoth(x):
return 0.5*np.log((1.+x)/(x-1.))
############ parameters #############
th = 0.1 # Interaction parameter
alpha = np.cos(th)
beta = np.sin(th)
gamma = 1.
def gammaf(t):
return 0.25+t/12+t*t/6
def f_gamma(t,*args):
return (0.25+t/12+t*t/6)**(0.5)
################# Solution of the differential equation for the variance Vc ####################
T = 6.
N_store = int(20*T+1)
tlist = np.linspace(0,T,N_store)
y0 = 0.5
def func(y, t):
return -(gammaf(t) - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gammaf(t)
y_td = odeint(func, y0, tlist)
y_td = y_td.ravel()
def func(y, t):
return -(gamma - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gamma
y = odeint(func, y0, tlist)
############ Exact steady state solution for Vc #########################
Vc = (alpha*beta - gamma + np.sqrt((gamma-alpha*beta)**2 + 4*gamma*alpha**2))/(4*alpha**2)
#### Analytic solution
A = (gamma**2 + alpha**2 * (beta**2 + 4*gamma) - 2*alpha*beta*gamma)**0.5
B = arccoth((-4*alpha**2*y0 + alpha*beta - gamma)/A)
y_an = (alpha*beta - gamma + A / np.tanh(0.5*A*tlist - B))/(4*alpha**2)
f, (ax, ax2) = plt.subplots(2, 1, sharex=True)
ax.set_title('Variance as a function of time')
ax.plot(tlist,y)
ax.plot(tlist,Vc*np.ones_like(tlist))
ax.plot(tlist,y_an)
ax.set_ylim(0,0.5)
ax2.set_title('Deviation of odeint from analytic solution')
ax2.set_xlabel('t')
ax2.set_ylabel(r'$\epsilon$')
ax2.plot(tlist,y_an - y.T[0]);
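# Numerical check (illustrative): largest deviation between odeint and the closed-form solution.
print('max |y_odeint - y_analytic| =', np.max(np.abs(y_an - y.ravel())))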
####################### Model ###########################
N = 30 # number of Fock states
Id = qeye(N)
a = destroy(N)
s = 0.5*((alpha+beta)*a + (alpha-beta)*a.dag())
x = (a + a.dag())/np.sqrt(2)
H = Id
c_op = [np.sqrt(gamma)*a]
c_op_td = [[a,f_gamma]]
sc_op = [s]
e_op = [x, x*x]
rho0 = fock_dm(N,0) # initial vacuum state
#sc_len=1 # one stochastic operator
############## time steps and trajectories ###################
ntraj = 1 #100 # number of trajectories
T = 6. # final time
N_store = int(20*T+1) # number of time steps for which we save the expectation values/density matrix
tlist = np.linspace(0,T,N_store)
ddt = (tlist[1]-tlist[0])
Nsubs = (10*np.logspace(0,1,10)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # step size is doubled after each evaluation
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
ntraj = 1
def run_cte_cte(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op, sc_op, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_an - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_cte_cte(**kw):
start = time.time()
y = run_cte_cte(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_cte_cte = []
stats_cte_cte.append(get_stats_cte_cte(solver='euler-maruyama'))
stats_cte_cte.append(get_stats_cte_cte(solver='platen'))
stats_cte_cte.append(get_stats_cte_cte(solver='pred-corr'))
stats_cte_cte.append(get_stats_cte_cte(solver='milstein'))
stats_cte_cte.append(get_stats_cte_cte(solver='milstein-imp'))
stats_cte_cte.append(get_stats_cte_cte(solver='pred-corr-2'))
stats_cte_cte.append(get_stats_cte_cte(solver='explicit1.5'))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor1.5"))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor1.5-imp", args={"tol":1e-8}))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor2.0"))
stats_cte_cte.append(get_stats_cte_cte(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_cte_cte):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.2*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
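# A quick textual summary of the convergence study above (illustrative sketch):
# each entry of stats_cte_cte is (errors, solver_name, fitted_order, runtime_in_seconds),
# so the fitted strong-convergence order and the timing can be printed per solver.
for errors, name, order, runtime in stats_cte_cte:
    print("{:>16s}  fitted order: {:5.2f}   runtime: {:8.1f} s".format(name, order, runtime))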
ntraj = 1
def run_(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_td - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_d1(**kw):
start = time.time()
y = run_(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stat_d1 = []
stat_d1.append(get_stats_d1(solver='euler-maruyama'))
stat_d1.append(get_stats_d1(solver='platen'))
stat_d1.append(get_stats_d1(solver='pc-euler'))
stat_d1.append(get_stats_d1(solver='milstein'))
stat_d1.append(get_stats_d1(solver='milstein-imp'))
stat_d1.append(get_stats_d1(solver='pc-euler-2'))
stat_d1.append(get_stats_d1(solver='explicit1.5'))
stat_d1.append(get_stats_d1(solver="taylor1.5"))
stat_d1.append(get_stats_d1(solver="taylor1.5-imp", args={"tol":1e-8}))
stat_d1.append(get_stats_d1(solver="taylor2.0"))
stat_d1.append(get_stats_d1(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stat_d1):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.12*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.7*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
def f(t, args):
return 0.5+0.25*t-t*t*0.125
Nsubs = (15*np.logspace(0,0.8,10)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # the step size shrinks as the number of sub-steps grows (log-spaced)
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
sc_op_td = [[sc_op[0],f]]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op_td, e_op, nsubsteps=1000, method="homodyne",solver="taylor15")
y_btd = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_btd)
ntraj = 1
def run_(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_btd - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_d2(**kw):
start = time.time()
y = run_(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_d2 = []
stats_d2.append(get_stats_d2(solver='euler-maruyama'))
stats_d2.append(get_stats_d2(solver='platen'))
stats_d2.append(get_stats_d2(solver='pc-euler'))
stats_d2.append(get_stats_d2(solver='milstein'))
stats_d2.append(get_stats_d2(solver='milstein-imp'))
stats_d2.append(get_stats_d2(solver='pc-euler-2'))
stats_d2.append(get_stats_d2(solver='explicit1.5'))
stats_d2.append(get_stats_d2(solver='taylor1.5'))
stats_d2.append(get_stats_d2(solver='taylor1.5-imp', args={"tol":2e-9}))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_d2):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
def f(t, args):
return 0.5+0.25*t-t*t*0.125
def g(t, args):
return 0.25+0.25*t-t*t*0.125
Nsubs = (20*np.logspace(0,0.6,8)).astype(int)
stepsizes = [ddt/j for j in Nsubs] # the step size shrinks as the number of sub-steps grows (log-spaced)
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
sc_op2_td = [[sc_op[0],f],[sc_op[0],g]]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op2_td, e_op, nsubsteps=1000, method="homodyne",solver=152)
y_btd2 = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_btd2)
ntraj = 1
def run_multi(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for j in range(0,ntraj):
for jj in range(0,Nt):
Nsub = Nsubs[jj]
sol = smesolve(H, rho0, tlist, c_op_td, sc_op2_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_btd2 - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_multi(**kw):
start = time.time()
y = run_multi(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_multi = []
stats_multi.append(get_stats_multi(solver='euler-maruyama'))
stats_multi.append(get_stats_multi(solver='platen'))
stats_multi.append(get_stats_multi(solver='pc-euler'))
stats_multi.append(get_stats_multi(solver='milstein'))
stats_multi.append(get_stats_multi(solver='milstein-imp'))
stats_multi.append(get_stats_multi(solver='pc-euler-2'))
stats_multi.append(get_stats_multi(solver='explicit1.5'))
stats_multi.append(get_stats_multi(solver="taylor1.5"))
stats_multi.append(get_stats_multi(solver="taylor1.5-imp"))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_multi):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.2*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Next, we build the label vocabulary, which maps every label in the training data to an index.
Step3: Model
Step4: Training
Step5: Evaluation
|
<ASSISTANT_TASK:>
Python Code:
from multilabel import EATINGMEAT_BECAUSE_MAP, EATINGMEAT_BUT_MAP, JUNKFOOD_BECAUSE_MAP, JUNKFOOD_BUT_MAP
LABEL_MAP = JUNKFOOD_BUT_MAP
BERT_MODEL = 'bert-base-uncased'
BATCH_SIZE = 16 if "base" in BERT_MODEL else 2
GRADIENT_ACCUMULATION_STEPS = 1 if "base" in BERT_MODEL else 8
MAX_SEQ_LENGTH = 100
PREFIX = "junkfood_but"
import ndjson
import glob
from collections import Counter
train_file = f"../data/interim/{PREFIX}_train_withprompt.ndjson"
synth_files = glob.glob(f"../data/interim/{PREFIX}_train_withprompt_allsynth.ndjson")
dev_file = f"../data/interim/{PREFIX}_dev_withprompt.ndjson"
test_file = f"../data/interim/{PREFIX}_test_withprompt.ndjson"
with open(train_file) as i:
train_data = ndjson.load(i)
synth_data = []
for f in synth_files:
with open(f) as i:
synth_data += ndjson.load(i)
with open(dev_file) as i:
dev_data = ndjson.load(i)
with open(test_file) as i:
test_data = ndjson.load(i)
labels = Counter([item["label"] for item in train_data])
print(labels)
print(len(synth_data))
def map_to_multilabel(items):
return [{"text": item["text"], "label": LABEL_MAP[item["label"]]} for item in items]
train_data = map_to_multilabel(train_data)
dev_data = map_to_multilabel(dev_data)
synth_data = map_to_multilabel(synth_data)
test_data = map_to_multilabel(test_data)
import sys
sys.path.append('../')
from quillnlp.models.bert.preprocessing import preprocess, create_label_vocabulary
label2idx = create_label_vocabulary(train_data)
idx2label = {v:k for k,v in label2idx.items()}
target_names = [idx2label[s] for s in range(len(idx2label))]
MAX_SEQ_LENGTH = 100
train_dataloader = preprocess(train_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE)
dev_dataloader = preprocess(dev_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE)
test_dataloader = preprocess(test_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE, shuffle=False)
import sys
sys.path.append('../')
import torch
from quillnlp.models.bert.models import get_multilabel_bert_classifier
BERT_MODEL = 'bert-base-uncased'
device = "cuda" if torch.cuda.is_available() else "cpu"
model = get_multilabel_bert_classifier(BERT_MODEL, len(label2idx), device=device)
from quillnlp.models.bert.train import train
batch_size = 16 if "base" in BERT_MODEL else 2
gradient_accumulation_steps = 1 if "base" in BERT_MODEL else 8
output_model_file = train(model, train_dataloader, dev_dataloader, batch_size, gradient_accumulation_steps, device)
from quillnlp.models.bert.train import evaluate
from sklearn.metrics import precision_recall_fscore_support, classification_report
print("Loading model from", output_model_file)
device="cpu"
model = get_multilabel_bert_classifier(BERT_MODEL, len(label2idx), model_file=output_model_file, device=device)
model.eval()
_, test_correct, test_predicted = evaluate(model, test_dataloader, device)
print("Test performance:", precision_recall_fscore_support(test_correct, test_predicted, average="micro"))
print(classification_report(test_correct, test_predicted, target_names=target_names))
all_correct = 0
fp, fn, tp, tn = 0, 0, 0, 0
for c, p in zip(test_correct, test_predicted):
if sum(c == p) == len(c):
all_correct +=1
    for ci, pi in zip(c, p):
        if pi == 1 and ci == 1:
            tp += 1
        elif pi == 1 and ci == 0:
            fp += 1
        elif pi == 0 and ci == 1:
            fn += 1
        else:
            tn += 1
precision = tp/(tp+fp)
recall = tp/(tp+fn)
print("P:", precision)
print("R:", recall)
print("A:", all_correct/len(test_correct))
for item, predicted, correct in zip(test_data, test_predicted, test_correct):
correct_labels = [idx2label[i] for i, l in enumerate(correct) if l == 1]
predicted_labels = [idx2label[i] for i, l in enumerate(predicted) if l == 1]
print("{}#{}#{}".format(item["text"], ";".join(correct_labels), ";".join(predicted_labels)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To display a benzene molecule we need at least two pieces of information: the atomic coordinates and the topology (atom types and bonds).
Step2: We can pass those to the class MolecularViewer and call a rendering method such as ball_and_sticks (used below) or lines (for a plain wireframe)
|
<ASSISTANT_TASK:>
Python Code:
from chemview import MolecularViewer
import numpy as np
coordinates = np.array([[0.00, 0.13, 0.00], [0.12, 0.07, 0.00], [0.12,-0.07, 0.00],
[0.00,-0.14, 0.00], [-0.12,-0.07, 0.00],[-0.12, 0.07, 0.00],
[ 0.00, 0.24, 0.00], [ 0.21, 0.12, 0.00], [ 0.21,-0.12, 0.00],
[ 0.00,-0.24, 0.00], [-0.21,-0.12, 0.00],[-0.21, 0.12, 0.00]])
atomic_types = ['C', 'C', 'C', 'C', 'C', 'C', 'H', 'H', 'H', 'H', 'H', 'H']
bonds = [(0, 6), (1, 7), (2, 8), (3, 9), (4, 10), (5, 11),
(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
mv = MolecularViewer(coordinates, topology={'atom_types': atomic_types,
'bonds': bonds})
mv.ball_and_sticks()
mv
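# As mentioned in the description, the same data can also be rendered as a plain
# wireframe. A small sketch (assuming chemview's lines() representation):
mv_wireframe = MolecularViewer(coordinates, topology={'atom_types': atomic_types,
                                                      'bonds': bonds})
mv_wireframe.lines()
mv_wireframe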
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting Started
Step2: <div align="justify">Now that you can access the data, you can use a number of functions which can help you analyse it. You can find these functions in the libraries at the top of the page. Try to make a table of some of the information within your data file so that you can get a feel of the typical values for data in the set. Understanding the range of values for different variables will help with plotting graphs.</div>
Step5: Invariant mass reconstruction
Step7: Momentum is a vector quantity, it has x,y, and z components. Try calculating the magnitude of the momentum of the first kaon candidate and plotting a histogram of this, you'll need the H1_PX, H1_PY and H1_PZ variables.
Step9: Hints
Step12: Hints
Step17: Adding features of the $B$ meson
Step19: You should have a graph that sharply peaks at the mass of the B<sup>+</sup> meson. The mass of the B<sup>+</sup> and B<sup>-</sup> meson are the same. Check that the peak of your graph is at the known mass of the B meson. Congratulations!
Step20: Make histograms of the probability of a final state particle being a kaon or a pion.
Step23: Now calculate the invariant mass of the B meson for the real data and plot a histogram of this.
Step24: Additional exercise
Step25: Now count the numbers of events of each of the two types (N<sup>+</sup> and N<sup>-</sup>). Also calculate the difference between these two numbers.
Step26: In order to calculate the Asymmetry, you can make use of the formula $A = \frac{N^{-} - N^{+}}{N^{-} + N^{+}}$
Step27: Hint
Step29: Congratulations! You have performed your first search for a matter anti-matter difference.
Step30: Hints
Step31: <div align="justify">While drawing the Dalitz plot for the real data, label the axes accordingly. Compare the Dalitz plots of the real data with the one for the simulation.
Step32: Hint
Step33: Two body resonances
Step34: Observing a large asymmetry in some regions of the plot does not necessarily mean you have observed CP violation. If there are very few events in that region of the plot the uncertainty on that large asymmetry may be large. Hence, the value may still be compatible with zero.
Step35: Observing CP violation
|
<ASSISTANT_TASK:>
Python Code:
# Start the Spark Session
# When Using Spark on CERN SWAN, use this and do not select to connect to a CERN Spark cluster
# If you want to use a cluster, please copy the data to a cluster filesystem first
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("LHCb opendata")
.master("local[*]")
.config("spark.driver.memory", "2g")
.getOrCreate()
)
# Test that Spark SQL works
sql = spark.sql
sql("select 'Hello World!'").show()
# Let us now load the simulated data
# as detailed above you should have downloaded locally the simulation data as detailed at
# https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
# this works from SWAN and CERN machines with eos mounted
path = "/eos/project/s/sparkdltrigger/public/LHCb_opendata/"
# This reads the first dataset into a Spark DataFrame
sim_data_df = spark.read.parquet(path + "PhaseSpaceSimulation.parquet")
# This registers the Spark DataFrame as a temporary view and will allow the use of SQL, used later in the notebook
sim_data_df.createOrReplaceTempView("sim_data")
sim_data_df.cache() # it is a small dataset (~2 MB) so we can afford to cache it
sim_data_df.count()
# Display the first 10 rows in the sim_data_df Spark DataFrame
sim_data_df.limit(10).toPandas() # use pandas only for pretty visualization
# Print the schema of the simulation data
sim_data_df.printSchema() # the schema of the root file
%pylab inline
pylab.rcParams['figure.figsize'] = (12.0, 8.0)
# Plot a histogram of the distribution of the H1_PX variable, using Pandas
# This is a basic solution that moves all the data from the Spark DataFrame
# into a Python Pandas DataFrame. It's OK for small data sets, but it has scalability issues
h1px_data = sim_data_df.select("H1_PX").toPandas() # select H1_PX data and moves it to Pandas
h1px_data.plot.hist(bins=31, range=[-150000, 150000], title="Histogram - distribution of H1_PX, simulation data")
xlabel('H1_PX (MeV/c)')
ylabel('Count');
# This example computes and plots a histogram of the H1_PX data, similarly to the previous cell
# The notable difference is that Spark SQL is used to compute the aggregations and only the final result
# is returned and transformed into a Pandas DataFrame, just for plotting.
# This version can scale on a cluster for large datasets, while the previous version requires fetching all data into Pandas
histogram_h1px_df = sql("""
select round(H1_PX/10000,0) * 10000 as bin, count(1) as count
from sim_data
group by round(H1_PX/10000,0) order by 1
""")
histogram_h1px_pandas = histogram_h1px_df.toPandas()
histogram_h1px_pandas.plot.bar(x='bin', y='count', title="Histogram - distribution of H1_PX, simulation data,")
xlabel('H1_PX (MeV/c)')
ylabel('Count');
# This is the same query used for the histogram displayed above.
# It is here just to show the numeric values of each of the bins
sql("""
select round(H1_PX/10000,0) * 10000 as bin, count(1) as count
from sim_data
group by round(H1_PX/10000,0) order by 1
""").show(50)
# Selects the vector components of the momentum of H1 and computes the magnitude of the vector
# Only consider data where H1_PROBK = 1.0 (note, this could be relaxed to H1_PROBK >= <some threshold value>)
p_tot = sql("""
select H1_PX, H1_PY, H1_PZ, round(sqrt(H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ),2) H1_PTOT
from sim_data
where H1_PROBK = 1.0""")
p_tot.show(5) # displays the first 5 rows of the result
# calculate a variable for the magnitude of the momentum of the first kaon
# plot a histogram of this variable
h1ptot_data_plot = p_tot.select("H1_PTOT").toPandas().plot.hist(bins=31, range=[0, 550000])
xlabel('H1_PTOT (MeV/c)')
ylabel('Count');
# Computes the Energy of the kaon candidates using the formula of special relativity
# that is including the magnitude of the momentum and invariant mass
kcharged_mass = 493.677
Energy_H1 = spark.sql(f"""
select round(sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ),2) H1_Energy
from sim_data
where H1_PROBK = 1.0
""")
Energy_H1.show(5)
# Plots a histogram of the energy of the first kaon candidate
Energy_H1_data_plot = Energy_H1.toPandas().plot.hist(bins=31, range=[0, 550000])
xlabel('H1_Energy (MeV)')
ylabel('Count');
# calculate variables for the energy of the other two kaons
kcharged_mass = 493.677
Energy_H2 = spark.sql(f"""
select sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) H2_Energy
from sim_data
where H2_PROBK = 1.0
""")
Energy_H3 = spark.sql(f"""
select sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) H3_Energy
from sim_data
where H3_PROBK = 1.0
""")
Energy_H2_data_plot = Energy_H2.toPandas().plot.hist(bins=31, range=[0, 550000])
xlabel('H2_Energy (MeV)')
ylabel('Count')
Energy_H3_data_plot = Energy_H3.toPandas().plot.hist(bins=31, range=[0, 550000])
xlabel('H3_Energy (MeV)')
ylabel('Count');
# calculate the energy of the B meson from the sum of the energies of the kaons
sum_kaons_energy = sql(f"""
select
  sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) as Tot_Energy
from sim_data
where H1_ProbK = 1.0 and H2_ProbK = 1.0 and H3_ProbK = 1.0
""")
sum_kaons_energy.show(10)
# Calculate the momentum components of the B meson
# This is a vector sum (i.e. we sum each vector component of the kaons)
sum_kaons_momentum = sql("""
select
  H1_PX + H2_PX + H3_PX as PX_Tot,
  H1_PY + H2_PY + H3_PY as PY_Tot,
  H1_PZ + H2_PZ + H3_PZ as PZ_Tot
from sim_data
where H1_ProbK = 1.0 and H2_ProbK = 1.0 and H3_ProbK = 1.0""")
sum_kaons_momentum.show(10)
# Calculate the momentum components of the B meson
# This computes the vector magnitude of the vector computed above
# we use the spark sql declarative interface as opposed to writing an SQL statement for this
# the two approaches are equivalent in Spark
sum_kaons_momentum_magnitude = sum_kaons_momentum.selectExpr("sqrt(PX_Tot*PX_Tot + PY_Tot*PY_Tot + PZ_Tot*PZ_Tot) as P_Tot")
sum_kaons_momentum_magnitude.show(10)
# calculate the B meson invariant mass
# plot the B meson invariant mass in a histogram
b_meson_4momentum = sql(f"""
select
  sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) as Tot_Energy,
  H1_PX + H2_PX + H3_PX as PX_Tot,
  H1_PY + H2_PY + H3_PY as PY_Tot,
  H1_PZ + H2_PZ + H3_PZ as PZ_Tot
from sim_data
where H1_ProbK = 1.0 and H2_ProbK = 1.0 and H3_ProbK = 1.0
""")
b_meson_4momentum.show(5)
b_meson_invariant_mass = b_meson_4momentum.selectExpr(
    "sqrt(Tot_Energy*Tot_Energy - (PX_Tot*PX_Tot + PY_Tot*PY_Tot + PZ_Tot*PZ_Tot)) as invariant_mass")
b_meson_invariant_mass.show(5)
b_meson_invariant_mass.toPandas().plot.hist(bins=101, range=[4000, 6000],
title="Histogram - distribution of B meson invariant mass, simulation data")
xlabel('b_meson_invariant_mass (MeV)')
ylabel('Count');
# Note the mass of the charged B meson is expected to be 5279.29±0.15 and this is consistenet with the
# the peak in found in the data plotted here
# Create the DataFrames Physics data
# Only metadata is read at this stage (reading into Spark DatFrames is lazily executed)
# this works from SWAN and CERN machines with eos mounted
path = "/eos/project/s/sparkdltrigger/public/LHCb_opendata/"
B2HHH_MagnetDown_df = spark.read.parquet(path + "B2HHH_MagnetDown.parquet")
B2HHH_MagnetUp_df = spark.read.parquet(path + "B2HHH_MagnetUp.parquet")
# Put all the data together
B2HHH_AllData_df = B2HHH_MagnetDown_df.union(B2HHH_MagnetUp_df)
# This defines the cut criteria
# You can experiment with different criteria
preselection = """H1_ProbPi < 0.5 and H2_ProbPi < 0.5 and H3_ProbPi < 0.5
and H1_ProbK > 0.5 and H2_ProbK > 0.5 and H3_ProbK > 0.5
and H1_isMuon = 0 and H2_isMuon = 0 and H3_isMuon = 0"""
# Apply cuts to the data as a filter
B2HHH_AllData_WithCuts_df = B2HHH_AllData_df.filter(preselection)
# This *may take a few minutes* as data will be read at this stage
B2HHH_AllData_WithCuts_df.cache() # flags the DataFrame for caching, this is useful for performance
B2HHH_AllData_WithCuts_df.count() # triggers an action, data will be read at this stage
# This registers the dataframe with cuts (filters) as a view for later use with SQL
B2HHH_AllData_WithCuts_df.createOrReplaceTempView("B2HHH_AllData_WithCuts")
# Displays a sample of the data
B2HHH_AllData_WithCuts_df.limit(10).toPandas()
# plot the probability that a final state particle is a kaon
B2HHH_AllData_WithCuts_df.select("H1_ProbK", "H2_ProbK", "H3_ProbK").toPandas().plot.hist(bins=101, range=[0.0, 1.0])
xlabel('Probability value that the particle is a kaon')
ylabel('Count');
# plot the probability that a final state particle is a pion
B2HHH_AllData_WithCuts_df.select("H1_ProbPi", "H2_ProbPi", "H3_ProbPi").toPandas().plot.hist(bins=101, range=[0.0, 1.0])
xlabel('Probability value that the particle is a pion')
ylabel('Count');
# calculate the B meson invariant mass
# plot the B meson invariant mass in a histogram
b_meson_4momentum_withcuts = sql(f"""
select
  sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) as Tot_Energy,
  H1_PX + H2_PX + H3_PX as PX_Tot,
  H1_PY + H2_PY + H3_PY as PY_Tot,
  H1_PZ + H2_PZ + H3_PZ as PZ_Tot
from B2HHH_AllData_WithCuts
""")
b_meson_4momentum_withcuts.createOrReplaceTempView("b_meson_4momentum_mycuts_read_data")
b_meson_4momentum_withcuts.show(5)
b_meson_invariant_mass_withcuts = b_meson_4momentum_withcuts.selectExpr(
    "sqrt(Tot_Energy*Tot_Energy - (PX_Tot*PX_Tot + PY_Tot*PY_Tot + PZ_Tot*PZ_Tot)) as invariant_mass")
b_meson_invariant_mass_withcuts.show(5)
# draw a histogram for the B meson mass in the real data
b_meson_invariant_mass_withcuts.toPandas().plot.hist(bins=101, range=[4000, 6000],
title="Histogram - distribution of B meson invariant mass, real data")
xlabel('b_meson_invariant_mass (MeV)')
ylabel('Count');
# make a variable for the charge of the B mesons
B_charge_df = B2HHH_AllData_WithCuts_df.selectExpr("H1_charge + H2_charge + H3_charge as B_Charge")
# make variables for the numbers of positive and negative B mesons
# I am using the declarative API of Spark SQl for this. If you want to use SQL you can do the following:
# B_charge_df.createOrReplaceTempView("B_charge_table")
# sql("select B_Charge, count(*) from B_charge_table group by B_Charge").show(5)
B_charge_df.groupBy("B_Charge").count().show()
# calculate the value of the asymmetry, by using the formula above, and then print it
N_plus = 12390.0
N_minus = 11505.0
A = (N_minus - N_plus) / (N_minus + N_plus)
A
# calculate the statistical significance of your result and print it
sqrt((1 - A*A)/(N_minus + N_plus))
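# Putting the two numbers together (illustrative sketch; np and sqrt come from the
# %pylab import earlier in the notebook): report the asymmetry, its statistical
# uncertainty and the significance of the deviation from zero.
sigma_A = np.sqrt((1 - A*A)/(N_minus + N_plus))
print("A = {:.4f} +/- {:.4f}  ({:.1f} sigma)".format(A, sigma_A, abs(A)/sigma_A))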
# calculate the invariant masses for each possible hadron pair combination
two_body_resonances_df = sql(f"""
select
  sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) as Energy_K1_K2,
  sqrt((H1_PX + H2_PX)*(H1_PX + H2_PX) + (H1_PY + H2_PY)*(H1_PY + H2_PY)
       + (H1_PZ + H2_PZ)*(H1_PZ + H2_PZ)) as P_K1_K2,
  sqrt({kcharged_mass*kcharged_mass} + H1_PX*H1_PX + H1_PY*H1_PY + H1_PZ*H1_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) as Energy_K1_K3,
  sqrt((H1_PX + H3_PX)*(H1_PX + H3_PX) + (H1_PY + H3_PY)*(H1_PY + H3_PY)
       + (H1_PZ + H3_PZ)*(H1_PZ + H3_PZ)) as P_K1_K3,
  sqrt({kcharged_mass*kcharged_mass} + H2_PX*H2_PX + H2_PY*H2_PY + H2_PZ*H2_PZ) +
  sqrt({kcharged_mass*kcharged_mass} + H3_PX*H3_PX + H3_PY*H3_PY + H3_PZ*H3_PZ) as Energy_K2_K3,
  sqrt((H2_PX + H3_PX)*(H2_PX + H3_PX) + (H2_PY + H3_PY)*(H2_PY + H3_PY)
       + (H2_PZ + H3_PZ)*(H2_PZ + H3_PZ)) as P_K2_K3,
  H1_Charge, H2_Charge, H3_Charge
from B2HHH_AllData_WithCuts
""")
two_body_resonances_df.limit(5).toPandas()
# Computes 2-body resonance invariant mass from two_body_resonances_df
two_body_resonances_invariant_mass_GeV_df = two_body_resonances_df.selectExpr(
"sqrt(Energy_K1_K2*Energy_K1_K2 - P_K1_K2*P_K1_K2) / 1000.0 as Mass_K1_K2",
"sqrt(Energy_K1_K3*Energy_K1_K3 - P_K1_K3*P_K1_K3) / 1000.0 as Mass_K1_K3",
"sqrt(Energy_K2_K3*Energy_K2_K3 - P_K2_K3*P_K2_K3) / 1000.0 as Mass_K2_K3",
"H1_Charge", "H2_Charge", "H3_Charge")
two_body_resonances_invariant_mass_GeV_df.show(5)
# Two_body_resonances_invariant_mass_GeV_df.filter("H1_Charge * H2_Charge = -1").show()
two_body_resonances_invariant_mass_GeV_df.createOrReplaceTempView("t1")
sql("select H2_Charge * H3_Charge, count(*) from t1 group by H2_Charge * H3_Charge").show()
# plot the invariant mass for one of these combinations
two_body_resonances_invariant_mass_GeV_df.filter("H1_Charge + H2_Charge = 0").select("Mass_K1_K2") \
.toPandas().plot.hist(bins=101, range=[0, 10],
title="Histogram - distribution of K1_K2 resonance invariant mass, simulation data")
xlabel('b_meson_invariant_mass (GeV)')
ylabel('Count');
two_body_resonances_invariant_mass_GeV_df.filter("H1_Charge * H3_Charge = -1").count()
# Make a Dalitz plot with labelled axes for the real data (selection cuts applied)
# This is achieved by plotting a scatter graph from Mass_12 squared vs. Mass_13 squared
# As in the text of the exercise, add a filter on data that the possible resonance has charge 0
dalitz_plot_df = two_body_resonances_invariant_mass_GeV_df \
.filter("H1_Charge + H2_Charge = 0").filter("H1_Charge + H3_Charge = 0") \
.selectExpr("Mass_K1_K2*Mass_K1_K2 as M12_squared", "Mass_K1_K3*Mass_K1_K3 as M13_squared")
dalitz_plot_df.toPandas().plot.scatter(x='M12_squared', y='M13_squared',
title="Dalitz plot, B+/- meson decay into thre kaons, simulation data")
xlabel('Mass_K1_K2 squared (GeV^2)')
ylabel('Mass_K1_K3 squared (GeV^2)');
# calculate the invariant masses for each possible hadron pair combination in the real data
# make a Dalitz plot for the real data (with your preselection cuts applied)
# make a new Dalitz plot with a mass ordering of the axes
# plot a binned Dalitz Plot
# use colorbar() to make a legend for your plot at the side
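# A minimal sketch of a binned version of the Dalitz plot, reusing dalitz_plot_df from above
# (the bin count and the use of plt.hist2d/colorbar are arbitrary illustrative choices):
dalitz_pdf = dalitz_plot_df.toPandas()
plt.hist2d(dalitz_pdf['M12_squared'], dalitz_pdf['M13_squared'], bins=30)
plt.colorbar(label='Candidates per bin')
xlabel('Mass_K1_K2 squared (GeV^2)')
ylabel('Mass_K1_K3 squared (GeV^2)');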
# make a Dalitz plot for the B+ events
# make a Dalitz plot for the B- events
# Make a plot showing the asymmetry between these two Daltz plots
# i.e. calculate the asymmetry between each bin of the B+ and B- Dalitz plots and show the result in another 2D plot
# Make a plot showing the uncertainty on the asymmetry
# Make a plot showing the statistical significance of the asymmetry
# Make a plot showing the invariant mass of the B+ meson particles
# using events from a region of the Dalitz plot showing sizeable CP asymmetries
# Make a plot showing the invariant mass of the B- meson particles using events from the same region
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hello Qubit
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
import cirq
print("installed cirq.")
# Pick a qubit.
qubit = cirq.GridQubit(0, 0)
# Create a circuit
circuit = cirq.Circuit(
cirq.X(qubit)**0.5, # Square root of NOT.
cirq.measure(qubit, key='m') # Measurement.
)
print("Circuit:")
print(circuit)
# Simulate the circuit several times.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=20)
print("Results:")
print(result)
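# Optionally, aggregate the repetitions into counts per measurement outcome
# (sketch; histogram() returns a collections.Counter keyed by the measured value).
print(result.histogram(key='m'))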
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Character counting and entropy
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as $H = -\sum_i P_i \log_2(P_i)$
Step6: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    x = []
    for char in s:
        x.append(char)
    r = {x[i]: x.count(x[i])/len(x) for i in range(len(x))}
    return(r)
char_probs("abbc")
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    a = np.array(list(d.values()))
    entropy = -np.sum(a*np.log2(a))
    return(entropy)
entropy({'a': 0.25, 'b': 0.5, 'c': 0.25})
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
from IPython.display import display
def new_func(n):
    """Made so interact can take a string and output entropy of char_probs("string")."""
    a = char_probs(n)
    print(entropy(a))
interact(new_func, n = "aaa");
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The estimation game
Step4: The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
Step6: Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors.
Step7: The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and two estimates for each sample, $S^2$ and $S_{n-1}^2$.
Step8: The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters.
Step9: Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
Step10: The mean of the sample means is close to the actual value of $\mu$.
Step11: An interval that contains 90% of the values in the sampling disrtribution is called a 90% confidence interval.
Step12: And the RMSE of the sample means is called the standard error.
Step13: Confidence intervals and standard errors quantify the variability in the estimate due to random sampling.
Step15: The RMSE is smaller for the sample mean than for the sample median.
|
<ASSISTANT_TASK:>
Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import thinkstats2
import thinkplot
def RMSE(estimates, actual):
    """Computes the root mean squared error of a sequence of estimates.

    estimates: sequence of numbers
    actual: actual value

    returns: float RMSE
    """
    e2 = [(estimate-actual)**2 for estimate in estimates]
    mse = np.mean(e2)
    return np.sqrt(mse)
import random
def Estimate1(n=7, iters=1000):
    """Evaluates RMSE of sample mean and median as estimators.

    n: sample size
    iters: number of iterations
    """
    mu = 0
sigma = 1
means = []
medians = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for _ in range(n)]
xbar = np.mean(xs)
median = np.median(xs)
means.append(xbar)
medians.append(median)
print('Experiment 1')
print('rmse xbar', RMSE(means, mu))
print('rmse median', RMSE(medians, mu))
Estimate1()
def MeanError(estimates, actual):
    """Computes the mean error of a sequence of estimates.

    estimates: sequence of numbers
    actual: actual value

    returns: float mean error
    """
    errors = [estimate-actual for estimate in estimates]
    return np.mean(errors)
def Estimate2(n=7, iters=1000):
mu = 0
sigma = 1
estimates1 = []
estimates2 = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for i in range(n)]
biased = np.var(xs)
unbiased = np.var(xs, ddof=1)
estimates1.append(biased)
estimates2.append(unbiased)
print('mean error biased', MeanError(estimates1, sigma**2))
print('mean error unbiased', MeanError(estimates2, sigma**2))
Estimate2()
def SimulateSample(mu=90, sigma=7.5, n=9, iters=1000):
xbars = []
for j in range(iters):
xs = np.random.normal(mu, sigma, n)
xbar = np.mean(xs)
xbars.append(xbar)
return xbars
xbars = SimulateSample()
cdf = thinkstats2.Cdf(xbars)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Sample mean',
ylabel='CDF')
np.mean(xbars)
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
stderr = RMSE(xbars, 90)
stderr
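# A small follow-up sketch: re-run the simulation for a few sample sizes to see how
# the standard error shrinks roughly like 1/sqrt(n) (the sizes are arbitrary choices).
for n in [9, 36, 144]:
    xbars_n = SimulateSample(n=n)
    print(n, RMSE(xbars_n, 90))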
def Estimate3(n=7, iters=1000):
lam = 2
means = []
medians = []
for _ in range(iters):
xs = np.random.exponential(1.0/lam, n)
L = 1 / np.mean(xs)
Lm = np.log(2) / thinkstats2.Median(xs)
means.append(L)
medians.append(Lm)
print('rmse L', RMSE(means, lam))
print('rmse Lm', RMSE(medians, lam))
print('mean error L', MeanError(means, lam))
print('mean error Lm', MeanError(medians, lam))
Estimate3()
def SimulateGame(lam):
    """Simulates a game and returns the estimated goal-scoring rate.

    lam: actual goal scoring rate in goals per game
    """
    goals = 0
t = 0
while True:
time_between_goals = random.expovariate(lam)
t += time_between_goals
if t > 1:
break
goals += 1
# estimated goal-scoring rate is the actual number of goals scored
L = goals
return L
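# The exercise stops at the definition above; a sketch of how SimulateGame could be
# used to study the estimator L (the number of simulated games is an arbitrary choice):
def Estimate4(lam=2, m=10000):
    estimates = [SimulateGame(lam) for _ in range(m)]
    print('mean error L', MeanError(estimates, lam))
    print('rmse L', RMSE(estimates, lam))
Estimate4()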
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For information on how to configure and tune the solver, please see the documentation for optlang project and note that model.solver is simply an optlang object of class Model.
|
<ASSISTANT_TASK:>
Python Code:
from cobra.io import load_model
model = load_model('textbook')
model.solver = 'glpk'
# or if you have cplex installed
model.solver = 'cplex'
type(model.solver)
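# Illustrative sketch (not from the original notebook): the optlang Model kept in
# `model.solver` exposes a `configuration` object for solver tuning. Exact attribute
# support varies by solver backend, so treat these two names as examples to verify
# against the optlang documentation for your installed version.
model.solver.configuration.verbosity = 1   # make the solver report progress
model.solver.configuration.timeout = 30    # give up after 30 seconds
print(model.solver.configuration)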
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
Step2: Contents
Step3: Important
Step4: 2. The NumPy Library
Step5: 2.1 Array vs Matrix
Step6: Challenge 1
Step7: 2.2 Indexing and Slicing
Step8: Observation
Step9: Challenge 2
Step10: 2. The NumPy Library
Step11: 2. The NumPy Library
Step12: Challenge 3
Step13: Challenge 4
Step14: 2. The NumPy Library
Step15: 2. The NumPy Library
Step16: Let's check that the file was written correctly. We'll switch from Python to bash to use terminal commands.
Step17: Challenge 5
Step18: 2. The NumPy Library
Step19: 2.6 Indices
Step20: Challenge 6
|
<ASSISTANT_TASK:>
Python Code:
IPython Notebook v4.0 for Python 3.0
Additional libraries: numpy, matplotlib
Content under a CC-BY 4.0 license. Code under an MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
# Configuración para recargar módulos y librerías dinámicamente
%reload_ext autoreload
%autoreload 2
# Configuración para graficos en línea
%matplotlib inline
# Configuración de estilo
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
import numpy as np
import matplotlib.pyplot as plt  # needed by the plotting cells further down
print np.version.version # Si alguna vez tienen problemas, verifiquen su version de numpy
# Presionar tabulacción con el cursor despues de np.arr
np.arr
# Presionar Ctr-Enter para obtener la documentacion de la funcion np.array usando "?"
np.array?
# Presionar Ctr-Enter
%who
x = 10
%who
# Operaciones con np.matrix
A = np.matrix([[1,2],[3,4]])
B = np.matrix([[1, 1],[0,1]], dtype=float)
x = np.matrix([[1],[2]])
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "A*B =\n", A*B
print "A*x =\n", A*x
print "A*A = A^2 =\n", A**2
print "x.T*A =\n", x.T * A
# Operaciones con np.matrix
A = np.array([[1,2],[3,4]])
B = np.array([[1, 1],[0,1]], dtype=float)
x = np.array([1,2]) # No hay necesidad de definir como fila o columna!
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "AoB = (multiplicacion elementwise) \n", A*B
print "A*B = (multiplicacion matricial, v1) \n", np.dot(A,B)
print "A*B = (multiplicacion matricial, v2) \n", A.dot(B)
print "A*A = A^2 = (potencia matricial)\n", np.linalg.matrix_power(A,2)
print "AoA = (potencia elementwise)\n", A**2
print "A*x =\n", np.dot(A,x)
print "x.T*A =\n", np.dot(x,A) # No es necesario transponer.
# 1: Utilizando matrix
A = np.matrix([]) # FIX ME
B = np.matrix([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
# 2: Utilizando arrays
A = np.array([]) # FIX ME
B = np.array([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
x = np.arange(9) # "Vector" con valores del 0 al 8
print "x = ", x
print "x[:] = ", x[:]
print "x[5:] = ", x[5:]
print "x[:8] = ", x[:8]
print "x[:-1] = ", x[:-1]
print "x[1:-1] = ", x[1:-1]
print "x[1:-1:2] = ", x[1:-1:2]
A = x.reshape(3,3) # Arreglo con valores del 0 al 8, en 3 filas y 3 columnas.
print "\n"
print "A = \n", A
print "primera fila de A\n", A[0,:]
print "ultima columna de A\n", A[:,-1]
print "submatriz de A\n", A[:2,:2]
def f(x):
return 1 + x**2
x = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) # O utilizar np.linspace!
y = f(x) # Tan facil como llamar f sobre x
dydx = ( y[1:] - y[:-1] ) / ( x[1:] - x[:-1] )
x_aux = 0.5*(x[1:] + x[:-1])
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, '-s', label="f")
plt.plot(x_aux, dydx, '-s', label="df/dx")
plt.legend(loc="upper left")
plt.show()
def g(x):
return 1 + x**2 + np.sin(x)
x = np.linspace(0,1,10)
y = g(x)
d2ydx2 = 0 * x # FIX ME
x_aux = 0*d2ydx2 # FIX ME
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, label="f")
plt.plot(x_aux, d2ydx2, label="d2f/dx2")
plt.legend(loc="upper left")
plt.show()
# arrays 1d
A = np.ones(3)
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros(3)
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(1,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# Si queremos forzar la misma forma que A y B
C = np.eye(1,3).flatten() # o np.eye(1,3)[0,:]
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# square arrays
A = np.ones((3,3))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((3,3))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(3) # Or np.eye(3,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# fat 2d array
A = np.ones((2,5))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((2,5))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(2,5)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
x = np.linspace(0., 1., 6)
A = x.reshape(3,2)
print "x = \n", x
print "A = \n", A
print "np.diag(x) = \n", np.diag(x)
print "np.diag(B) = \n", np.diag(A)
print ""
print "A.sum() = ", A.sum()
print "A.sum(axis=0) = ", A.sum(axis=0)
print "A.sum(axis=1) = ", A.sum(axis=1)
print ""
print "A.mean() = ", A.mean()
print "A.mean(axis=0) = ", A.mean(axis=0)
print "A.mean(axis=1) = ", A.mean(axis=1)
print ""
print "A.std() = ", A.std()
print "A.std(axis=0) = ", A.std(axis=0)
print "A.std(axis=1) = ", A.std(axis=1)
A = np.outer(np.arange(3),np.arange(3))
print A
# FIX ME
# FIX ME
# FIX ME
# FIX ME
# FIX ME
def mi_funcion(x):
f = 1 + x + x**3 + x**5 + np.sin(x)
return f
N = 5
x = np.linspace(-1,1,N)
y = mi_funcion(x)
# FIX ME
I = 0 # FIX ME
# FIX ME
print "Area bajo la curva: %.3f" %I
# Ilustración gráfica
x_aux = np.linspace(x.min(),x.max(),N**2)
fig = plt.figure(figsize=(12,8))
fig.gca().fill_between(x, 0, y, alpha=0.25)
plt.plot(x_aux, mi_funcion(x_aux), 'k')
plt.plot(x, y, 'r.-')
plt.show()
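# One possible way to fill in the FIX ME above (a sketch, assuming the challenge
# asks for the area under the curve): the composite trapezoidal rule, written
# with array slicing or obtained directly from np.trapz.
I_trap = np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))
print "Area under the curve (trapezoidal rule): %.3f" % I_trap
print "Same result with np.trapz: %.3f" % np.trapz(y, x)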
# Ejemplo de lectura de datos
data = np.loadtxt("data/cherry.txt")
print data.shape
print data
# Ejemplo de lectura de datos, saltandose 11 lineas y truncando a enteros
data_int = np.loadtxt("data/cherry.txt", skiprows=11).astype(int)
print data_int.shape
print data_int
# Guardando el archivo con un header en español
encabezado = "Diametro Altura Volumen (Valores truncados a numeros enteros)"
np.savetxt("data/cherry_int.txt", data_int, fmt="%d", header=encabezado)
%%bash
cat data/cherry_int.txt
# Leer datos
#FIX_ME#
# Convertir a mks
#FIX_ME#
# Guardar en nuevo archivo
#FIX_ME#
x = np.linspace(0,42,10)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
mask_x_1 = x>10
print "mask_x_1 = ", mask_x_1
print "x[mask_x_1] = ", x[mask_x_1]
print "x[mask_x_1].shape = ", x[mask_x_1].shape
print "\n"
mask_x_2 = x > x.mean()
print "mask_x_2 = ", mask_x_2
print "x[mask_x_2] = ", x[mask_x_2]
print "x[mask_x_2].shape = ", x[mask_x_2].shape
A = np.linspace(10,20,12).reshape(3,4)
print "\n"
print "A = ", A
print "A.shape = ", A.shape
print "\n"
mask_A_1 = A>13
print "mask_A_1 = ", mask_A_1
print "A[mask_A_1] = ", A[mask_A_1]
print "A[mask_A_1].shape = ", A[mask_A_1].shape
print "\n"
mask_A_2 = A > 0.5*(A.min()+A.max())
print "mask_A_2 = ", mask_A_2
print "A[mask_A_2] = ", A[mask_A_2]
print "A[mask_A_2].shape = ", A[mask_A_2].shape
T = np.linspace(-100,100,24).reshape(2,3,4)
print "\n"
print "T = ", T
print "T.shape = ", T.shape
print "\n"
mask_T_1 = T>=0
print "mask_T_1 = ", mask_T_1
print "T[mask_T_1] = ", T[mask_T_1]
print "T[mask_T_1].shape = ", T[mask_T_1].shape
print "\n"
mask_T_2 = 1 - T + 2*T**2 < 0.1*T**3
print "mask_T_2 = ", mask_T_2
print "T[mask_T_2] = ", T[mask_T_2]
print "T[mask_T_2].shape = ", T[mask_T_2].shape
x = np.linspace(10,20,11)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
ind_x_1 = np.array([1,2,3,5,7])
print "ind_x_1 = ", ind_x_1
print "x[ind_x_1] = ", x[ind_x_1]
print "x[ind_x_1].shape = ", x[ind_x_1].shape
print "\n"
ind_x_2 = np.array([0,0,1,2,3,4,5,6,7,-3,-2,-1,-1])
print "ind_x_2 = ", ind_x_2
print "x[ind_x_2] = ", x[ind_x_2]
print "x[ind_x_2].shape = ", x[ind_x_2].shape
A = np.linspace(-90,90,10).reshape(2,5)
print "A = ", A
print "A.shape = ", A.shape
print "\n"
ind_row_A_1 = np.array([0,0,0,1,1])
ind_col_A_1 = np.array([0,2,4,1,3])
print "ind_row_A_1 = ", ind_row_A_1
print "ind_col_A_1 = ", ind_col_A_1
print "A[ind_row_A_1,ind_col_A_1] = ", A[ind_row_A_1,ind_col_A_1]
print "A[ind_row_A_1,ind_col_A_1].shape = ", A[ind_row_A_1,ind_col_A_1].shape
print "\n"
ind_row_A_2 = 1
ind_col_A_2 = np.array([0,1,3])
print "ind_row_A_2 = ", ind_row_A_2
print "ind_col_A_2 = ", ind_col_A_2
print "A[ind_row_A_2,ind_col_A_2] = ", A[ind_row_A_2,ind_col_A_2]
print "A[ind_row_A_2,ind_col_A_2].shape = ", A[ind_row_A_2,ind_col_A_2].shape
import numpy as np
k = 0.8
rho = 1.2 #
r_m = np.array([ 25., 25., 25., 25., 25., 25., 20., 20., 20., 20., 20.])
v_kmh = np.array([10.4, 12.6, 9.7, 7.2, 12.3, 10.8, 12.9, 13.0, 8.6, 12.6, 11.2]) # En kilometros por hora
P = 0
n_activos = 0
P_mean = 0.0
P_total = 0.0
print "Existen %d aerogeneradores activos del total de %d" %(n_activos, r.shape[0])
print "La potencia promedio de los aeorgeneradores es {0:.2f} ".format(P_mean)
print "La potencia promedio de los aeorgeneradores es " + str(P_total)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3 align = 'center'> Variables </h3>
Step2: We can now replace dyHat/dz3 with f′(z3), the derivative of our activation function evaluated at z3.
Step3: We have one final term to compute
Step4: So how should we change our W’s to decrease our cost? We can now compute dJ/dW, which tells us which way is uphill in our 9 dimensional optimization space.
Step5: If we move this way by adding a scalar times our derivative to our weights, our cost will increase, and if we do the opposite, subtract our gradient from our weights, we will move downhill and reduce our cost. This simple step downhill is the core of gradient descent and a key part of how even very sophisticated learning algorithms are trained.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('GlcnxUlrtek')
%pylab inline
#Import code from last time
from partTwo import *
def sigmoid(z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
def sigmoidPrime(z):
#Derivative of sigmoid function
return np.exp(-z)/((1+np.exp(-z))**2)
testValues = np.arange(-5,5,0.01)
plot(testValues, sigmoid(testValues), linewidth=2)
plot(testValues, sigmoidPrime(testValues), linewidth=2)
grid(1)
legend(['sigmoid', 'sigmoidPrime'])
# Part of NN Class (won't work alone, needs to be included in class as
# shown below and in partFour.py):
def costFunctionPrime(self, X, y):
#Compute derivative with respect to W and W2 for a given X and y:
self.yHat = self.forward(X)
delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
dJdW2 = np.dot(self.a2.T, delta3)
# Whole Class with additions:
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
#Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize,self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize,self.outputLayerSize)
def forward(self, X):
#Propogate inputs though network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
def sigmoidPrime(self,z):
#Gradient of sigmoid
return np.exp(-z)/((1+np.exp(-z))**2)
def costFunction(self, X, y):
#Compute cost for given X,y, use weights already stored in class.
self.yHat = self.forward(X)
J = 0.5*sum((y-self.yHat)**2)
return J
def costFunctionPrime(self, X, y):
#Compute derivative with respect to W and W2 for a given X and y:
self.yHat = self.forward(X)
delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
dJdW2 = np.dot(self.a2.T, delta3)
delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
dJdW1 = np.dot(X.T, delta2)
return dJdW1, dJdW2
NN = Neural_Network()
cost1 = NN.costFunction(X,y)
dJdW1, dJdW2 = NN.costFunctionPrime(X,y)
dJdW1
dJdW2
scalar = 3
NN.W1 = NN.W1 + scalar*dJdW1
NN.W2 = NN.W2 + scalar*dJdW2
cost2 = NN.costFunction(X,y)
print cost1, cost2
dJdW1, dJdW2 = NN.costFunctionPrime(X,y)
NN.W1 = NN.W1 - scalar*dJdW1
NN.W2 = NN.W2 - scalar*dJdW2
cost3 = NN.costFunction(X, y)
print cost2, cost3
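# A sketch of iterating the same downhill step many times, i.e. plain batch
# gradient descent starting from fresh random weights. The step size below is
# just a guess; too large a value can overshoot and make the cost increase.
NN = Neural_Network()
learning_rate = 3
costs = []
for i in range(200):
    dJdW1, dJdW2 = NN.costFunctionPrime(X, y)
    NN.W1 = NN.W1 - learning_rate * dJdW1
    NN.W2 = NN.W2 - learning_rate * dJdW2
    costs.append(NN.costFunction(X, y))
plot(costs)
xlabel('iteration')
ylabel('cost J')
grid(1)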
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we'll load up a Universe.
Step2: We'll need to create an AtomGroup (an array of atoms) for each polymer chain.
Step3: It is important that the contents of each AtomGroup are in order. Selections done using select_atoms will always be sorted.
Step4: This list of AtomGroups is the required input for the PersistenceLength analysis class.
Step5: This has created the .results attribute on the analysis class.
Step6: The tool can then perform the exponential decay fit for us, which populate the .lp attribute.
Step7: Finally to check the validity of the fit, we can plot the fit against the results.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import MDAnalysis as mda
from MDAnalysis.analysis.polymer import PersistenceLength
import matplotlib.pyplot as plt
%matplotlib inline
u = mda.Universe('plength.gro')
print('We have a universe: {}'.format(u))
print('We have {} chains'.format(len(u.residues)))
print('Our atom types are: {}'.format(set(u.atoms.types)))
ags = [r.atoms.select_atoms('type C or type N') for r in u.residues]
list(ags)
list(ags[0][:10])
p = PersistenceLength(ags)
p.run()
plt.ylabel('C(n)')
plt.xlabel('n')
plt.plot(p.results)
p.perform_fit()
print("The persistence length is {:.4f} A".format(p.lp))
p.plot()
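# A quick visual check of the fit (an added sketch): the fitted persistence
# length implies C(n) ~ exp(-n*lb/lp). The attribute names used here (p.lb for
# the mean bond length, p.results for the measured C(n)) match older MDAnalysis
# releases; newer versions expose them under p.results.<name>, so adjust if
# this raises an AttributeError.
import numpy as np
n = np.arange(len(p.results))
plt.plot(n, p.results, 'o', label='measured C(n)')
plt.plot(n, np.exp(-n * p.lb / p.lp), label='exp(-n lb / lp)')
plt.xlabel('n')
plt.ylabel('C(n)')
plt.legend()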
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CHALLENGE
Step2: Task #2
Step3: CHALLENGE
|
<ASSISTANT_TASK:>
Python Code:
# Import spaCy and load the language library. Remember to use a larger model!
# Choose the words you wish to compare, and obtain their vectors
# Import spatial and define a cosine_similarity function
# Write an expression for vector arithmetic
# For example: new_vector = word1 - word2 + word3
# List the top ten closest vectors in the vocabulary to the result of the expression above
def vector_math(a,b,c):
# Test the function on known words:
vector_math('king','man','woman')
# Import SentimentIntensityAnalyzer and create an sid object
# Write a review as one continuous string (multiple sentences are ok)
review = ''
# Obtain the sid scores for your review
sid.polarity_scores(review)
def review_rating(string):
# Test the function on your review above:
review_rating(review)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Constants / Parameters
Step3: Read the cleaned temperature data for each site
Step4: The 'good start' column contains the date after which there are no longer any gaps bigger than BIG_GAP_LENGTH
Step5: Read the isd-history.txt metadata file
Step6: Take just the entry with the most recent data for each station callsign
|
<ASSISTANT_TASK:>
Python Code:
# boilerplate includes
import sys
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
#from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.basemap import Basemap
import matplotlib.patheffects as path_effects
import pandas as pd
import seaborn as sns
import datetime
# import scipy.interpolate
# import re
from IPython.display import display, HTML
%matplotlib notebook
plt.style.use('seaborn-notebook')
pd.set_option('display.max_columns', None)
STATIONS = [
'KLAX',
'KSFO',
'KSAN',
'KMIA',
'KFAT',
'KJAX',
'KTPA',
'KRIV',
'KIAH',
'KMCO',
'KBUR',
# 'KSNA', # Not good
# 'KONT', # Not good, 1973+ might be usable but 2001 has an 86 day and 41 day gap.
# 'KHST', # Not good
# 'KFUL', # Bad
]
TEMPERATURE_DATADIR = '../data/temperatures'
# The label of the hourly temperature column
TEMP_COL = 'AT'
# Gaps in the data this big or longer will be considerd 'breaks'...
# so the 'good' continuous timeseries will start at the end of the last big gap
BIG_GAP_LENGTH = pd.Timedelta('14 days')
# # Time range to use for computing normals (30 year, just like NOAA uses)
# NORM_IN_START_DATE = '1986-07-01'
# NORM_IN_END_DATE = '2016-07-01'
# # Time range or normals to output to use when running 'medfoes on normal temperature' (2 years, avoiding leapyears)
# NORM_OUT_START_DATE = '2014-01-01'
# NORM_OUT_END_DATE = '2015-12-31 23:59:59'
def read_isd_history_stations_list(filename, skiprows=22):
Read and parse stations information from isd_history.txt file
fwfdef = (( ('USAF', (6, str)),
('WBAN', (5, str)),
('STATION NAME', (28, str)),
('CTRY', (4, str)),
('ST', (2, str)),
('CALL', (5, str)),
('LAT', (7, str)),
('LON', (8, str)),
('EVEV', (7, str)),
('BEGIN', (8, str)),
('END', (8, str)),
))
names = []
colspecs = []
converters = {}
i = 0
for k,v in fwfdef:
names.append(k)
colspecs.append((i, i+v[0]+1))
i += v[0]+1
converters[k] = v[1]
stdf = pd.read_fwf(filename, skiprows=skiprows,
names=names,
colspecs=colspecs,
converters=converters)
return stdf
#df = pd.DataFrame(columns=['callsign', 'data start', 'data end', 'good start', 'gaps after good start'],
tmp = []
for callsign in STATIONS:
# load the temperature dataset
fn = "{}_AT_cleaned.h5".format(callsign)
ot = pd.read_hdf(os.path.join(TEMPERATURE_DATADIR,fn), 'table')
# load the gaps information
gaps = pd.read_csv(os.path.join(TEMPERATURE_DATADIR,"{}_AT_gaps.tsv".format(callsign)),
sep='\t', comment='#',
names=['begin','end','length'],
parse_dates=[0,1])
# convert length to an actual timedelta
gaps['length'] = pd.to_timedelta(gaps['length'])
# make sure begin and end have the right timezone association (UTC)
gaps['begin'] = gaps['begin'].apply(lambda x: x.tz_localize('UTC'))
gaps['end'] = gaps['end'].apply(lambda x: x.tz_localize('UTC'))
big_gaps = gaps[gaps['length'] >= BIG_GAP_LENGTH]
end_of_last_big_gap = big_gaps['end'].max()
if pd.isnull(end_of_last_big_gap): # No big gaps... so use start of data
end_of_last_big_gap = ot.index[0]
num_gaps_after_last_big_gap = gaps[gaps['end'] > end_of_last_big_gap].shape[0]
print(callsign,
ot.index[0],
ot.index[-1],
end_of_last_big_gap,
num_gaps_after_last_big_gap,
sep='\t')
tmp.append([
callsign,
ot.index[0],
ot.index[-1],
end_of_last_big_gap,
num_gaps_after_last_big_gap,
])
#display(big_gaps)
df = pd.DataFrame(tmp, columns=['callsign', 'data start', 'data end', 'good start', 'gaps after good start']).set_index('callsign')
df['good start']
historydf = read_isd_history_stations_list(
os.path.join(TEMPERATURE_DATADIR,'ISD/isd-history.txt'))
df.join(historydf.set_index('CALL'))
sthistdf = historydf.set_index('CALL').loc[df.index]
sthistdf = sthistdf.reset_index().sort_values(['CALL','END'], ascending=[True,False]).set_index('CALL')
foo = sthistdf[~sthistdf.index.duplicated('first')]
foo = foo.join(df)
foo.drop('END',1, inplace=True)
foo.drop('BEGIN',1, inplace=True)
foo = foo.reindex(STATIONS)
foo
# Save the summary table
foo.to_csv('stations_summary.csv')
tmp = foo[['STATION NAME','ST','LAT','LON',
'EVEV','good start','data end']].sort_values(['ST','LAT'], ascending=[True,False])
display(tmp)
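# Quick sanity-check map of the station locations (an added sketch, not in the
# original notebook). Assumes the LAT/LON strings parse cleanly as floats and
# reuses the Basemap import from the top of the notebook.
lats = foo['LAT'].astype(float).values
lons = foo['LON'].astype(float).values
fig = plt.figure(figsize=(8, 6))
m = Basemap(projection='merc', llcrnrlat=20, urcrnrlat=50,
            llcrnrlon=-130, urcrnrlon=-65, resolution='l')
m.drawcoastlines()
m.drawstates()
xm, ym = m(lons, lats)
m.scatter(xm, ym, s=40, color='crimson', zorder=5)
plt.show()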
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Hodrick-Prescott Filter for trend analysis
Step2: ETS Theory (Error-Trend-Seasonality)
Step3: Weakness of SMA
Step4: Full reading on mathematics of EWMA
Step5: ARIMA models
Step6: Step 1 - Visualize the data
Step7: Conclusion
Step8: ARIMA models continued... 2
Step9: Thus, the ADF test confirms our assumption from the visual analysis: the data is definitely non-stationary, with both a seasonality and a trend component.
Step10: Since p-value('Original') - p-value('First Difference') > p-value('First Difference') - p-value('Second Difference'), it is the first difference that did most of the work of eliminating the trend.
Step11: Thus, we conclude that the seasonal difference does not make the data stationary here; in fact, we can observe visually that the variance begins to increase as we go further in time.
Step12: ARIMA models continued... 3
Step13: Plotting the final 'Autocorrelation' and 'Partial autocorrelation'
Step14: ARIMA models continued... 4
Step15: Choosing the p, d, q values for the order and seasonal_order tuples requires further reading.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import statsmodels.api as sm
# Importing built-in datasets in statsmodels
df = sm.datasets.macrodata.load_pandas().data
df.head()
print(sm.datasets.macrodata.NOTE)
df.head()
df.tail()
# statsmodels.timeseriesanalysis.datetools
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
index
df.index = index
df.head()
df['realgdp'].plot()
result = sm.tsa.filters.hpfilter(df['realgdp'])
result
type(result)
type(result[0])
type(result[1])
gdp_cycle, gdp_trend = result
df['trend'] = gdp_trend
df[['realgdp', 'trend']].plot()
# zooming in
df[['realgdp', 'trend']]['2000-03-31':].plot()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
airline = pd.read_csv('airline_passengers.csv', index_col = 'Month')
airline.head()
# this is a normal index
airline.index
# Get rid of all the missing values in this dataset
airline.dropna(inplace=True)
airline.index = pd.to_datetime(airline.index)
airline.head()
# now its a DatetimeIndex
airline.index
# Recap of making the SMA
airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window=6).mean()
airline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window=12).mean()
airline.plot(figsize=(10,8))
airline['EWMA-12'] = airline['Thousands of Passengers'].ewm(span=12).mean()
airline[['Thousands of Passengers', 'EWMA-12']].plot(figsize=(10,8))
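# What ewm(span=12).mean() is doing, spelled out by hand (a sketch): the
# recursion y_t = alpha*x_t + (1 - alpha)*y_{t-1} with alpha = 2/(span + 1).
# Note pandas defaults to adjust=True, which reweights the earliest points, so
# this manual series matches ewm(span=12, adjust=False) exactly and the column
# above only approximately near the start.
span = 12
alpha = 2.0 / (span + 1)
vals = airline['Thousands of Passengers'].values
manual = [vals[0]]
for v in vals[1:]:
    manual.append(alpha * v + (1 - alpha) * manual[-1])
airline['EWMA-12-manual'] = manual
airline[['EWMA-12', 'EWMA-12-manual']].plot(figsize=(10, 8))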
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
airline = pd.read_csv('airline_passengers.csv', index_col = 'Month')
airline.head()
airline.plot()
airline.dropna(inplace = True)
airline.index = pd.to_datetime(airline.index)
airline.head()
from statsmodels.tsa.seasonal import seasonal_decompose
# additive/ multiplicative models available
# suspected linear trend = use additive
# suspected non-linear trend = multiplicative model
result = seasonal_decompose(airline['Thousands of Passengers'], model='multiplicative')
result.seasonal
result.seasonal.plot()
result.trend.plot()
result.resid.plot()
fig = result.plot()
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.columns = ['Month', 'Milk in Pounds per Cow']
df.head()
df.tail()
df.drop(168, axis=0, inplace=True)
df.tail()
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month', inplace=True)
df.head()
df.index
df.describe()
df.describe().transpose()
df.plot();
time_series = df['Milk in Pounds per Cow']
type(time_series)
time_series.rolling(12).mean().plot(label='12 SMA')
time_series.rolling(12).std().plot(label='12 STD')
time_series.plot()
plt.legend();
from statsmodels.tsa.seasonal import seasonal_decompose
decomp = seasonal_decompose(time_series)
fig = decomp.plot()
fig.set_size_inches(15,8)
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in Pounds per Cow'])
result
def adf_check(time_series):
result = adfuller(time_series)
print('Augmented Dickey-Fuller Test')
labels = ['ADF Test Statistic', 'p-value', '# of lags', 'Num of Observations used']
for value, label in zip(result, labels):
print(label + ': ' + str(value))
if result[1] < 0.05:
print('Strong evidence against null hypothesis')
print('Rejecting null hypothesis')
print('Data has no unit root! and is stationary')
else:
print('Weak evidence against null hypothesis')
print('Fail to reject null hypothesis')
print('Data has a unit root, it is non-stationary')
adf_check(df['Milk in Pounds per Cow'])
# Now making the data stationary
df['First Difference'] = df['Milk in Pounds per Cow'] - df['Milk in Pounds per Cow'].shift(1)
df['First Difference'].plot()
# adf_check(df['First Difference']) - THIS RESULTS IN LinAlgError: SVD did not converge ERROR
# Note: we need to drop the first NA value before plotting this
adf_check(df['First Difference'].dropna())
df['Second Difference'] = df['First Difference'] - df['First Difference'].shift(1)
df['Second Difference'].plot();
adf_check(df['Second Difference'].dropna())
# Let's plot seasonal difference
df['Seasonal Difference'] = df['Milk in Pounds per Cow'] - df['Milk in Pounds per Cow'].shift(12)
df['Seasonal Difference'].plot();
adf_check(df['Seasonal Difference'].dropna())
# Plotting 'Seasonal first difference'
df['Seasonal First Difference'] = df['First Difference'] - df['First Difference'].shift(12)
df['Seasonal First Difference'].plot();
adf_check(df['Seasonal First Difference'].dropna())
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
# Plotting the gradual decline autocorrelation
fig_first = plot_acf(df['First Difference'].dropna())
fig_first = plot_acf(df['First Difference'].dropna(), use_vlines=False)
fig_seasonal_first = plot_acf(df['Seasonal First Difference'].dropna(), use_vlines=False)
fig_seasonal_first_pacf = plot_pacf(df['Seasonal First Difference'].dropna(), use_vlines=False)
plot_acf(df['Seasonal First Difference'].dropna());
plot_pacf(df['Seasonal First Difference'].dropna());
# ARIMA model for non-sesonal data
from statsmodels.tsa.arima_model import ARIMA
# help(ARIMA)
# ARIMA model from seasonal data
# from statsmodels.tsa.statespace import sarimax
model = sm.tsa.statespace.SARIMAX(df['Milk in Pounds per Cow'], order=(0,1,0), seasonal_order=(1,1,1,12))
results = model.fit()
print(results.summary())
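# The (p,d,q)(P,D,Q,s) orders above were read off the ACF/PACF plots. A common
# alternative (shown here only as a sketch; even this small grid fits 64 models,
# so it is slow) is to search a few combinations and keep the lowest AIC.
import itertools
best = None
for order in itertools.product(range(2), repeat=3):
    for P, D, Q in itertools.product(range(2), repeat=3):
        try:
            fit = sm.tsa.statespace.SARIMAX(df['Milk in Pounds per Cow'],
                                            order=order,
                                            seasonal_order=(P, D, Q, 12)).fit(disp=False)
        except Exception:
            continue
        if best is None or fit.aic < best[0]:
            best = (fit.aic, order, (P, D, Q, 12))
print('Lowest AIC found:', best)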
# residual errors of prediction on the original training data
results.resid
# plot of residual errors of prediction on the original training data
results.resid.plot();
# KDE plot of residual errors of prediction on the original training data
results.resid.plot(kind='kde');
# Creating a column forecast to house the forecasted values for existing values
df['forecast'] = results.predict(start=150, end=168)
df[['Milk in Pounds per Cow', 'forecast']].plot(figsize=(12,8));
# Forecasting for future data
df.tail()
from pandas.tseries.offsets import DateOffset
future_dates = [df.index[-1] + DateOffset(months=x) for x in range(0,24)]
future_dates
future_df = pd.DataFrame(index=future_dates, columns=df.columns)
future_df.head()
final_df = pd.concat([df, future_df])
final_df.head()
final_df.tail()
final_df['forecast'] = results.predict(start=168, end=192)
final_df.tail()
final_df[['Milk in Pounds per Cow', 'forecast']].plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That pure python solution is a bit annoying as it requires a for loop with a break, and specific knowledge about how IRSA tables handler data headers (hence the use of linenum + 4 for skiprows). Alternatively, one could manipulate the data file to read in the data.
Step2: That truly wasn't all that better - as it required a bunch of clicks/text editor edits. (There are programs such as sed and awk that could be used to execute all the necessary edits from the command line, but that too is cumbersome and somewhat like the initial all python solution).
Step3: A benefit to using this method, as opposed to pandas, is that data typing and data units are naturally read from the IRSA table and included with the associated columns. Thus, if you are uncertain if some brightness measurement is in magnitudes or Janskys, the astropy Table can report on that information.
Step4: That wasn't too terrible. But what if we consider a more typical light curve table, where there are loads of missing data, such as Table 2 from Foley et al. 2009
Step5: Okay - there is nothing elegant about that particular solution. But it works, and wranglin' ain't pretty.
Step6: Now that we have the data in the appropriate X and y arrays, estimate the accuracy with which a K nearest neighbors classification model can predict whether or not a passenger would survive the Titanic disaster. Use $k=10$ fold cross validation for the prediction.
Step7: Note - that should have failed! And for good reason - recall that kNN models measure the Euclidean distance between all points within the feature space. So, when considering the sex of a passenger, what is the numerical distance between male and female?
Step8: Problem 3c
Step9: An accuracy of 68% isn't particularly inspiring. But, there's a lot of important information that we are excluding. As far as the Titanic is concerned, Kate and Leo taught us that female passengers are far more likely to survive, while male passengers are not. So, if we can include gender in the model then we may be able to achieve more accurate predictions.
Step10: A 14% improvement is pretty good! But, we can wrangle even more out of the gender feature. Recall that kNN models measure the Euclidean distance between sources, meaning the scale of each feature really matters. Given that the fare ranges from 0 up to 512.3292, the kNN model will see this feature as far more important than gender, for no other reason than the units that have been adopted.
Step11: Scaling the features leads to further improvement!
Step12: But wait! Does this actually make sense?
Step13: Problem 3i
Step14: Problem 3j
Step15: The last thing we'd like to add to the model is the Age feature. Unfortunately, for 177 passengers we do not have a reported value for their age. This is a standard issue when building models known as "missing data" and this happens in astronomy all the time (for example, LSST is going to observe millions of L and T dwarfs that are easily detected in the $y$-band, but which do not have a detection in $u$-band).
Step16: The accuracy of the model hasn't improved by adding the age information (even though we know children were more likely to survive than adults).
Step17: Using the mean age for missing values provides a marginal improvement over the models with no age information. Is there anything else we can do? Yes - we can build a machine learning model to predict the values of the missing data. So there will be a machine learning model within the final machine learning model. In order to predict ages, we will need to build a regression model. Simple algorithms include Linear or Logistic Regression, while more complex examples include kNN or random forest regression.
Step18: Problem 3n
Step19: Problem 3o
Step20: As far as ages are concerned, imputation of the missing data does not significantly improve the model.
Step21: Problem 4b
Step22: Problem 4c
Step23: Given that there are obs missing in each filter, with a substantial number missing in W3 and W4, we will create a new categorical variable for detection in each filter. We will also replace "upper limits" with -9.99.
Step24: Now that we have dealt with missing and categorical data we can construct out machine learning model. We will use a $k$ nearest neighbors regression model to determine how well we can measure photometric redshifts.
Step25: Problem 4e
Step26: If you made a plot of redshift vs. predictions, you likely saw that there are many sources that are far from the 1 to 1 line. These are called catastrophic outliers, and they are a serious problem for science programs that rely on photometric redshifts.
Step27: Problem 4g
Step28: Earlier we saw that the performance of a kNN model greatly improves with feature scaling.
Step29: The MinMax scaler didn't help at all!
|
<ASSISTANT_TASK:>
Python Code:
# Solution 1 - pure python solution with pandas
with open('irsa_catalog_WISE_iPTF14jg_search_results.tbl') as f:
ll = f.readlines()
for linenum, l in enumerate(ll):
if l[0] == '|':
header = l.replace('|', ',').replace(' ', '')
header = list(header[1:-2].split(','))
break
irsa_tbl = pd.read_csv("irsa_catalog_WISE_iPTF14jg_search_results.tbl",
skiprows=linenum+4, delim_whitespace=True,
header=None, names=header)
irsa_tbl.head(5)
# solution 2 - edit the text file
# !cp irsa_catalog_WISE_iPTF14jg_search_results.tbl tmp.tbl
### delete lines 1-89, and 90-92
### replace whitespace with commas (may require multiple operations)
### replace '|' with commas
### replace ',,' with single commas
### replace ',\n,' with '\n'
### delete the comma at the very beginning and very end of the file
tedit_tbl = pd.read_csv('tmp.tbl')
tedit_tbl.head(5)
from astropy.table import Table
Table.read('irsa_catalog_WISE_iPTF14jg_search_results.tbl', format='ipac')
# pure python solution with pandas
tbl4 = pd.read_csv('Miller_et_al2011_table4.txt',
skiprows=5, delim_whitespace=True,
skipfooter=3, engine='python',
names=['t_mid', 'J', 'Jdum', 'J_unc',
'H', 'Hdum', 'H_unc',
'K', 'Kdum', 'K_unc'])
tbl4.drop(columns=['Jdum', 'Hdum', 'Kdum'], inplace=True)
print(tbl4)
# a (not terribly satisfying) pure python solution
# read the file in, parse and write another file that plays nice with pandas
with open('Foley_et_al2009_for_pd.csv','w') as fw:
print('JD,Bmag,Bmag_unc,Vmag,Vmag_unc,Rmag,Rmag_unc,Imag,Imag_unc,Unfiltmag,Unfiltmag_unc,Telescope',file=fw)
with open('Foley_et_al2009_table2.txt') as f:
ll = f.readlines()
for l in ll:
if l[0] == '2':
print_str = l.split()[0] + ','
for col in l.split()[1:]:
if col == 'sdotsdotsdot':
print_str += '-999,-999,'
elif col[0] == '>':
print_str += '{},-999,'.format(col[1:])
elif col == 'KAIT':
print_str += 'KAIT'
elif col == 'Nickel':
print_str += 'Nickel'
elif col[0] == '(':
print_str += '0.{},'.format(col[1:-1])
else:
print_str += '{},'.format(col)
print(print_str,file=fw)
pd.read_csv('Foley_et_al2009_for_pd.csv')
titanic_df = pd.read_csv('titanic_kaggle_training_set.csv', comment='#')
feat_list = list(titanic_df.columns)
label = 'Survived'
feat_list.remove(label)
X = titanic_df[feat_list].values
y = titanic_df[label]
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=7)
knn_clf.fit(X, y)
titanic_df
from sklearn.model_selection import cross_val_score
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare']]
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, X, y, cv=10)
print('The accuracy from numeric features = {:.2f}%'.format(100*np.mean(cv_results)))
gender = np.ones(len(titanic_df['Sex']))
gender[np.where(titanic_df['Sex'] == 'female')] = 2
titanic_df['gender'] = gender
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'gender']]
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, X, y, cv=10)
print('The accuracy when including gender = {:.2f}%'.format(100*np.mean(cv_results)))
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, Xminmax, y, cv=10)
print('The accuracy when scaling features = {:.2f}%'.format(100*np.mean(cv_results)))
# following previous example, set C = 0, S = 1, Q = 2
porigin = np.empty(len(titanic_df['Sex'])).astype(int)
porigin[np.where(titanic_df['Embarked'] == 'C')] = 0
porigin[np.where(titanic_df['Embarked'] == 'S')] = 1
porigin[np.where(titanic_df['Embarked'] == 'Q')] = 2
titanic_df['porigin'] = porigin
def create_bin_cat_feats(feature_array):
categories = np.unique(feature_array)
feat_dict = {}
for cat in categories:
exec('{} = np.zeros(len(feature_array)).astype(int)'.format(cat))
exec('{0}[np.where(feature_array == "{0}")] = 1'.format(cat))
exec('feat_dict["{0}"] = {0}'.format(cat))
return feat_dict
gender_dict = create_bin_cat_feats(titanic_df['Sex'])
porigin_dict = create_bin_cat_feats(titanic_df['Embarked'])
for feat in gender_dict.keys():
titanic_df[feat] = gender_dict[feat]
for feat in porigin_dict.keys():
titanic_df[feat] = porigin_dict[feat]
from sklearn.preprocessing import MinMaxScaler
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C']]
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, Xminmax, y, cv=10)
print('The accuracy with categorical features = {:.2f}%'.format(100*np.mean(cv_results)))
age_impute = titanic_df['Age'].copy()
age_impute[np.isnan(age_impute)] = -999
titanic_df['age_impute'] = age_impute
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C', 'age_impute']]
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, Xminmax, y, cv=10)
print('The accuracy with -999 for missing ages = {:.2f}%'.format(100*np.mean(cv_results)))
age_impute = titanic_df['Age'].copy().values
age_impute[np.isnan(age_impute)] = np.mean(age_impute[np.isfinite(age_impute)])
titanic_df['age_impute'] = age_impute
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C', 'age_impute']]
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, Xminmax, y, cv=10)
print('The accuracy with the mean for missing ages = {:.2f}%'.format(100*np.mean(cv_results)))
from sklearn.linear_model import LinearRegression
has_ages = np.where(np.isfinite(titanic_df['Age']))[0]
impute_X_train = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C']].iloc[has_ages]
impute_y_train = titanic_df['Age'].iloc[has_ages]
scaler = MinMaxScaler()
scaler.fit(impute_X_train)
Xminmax = scaler.transform(impute_X_train)
lr_age = LinearRegression().fit(Xminmax, impute_y_train)
cv_results = cross_val_score(LinearRegression(), Xminmax, impute_y_train, cv=10, scoring='neg_mean_squared_error')
print('Missing ages have RMSE = {:.2f}'.format(np.mean((-1*cv_results)**0.5)))
missing_ages = np.where(np.isnan(titanic_df['Age']))[0]
impute_X_missing = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C']].iloc[missing_ages]
X_missing_minmax = scaler.transform(impute_X_missing)
age_preds = lr_age.predict(X_missing_minmax)
age_impute = titanic_df['Age'].copy().values
age_impute[missing_ages] = age_preds
titanic_df['age_impute'] = age_impute
X = titanic_df[['Pclass', 'SibSp', 'Parch', 'Fare', 'female', 'male', 'S', 'Q', 'C', 'age_impute']]
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_clf = KNeighborsClassifier(n_neighbors=7)
cv_results = cross_val_score(knn_clf, Xminmax, y, cv=10)
print('The accuracy with the mean for missing ages = {:.2f}%'.format(100*np.mean(cv_results)))
sdss = pd.read_csv("DSFP_SDSS_spec_train.csv")
sdss[:5]
type_dict = create_bin_cat_feats(sdss['type'])
for feat in type_dict.keys():
sdss[feat] = type_dict[feat]
# WISE non-detections have SNR < 2
for wsnr in ['w1snr', 'w2snr', 'w3snr', 'w4snr']:
frac_missing = sum(sdss[wsnr] < 2)/len(sdss[wsnr])
print('{:.2f}% of the obs in {} are non-detections'.format(100*frac_missing, wsnr[0:2]))
for filt in ['w1', 'w2', 'w3', 'w4']:
det = np.ones(len(sdss)).astype(int)
det[np.where(sdss['{}snr'.format(filt)] < 2)] = 0
sdss['{}det'.format(filt)] = det
mag = sdss['{}mpro'.format(filt)].values
mag[det == 0] = -9.99
sdss['{}mag'.format(filt)] = mag
X = np.array(sdss[['psfMag_u', 'psfMag_g', 'psfMag_r',
'psfMag_i', 'psfMag_z', 'modelMag_u', 'modelMag_g', 'modelMag_r',
'modelMag_i', 'modelMag_z', 'extinction_u', 'extinction_g',
'extinction_r', 'extinction_i', 'extinction_z','ext', 'ps',
'w1det', 'w1mag', 'w2det', 'w2mag', 'w3det', 'w3mag', 'w4det', 'w4mag']])
y = np.array(sdss['z'])
# cross validation goes here
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error
knn_reg = KNeighborsRegressor(n_neighbors=7)
y_preds = cross_val_predict(knn_reg, X, y, cv=10)
print('The RMSE = {}'.format(np.sqrt(mean_squared_error(y,y_preds))))
# plotting goes here
fig, ax = plt.subplots()
ax.scatter(y, y_preds, alpha=0.02)
ax.plot([0,6],[0,6], 'Crimson')
ax.set_xlabel('z_SDSS')
ax.set_ylabel('z_kNN')
def catastrophic_fraction(ground_truth, predictions, threshold=0.2):
'''Function to calculate fraction of predictions that are catastrophic outliers
Parameter
---------
ground_truth : array-like
Correct labels for the model sources
predictions : array-like
Predictions for the model sources
threshold : float (optional, default=0.2)
The threshold to determine if a "miss" is catastrophic or not
Returns
-------
oh_nos : float
Fractional number of catastrophic outliers
'''
n_outliers = len(np.where(np.abs(ground_truth - predictions)/ground_truth > threshold)[0])
oh_nos = n_outliers/len(ground_truth)
return oh_nos
catastrophic_fraction(y, y_preds)
scaler = MinMaxScaler()
scaler.fit(X)
Xminmax = scaler.transform(X)
knn_reg = KNeighborsRegressor(n_neighbors=7)
y_preds = cross_val_predict(knn_reg, Xminmax, y, cv=10)
print('The RMSE = {}'.format(np.sqrt(mean_squared_error(y,y_preds))))
print('{} are catastrophic'.format(catastrophic_fraction(y, y_preds)))
# plotting goes here
fig, ax = plt.subplots()
ax.scatter(y, y_preds, alpha=0.02)
ax.plot([0,6],[0,6], 'Crimson')
ax.set_xlabel('z_SDSS')
ax.set_ylabel('z_kNN')
from sklearn.ensemble import RandomForestRegressor
for filt in ['u', 'r', 'i', 'z']:
sdss['psf_g-{}'.format(filt)] = sdss['psfMag_g'] - sdss['psfMag_{}'.format(filt)]
sdss['model_g-{}'.format(filt)] = sdss['modelMag_g'] - sdss['modelMag_{}'.format(filt)]
X = np.array(sdss[['ps', 'ext',
'w1det', 'w1mag', 'w2det', 'w2mag', 'w3det', 'w3mag', 'w4det', 'w4mag',
'psf_g-u', 'psf_g-r', 'psf_g-i', 'psf_g-z',
'model_g-u', 'model_g-r', 'model_g-i', 'model_g-z']])
rf_reg = RandomForestRegressor(n_estimators=100)
y_preds = cross_val_predict(rf_reg, X, y, cv=10)
print('The RMSE = {}'.format(np.sqrt(mean_squared_error(y,y_preds))))
print('{} are catastrophic'.format(catastrophic_fraction(y, y_preds)))
# plotting goes here
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(y, y_preds, alpha=0.02)
ax.plot([0,6],[0,6], 'Crimson')
ax.set_xlim(-0.1,4)
ax.set_ylim(-0.1,4)
ax.set_xlabel('z_SDSS')
ax.set_ylabel('z_kNN')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's get the shootings data.
Step2: Now let's iterate through the shootings, generate shapely points and check to see if they're in the geometry we care about.
Step3: Let's do something similar with homicides. It's exactly the same, in fact, but a few field names are different.
Step4: Now let's see how many homicides we can associate with shootings. We'll say that if the locations are within five meters and the date and time of the shooting is within 10 minutes of the homicide, they're the same incident.
|
<ASSISTANT_TASK:>
Python Code:
import requests
from shapely.geometry import shape, Point
r = requests.get('https://data.cityofchicago.org/api/geospatial/cauq-8yn6?method=export&format=GeoJSON')
for feature in r.json()['features']:
if feature['properties']['community'] == 'AUSTIN':
austin = feature
poly = shape(austin['geometry'])
import os
def get_data(table):
r = requests.get('%stable/json/%s' % (os.environ['NEWSROOMDB_URL'], table))
return r.json()
shootings = get_data('shootings')
homicides = get_data('homicides')
shootings_ca = []
for row in shootings:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
shootings_ca.append(row)
print 'Found %d shootings in this community area' % len(shootings_ca)
for f in shootings_ca:
print f['Date'], f['Time'], f['Age'], f['Sex'], f['Shooting Location']
from datetime import datetime  # needed for the strptime call in the loop below
homicides_ca = []
years = {}
for row in homicides:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
homicides_ca.append(row)
print 'Found %d homicides in this community area' % len(homicides_ca)
for f in homicides_ca:
print f['Occ Date'], f['Occ Time'], f['Age'], f['Sex'], f['Address of Occurrence']
if not f['Occ Date']:
continue
dt = datetime.strptime(f['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
import pyproj
from datetime import datetime, timedelta
geod = pyproj.Geod(ellps='WGS84')
associated = []
for homicide in homicides_ca:
if not homicide['Occ Time']:
homicide['Occ Time'] = '00:01'
if not homicide['Occ Date']:
homicide['Occ Date'] = '2000-01-01'
homicide_dt = datetime.strptime('%s %s' % (homicide['Occ Date'], homicide['Occ Time']), '%Y-%m-%d %H:%M')
for shooting in shootings_ca:
if not shooting['Time']:
shooting['Time'] = '00:01'
if not shooting['Date']:
shooting['Date'] = '2000-01-01'
shooting_dt = datetime.strptime('%s %s' % (shooting['Date'], shooting['Time']), '%Y-%m-%d %H:%M')
diff = homicide_dt - shooting_dt
seconds = divmod(diff.days * 86400 + diff.seconds, 60)[0]
if abs(seconds) <= 600:
angle1, angle2, distance = geod.inv(
homicide['point'].x, homicide['point'].y, shooting['point'].x, shooting['point'].y)
if distance < 5:
associated.append((homicide, shooting))
break
print len(associated)
years = {}
for homicide in homicides:
if not homicide['Occ Date']:
continue
dt = datetime.strptime(homicide['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
from csv import DictWriter
from ftfy import fix_text, guess_bytes
for idx, row in enumerate(shootings_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = fix_text(row[key].replace('\xa0', '').decode('utf8'))
for idx, row in enumerate(homicides_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = row[key].decode('utf8')
with open('/Users/abrahamepton/Documents/austin_shootings.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(shootings_ca[0].keys()))
writer.writeheader()
for row in shootings_ca:
try:
writer.writerow(row)
except:
print row
with open('/Users/abrahamepton/Documents/austin_homicides.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(homicides_ca[0].keys()))
writer.writeheader()
for row in homicides_ca:
try:
writer.writerow(row)
except:
print row
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFP probabilistic layers
Step2: Build it quickly
Step3: Note
Step4: This colab demonstrates how to do this (in the context of a linear regression problem).
Step5: Case 1
Step6: Case 2
Step7: Case 3
Step8: Case 4
Step9: Case 5
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
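# Added illustration (not part of the original colab): because `yhat` is a
# tfd.Normal rather than a plain tensor, we can sample whole regression curves
# and query its statistics directly.
samples = yhat.sample(5)                  # 5 sampled curves, shape [5, n_tst, 1]
print(samples.shape)
print(yhat.mean().numpy()[:3].ravel())    # predictive means, first 3 test points
print(yhat.stddev().numpy()[:3].ravel())  # predictive stddevs (fixed at 1 here)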
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task
Step2: Solving problem using regression
Step3: Solving problem using Logistic Regression
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data'
col_names = ['id','ri','na','mg','al','si','k','ca','ba','fe','glass_type']
glass = pd.read_csv(url, names=col_names, index_col='id')
glass.sort_values(by='al', inplace=True)
glass.head()
glass['household'] = glass.glass_type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
glass.head()
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(glass.al, glass.household)
plt.xlabel('al')
plt.ylabel('household')
# Let's see whether we're dealing with an imbalanced problem
glass['household'].value_counts()
# fit a linear regression model
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
linreg.fit(X, y)
glass['household_pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred, color='red')
plt.xlabel('al')
plt.ylabel('household')
# Because we eventually need to decide on a class (0 or 1), we need a cutoff value
# A cutoff of 0.5 works here since roughly half of the data falls below it and half above
CUTOFF = 0.5
test_values = [.5, 1., 3.]
for i in test_values:
print "if al = ", i, " is household = ", linreg.predict(i) >= CUTOFF
import numpy as np
# Predict for all household values
glass['household_pred_class'] = np.where(glass.household_pred >= 0.5, 1, 0)
glass.head()
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# fit a logistic regression model and store the class predictions
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
logreg.fit(X, y)
# store the predicted class
glass['household_pred_class'] = logreg.predict(X)
# store the predicted probabilites of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.plot(glass.al, glass.household_pred_prob, color='blue')
plt.xlabel('al')
plt.ylabel('household')
glass['household_pred_class_logistic_reg'] = glass.household_pred_class
glass.head()
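# Illustrative aside (not part of the original exercise): compare how often each approach
# classifies points correctly, using the columns created above.
from sklearn.metrics import accuracy_score
linreg_classes = np.where(glass.household_pred >= 0.5, 1, 0)
print accuracy_score(glass.household, linreg_classes)
print accuracy_score(glass.household, glass.household_pred_class_logistic_reg)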
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: using a cache_dir
Step2: using NQQ25 dataset
Step3: The previous collection contains the NQQ25 dataset, which we can explore dimension by dimension
Step4: Get the value for a given Sector, Quarter and Statistic.
|
<ASSISTANT_TASK:>
Python Code:
# all import here
import os
import jsonstat
cache_dir = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.cso.ie"))
jsonstat.cache_dir(cache_dir)
base_uri = 'http://www.cso.ie/StatbankServices/StatbankServices.svc/jsonservice/responseinstance/'
uri = base_uri + "NQQ25"
filename = "cso_ie-NQQ25.json"
collection_1 = jsonstat.from_url(uri, filename)
collection_1
dataset = collection_1.dataset(0)
dataset
dataset.dimension('Sector')
dataset.dimension('Quarter')
dataset.dimension('Statistic')
dataset.data(Sector='03', Quarter='1997Q4', Statistic='NQQ25S1')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (1b) Sparse vectors
Step2: (1c) OHE features as sparse vectors
Step4: (1d) OHE encoding function
Step5: (1e) Apply OHE to a dataset
Step6: Part 2
Step7: (2b) OHE dictionary of unique features
Step9: (2c) Automatic creation of the OHE dictionary
Step10: Part 3
Step11: (3a) Loading and splitting the data
Step13: (3b) Feature extraction
Step14: (3c) Create the OHE dictionary for this dataset
Step16: (3d) Applying OHE to the dataset
Step19: Visualization 1
Step21: (3e) Unseen features
Step22: Part 4
Step24: (4b) Log loss
Step25: (4c) Baseline log loss
Step27: (4d) Prediction probability
Step29: (4e) Evaluate the model
Step30: (4f) Validation log loss
Step31: Visualization 2
Step33: Part 5
Step35: (5b) Creating hashed features
Step37: (5c) Sparsity
Step38: (5d) Logistic model with hashed features
Step39: (5e) Evaluating the test set
|
<ASSISTANT_TASK:>
Python Code:
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# EXERCICIO
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'mouse')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
import numpy as np
from pyspark.mllib.linalg import SparseVector
# EXERCICIO
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4,[(1,3),(3,4)])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4,[(3,1)])
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
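# Aside (illustrative, not required by the lab): SparseVector also accepts a dictionary of
# {index: value}, which is convenient later when hashed features are built as dicts.
cSparse = SparseVector(4, {1: 3., 3: 4.})
print cSparse.dot(w)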
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# EXERCICIO
sampleOneOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[s],1.0) for s in sampleOne])
sampleTwoOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[s],1.0) for s in sampleTwo])
sampleThreeOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[s],1.0) for s in sampleThree])
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
return SparseVector(numOHEFeats, [(OHEDict[s],1.0) for s in rawFeats])
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual)
# Run oneHotEnoding on sampleOne
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
# EXERCICIO
sampleOHEData = sampleDataRDD.map(lambda x: oneHotEncoding(x,sampleOHEDictManual, numSampleOHEFeats))
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
# EXERCICIO
sampleDistinctFeats = (sampleDataRDD
.flatMap(lambda x : x )
.distinct()
)
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
# EXERCICIO
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex()
.collectAsMap())
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
# EXERCICIO
def createOneHotDict(inputData):
Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
return (inputData
.flatMap(lambda x: x)
.distinct()
.zipWithIndex()
.collectAsMap()
)
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
import os.path
baseDir = os.path.join('Data')
inputPath = os.path.join('dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
# EXERCICIO
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
# EXERCICIO
def parsePoint(point):
Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
seq = []
for x,y in enumerate(point.split(',')[1:]):
seq.append((x,y))
return seq
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect()
)
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
# EXERCICIO
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
from pyspark.mllib.regression import LabeledPoint
# EXERCICIO
def parseOHEPoint(point, OHEDict, numOHEFeats):
Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
parsedFeats = parsePoint(point)
label = point.split(',')[0]
return LabeledPoint(label, oneHotEncoding(parsedFeats, OHEDict, numOHEFeats))
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
def bucketFeatByCount(featCount):
Bucket the counts by powers of two.
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
Template for generating the plot layout.
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
return SparseVector(numOHEFeats, sorted((OHEDict[f], 1.0) for f in rawFeats if f in OHEDict))
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# EXERCICIO
model0 = LogisticRegressionWithSGD.train(OHETrainData, iterations=numIters, step=stepSize, regParam=regParam, regType=regType, intercept=includeIntercept)
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
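# Aside (illustrative): most OHE weights stay close to zero; peek at the largest-magnitude ones.
print sorted(model0.weights, key=abs, reverse=True)[:5]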
# EXERCICIO
from math import log
def computeLogLoss(p, y):
Calculates the value of log loss for a given probabilty and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probabilty between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
epsilon = 10e-12
p = max(epsilon, min(1 - epsilon, p))
return -(y * log(p) + (1 - y) * log(1 - p))
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
# EXERCICIO
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = OHETrainData.map(lambda lp: lp.label).mean()
print classOneFracTrain
logLossTrBase = OHETrainData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
# EXERCICIO
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
# calculate rawPrediction = w.x + intercept
rawPrediction = x.dot(w) + intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
# calculate (1+e^-rawPrediction)^-1
return 1.0 / (1.0 + exp(-rawPrediction))
trainingPredictions = OHETrainData.map(lambda lp: getP(lp.features, model0.weights, model0.intercept))
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
# EXERCICIO
def evaluateResults(model, data):
Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data.
return (data
.map(lambda lp: computeLogLoss(getP(lp.features, model.weights, model.intercept), lp.label))
.mean()
)
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
# EXERCICIO
logLossValBase = OHEValidationData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
(featureID, value) tuples that have hashed to that key.
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# EXERCICIO
# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)
sampThreeFourBuckets = hashFunction(4, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
# EXERCICIO
def parseHashPoint(point, numBuckets):
Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
features.
parsedFeats = parsePoint(point)
label = point.split(',')[0]
return LabeledPoint(label, SparseVector(numBuckets, hashFunction(numBuckets, parsedFeats)))
numBucketsCTR = 2 ** 15
hashTrainData = rawTrainData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
# EXERCICIO
def computeSparsity(data, d, n):
Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
float: The average of the ratio of features in a point to total features.
return (data
.map(lambda lp: len(lp.features.indices))
.sum()
)/(d*n*1.)
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# EXERCICIO
stepSizes = [1, 10]
regParams = [1e-6, 1e-3]
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD.train(hashTrainData, iterations=numIters, step=stepSize, regParam=regParam, regType=regType, intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
# EXERCICIO
# Log loss for the best model from (5d)
logLossValLR0 = evaluateResults(bestModel, hashValidationData)
logLossTest = evaluateResults(bestModel, hashTestData)
# Log loss for the baseline model
logLossTestBaseline = hashTestData.map(lambda lp: computeLogLoss(classOneFracTrain,lp.label)).mean()
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading and exploring the data
Step2: We can see that the data is not in CSV format, although it does have some structure. If we try to load it with pandas as-is we will not get very far
Step3: We need to make the following changes
Step4: Dates can also be parsed manually with the date_parser argument
Step5: Accessing the data
Step6: To access rows we have two methods
Step7: I can even take slices based on dates
Step8: I can also index using arrays of boolean values, for example those obtained by checking a condition
Step9: We can group our data using groupby
Step10: And we can reorganize the data using pivot tables
Step11: Finally, pandas provides methods to compute quantities such as moving averages using the rolling method
Step12: Plotting
Step13: Box plots
Step14: Plotting the maximum of the daily maxima, the minimum of the daily minima and the mean of the daily means for each day of the year over the available years
Step15: Special visualizations
|
<ASSISTANT_TASK:>
Python Code:
# Import pandas
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
# Let's see what the file looks like
!head ./tabernas_meteo_data.txt
# Try loading it with pandas
pd.read_csv("./tabernas_meteo_data.txt").head(5)
data = pd.read_csv(
"./tabernas_meteo_data.txt",
delim_whitespace=True, # whitespace-delimited data
usecols=(0, 2, 3, 4, 5), # columns we want to use
skiprows=2, # skip the first two lines
names=['DATE', 'TMAX', 'TMIN', 'TMED', 'PRECIP'],
parse_dates=['DATE'],
# date_parser=lambda x: pd.datetime.strptime(x, '%d-%m-%y'), # manual date parsing
dayfirst=True, # important!
index_col=["DATE"] # to index by date
)
# Sort from oldest to newest
data.sort_index(inplace=True)
# Show only the first or last rows
data.head()
# Check the data types of the columns
data.dtypes
# Get general information about the dataset
data.info()
# Statistical description
data.describe()
# Once converted into a datetime object we can obtain things such as:
data.index.dayofweek
# Accessing as a key
data['TMAX'].head()
# Accessing as an attribute
data.TMIN.head()
# Accessing several columns at once
data[['TMAX', 'TMIN']].head()
# Modifying column values
data[['TMAX', 'TMIN']] / 10
# Applying a function to a whole column (e.g. numpy mean)
import numpy as np
np.mean(data.TMAX)
# Computing the mean with pandas
data.TMAX.mean()
# Accessing a row by position
data.iloc[1]
# Accessing a row by label
data.loc["2016-09-02"]
data.loc["2016-12-01":]
# Searching for missing values
data.loc[data.TMIN.isnull()]
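# Illustrative aside (not in the original notebook): one option for those missing values is
# interpolation; this only previews the result and does not modify `data`.
data.TMIN.interpolate().loc[data.TMIN.isnull()].head()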
# Group by year and month: let's create two new columns
data['year'] = data.index.year
data['month'] = data.index.month
# Create the grouping
monthly = data.groupby(by=['year', 'month'])
# We can inspect the groups that were created
monthly.groups.keys()
# Access a single group
monthly.get_group((2016,3)).head()
# Or aggregate the data:
monthly_mean = monthly.mean()
monthly_mean.head(24)
# Keep the years as the index and show the monthly mean in each column
monthly_mean.reset_index().pivot(index='year', columns='month')
# Compute the mean of the TMAX column
monthly.TMAX.mean().head(15)
# Centered three-month rolling mean
monthly_mean.TMAX.rolling(3, center=True).mean().head(15)
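# Illustrative aside (not in the original notebook): rolling means are often easier to read
# on a plot; here a 30-day centered window over the daily mean temperature.
data.TMED.plot(alpha=0.3, label='daily TMED')
data.TMED.rolling(30, center=True).mean().plot(label='30-day rolling mean')
plt.legend()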
# Plot the max, min and mean temperatures
data.plot(y=["TMAX", "TMIN", "TMED"])
plt.title('Temperaturas')
data.loc[:, 'TMAX':'PRECIP'].plot.box()
group_daily = data.groupby(['month', data.index.day])
daily_agg = group_daily.agg({'TMED': 'mean', 'TMAX': 'max', 'TMIN': 'min', 'PRECIP': 'mean'})
daily_agg.head()
daily_agg.plot(y=['TMED', 'TMAX', 'TMIN'])
# scatter_matrix
from pandas.tools.plotting import scatter_matrix
axes = scatter_matrix(data.loc[:, "TMAX":"TMED"])
# This cell sets the notebook style
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Simulating fragments
Step3: Number of amplicons per taxon
Step4: Converting fragments to kde object
Step5: Checking ampfrag info
Step6: Making an incorp config file
Step7: Selecting incorporator taxa
Step8: Creating a community file
Step9: Plotting community rank abundances
Step10: Simulating gradient fractions
Step11: Plotting fractions
Step12: Adding diffusion
Step13: Adding DBL 'smearing'
Step14: Comparing DBL+diffusion to diffusion
Step15: Adding isotope incorporation to BD distribution
Step16: Plotting stats on BD shift from isotope incorporation
Step17: Simulating an OTU table
Step18: Plotting taxon abundances
Step19: Simulating PCR bias
Step20: Plotting change in relative abundances
Step21: Subsampling from the OTU table
Step22: Plotting seq count distribution
Step23: Plotting abundance distributions
Step24: Making a wide OTU table
Step25: Making metadata (phyloseq sample data)
Step26: Community analysis
Step27: DESeq2
Step28: Checking results of confusion matrix
Step29: Notes
Step30: qSIP
Step31: Assessing qSIP atom % excess accuracy
Step32: regression
Step33: Calculating a confusion matrix
Step34: delta BD
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation_rep3/'
genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
figureDir = '/home/nick/notebook/SIPSim/figures/bac_genome_n1147/'
bandwidth = 0.8
DBL_scaling = 0.5
subsample_dist = 'lognormal'
subsample_mean = 9.432
subsample_scale = 0.5
subsample_min = 10000
subsample_max = 30000
import glob
from os.path import abspath
import nestly
from IPython.display import Image
import os
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.makedirs(workDir)
if not os.path.isdir(figureDir):
os.makedirs(figureDir)
%cd $workDir
# Determining min/max BD that
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_range_BD = min_GC/100.0 * 0.098 + 1.66
max_range_BD = max_GC/100.0 * 0.098 + 1.66
max_range_BD = max_range_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_range_BD)
print 'Max BD: {}'.format(max_range_BD)
# estimated coverage
mean_frag_size = 9000.0
mean_amp_len = 300.0
n_frags = 10000
coverage = round(n_frags * mean_amp_len / mean_frag_size, 1)
msg = 'Average coverage from simulating {} fragments: {}X'
print msg.format(n_frags, coverage)
!SIPSim fragments \
$genomeDir/genome_index.txt \
--fp $genomeDir \
--fr ../../515F-806R.fna \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 10000 \
--np 24 \
2> ampFrags.log \
> ampFrags.pkl
!printf "Number of taxa with >=1 amplicon: "
!grep "Number of amplicons: " ampFrags.log | \
perl -ne "s/^.+ +//; print unless /^0$/" | wc -l
!grep "Number of amplicons: " ampFrags.log | \
perl -pe 's/.+ +//' | hist
!SIPSim fragment_KDE \
ampFrags.pkl \
> ampFrags_kde.pkl
!SIPSim KDE_info \
-s ampFrags_kde.pkl \
> ampFrags_kde_info.txt
%%R
# loading
df = read.delim('ampFrags_kde_info.txt', sep='\t')
df.kde1 = df %>%
filter(KDE_ID == 1)
df.kde1 %>% head(n=3)
BD_GC50 = 0.098 * 0.5 + 1.66
%%R -w 500 -h 250
# plotting
p.amp = ggplot(df.kde1, aes(median)) +
geom_histogram(binwidth=0.001) +
geom_vline(xintercept=BD_GC50, linetype='dashed', color='red', alpha=0.7) +
labs(x='Median buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
p.amp
!SIPSim incorpConfigExample \
--percTaxa 10 \
--percIncorpUnif 100 \
--n_reps 3 \
> PT10_PI100.config
# checking output
!cat PT10_PI100.config
!SIPSim KDE_selectTaxa \
-f 0.1 \
ampFrags_kde.pkl \
> incorporators.txt
!SIPSim communities \
--config PT10_PI100.config \
$genomeDir/genome_index.txt \
> comm.txt
%%R -w 750 -h 300
tbl = read.delim('comm.txt', sep='\t') %>%
mutate(library = library %>% as.character %>% as.numeric,
condition = ifelse(library %% 2 == 0, 'Control', 'Treatment'))
ggplot(tbl, aes(rank, rel_abund_perc, color=condition, group=library)) +
geom_line() +
scale_y_log10() +
scale_color_discrete('Community') +
labs(x='Rank', y='Relative abundance (%)') +
theme_bw() +
theme(
text=element_text(size=16)
)
!SIPSim gradient_fractions \
--BD_min $min_range_BD \
--BD_max $max_range_BD \
comm.txt \
> fracs.txt
%%R -w 600 -h 500
tbl = read.delim('fracs.txt', sep='\t')
ggplot(tbl, aes(fraction, fraction_size)) +
geom_bar(stat='identity') +
facet_grid(library ~ .) +
labs(y='fraction size') +
theme_bw() +
theme(
text=element_text(size=16)
)
%%R -w 450 -h 250
tbl$library = as.character(tbl$library)
ggplot(tbl, aes(library, fraction_size)) +
geom_boxplot() +
labs(y='fraction size') +
theme_bw() +
theme(
text=element_text(size=16)
)
!SIPSim diffusion \
--bw $bandwidth \
--np 20 \
ampFrags_kde.pkl \
> ampFrags_kde_dif.pkl \
2> ampFrags_kde_dif.log
!SIPSim DBL \
--comm comm.txt \
--commx $DBL_scaling \
--np 20 \
-o ampFrags_kde_dif_DBL.pkl \
ampFrags_kde_dif.pkl \
2> ampFrags_kde_dif_DBL.log
# checking output
!tail -n 5 ampFrags_kde_dif_DBL.log
# none
!SIPSim KDE_info \
-s ampFrags_kde.pkl \
> ampFrags_kde_info.txt
# diffusion
!SIPSim KDE_info \
-s ampFrags_kde_dif.pkl \
> ampFrags_kde_dif_info.txt
# diffusion + DBL
!SIPSim KDE_info \
-s ampFrags_kde_dif_DBL.pkl \
> ampFrags_kde_dif_DBL_info.txt
%%R
inFile = 'ampFrags_kde_info.txt'
df.raw = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df.raw$stage = 'raw'
inFile = 'ampFrags_kde_dif_info.txt'
df.dif = read.delim(inFile, sep='\t')
df.dif$stage = 'diffusion'
inFile = 'ampFrags_kde_dif_DBL_info.txt'
df.DBL = read.delim(inFile, sep='\t')
df.DBL$stage = 'diffusion +\nDBL'
df = rbind(df.raw, df.dif, df.DBL)
df.dif = ''
df.DBL = ''
df %>% head(n=3)
%%R -w 350 -h 300
df$stage = factor(df$stage, levels=c('raw', 'diffusion', 'diffusion +\nDBL'))
ggplot(df, aes(stage)) +
geom_boxplot(aes(y=min), color='red') +
geom_boxplot(aes(y=median), color='darkgreen') +
geom_boxplot(aes(y=max), color='blue') +
labs(y = 'Buoyant density (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank()
)
!SIPSim isotope_incorp \
--comm comm.txt \
--shift ampFrags_BD-shift.txt \
--taxa incorporators.txt \
--np 20 \
-o ampFrags_kde_dif_DBL_incorp.pkl \
ampFrags_kde_dif_DBL.pkl \
PT10_PI100.config \
2> ampFrags_kde_dif_DBL_incorp.log
# checking log
!tail -n 5 ampFrags_kde_dif_DBL_incorp.log
%%R
inFile = 'ampFrags_BD-shift.txt'
df = read.delim(inFile, sep='\t') %>%
mutate(library = library %>% as.character)
%%R -h 275 -w 375
inFile = 'ampFrags_BD-shift.txt'
df = read.delim(inFile, sep='\t') %>%
mutate(library = library %>% as.character %>% as.numeric)
df.s = df %>%
mutate(incorporator = ifelse(min > 0.001, TRUE, FALSE),
incorporator = ifelse(is.na(incorporator), 'NA', incorporator),
condition = ifelse(library %% 2 == 0, 'control', 'treatment')) %>%
group_by(library, incorporator, condition) %>%
summarize(n_incorps = n())
# plotting
ggplot(df.s, aes(library %>% as.character, n_incorps, fill=incorporator)) +
geom_bar(stat='identity') +
labs(x='Community', y = 'Count', title='Number of incorporators\n(according to BD shift)') +
theme_bw() +
theme(
text = element_text(size=16)
)
!SIPSim OTU_table \
--abs 1e9 \
--np 20 \
ampFrags_kde_dif_DBL_incorp.pkl \
comm.txt \
fracs.txt \
> OTU_n2_abs1e9.txt \
2> OTU_n2_abs1e9.log
# checking log
!tail -n 5 OTU_n2_abs1e9.log
%%R
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp50 = 0.5 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
%%R -w 700 -h 450
# plotting absolute abundances
# loading file
df = read.delim('OTU_n2_abs1e9.txt', sep='\t')
df.s = df %>%
group_by(library, BD_mid) %>%
summarize(total_count = sum(count))
## plot
p = ggplot(df.s, aes(BD_mid, total_count)) +
#geom_point() +
geom_area(stat='identity', alpha=0.3, position='dodge') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Total abundance') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16)
)
p
%%R -w 700 -h 450
# plotting number of taxa at each BD
df.nt = df %>%
filter(count > 0) %>%
group_by(library, BD_mid) %>%
summarize(n_taxa = n())
## plot
p = ggplot(df.nt, aes(BD_mid, n_taxa)) +
#geom_point() +
geom_area(stat='identity', alpha=0.3, position='dodge') +
#geom_histogram(stat='identity') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
%%R -w 700 -h 450
# plotting relative abundances
## plot
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Absolute abundance') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p + geom_area(stat='identity', position='dodge', alpha=0.5)
%%R -w 700 -h 450
p +
geom_area(stat='identity', position='fill') +
labs(x='Buoyant density', y='Relative abundance')
!SIPSim OTU_PCR \
OTU_n2_abs1e9.txt \
--debug \
> OTU_n2_abs1e9_PCR.txt
%%R -w 800 -h 300
# loading file
F = 'OTU_n2_abs1e9_PCR.txt'
df.SIM = read.delim(F, sep='\t') %>%
mutate(molarity_increase = final_molarity / init_molarity * 100,
library = library %>% as.character)
p1 = ggplot(df.SIM, aes(init_molarity, final_molarity, color=library)) +
geom_point(shape='O', alpha=0.5) +
labs(x='Initial molarity', y='Final molarity') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = ggplot(df.SIM, aes(init_molarity, molarity_increase, color=library)) +
geom_point(shape='O', alpha=0.5) +
scale_y_log10() +
labs(x='Initial molarity', y='% increase in molarity') +
theme_bw() +
theme(
text = element_text(size=16)
)
grid.arrange(p1, p2, ncol=2)
# PCR w/out --debug (no extra output)
!SIPSim OTU_PCR \
OTU_n2_abs1e9.txt \
> OTU_n2_abs1e9_PCR.txt
!SIPSim OTU_subsample \
--dist $subsample_dist \
--dist_params mean:$subsample_mean,sigma:$subsample_scale \
--min_size $subsample_min \
--max_size $subsample_max \
OTU_n2_abs1e9_PCR.txt \
> OTU_n2_abs1e9_PCR_subNorm.txt
%%R -w 350 -h 250
df = read.csv('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t')
df.s = df %>%
group_by(library, fraction) %>%
summarize(total_count = sum(count)) %>%
ungroup() %>%
mutate(library = as.character(library))
ggplot(df.s, aes(library, total_count)) +
geom_boxplot() +
labs(y='Number of sequences\nper fraction') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
# loading file
df.abs = read.delim('OTU_n2_abs1e9.txt', sep='\t')
df.sub = read.delim('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t')
#lib.reval = c('1' = 'control',
# '2' = 'treatment',
# '3' = 'control',
# '4' = 'treatment',
# '5' = 'control',
# '6' = 'treatment')
#df.abs = mutate(df.abs, library = plyr::revalue(as.character(library), lib.reval))
#df.sub = mutate(df.sub, library = plyr::revalue(as.character(library), lib.reval))
%%R -w 700 -h 1000
# plotting absolute abundances
## plot
p = ggplot(df.abs, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
legend.position = 'none',
plot.margin=unit(c(1,1,0.1,1), "cm")
)
p1 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(y='Total community\n(absolute abundance)')
# plotting absolute abundances of subsampled
## plot
p = ggplot(df.sub, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(y='Subsampled community\n(absolute abundance)') +
theme(
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
# plotting relative abundances of subsampled
p3 = p + geom_area(stat='identity', position='fill') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(y='Subsampled community\n(relative abundance)') +
theme(
axis.title.y = element_text(vjust=1),
plot.margin=unit(c(0.1,1,1,1.35), "cm")
)
# combining plots
grid.arrange(p1, p2, p3, ncol=1)
!SIPSim OTU_wideLong -w \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_w.txt
!SIPSim OTU_sampleData \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_meta.txt
# making phyloseq object from OTU table
!SIPSimR phyloseq_make \
OTU_n2_abs1e9_PCR_subNorm_w.txt \
-s OTU_n2_abs1e9_PCR_subNorm_meta.txt \
> OTU_n2_abs1e9_PCR_subNorm.physeq
## making ordination
!SIPSimR phyloseq_ordination \
OTU_n2_abs1e9_PCR_subNorm.physeq \
OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf
## filtering phyloseq object to just taxa/samples of interest (eg., BD-min/max)
!SIPSimR phyloseq_edit \
--BD_min 1.71 \
--BD_max 1.75 \
OTU_n2_abs1e9_PCR_subNorm.physeq \
> OTU_n2_abs1e9_PCR_subNorm_filt.physeq
## making ordination
!SIPSimR phyloseq_ordination \
OTU_n2_abs1e9_PCR_subNorm_filt.physeq \
OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf
# making png figures
!convert OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png
!convert OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png
Image(filename='OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png')
Image(filename='OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png')
## DESeq2
!SIPSimR phyloseq_DESeq2 \
--log2 0.25 \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all 0.25 \
OTU_n2_abs1e9_PCR_subNorm_filt.physeq \
> OTU_n2_abs1e9_PCR_subNorm_DS2.txt
## Confusion matrix
!SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj 0.1 \
ampFrags_BD-shift.txt \
OTU_n2_abs1e9_PCR_subNorm_DS2.txt
%%R -w 500 -h 350
byClass = read.delim('DESeq2-cMtx_byClass.txt', sep='\t')
byClass %>% filter(variables=='Balanced Accuracy') %>% print
ggplot(byClass, aes(variables, values)) +
geom_bar(stat='identity') +
labs(y='Value') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 550 -h 350
df_cMtx = read.delim('DESeq2-cMtx_data.txt', sep='\t') %>%
gather(clsfy, clsfy_value, incorp.pred, incorp.known) %>%
filter(! is.na(clsfy_value))
ggplot(df_cMtx, aes(log2FoldChange, padj)) +
geom_point(size=3, shape='O') +
facet_grid(clsfy ~ clsfy_value) +
labs(x='log2 fold change', y='Adjusted P-value') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 300 -h 300
# checking correspondence of padj & padj.BH
# ggplot(df_cMtx, aes(padj, padj.BH)) +
# geom_point(shape='O', size=2) +
# theme_bw() +
# theme(
# text = element_text(size=16)
# )
%%R
clsfy = function(guess,known){
if(is.na(guess) | is.na(known)){
return(NA)
}
if(guess == TRUE){
if(guess == known){
return('True positive')
} else {
return('False positive')
}
} else
if(guess == FALSE){
if(guess == known){
return('True negative')
} else {
return('False negative')
}
} else {
stop('Error: true or false needed')
}
}
%%R
df = read.delim('DESeq2-cMtx_data.txt', sep='\t')
df = df %>%
filter(! is.na(log2FoldChange), library %in% c(2,4,6)) %>%
mutate(taxon = reorder(taxon, -log2FoldChange),
cls = mapply(clsfy, incorp.pred, incorp.known))
df %>% head(n=3)
%%R -w 800 -h 350
df.TN = df %>% filter(cls == 'True negative')
df.TP = df %>% filter(cls == 'True positive')
df.FP = df %>% filter(cls == 'False negative')
ggplot(df, aes(taxon, log2FoldChange, color=cls,
ymin=log2FoldChange - lfcSE, ymax=log2FoldChange + lfcSE)) +
geom_pointrange(size=0.4, alpha=0.5) +
geom_pointrange(data=df.TP, size=0.4, alpha=0.3) +
geom_pointrange(data=df.FP, size=0.4, alpha=0.3) +
labs(x = 'Taxon', y = 'Log2 fold change') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
legend.title=element_blank(),
axis.text.x = element_blank(),
legend.position = 'bottom'
)
!SIPSim qSIP \
OTU_n2_abs1e9.txt \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_qSIP.txt
# making an experimental design file for qSIP
import itertools
x = range(1,7)
y = ['control', 'treatment']
expDesignFile = os.path.join(workDir, 'qSIP_exp_design.txt')
with open(expDesignFile, 'wb') as outFH:
for i,z in itertools.izip(x,itertools.cycle(y)):
line = '\t'.join([str(i),z])
outFH.write(line + '\n')
!head $expDesignFile
!SIPSim qSIP_atomExcess \
--np 10 \
OTU_n2_abs1e9_PCR_subNorm_qSIP.txt \
qSIP_exp_design.txt \
> OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt
%%R
df_qSIP = read.delim('OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt', sep='\t')
df_shift = read.delim('ampFrags_BD-shift.txt', sep='\t') %>%
filter(library %in% c(2,4,6)) %>%
group_by(taxon) %>%
summarize(median = median(median)) %>%
ungroup() %>%
rename('median_true_BD_shift' = median)
df_qSIP %>% head(n=3) %>% print
print('------------------------')
df_shift %>% head(n=3) %>% print
%%R
df.j = inner_join(df_qSIP, df_shift, c('taxon' = 'taxon')) %>%
filter(!is.na(BD_diff)) %>%
mutate(true_incorporator = ifelse(median_true_BD_shift > 0.03, TRUE, FALSE),
true_atom_fraction_excess = median_true_BD_shift / 0.036,
atom_fraction_excess = ifelse(is.na(atom_CI_low), 0, atom_fraction_excess))
df.j %>% head(n=3)
%%R -w 650 -h 300
ggplot(df.j, aes(BD_diff, fill=true_incorporator)) +
geom_histogram(binwidth=0.005, alpha=0.7, position='identity') +
scale_color_discrete('Incorporator?') +
labs(x='qSIP: BD shift (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 800 -h 300
df.j$taxon = reorder(df.j$taxon, -df.j$atom_fraction_excess)
ggplot(df.j, aes(taxon, true_atom_fraction_excess,
ymin=atom_CI_low, ymax=atom_CI_high)) +
geom_linerange(alpha=0.75) +
geom_point(color='red', size=0.25) +
geom_point(aes(y=atom_fraction_excess), color='green', size=0.2) +
labs(y='13C atom fraction excess') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 500 -h 250
# true incorporator error
ggplot(df.j, aes(atom_fraction_excess - true_atom_fraction_excess,
fill=true_incorporator)) +
geom_histogram(binwidth=0.05, alpha=0.7, position='identity') +
scale_fill_discrete('Incorporator?') +
labs(x='distance from true value') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
zip.res = pscl::zeroinfl(true_atom_fraction_excess ~ atom_fraction_excess, data=df.j)
zip.res %>% summary
%%R
lm.res = lm(true_atom_fraction_excess ~ atom_fraction_excess, data=df.j)
lm.res %>% summary
!SIPSimR qSIP_confuseMtx \
--libs 2,4,6 \
ampFrags_BD-shift.txt \
OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt
%%R -h 250
df = read.delim('qSIP-cMtx_byClass.txt', sep='\t') %>%
filter(library == 2)
ggplot(df, aes(variables, values)) +
geom_bar(stat='identity') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.x = element_blank()
)
%%R
df
!SIPSim deltaBD \
OTU_n2_abs1e9_PCR_subNorm.txt \
qSIP_exp_design.txt \
> OTU_n2_abs1e9_PCR_subNorm_dBD.txt
%%R
df_dBD = read.delim('OTU_n2_abs1e9_PCR_subNorm_dBD.txt', sep='\t')
df_shift = read.delim('ampFrags_BD-shift.txt', sep='\t') %>%
filter(library %in% c(2,4,6)) %>%
group_by(taxon) %>%
summarize(median = median(median)) %>%
ungroup() %>%
rename('median_true_BD_shift' = median)
df_dBD %>% head(n=3) %>% print
print('------------------------')
df_shift %>% head(n=3) %>% print
%%R
df.j = inner_join(df_dBD, df_shift, c('taxon' = 'taxon')) %>%
mutate(true_incorporator = ifelse(median_true_BD_shift > 0.03, TRUE, FALSE))
df.j %>% head(n=3)
%%R -w 650 -h 300
ggplot(df.j, aes(delta_BD, fill=true_incorporator)) +
geom_histogram(binwidth=0.005, alpha=0.7, position='identity') +
scale_color_discrete('Incorporator?') +
labs(x='deltaBD: BD shift (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 800 -h 300
df.j$taxon = reorder(df.j$taxon, -df.j$delta_BD)
ggplot(df.j, aes(taxon, median_true_BD_shift)) +
geom_point(color='red', size=0.25) +
geom_point(aes(y=delta_BD), color='green', size=0.2) +
labs(y='BD shift') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
lm.res = lm(median_true_BD_shift ~ delta_BD, data=df.j)
lm.res %>% summary
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow won't be able to understand strings as labels, so you'll need to use pandas' .apply() method to apply a custom function that converts them to 0s and 1s. This might be hard if you aren't very familiar with pandas, so feel free to take a peek at the solutions for this part.
Step2: Perform a Train Test Split on the Data
Step3: Create the Feature Columns for tf.estimator
Step4: Import TensorFlow
Step5: Create the tf.feature_columns for the categorical values. Use vocabulary lists or just use hash buckets.
Step6: Create the continuous feature_columns for the continuous values using numeric_column
Step7: Put all these variables into a single list with the variable name feat_cols
Step8: Create Input Function
Step9: Create your model with tf.estimator
Step10: Train your model on the data, for at least 5000 steps.
Step11: Evaluation
Step12: Use model.predict() and pass in your input function. This will produce a generator of predictions, which you can then transform into a list, with list()
Step13: Each item in your list will look like this
Step14: Create a list of only the class_ids key values from the prediction list of dictionaries, these are the predictions you will use to compare against the real y_test values.
Step15: Import classification_report from sklearn.metrics and then see if you can figure out how to use it to easily get a full report of your model's performance on the test data.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
census = pd.read_csv("./data/census_data.csv")
census.head()
census['income_bracket'].unique()
def label_fix(label):
if label==' <=50K':
return 0
else:
return 1
# Applying function to every row of the DataFrame
census['income_bracket'] = census['income_bracket'].apply(label_fix)
# Alternative
# lambda label:int(label==' <=50K')
# census['income_bracket'].apply(lambda label: int(label==' <=50K'))
from sklearn.model_selection import train_test_split
x_data = census.drop('income_bracket', axis = 1)
y_labels = census['income_bracket']
X_train, X_test, y_train, y_test = train_test_split(x_data, y_labels, test_size = 0.3,random_state = 101)
x_data.head()
y_labels.head()
census.columns
import tensorflow as tf
gender = tf.feature_column.categorical_column_with_vocabulary_list("gender", ["Female", "Male"])
occupation = tf.feature_column.categorical_column_with_hash_bucket("occupation", hash_bucket_size=1000)
marital_status = tf.feature_column.categorical_column_with_hash_bucket("marital_status", hash_bucket_size=1000)
relationship = tf.feature_column.categorical_column_with_hash_bucket("relationship", hash_bucket_size=1000)
education = tf.feature_column.categorical_column_with_hash_bucket("education", hash_bucket_size=1000)
workclass = tf.feature_column.categorical_column_with_hash_bucket("workclass", hash_bucket_size=1000)
native_country = tf.feature_column.categorical_column_with_hash_bucket("native_country", hash_bucket_size=1000)
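# with hash buckets, hash_bucket_size only needs to be a comfortable upper bound on the
# number of distinct categories (collisions are possible but rare for generous sizes)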
age = tf.feature_column.numeric_column("age")
education_num = tf.feature_column.numeric_column("education_num")
capital_gain = tf.feature_column.numeric_column("capital_gain")
capital_loss = tf.feature_column.numeric_column("capital_loss")
hours_per_week = tf.feature_column.numeric_column("hours_per_week")
feat_cols = [gender, occupation, marital_status, relationship, education, workclass, native_country,
age, education_num, capital_gain, capital_loss, hours_per_week]
input_func = tf.estimator.inputs.pandas_input_fn(x = X_train,
y = y_train,
batch_size = 100,
num_epochs = None,
shuffle = True)
model = tf.estimator.LinearClassifier(feature_columns = feat_cols)
model.train(input_fn = input_func,
steps = 5000)
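# optional sanity check (not part of the original exercise): the estimator also provides
# a built-in evaluate() that reports metrics such as accuracy and AUC on held-out data
eval_input_func = tf.estimator.inputs.pandas_input_fn(x = X_test, y = y_test, batch_size = 10, num_epochs = 1, shuffle = False)
model.evaluate(input_fn = eval_input_func)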
pred_fn = tf.estimator.inputs.pandas_input_fn(x = X_test,
batch_size = len(X_test),
shuffle = False)
predictions = list(model.predict(input_fn = pred_fn))
predictions[0]
final_preds = []
for pred in predictions:
final_preds.append(pred['class_ids'][0])
final_preds[:10]
from sklearn.metrics import classification_report
print(classification_report(y_test, final_preds))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Two Networks with Different Spacing
Step2: Position Networks Appropriately, then Stitch Together
Step3: Quickly Visualize the Network
Step4: Create Geometry Objects for Each Layer
Step5: Add Geometrical Properties to the Small Domain
Step6: Add Geometrical Properties to the Large Domain
Step7: Create Phase and Physics Objects
Step8: Add pore-scale models for diffusion to each Physics
Step9: For the small layer we've used a normal diffusive conductance model, which when combined with the diffusion coefficient of air will be equivalent to open-air diffusion. If we want the small layer to have some tortuosity we must account for this
Step10: Note that this extra line is NOT a pore-scale model, so it will be over-written when the phys_sm object is regenerated.
Step11: Perform a Diffusion Calculation
Step12: Visualize the Concentration Distribution
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import openpnm.models.geometry as gm
import openpnm.models.physics as pm
import openpnm.models.misc as mm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4)
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
%matplotlib inline
spacing_lg = 6e-5
layer_lg = op.network.Cubic(shape=[10, 10, 1], spacing=spacing_lg)
spacing_sm = 2e-5
layer_sm = op.network.Cubic(shape=[30, 5, 1], spacing=spacing_sm)
# Start by assigning labels to each network for identification later
layer_sm.set_label("small", pores=layer_sm.Ps, throats=layer_sm.Ts)
layer_lg.set_label("large", pores=layer_lg.Ps, throats=layer_lg.Ts)
# Next manually offset CL one full thickness relative to the GDL
layer_sm['pore.coords'] -= [0, spacing_sm*5, 0]
layer_sm['pore.coords'] += [0, 0, spacing_lg/2 - spacing_sm/2] # And shift up by 1/2 a lattice spacing
# Finally, send both networks to stitch which will stitch CL onto GDL
from openpnm.topotools import stitch
stitch(network=layer_lg, donor=layer_sm,
P_network=layer_lg.pores('back'), P_donor=layer_sm.pores('front'),
len_max=5e-5)
combo_net = layer_lg
combo_net.name = 'combo'
fig, ax = plt.subplots(figsize=[5, 5])
op.topotools.plot_connections(network=combo_net, ax=ax);
Ps = combo_net.pores('small')
Ts = combo_net.throats('small')
geom_sm = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)
Ps = combo_net.pores('large')
Ts = combo_net.throats('small', mode='not')
geom_lg = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)
geom_sm['pore.diameter'] = spacing_sm
geom_sm['pore.area'] = spacing_sm**2
geom_sm['throat.diameter'] = spacing_sm
geom_sm['throat.cross_sectional_area'] = spacing_sm**2
geom_sm['throat.length'] = 1e-12 # A very small number to represent nearly 0-length
# geom_sm.add_model(propname='throat.length',
# model=gm.throat_length.classic)
geom_sm.add_model(propname='throat.diffusive_size_factors',
model=gm.diffusive_size_factors.spheres_and_cylinders)
geom_lg['pore.diameter'] = spacing_lg*np.random.rand(combo_net.num_pores('large'))
geom_lg.add_model(propname='pore.area',
model=gm.pore_cross_sectional_area.sphere)
geom_lg.add_model(propname='throat.diameter',
model=mm.from_neighbor_pores,
prop='pore.diameter', mode='min')
geom_lg.add_model(propname='throat.cross_sectional_area',
model=gm.throat_cross_sectional_area.cylinder)
geom_lg.add_model(propname='throat.length',
model=gm.throat_length.spheres_and_cylinders)
geom_lg.add_model(propname='throat.diffusive_size_factors',
model=gm.diffusive_size_factors.spheres_and_cylinders)
air = op.phases.Air(network=combo_net, name='air')
phys_lg = op.physics.GenericPhysics(network=combo_net, geometry=geom_lg, phase=air)
phys_sm = op.physics.GenericPhysics(network=combo_net, geometry=geom_sm, phase=air)
phys_lg.add_model(propname='throat.diffusive_conductance',
model=pm.diffusive_conductance.ordinary_diffusion)
phys_sm.add_model(propname='throat.diffusive_conductance',
model=pm.diffusive_conductance.ordinary_diffusion)
porosity = 0.5
tortuosity = 2
phys_sm['throat.diffusive_conductance'] *= (porosity/tortuosity)
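# note: this manual scaling is not registered as a pore-scale model, so it will be lost
# if the models on phys_sm are regenerated later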
# Set source term
air['pore.A1'] = -1e-10 # Reaction pre-factor
air['pore.A2'] = 1 # Reaction order
air['pore.A3'] = 0 # A generic offset that is not needed so set to 0
phys_sm.add_model(propname='pore.reaction',
model=pm.generic_source_term.power_law,
A1='pore.A1', A2='pore.A2', A3='pore.A3',
X='pore.concentration',
regen_mode='deferred')
Deff = op.algorithms.ReactiveTransport(network=combo_net, phase=air)
Ps = combo_net.pores(['large', 'front'], mode='intersection')
Deff.set_value_BC(pores=Ps, values=1)
Ps = combo_net.pores('small')
Deff.set_source(propname='pore.reaction', pores=Ps)
Deff.settings['conductance'] = 'throat.diffusive_conductance'
Deff.settings['quantity'] = 'pore.concentration'
Deff.run()
fig, ax = plt.subplots(figsize=[5, 5])
op.topotools.plot_coordinates(network=combo_net, c=Deff['pore.concentration'],
cmap='jet', markersize=40, ax=ax);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following code we set up a horsetail matching optimization using test problem 3, and then run optimizations under three targets
|
<ASSISTANT_TASK:>
Python Code:
from horsetailmatching import HorsetailMatching, GaussianParameter
from horsetailmatching.demoproblems import TP3
from scipy.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt
def plotHorsetail(theHM, c='b', label=''):
(q, h, t), _, _ = theHM.getHorsetail()
plt.plot(q, h, c=c, label=label)
plt.plot(t, h, c=c, linestyle='dashed')
plt.xlim([-10, 10])
u1 = GaussianParameter()
def standardTarget(h):
return 0.
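# a target function maps each CDF height h in [0, 1] to a desired value of the quantity
# of interest; the horsetail metric penalises deviation of the CDF from this target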
theHM = HorsetailMatching(TP3, u1, ftarget=standardTarget, samples_prob=5000)
solution1 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution1.x)
print(solution1)
plotHorsetail(theHM, c='b', label='Standard')
def riskAverseTarget(h):
return 0. - 3.*h**3.
theHM.ftarget=riskAverseTarget
solution2 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution2.x)
print(solution2)
plotHorsetail(theHM, c='g', label='Risk Averse')
def veryRiskAverseTarget(h):
return 1. - 10.**h**10.
theHM.ftarget=veryRiskAverseTarget
solution3 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution3.x)
print(solution3)
plotHorsetail(theHM, c='r', label='Very Risk Averse')
plt.xlim([-10, 5])
plt.ylim([0, 1])
plt.xlabel('Quantity of Interest')
plt.legend(loc='lower left')
plt.plot()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
Step2: Data channel array consisted of 204 MEG planar gradiometers,
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
Step4: Let's use Maxwell filtering to clean the data a bit.
Step5: We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
Step6: Now we epoch our data, average it, and look at the first dipole response.
Step7: Let's do some dipole fits. The phantom is properly modeled by a single-shell sphere model.
Step8: Now we can compare to the actual locations, taking the difference in mm
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
data_path = bst_phantom_elekta.data_path()
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname, add_eeg_ref=False)
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG2421']
raw.plot_psd(tmax=60.)
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
raw.filter(None, 40., h_trans_bandwidth='auto', filter_length='auto',
phase='zero')
raw.plot(events=events)
tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
decim=5, preload=True, add_eeg_ref=False)
epochs['1'].average().plot()
t_peak = 60e-3 # ~60 ms at largest peak
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
cov = mne.compute_covariance(epochs, tmax=0)
data = []
for ii in range(1, 33):
evoked = epochs[str(ii)].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs, raw
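# fit one equivalent current dipole per phantom activation (32 in total), using the
# noise covariance and the spherical head model defined above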
dip = fit_dipole(evoked, cov, sphere, n_jobs=1)[0]
actual_pos = mne.dipole.get_phantom_dipoles(kind='122')[0]
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('Differences (mm):\n%s' % diffs[:, np.newaxis])
print('μ = %s' % (np.mean(diffs),))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model set-up
Step2: BUG!!!!
Step3: We now define the parameter uncertainties
Step4: And, in a next step, perform the model sampling
Step5: Save the model data for later re-use (e.g. to extend the data set)
Step7: Calculation of cell information entropy
Step8: The algorithm works on the simple idea that we do not explicitly require the single outputs at each location, but only the relative probability values. This may not matter too much for single entropy estimates (uni-variate), but it will matter a lot for multivariate cases, because we do not need to check all possible outcomes! Note that all outcomes with zero probability are simply not considered in the sorting algorithm (and they do not play any role in the calculation of the entropy, anyway), and that's exactly what we want to have!
Step9: We now visualise the cell information entropy, shown in Fig. (). We can here clearly identify uncertain regions within this model section. It is interesting to note that we can mostly still identify the distinct layer boundaries in the fuzzy areas of uncertainty around their borders (note
Step10: Here again an example of single models (adjust to visualise and probably include something like a "table plot" of multiple images for a paper!)
Step11: And here the "mean" lithologies (note
Step12: And here a bit more meaningful
Step13: Idea
Step14: Problem now
Step15: Try own "entropy colormap"
Step16: For comparison again
Step17: And the difference, for clarity
Step18: Clearly, the highset reduction is in the area around the borehole, but interestingly, the uncertianty in other areas is also reduced! Note specifically the reduction of uncertainties in the two neighbouring fold hinges.
Step19: We also just include one timing step to estimate the approximate simualtion time
Step20: Intersting! Only a local reduction around the drilling position, however
Step21: Interesting! And now both combined
Step22: We can see that now only a part on the left remains with significant uncertainties. So, let's "drill" into this, as well
Step23: Additional idea to speed up computation (especially for higher multivariate examples)
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# here the usual imports. If any of the imports fails,
# make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop'
# or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy
reload(pynoddy)
import pynoddy.history
import pynoddy.experiment
reload(pynoddy.experiment)
rcParams.update({'font.size': 15})
from pynoddy.experiment import monte_carlo
model_url = 'http://tectonique.net/asg/ch3/ch3_7/his/typeb.his'
ue = pynoddy.experiment.Experiment(url = model_url)
ue.change_cube_size(100)
sec = ue.get_section('y')
sec.block.shape
ue.plot_section('y')
plt.imshow(sec.block[:,50,:].transpose(), origin = 'lower left', interpolation = 'none')
tmp = sec.block[:,50,:]
tmp.shape
ue.set_random_seed(12345)
ue.info(events_only = True)
param_stats = [{'event' : 2,
'parameter': 'Amplitude',
'stdev': 100.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Wavelength',
'stdev': 500.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 500.0,
'type': 'normal'}]
ue.set_parameter_statistics(param_stats)
ue.set_random_seed(112358)
# perform random sampling
resolution = 100
sec = ue.get_section('y')
tmp = sec.block[:,50,:]
n_draws = 5000
model_sections = np.empty((n_draws, tmp.shape[0], tmp.shape[1]))
for i in range(n_draws):
ue.random_draw()
tmp_sec = ue.get_section('y', resolution = resolution,
remove_tmp_files = True)
model_sections[i,:,:] = tmp_sec.block[:,50,:]
import pickle
f_out = open("model_sections_5k.pkl", 'w')
pickle.dump(model_sections, f_out)
def entropy(diff_array, n_samples):
"""Determine entropy from diff_array using switchpoints."""
switchpts = np.where(diff_array > 0 )[0] + 1
switchpts = np.append(0, switchpts)
switchpts = np.append(switchpts, n_samples)
pdiff = np.diff(switchpts)
j_prob = pdiff / float(n_samples)
# calculate entropy
h = 0.
for jp in j_prob:
h -= jp * np.log2(jp)
return h
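# note on the approach: sorting each cell's outcomes lets us read off the relative
# frequency of every lithology id from the runs between switchpoints, without ever
# enumerating all possible ids (zero-probability outcomes simply never appear)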
# sort data (axis = 0: sort along model results!)
mssort = np.sort(model_sections, axis = 0)
# create difference matrix
mssort_diff = mssort[1:,:,:] - mssort[:-1,:,:]
n_samples = model_sections.shape[0]
# and now: for all!
# note: 'mssub' is not defined in the cells shown here; we assume it is simply the full
# set of sampled sections (replace with a subset if that was intended)
mssub = model_sections
h = np.empty_like(mssub[0,:,:])
for i in range(100):
for j in range(40):
h[i,j] = entropy(mssort_diff[:,i,j], n_samples)
h[50,30]
plt.imshow(h.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
plt.imshow(mssub[70,:,:].transpose(), origin = 'lower left', interpolation = 'none')
plt.imshow(np.mean(mssub, axis = 0).transpose(), origin = 'lower left', interpolation = 'none')
# step 1: estimate probabilities (note: unfortunate workaround with ones multiplication,
# there may be a better way, but this is somehow a recurring problem of implicit
# array flattening in numpy)
litho_id = 4
prob = np.sum(np.ones_like(mssub) * (mssub == litho_id), axis = 0) / float(n_samples)
plt.imshow(prob.transpose(),
origin = 'lower left',
interpolation = 'none',
cmap = 'gray_r')
plt.colorbar(orientation = 'horizontal')
sys.path.append("/Users/flow/git/mutual_information/")
import hspace
reload(hspace)
locs = np.array([1,2,3])
locs.shape
locs = np.array([[1,1],[1,2],[1,3]])
locs.shape
models_sub = model_sections[:10,:,:]
joint_units = []
for entry in models_sub:
joint_val = ""
for i, loc in enumerate(locs):
joint_val += "%d" % entry[loc[0], loc[1]]
joint_units.append(joint_val)
print joint_units
hspace.joint_entropy(model_sections, locs, n_samples)
# now: define position of "drill":
n = 10
drill_i = [60] * n
drill_j = range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
# check with arbitrary additional position:
locs = drill_locs + [[50, 20]]
print locs
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
print h_joint_drill
print h_joint_loc
print h_joint_loc - h_joint_drill
print h[50,30]
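# the difference printed above is the conditional entropy H(location | drill) =
# H(location, drill) - H(drill); comparing it against the unconditional entropy at a
# location (e.g. h[50,30]) indicates how much information the simulated drill provides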
# Determine max value of initital entropies for colorbar scaling
h_max = np.max(h)
print h_max
n_max = int(np.ceil(2 ** h_max))
print n_max
print np.log2(n_max)
pwd
# Import Viridis colorscale
import colormaps as cmaps
plt.register_cmap(name='viridis', cmap=cmaps.viridis)
plt.register_cmap(name='magma', cmap=cmaps.magma)
plt.set_cmap(cmaps.viridis)
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
import simple_2D_example
from simple_2D_example import entropy_colormap
reload(simple_2D_example)
ecmap = simple_2D_example.entropy_colormap(h_max);
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'viridis', interpolation = 'none', vmax=np.log2(n_max+0.02))
plt.colorbar(orientation = 'horizontal')
# half-step contour lines
contour_levels = np.log2(np.arange(1., n_max + 0.001, .5))
plt.contour(h_cond_drill.transpose(), contour_levels, colors = 'gray')
# superpose 1-step contour lines
contour_levels = np.log2(np.arange(1., n_max + 0.001, 1.))
plt.contour(h_cond_drill.transpose(), contour_levels, colors = 'white')
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,99])
plt.ylim([0,39])
plt.imshow(h.transpose(), origin = 'lower left',
cmap = 'viridis', interpolation = 'none', vmax=np.log2(n_max+0.02))
plt.colorbar(orientation = 'horizontal')
# half-step contour lines
contour_levels = np.log2(np.arange(1., n_max + 0.001, .5))
plt.contour(h.transpose(), contour_levels, colors = 'gray')
# superpose 1-step contour lines
contour_levels = np.log2(np.arange(1., n_max + 0.001, 1.))
plt.contour(h.transpose(), contour_levels, colors = 'white')
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'viridis', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,99])
plt.ylim([0,39])
# define position of "drill":
n = 10
drill_i = [20] * n
drill_j = range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
%%timeit
locs = drill_locs + [[50,20]]
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# estimated total time:
ttime = 100 * 40 * 0.0476
print("Estimated total time: %.3f seconds or %.3f minutes" % (ttime, ttime/60.))
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
# store results
f_out = open("h_cond_drill_i20_10.pkl", 'w')
pickle.dump(h_cond_drill, f_out)
f_out.close()
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'RdBu', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
# define position of "drill":
n = 30
drill_i = [20] * n
drill_j = range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
%%timeit
locs = drill_locs + [[50,20]]
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# estimated total time:
ttime = 100 * 40 * 0.130
print("Estimated total time: %.3f seconds or %.3f minutes" % (ttime, ttime/60.))
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
plt.imshow(h.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'RdBu', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
# define position of "drill":
n = 30
drill_i = [60] * n
drill_j = range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
%%timeit
locs = drill_locs + [[50,20]]
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'RdBu', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
# define position of "drill":
n = 30
drill_i = [60] * n + [20] * n
drill_j = range(39,39-n,-1) + range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
%%timeit
locs = drill_locs + [[50,20]]
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'RdBu', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
# define position of "drill":
n = 30
drill_i = [60] * n + [20] * n + [5] * n
drill_j = range(39,39-n,-1) + range(39,39-n,-1) + range(39,39-n,-1)
drill_locs = zip(drill_i, drill_j)
# determine joint entropy of drill_locs:
h_joint_drill = hspace.joint_entropy(model_sections, drill_locs, n_samples)
%%timeit
locs = drill_locs + [[50,20]]
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# generate conditional entropies for entire section:
h_cond_drill = np.zeros_like(h)
for i in range(100):
for j in range(40):
# add position to locations
locs = drill_locs + [[i,j]]
# determine joint entropy
h_joint_loc = hspace.joint_entropy(model_sections, locs, n_samples)
# subtract joint entropy of drill locs to obtain conditional entropy
h_cond_drill[i,j] = h_joint_loc - h_joint_drill
# store results
f_out = open("h_cond_drill_i62_20_5_30.pkl", 'w')
pickle.dump(h_cond_drill, f_out)
f_out.close()
plt.imshow(h_cond_drill.transpose(), origin = 'lower left',
cmap = 'gray', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
plt.imshow((h - h_cond_drill).transpose(), origin = 'lower left',
cmap = 'RdBu', interpolation = 'none')
plt.colorbar(orientation = 'horizontal')
# plot drilling positions above it:
dp = np.array(drill_locs).transpose()
plt.plot(dp[0], dp[1], 'ws')
plt.xlim([0,100])
plt.ylim([0,40])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markers
Step2: Exercise 3.2
Step3: Linestyles
Step4: It is a bit confusing, but the line styles mentioned above are only valid for lines. Whenever you are dealing with the linestyles of the edges of "Patch" objects, you will need to use words instead of the symbols. So "solid" instead of "-", and "dashdot" instead of "-.". This issue will be fixed for the v2.1 release and allow these specifications to be used interchangeably.
Step5: Plot attributes
Step6: Table of plot properties and their value types
Step8: Colormaps
Step9: When colormaps are created in mpl, they get "registered" with a name. This allows one to specify a colormap to use by name.
Step10: Mathtext
Step11: Hatches
Step12: Ugly tie contest!
Step13: Transforms
Step14: You can also change the rc settings during runtime within a python script or interactively from the python shell. All of the rc settings are stored in a dictionary-like variable called matplotlib.rcParams, which is global to the matplotlib package. rcParams can be modified directly. Newer versions of matplotlib can use rc(), for example
|
<ASSISTANT_TASK:>
Python Code:
%load exercises/3.1-colors.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, t, t**2, t, t**3)
plt.show()
xs, ys = np.mgrid[:4, 9:0:-1]
markers = [".", "+", ",", "x", "o", "D", "d", "", "8", "s", "p", "*", "|", "_", "h", "H", 0, 4, "<", "3",
1, 5, ">", "4", 2, 6, "^", "2", 3, 7, "v", "1", "None", None, " ", ""]
descripts = ["point", "plus", "pixel", "cross", "circle", "diamond", "thin diamond", "",
"octagon", "square", "pentagon", "star", "vertical bar", "horizontal bar", "hexagon 1", "hexagon 2",
"tick left", "caret left", "triangle left", "tri left", "tick right", "caret right", "triangle right", "tri right",
"tick up", "caret up", "triangle up", "tri up", "tick down", "caret down", "triangle down", "tri down",
"Nothing", "Nothing", "Nothing", "Nothing"]
fig, ax = plt.subplots(1, 1, figsize=(7.5, 4))
for x, y, m, d in zip(xs.T.flat, ys.T.flat, markers, descripts):
ax.scatter(x, y, marker=m, s=100)
ax.text(x + 0.1, y - 0.1, d, size=14)
ax.set_axis_off()
plt.show()
%load exercises/3.2-markers.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, '-', t, t**2, '--', t, t**3, '-.', t, -t, ':')
plt.show()
fig, ax = plt.subplots(1, 1)
ax.bar([1, 2, 3, 4], [10, 20, 15, 13], ls='dashed', ec='r', lw=5)
plt.show()
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
%load exercises/3.3-properties.py
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2*np.pi*t)
plt.plot(t, a, )
plt.show()
# %load http://matplotlib.org/mpl_examples/color/colormaps_reference.py
==================
Colormap reference
==================
Reference for colormaps included with Matplotlib.
This reference example shows all colormaps included with Matplotlib. Note that
any colormap listed here can be reversed by appending "_r" (e.g., "pink_r").
These colormaps are divided into the following categories:
Sequential:
These colormaps are approximately monochromatic colormaps varying smoothly
between two color tones---usually from low saturation (e.g. white) to high
saturation (e.g. a bright blue). Sequential colormaps are ideal for
representing most scientific data since they show a clear progression from
low-to-high values.
Diverging:
These colormaps have a median value (usually light in color) and vary
smoothly to two different color tones at high and low values. Diverging
colormaps are ideal when your data has a median value that is significant
(e.g. 0, such that positive and negative values are represented by
different colors of the colormap).
Qualitative:
These colormaps vary rapidly in color. Qualitative colormaps are useful for
choosing a set of discrete colors. For example::
color_list = plt.cm.Set3(np.linspace(0, 1, 12))
gives a list of RGB colors that are good for plotting a series of lines on
a dark background.
Miscellaneous:
Colormaps that don't fit into the categories above.
import numpy as np
import matplotlib.pyplot as plt
# Have colormaps separated into categories:
# http://matplotlib.org/examples/color/colormaps_reference.html
cmaps = [('Perceptually Uniform Sequential', [
'viridis', 'plasma', 'inferno', 'magma']),
('Sequential', [
'Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds',
'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu',
'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn']),
('Sequential (2)', [
'binary', 'gist_yarg', 'gist_gray', 'gray', 'bone', 'pink',
'spring', 'summer', 'autumn', 'winter', 'cool', 'Wistia',
'hot', 'afmhot', 'gist_heat', 'copper']),
('Diverging', [
'PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu',
'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic']),
('Qualitative', [
'Pastel1', 'Pastel2', 'Paired', 'Accent',
'Dark2', 'Set1', 'Set2', 'Set3',
'tab10', 'tab20', 'tab20b', 'tab20c']),
('Miscellaneous', [
'flag', 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern',
'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg', 'hsv',
'gist_rainbow', 'rainbow', 'jet', 'nipy_spectral', 'gist_ncar'])]
nrows = max(len(cmap_list) for cmap_category, cmap_list in cmaps)
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
def plot_color_gradients(cmap_category, cmap_list, nrows):
fig, axes = plt.subplots(nrows=nrows)
fig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)
axes[0].set_title(cmap_category + ' colormaps', fontsize=14)
for ax, name in zip(axes, cmap_list):
ax.imshow(gradient, aspect='auto', cmap=plt.get_cmap(name))
pos = list(ax.get_position().bounds)
x_text = pos[0] - 0.01
y_text = pos[1] + pos[3]/2.
fig.text(x_text, y_text, name, va='center', ha='right', fontsize=10)
# Turn off *all* ticks & spines, not just the ones with colormaps.
for ax in axes:
ax.set_axis_off()
for cmap_category, cmap_list in cmaps:
plot_color_gradients(cmap_category, cmap_list, nrows)
plt.show()
fig, (ax1, ax2) = plt.subplots(1, 2)
z = np.random.random((10, 10))
ax1.imshow(z, interpolation='none', cmap='gray')
ax2.imshow(z, interpolation='none', cmap='coolwarm')
plt.show()
plt.scatter([1, 2, 3, 4], [4, 3, 2, 1])
plt.title(r'$\sigma_i=15$', fontsize=20)
plt.show()
import matplotlib as mpl
from matplotlib.rcsetup import cycler
mpl.rc('axes', prop_cycle=cycler('color', 'rgc') +
cycler('lw', [1, 4, 6]) +
cycler('linestyle', ['-', '-.', ':']))
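# the three cyclers are zipped together, so successive plot calls step through the
# (color, linewidth, linestyle) triples in order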
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t)
plt.plot(t, t**2)
plt.plot(t, t**3)
plt.show()
mpl.rc('axes', prop_cycle=cycler('color', ['r', 'orange', 'c', 'y']) +
cycler('hatch', ['x', 'xx-', '+O.', '*']))
x = np.array([0.4, 0.2, 0.5, 0.8, 0.6])
y = [0, -5, -6, -5, 0]
plt.fill(x+1, y)
plt.fill(x+2, y)
plt.fill(x+3, y)
plt.fill(x+4, y)
plt.show()
import matplotlib
print(matplotlib.matplotlib_fname())
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcdefaults() # for when re-running this cell
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3, 4])
mpl.rc('lines', linewidth=2, linestyle='-.')
# Equivalent older, but still valid syntax
#mpl.rcParams['lines.linewidth'] = 2
#mpl.rcParams['lines.linestyle'] = '-.'
ax2.plot([1, 2, 3, 4])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
from typing import List
def all_prefixes(string: str) -> List[str]:
result = []
for i in range(len(string)):
result.append(string[:i+1])
return result
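# example: all_prefixes('abc') returns ['a', 'ab', 'abc']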
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1. Compiling Ebola Data
Step2: sum_row
Step3: Now, we define for each country a function, which, for a given file, returns a dictionary with the country, date, upper and lower bounds for the new cases, and upper and lower bounds for the new deaths.
Step4: As the files for Sierra Leone do not contain data for the new deaths, we first extract the total deaths for each day, and we will process them later to get the new deaths.
Step5: We now transform the data for Sierra Leone
Step6: We can now insert the data in a dataframe. For Liberia, December's data is in a completely different format, so we dropped it
Step7: Finally, to have some final general idea for the data, we average the bounds.
Step8: Task 2. RNA Sequences
Step9: Now we repeat this operation for every other spreadsheet except the metadata. At each iteration we simply concatenate the data at the end of the previous data, thus accumulating all the files' data into a single dataframe. We don't care about any index right now since we will use a random one later.
Step10: Finally, we do a merge with the metadata. We join on the BARCODE column, which will be the index of the metadata when we import it in this case. Then we set the index to the three columns BARCODE, GROUP and SAMPLE, which are all the columns of the metadata and are unique.
Step11: Task 3. Class War in Titanic
Step12: For each of the following questions state clearly your assumptions and discuss your findings
Step13: Next we can list the data types of each field.
Step14: When it comes to the object fields, we can be a bit more precise. name, sex, ticket, cabin, embarked, boat and home.dest are all strings.
Step15: Moreover, we can also note some ranges of other fields. For example, sex has only two possible values female and male. embarked can only be S, C and Q.
Step16: Then we make categorical data as actually categorical.
Step17: 2. We plot the histogram of the travel class.
Step18: Next we plot the histogram of the three embark ports.
Step19: Next we plot the histogram of the sex.
Step20: Next, we cut the ages data into decades and plot the histogram of the decades.
Step21: 3. We plot the cabin floor data as a pie chart.
Step22: 4. Here, we plot the proportion of people that survived in the first class.
Step23: Next, we plot the proportion of people that survived in the second class.
Step24: Finally, we plot the proportion of people that survived in the third class.
Step25: As we can see, the lower the class, the higher the probability of death.
Step26: Here we set these new columns to appropriate values. We essentially separate the survived columns for easier summing later on. Finally we slice the data to take only the columns of interest.
Step27: We group the data by the sex and class of the passengers and we sum it. Then we have the sum of alive and dead people grouped as we wish and we can easily calculate the proportion of them that survived, which we plot as a histogram.
Step28: We can see that there is a huge difference of survival between the classes and sexes
Step29: Next, we set the correct category to people below or above the median age. The people that have the median age are grouped with the people below it. Next we set this column as a categorical column.
Step30: Next, we take the columns that are of interest to us and group by age category, sex and travel class. Then we sum over these groups, obtaining the people that lived and those that died, with which we can compute the proportion and display it as a dataframe.
|
<ASSISTANT_TASK:>
Python Code:
DATA_FOLDER = 'Data' # Use the data folder provided in Tutorial 02 - Intro to Pandas.
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from dateutil.parser import parse
from os import listdir
from os.path import isfile, join
sns.set_context('notebook')
def get_files(country):
path = DATA_FOLDER + "/ebola/" + country + "_data/"
return [f for f in listdir(path) if isfile(join(path, f))]
def sum_row(row, total_col):
return float(row[total_col].values[0])
def sum_rows(rows, total_col):
tot = 0
for row in rows:
tot += sum_row(row, total_col)
return tot
def get_row_guinea(file):
country = 'guinea'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file)
total_col = "Totals"
new_cases_lower = sum_row(raw[raw.Description == "New cases of confirmed"], total_col)
new_cases_upper = sum_row(raw[raw.Description == "Total new cases registered so far"], total_col)
new_deaths_lower = sum_row(raw[(raw.Description == "New deaths registered today (confirmed)") | (raw.Description == "New deaths registered")], total_col)
new_deaths_upper = sum_row(raw[(raw.Description == "New deaths registered today") | (raw.Description == "New deaths registered")], total_col)
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}
def get_row_liberia(file):
country = 'liberia'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file).fillna(0)
total_col = "National"
new_cases_lower = sum_row(raw[raw.Variable == "New case/s (confirmed)"], total_col)
list_cases_upper = (["New Case/s (Suspected)",
"New Case/s (Probable)",
"New case/s (confirmed)"])
new_cases_upper = sum_rows([raw[raw.Variable == row] for row in list_cases_upper], total_col)
new_deaths_lower = sum_row(raw[raw.Variable == "Newly reported deaths"], total_col)
new_deaths_upper = new_deaths_lower
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}
def get_row_sl(file):
country = 'sl'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file).fillna(0)
total_col = "National"
new_cases_lower = sum_row(raw[raw.variable == "new_confirmed"], total_col)
list_cases_upper = (["new_suspected",
"new_probable",
"new_confirmed"])
new_cases_upper = sum_rows([raw[raw.variable == row] for row in list_cases_upper], total_col)
list_death_upper = (["death_suspected",
"death_probable",
"death_confirmed"])
total_death_upper = sum_rows([raw[raw.variable == row] for row in list_death_upper], total_col)
total_death_lower = sum_row(raw[raw.variable == "death_confirmed"], total_col)
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'TotalDeathLower' : total_death_lower, 'TotalDeathUpper' : total_death_upper}
rows_guinea = [get_row_guinea(file) for file in get_files("guinea")]
rows_liberia = [get_row_liberia(file) for file in get_files("liberia")]
rows_sl_total_deaths = [get_row_sl(file) for file in get_files("sl")]
dic_sl_total_deaths = {}
for row in rows_sl_total_deaths:
dic_sl_total_deaths[row['Date']] = row
rows_sl = []
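# new deaths for Sierra Leone are obtained as the day-to-day difference of the
# cumulative death counts (upper and lower bounds are treated separately)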
for date, entry in dic_sl_total_deaths.items():
date_before = date - datetime.timedelta(days=1)
if date_before in dic_sl_total_deaths:
if entry['TotalDeathUpper'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathUpper'] != 0 and entry['TotalDeathLower'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathLower'] != 0:
copy = dict(entry)
del copy['TotalDeathUpper']
del copy['TotalDeathLower']
copy['NewDeathsUpper'] = entry['TotalDeathUpper'] - dic_sl_total_deaths[date_before]['TotalDeathUpper']
copy['NewDeathsLower'] = entry['TotalDeathLower'] - dic_sl_total_deaths[date_before]['TotalDeathLower']
rows_sl.append(copy)
raw_dataframe = pd.DataFrame(columns=['Country', 'Date', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper'])
for row in rows_sl, rows_guinea:
raw_dataframe = raw_dataframe.append(row, ignore_index = True)
for row in rows_liberia:
if row['Date'].month != 12: #December data is erroneous
raw_dataframe = raw_dataframe.append(row, ignore_index = True)
raw_dataframe
dataframe = raw_dataframe.set_index(['Country', 'Date'])
dataframe_no_day = raw_dataframe
dataframe_no_day['Year'] = raw_dataframe['Date'].apply(lambda x: x.year)
dataframe_no_day['Month'] = raw_dataframe['Date'].apply(lambda x: x.month)
final_df = dataframe_no_day[['Country', 'Year', 'Month', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper']].groupby(['Country', 'Year', 'Month']).mean()
final_df
s1 = final_df[['NewCasesLower', 'NewCasesUpper']].mean(axis=1)
s2 = final_df[['NewDeathsLower', 'NewDeathsUpper']].mean(axis=1)
final = pd.concat([s1, s2], axis=1)
final.columns = ['NewCasesAverage', 'NewDeathsAverage']
final
mid = pd.read_excel(DATA_FOLDER + '/microbiome/MID1.xls', sheetname='Sheet 1', header=None)
mid.fillna('unknown', inplace=True)
mid['BARCODE'] = 'MID1'
mid.columns = ['Taxon', 'Count', 'BARCODE']
for i in range(2, 10):
midi = pd.read_excel(DATA_FOLDER + '/microbiome/MID' + str(i) + '.xls', sheetname='Sheet 1', header=None)
midi.fillna('unknown', inplace=True)
midi['BARCODE'] = 'MID' + str(i)
midi.columns = ['Taxon', 'Count', 'BARCODE']
mid = pd.concat([mid, midi])
metadata = pd.read_excel(DATA_FOLDER + '/microbiome/metadata.xls', sheetname='Sheet1', index_col=0)
metadata.fillna('unknown', inplace=True)
merged = pd.merge(mid, metadata, right_index=True, left_on='BARCODE')
merged = merged.set_index(keys=['BARCODE', 'Taxon'])
merged
from IPython.core.display import HTML
HTML(filename=DATA_FOLDER+'/titanic.html')
titanic = pd.read_excel(DATA_FOLDER + '/titanic.xls', sheetname='titanic')
titanic
titanic.dtypes
titanic.describe()
class_dic = {1 : 'First Class', 2 : 'Second Class', 3 : 'Third Class', np.nan : np.nan}
survived_dic = {0 : 'Deceased' , 1 : 'Survived', np.nan : np.nan}
emarked_dic = {'C' : 'Cherbourg', 'Q' : 'Queenstown', 'S' : 'Southampton', np.nan : np.nan}
titanic['pclass'] = titanic['pclass'].apply(lambda x: class_dic[x])
titanic['survived'] = titanic['survived'].apply(lambda x: survived_dic[x])
titanic['embarked'] = titanic['embarked'].apply(lambda x: emarked_dic[x])
titanic['pclass'] = titanic.pclass.astype('category')
titanic['survived'] = titanic.survived.astype('category')
titanic['sex'] = titanic.sex.astype('category')
titanic['embarked'] = titanic.embarked.astype('category')
titanic['cabin'] = titanic.cabin.astype('category')
titanic['boat'] = titanic.boat.astype('category')
titanic.pclass.value_counts(sort=False).plot(kind='bar')
titanic.embarked.value_counts().plot(kind='bar')
titanic.sex.value_counts().plot(kind='bar')
pd.cut(titanic.age, range(0,90,10)).value_counts(sort=False).plot(kind='bar')
titanic.cabin.dropna().apply(lambda x : x[0]).value_counts(sort=False).plot(kind='pie')
titanic[titanic.pclass == "First Class"].survived.value_counts(sort=False).plot(kind='pie')
titanic[titanic.pclass == "Second Class"].survived.value_counts(sort=False).plot(kind='pie')
titanic[titanic.pclass == "Third Class"].survived.value_counts(sort=False).plot(kind='pie')
titanic.insert(0, 'alive', 0)
titanic.insert(0, 'dead', 0)
titanic.insert(0, 'ratio', 0)
titanic.loc[titanic['survived'] == "Survived", 'alive'] = 1
titanic.loc[titanic['survived'] == "Deceased", 'dead'] = 1
df = titanic[['pclass', 'sex', 'alive', 'dead', 'ratio']]
aggregated = df.groupby(['sex', 'pclass']).sum()
(aggregated['alive'] / (aggregated['alive'] + aggregated['dead'])).plot(kind='bar')
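# survival rate per (sex, class) group = survivors / (survivors + deceased)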
titanic.dropna(axis=0, subset=['age'], inplace=True)
titanic.insert(0, 'age_category', 0)
median = titanic['age'].median()
titanic.loc[titanic['age'] > median, 'age_category'] = "Age > " + str(median)
titanic.loc[titanic['age'] <= median, 'age_category'] = "Age <= " + str(median)
titanic['age_category'] = titanic.age_category.astype('category')
sub = titanic[['pclass', 'sex', 'age_category', 'alive', 'dead', 'ratio']]
subagg = sub.groupby(['age_category', 'sex', 'pclass']).sum()
subagg['ratio'] = (subagg['alive'] / (subagg['alive'] + subagg['dead']))
only_ratio = subagg[['ratio']]
only_ratio
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct triangle waveform.
Step2: TODO
|
<ASSISTANT_TASK:>
Python Code:
import magma as m
m.set_mantle_target('ice40')
import mantle
def DefineTriangle(n):
T = m.Bits(n)
class _Triangle(m.Circuit):
name = f'Triangle{n}'
IO = ['I', m.In(T), 'O', m.Out(T)]
@classmethod
def definition(io):
invert = mantle.Invert(n)
mux = mantle.Mux(2, n)
m.wire( mux( io.I, invert(io.I), io.I[n-1] ), io.O )
return _Triangle
def Triangle(n):
return DefineTriangle(n)()
from loam.boards.icestick import IceStick
N = 8
icestick = IceStick()
icestick.Clock.on()
for i in range(N):
icestick.J3[i].output().on()
main = icestick.main()
counter = mantle.Counter(32)
sawtooth = counter.O[8:8+N]
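# the upper bits of the free-running counter form a sawtooth; the Triangle circuit
# folds it into a triangle wave by inverting the ramp whenever the top bit is set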
tri = Triangle(N)
m.wire( tri(sawtooth), main.J3 )
m.EndDefine()
m.compile('build/triangle', main)
%%bash
cd build
cat triangle.pcf
yosys -q -p 'synth_ice40 -top main -blif triangle.blif' triangle.v
arachne-pnr -q -d 1k -o triangle.txt -p triangle.pcf triangle.blif
icepack triangle.txt triangle.bin
iceprog triangle.bin
import csv
import magma as m
with open("data/triangle-capture.csv") as triangle_capture_csv:
csv_reader = csv.reader(triangle_capture_csv)
next(csv_reader, None) # skip the headers
rows = [row for row in csv_reader]
timestamps = [float(row[0]) for row in rows]
values = [m.bitutils.seq2int(tuple(int(x) for x in row[1:])) for row in rows]
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(timestamps[:1000], values[:1000], "-")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import tensorflow as tf
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Embedding(n_most_words,n_dim,input_length = X_train.shape[1]))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Conv1D(64, 3, padding = 'same', activation = 'relu'))
model.add(tf.keras.layers.LSTM(64,dropout=0.25,recurrent_dropout=0.25))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Dense(50,activation='relu'))
model.add(tf.keras.layers.Dense(3,activation='softmax'))
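# a typical compile step for this 3-class model; the loss/optimizer/metrics below are
# assumptions, as they are not specified in the original snippet (use
# sparse_categorical_crossentropy instead if the labels are integer-encoded)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])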
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Conversion
Step2: You may also visualize the directed acyclic graph (DAG) of the BBN through networkx.
Step3: Inference
Step4: Here, we visualize the marginal probabilities without observations and with different observations.
Step5: Plot the marginal probabilities.
|
<ASSISTANT_TASK:>
Python Code:
json_data = {
"V": ["Letter", "Grade", "Intelligence", "SAT", "Difficulty"],
"E": [["Difficulty", "Grade"],
["Intelligence", "Grade"],
["Intelligence", "SAT"],
["Grade", "Letter"]],
"Vdata": {
"Letter": {
"ord": 4,
"numoutcomes": 2,
"vals": ["weak", "strong"],
"parents": ["Grade"],
"children": None,
"cprob": {
"['A']": [.1, .9],
"['B']": [.4, .6],
"['C']": [.99, .01]
}
},
"SAT": {
"ord": 3,
"numoutcomes": 2,
"vals": ["lowscore", "highscore"],
"parents": ["Intelligence"],
"children": None,
"cprob": {
"['low']": [.95, .05],
"['high']": [.2, .8]
}
},
"Grade": {
"ord": 2,
"numoutcomes": 3,
"vals": ["A", "B", "C"],
"parents": ["Difficulty", "Intelligence"],
"children": ["Letter"],
"cprob": {
"['easy', 'low']": [.3, .4, .3],
"['easy', 'high']": [.9, .08, .02],
"['hard', 'low']": [.05, .25, .7],
"['hard', 'high']": [.5, .3, .2]
}
},
"Intelligence": {
"ord": 1,
"numoutcomes": 2,
"vals": ["low", "high"],
"parents": None,
"children": ["SAT", "Grade"],
"cprob": [.7, .3]
},
"Difficulty": {
"ord": 0,
"numoutcomes": 2,
"vals": ["easy", "hard"],
"parents": None,
"children": ["Grade"],
"cprob": [.6, .4]
}
}
}
from pybbn.graph.dag import Bbn
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.pptc.inferencecontroller import InferenceController
from pybbn.graph.factory import Factory
bbn = Factory.from_libpgm_discrete_dictionary(json_data)
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore')
nx_graph, labels = bbn.to_nx_graph()
pos = nx.nx_agraph.graphviz_layout(nx_graph, prog='neato')
plt.figure(figsize=(10, 8))
plt.subplot(121)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
labels=labels,
arrowsize=15,
edge_color='k',
width=2.0,
style='dash',
font_size=13,
font_weight='normal',
node_size=100)
plt.title('libpgm BBN DAG')
join_tree = InferenceController.apply(bbn)
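# build the junction tree for the BBN and propagate the potentials (exact inference)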
import pandas as pd
def potential_to_df(p):
data = []
for pe in p.entries:
try:
v = pe.entries.values()[0]
except:
v = list(pe.entries.values())[0]
p = pe.value
t = (v, p)
data.append(t)
return pd.DataFrame(data, columns=['val', 'p'])
def potentials_to_dfs(join_tree):
data = []
for node in join_tree.get_bbn_nodes():
name = node.variable.name
df = potential_to_df(join_tree.get_bbn_potential(node))
t = (name, df)
data.append(t)
return data
marginal_dfs = potentials_to_dfs(join_tree)
# insert an observation evidence for when SAT=highscore
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('SAT')) \
.with_evidence('highscore', 1.0) \
.build()
join_tree.unobserve_all()
join_tree.set_observation(ev)
sat_high_dfs = potentials_to_dfs(join_tree)
# insert an observation evidence for when SAT=lowscore
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('SAT')) \
.with_evidence('lowscore', 1.0) \
.build()
join_tree.unobserve_all()
join_tree.set_observation(ev)
sat_low_dfs = potentials_to_dfs(join_tree)
# merge all dataframes so we can visualize then side-by-side
all_dfs = []
for i in range(len(marginal_dfs)):
all_dfs.append(marginal_dfs[i])
all_dfs.append(sat_high_dfs[i])
all_dfs.append(sat_low_dfs[i])
import numpy as np
fig, axes = plt.subplots(len(marginal_dfs), 3, figsize=(15, 20), sharey=True)
for i, ax in enumerate(np.ravel(axes)):
all_dfs[i][1].plot.bar(x='val', y='p', legend=False, ax=ax)
ax.set_title(all_dfs[i][0])
ax.set_ylim([0.0, 1.0])
ax.set_xlabel('')
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Space Time Split
Step2: Splitting a space-time vector (an event)
Step3: This can be split into time and space components by multiplying with the time-vector $d_0$,
Step4: and applying the BladeMap, which results in a scalar+vector in $\mathbb{P}$
Step5: The space and time components can be separated by grade projection,
Step6: We therefore define a split() function, which has a simple condition allowing it to act on a vector or a multivector in $\mathbb{D}$.
Step7: The split can be inverted by applying the BladeMap again, and multiplying by $d_0$
Step8: Splitting a Bivector
Step9: $F$ splits into a vector/bivector in $\mathbb{P}$
Step10: If $F$ is interpreted as the electromagnetic bivector, the Electric and Magnetic fields can be separated by grade
Step11: Lorentz Transformations
Step12: In this way, the effect of a lorentz transformation on the electric and magnetic fields can be computed by rotating the bivector with $F \rightarrow RF\tilde{R}$
Step13: Then splitting into $E$ and $B$ fields
Step14: Lorentz Invariants
|
<ASSISTANT_TASK:>
Python Code:
from clifford import Cl, pretty
pretty(precision=1)
# Dirac Algebra `D`
D, D_blades = Cl(1,3, firstIdx=0, names='d')
# Pauli Algebra `P`
P, P_blades = Cl(3, names='p')
# put elements of each in namespace
locals().update(D_blades)
locals().update(P_blades)
from clifford import BladeMap
bm = BladeMap([(d01,p1),
(d02,p2),
(d03,p3),
(d12,p12),
(d23,p23),
(d13,p13),
(d0123, p123)])
X = D.randomV()*10
X
X*d0
bm(X*d0)
x = bm(X*d0)
x(0) # the time component
x(1) # the space component
def split(X):
return bm(X.odd*d0+X.even)
split(X)
x = split(X)
bm(x)*d0
F = D.randomMV()(2)
F
split(F)
E = split(F)(1)
iB = split(F)(2)
E
iB
R = D.randomRotor()
R
F_ = R*F*~R
F_
E_ = split(F_)(1)
E_
iB_ = split(F_)(2)
iB_
i = p123
E = split(F)(1)
B = -i*split(F)(2)
F**2
split(F**2) == E**2 - B**2 + (2*E|B)*i
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Boat Club
Step2: Question 1
Step3: Question 2
Step4: Question 3
Step5: Question 4
Step6: Question 5
Step7: Question 6
Step8: Question 7
Step9: Question 8
Step10: Submitting your assignment
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell to set up the notebook.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from client.api.notebook import Notebook
ok = Notebook('lab05.ok')
young_sailors = pd.DataFrame({
"sid": [2701, 18869, 63940, 21869, 17436],
"sname": ["Jerry", "Morgan", "Danny", "Jack", "Dustin"],
"rating": [8, 6, 4, 9, 3],
"age": [25, 26, 21, 27, 22],
})
salty_sailors = pd.DataFrame({
"sid": [2701, 17436, 45433, 22689, 46535],
"sname": ["Jerry", "Dustin", "Balon", "Euron", "Victarion"],
"rating": [8, 3, 7, 10, 2],
"age": [25, 22, 39, 35, 37],
})
boats = pd.DataFrame({
"bid": [41116, 54505, 50041, 35168, 58324],
"bname": ["The Black Sparrow", "The Great Kraken", "The Prophetess", "Silence", "Iron Victory"],
"color": ["Black", "Orange", "Silver", "Red", "Grey"],
})
reservations = pd.DataFrame({
"sid": [21869, 45433, 18869, 22689, 21869, 17436, 63940, 45433, 21869, 18869],
"bid": [41116, 35168, 50041, 41116, 58324, 50041, 54505, 41116, 50041, 41116],
"day": ["3/1", "3/1", "3/2", "3/2", "3/2", "3/3", "3/3", "3/3", "3/3", "3/4"],
})
def project(df, columns):
...
project(salty_sailors, ["sname", "age"])
_ = ok.grade('qproject')
_ = ok.backup()
def select(df, condition):
...
select(young_sailors, lambda x: x["rating"] > 6)
_ = ok.grade('qselect')
_ = ok.backup()
def union(df1, df2):
...
union(young_sailors, salty_sailors)
_ = ok.grade('qunion')
_ = ok.backup()
def intersection(df1, df2):
...
intersection(young_sailors, salty_sailors)
_ = ok.grade('qintersection')
_ = ok.backup()
def difference(df1, df2):
return df1.where(df1.apply(lambda x: ~x.isin(df2[x.name]))).dropna()
difference(young_sailors, salty_sailors)
_ = ok.grade('qdifference')
_ = ok.backup()
def cross_product(df1, df2):
# add a column "tmp-key" of zeros to df1 and df2
df1 = pd.concat([df1, pd.Series(0, index=df1.index, name="tmp-key")], axis=1)
df2 = pd.concat([df2, pd.Series(0, index=df2.index, name="tmp-key")], axis=1)
# use Pandas merge functionality along with drop
# to compute outer product and remove extra column
return (pd
.merge(df1, df2, on="tmp-key")
...
cross_product(young_sailors, salty_sailors)
_ = ok.grade('qcross_product')
_ = ok.backup()
def theta_join(df1, df2, condition):
return select(cross_product(df1, df2), condition)
theta_join(young_sailors, salty_sailors, lambda x: x["age_x"] > x["age_y"])
_ = ok.grade('qtheta_join')
_ = ok.backup()
def natural_join(df1, df2, attr):
return select(cross_product(df1, df2), lambda x: x[attr+"_x"] == x[attr+"_y"])
all_sailors = union(young_sailors, salty_sailors)
sailor_reservations = natural_join(all_sailors, reservations, "sid")
sailors_and_boats = natural_join(sailor_reservations, boats, "bid")
project(sailors_and_boats, ["sname", "bname", "day"])
_ = ok.grade('qnatural_join')
_ = ok.backup()
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Layout of a Function
Step2: Notice that the sequence of function definition (def) and then function call (function_name()) is important! Think about it
Step3: Reading (out loud!) the error message hopefully makes the error obvious... Quite explicit, isn't it?
Step4: A challenge for you!
Step5: Now define a function named "gloomyDay" that prints "I hate rainy days!"
Step6: Finally, call the two functions you have defined so that "I hate rainy days!" is printed before "What a lovely day!"
Step7: Arguments
Step8: A challenge for you!
Step9: A little more useful, right? If we had to print out name badges for a big conference and had a list of people's names, then rather than typing "Hi! My name is ..." hundreds of times we could just use a for loop to print out each one in turn using this function. The function adds the part that is the same for every name badge; all we need to do is pass it the input parameters. In fact, why don't we try that now?
Step10: In the function printMyName we used just one parameter as an input, but we are not constrained to just one. We can input many parameters separated by commas; let's redefine the printMyName function
Step11: And now can pass input parameters to a function dynamically from a data structure within a loop
Step12: Neat right? We've simplified things to that we can focus only on what's important
Step13: There's actually another way to do this that is quite helpful because it's easier to read
Step14: Scoping
Step15: Notice how the ErrorMessage is the same as before when we tried to print a variable that wasn't defined yet? It's the same concept
Step16: Default Parameters
Step17: So we only have to provide a value for a parameter with a default setting if we want to change it for some reason.
Step18: Assigning to a Variable
Step19: One important thing to remember is that return always marks the end of the list of instructions in a function. So whatever code is written below return and yet still indented in the function scope won't be executed
Step21: 5 is the last value printed because a return statement ends the execution of the function, regardless of whether a result (i.e. a value following the return keyword on the same line) is returned to the caller.
Step22: Let's take a closer look at what's happening above...
Step24: A Challenge for you!
Step25: Functions as Parameters of Other Functions
Step26: Code (Applied Geo-example)
Step27: Now, fix the code in the next cell to use the variables defined in the last cell. The calcProportion function should return the proportion of London's population that lives in the boro borough. The getLocation function should return the coordinates of the boro borough.
Step28: Write some code to print the longitude of Lambeth. This could be done in a single line but don't stress if you need to use more lines...
Step29: Write some code to print the proportion of the London population that lives in the City of London. Using the function defined above, this should take only one line of code.
Step30: Write code to loop over the london_boroughs dictionary, use the calcProportion and getLocation functions to then print proportions and locations of all the boroughs.
|
<ASSISTANT_TASK:>
Python Code:
myList = [1,"two", False, 9.99]
len(myList) # A function
print(myList) # A different function!
# the function definition
def myFirstFunc():
print("Nice to meet you!")
# the function call
myFirstFunc()
print(myVariable)
myVariable = "Hallo Hallo!"
myVariable = "Hallo Hallo!"
print(myVariable)
#your code here
def sunnyDay():
print("What a lovely day!")
#your code here
#your code here
gloomyDay()
sunnyDay()
def printMyName( name ):
print("Hi! My name is: " + name)
printMyName("Gerardus")
#your code here
printMyName("James")
for name in ["Jon Reades", "James Millington", "Chen Zhong", "Naru Shiode"]:
printMyName(name)
def printMyName(name, surname):
print("Hi! My name is "+ name + " " + surname)
printMyName("Gerardus", "Merkatoor")
britishProgrammers = [
["Babbage", "Charles"],
["Lovelace", "Ada"],
["Turing", "Alan"],
]
for p in britishProgrammers:
printMyName(p[1], p[0])
#your code here
def printMyAge(name, age):
print(name + " is " + str(age) + " years old.")
printMyAge('Jon',25)
def printMyAge(name, age):
print(f"{name} is {age} years old.") # This is called a 'f-string' and we use {...} to add variables
printMyAge('Jon',25)
def whoAmI(myname, mysurname):
if not myname:
myname = 'Charles'
if not mysurname:
mysurname = 'Babbage'
print("Hi! My name is "+ myname + " " + mysurname + "!")
print(myname) # myname _only_ exists 'inside' the function definition
whoAmI('Ada','Lovelace')
def printInternational(name, surname, greeting="Hi"):
print(greeting + "! My name is "+ name + " " + surname)
printInternational("Ada", "Lovelace")
printInternational("Charles", "Babbage")
printInternational("Laurent", "Ribardière", "Bonjour")
printInternational("François", "Lionet", "Bonjour")
printInternational("Alan", "Turing")
printInternational("Harsha","Suryanarayana", "Namaste")
def sumOf(firstQuantity, secondQuantity):
return firstQuantity + secondQuantity
print(sumOf(1,2))
print(sumOf(109845309234.30945098345,223098450985698054902309342.43598723900923489))
returnedValue = sumOf(4, 3)
# Notice the casting from int to str!
print(f"This is the returned value: {returnedValue}")
def printNumbers():
print(2)
print(5)
return
print(9999)
print(800000)
printNumbers()
def oddNumbers(inputRange):
A function that prints only the odd numbers for a given range from 0 to inputRange.
inputRange - an integer representing the maximum of the range
for i in range(inputRange):
if i%2 != 0:
print(i)
oddNumbers(10)
print("And...")
oddNumbers(15)
help(oddNumbers)
help(len)
myList = [1,2,3]
help(myList.append)
#your code here
def oddNumbers(inputRange):
for i in range(inputRange):
if i%2 != 0:
print(i)
else:
print("Yuck, an even number!")
oddNumbers(8)
def addTwo(param1):
return param1 + 2
def multiplyByThree(param1): # Note: this is a *separate* variable from the param1 in addTwo() because of scoping!
return param1 * 3
# you can use multiplyByThree
# with a regular argument as input
print(multiplyByThree(2))
# but also with a function as input
print(multiplyByThree(addTwo(2)))
# And then
print(addTwo(multiplyByThree(2)))
# London's total population
london_pop = 7375000
# list with some of London's borough. Feel free to add more!
london_boroughs = {
"City of London": {
"population": 8072,
"coordinates" : [-0.0933, 51.5151]
},
"Camden": {
"population": 220338,
"coordinates" : [-0.2252,1.5424]
},
"Hackney": {
"population": 220338,
"coordinates" : [-0.0709, 51.5432]
},
"Lambeth": {
"population": 303086,
"coordinates" : [-0.1172,51.5013]
}
}
def calcProportion(boro,city_pop=???):
return ???['population']/???
def getLocation(???):
return boro[???]
#in this function definition we provide a default value for city_pop
#this makes sense here because we are only dealing with london
def calcProportion(boro,city_pop=7375000):
return boro['population']/city_pop
def getLocation(boro):
return boro['coordinates'] #returns the value for the `coordinates` key from the value for the `Lambeth` key
#one-liner (see if you can understand how it works)
print(getLocation(london_boroughs['Lambeth'])[0])
# A longer but possibly more user-friendly way:
coord = getLocation(london_boroughs['Lambeth'])
long = coord[0]
print(long)
print(calcProportion(london_boroughs['City of London']))
for boro, data in london_boroughs.items():
prop = calcProportion(data)
location = getLocation(data)
print(prop)
print(location)
print("")
#to print more nicely you could use string formatting:
#print("Proportion is {0:3.3f}%".format(prop*100))
#print("Location of " + boro + " is " + str(location))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $g(x)\rightarrow 1$ for $x\rightarrow\infty$
Step2: Likelihood of the model
Step3: Here we found the minimum of the loss function simply by computing it over a large range of values. In practice, this approach is not possible when the dimensionality of the loss function (number of weights) is very large. To find the minimum of the loss function, the gradient descent algorithm (or stochastic gradient descent) is often used.
Step4: We will use the package sci-kit learn (http
Step5: Note that although the loss function is not linear, the decision function is a linear function of the weights and features. This is why the Logistic regression is called a linear model.
Step6: Evaluating the performance of a binary classifier
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-10,10)
y = 1/(1+np.exp(-x))
p = plt.plot(x,y)
plt.grid(True)
# Simple example:
# we have 20 students that took an exam and we want to know if we can use
# the number of hours they studied to predict if they pass or fail the
# exam
# m = 20 training samples
# n = 1 feature (number of hours)
X = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
# 1 = pass, 0 = fail
y = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
print(X.shape)
print(y.shape)
p = plt.plot(X,y,'o')
tx = plt.xlabel('x [h]')
ty = plt.ylabel('y ')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fx = np.linspace(-5,5)
Ly1 = np.log2(1+np.exp(-fx))
Ly0 = np.log2(1+np.exp(-fx)) - np.log2(np.exp(-fx))
p = plt.plot(fx,Ly1,label='L(1,f(x))')
p = plt.plot(fx,Ly0,label='L(0,f(x))')
plt.xlabel('f(x)')
plt.ylabel('L')
plt.legend()
# coming back to our simple example
def Loss(x_i,y_i, w0, w1):
fx = w0 + x_i*w1
if y_i == 1:
return np.log2(1+np.exp(-fx))
if y_i == 0:
return np.log2(1+np.exp(-fx)) - np.log2(np.exp(-fx))
else:
raise Exception('y_i must be 0 or 1')
def sumLoss(x,y, w0, w1):
sumloss = 0
for x_i, y_i in zip(x,y):
sumloss += Loss(x_i,y_i, w0, w1)
return sumloss
# lets compute the loss function for several values
w0s = np.linspace(-10,20,100)
w1s = np.linspace(-10,20,100)
sumLoss_vals = np.zeros((w0s.size, w1s.size))
for k, w0 in enumerate(w0s):
for l, w1 in enumerate(w1s):
sumLoss_vals[k,l] = sumLoss(X,y,w0,w1)
# let's find the values of w0 and w1 that minimize the loss
ind0, ind1 = np.where(sumLoss_vals == sumLoss_vals.min())
print((ind0,ind1))
print((w0s[ind0], w1s[ind1]))
# plot the loss function
p = plt.pcolor(w0s, w1s, sumLoss_vals)
c = plt.colorbar()
p2 = plt.plot(w1s[ind1], w0s[ind0], 'ro')
tx = plt.xlabel('w1')
ty = plt.ylabel('w0')
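# Illustrative sketch (not in the original notebook): instead of evaluating the loss on a grid,
# the same minimum can be found with plain batch gradient descent. For the logistic loss the
# gradient w.r.t. the weights is sum_i (sigmoid(w0 + w1*x_i) - y_i) * [1, x_i].
# The learning rate and iteration count below are arbitrary illustrative choices.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
w_gd = np.zeros(2)  # [w0, w1]
learning_rate = 0.01
for step in range(50000):
    p = sigmoid(w_gd[0] + w_gd[1] * X)
    grad = np.array([np.sum(p - y), np.sum((p - y) * X)])
    w_gd = w_gd - learning_rate * grad
print(w_gd)  # should land close to the grid-search minimum (w0s[ind0], w1s[ind1])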
# plot the solution
x = np.linspace(0,6,100)
def h_w(x, w0=w0s[ind0], w1=w1s[ind1]):
return 1/(1+np.exp(-(w0+x*w1)))
p1 = plt.plot(x, h_w(x))
p2 = plt.plot(X,y,'ro')
tx = plt.xlabel('x [h]')
ty = plt.ylabel('y ')
# probability of passing the exam if you worked 5 hours:
print(h_w(5))
# The same thing using the sklearn module
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=1e10)
# to train our model we use the "fit" method
# we have to reshape X because we have only one feature here
model.fit(X.reshape(-1,1),y)
# to see the weights
print(model.coef_)
print(model.intercept_)
# use the trained model to predict new values
print(model.predict_proba([[5]]))
print(model.predict([[5]]))
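# Illustrative extra (not in the original): because the decision function w0 + w1*x is linear,
# the predicted probability crosses 0.5 exactly where w0 + w1*x = 0.
x_boundary = -model.intercept_[0] / model.coef_[0][0]
print(x_boundary)  # study hours at which pass and fail are predicted to be equally likely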
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fx = np.linspace(-5,5, 200)
Logit = np.log2(1+np.exp(-fx))
Percep = np.maximum(0,- fx)
Hinge = np.maximum(0, 1- fx)
ZeroOne = np.ones(fx.size)
ZeroOne[fx>=0] = 0
p = plt.plot(fx,Logit,label='Logistic Regression')
p = plt.plot(fx,Percep,label='Perceptron')
p = plt.plot(fx,Hinge,label='Hinge-loss')
p = plt.plot(fx,ZeroOne,label='Zero-One loss')
plt.xlabel('f(x)')
plt.ylabel('L')
plt.legend()
ylims = plt.ylim((0,7))
# for example
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
# logistic regression with L2 regularization, C controls the strength of the regularization
# C = 1/lambda
model = LogisticRegression(C=1, penalty='l2')
# cross validation using 10 folds
y_pred = cross_val_predict(model, X.reshape(-1,1), y=y, cv=10)
print(confusion_matrix(y,y_pred))
print('Accuracy = ' + str(accuracy_score(y, y_pred)))
print('Precision = ' + str(precision_score(y, y_pred)))
print('Recall = ' + str(recall_score(y, y_pred)))
print('F_1 = ' + str(f1_score(y, y_pred)))
# try to run it with different number of folds for the cross-validation
# and different values of the regularization strength
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recommending movies
Step2: Preparing the dataset
Step3: As before, we'll split the data by putting 80% of the ratings in the train set, and 20% in the test set.
Step4: Let's also figure out unique user ids and movie titles present in the data.
Step5: Implementing a model
Step6: This model takes user ids and movie titles, and outputs a predicted rating
Step7: Loss and metrics
Step8: The task itself is a Keras layer that takes true and predicted as arguments, and returns the computed loss. We'll use that to implement the model's training loop.
Step9: Fitting and evaluating
Step10: Then shuffle, batch, and cache the training and evaluation data.
Step11: Then train the model
Step12: As the model trains, the loss is falling and the RMSE metric is improving.
Step13: The lower the RMSE metric, the more accurate our model is at predicting ratings.
Step14: Exporting for serving
Step15: We can now load it back and perform predictions
Step16: Convert the model to TensorFlow Lite
Step17: Once the model is converted, you can run it like regular TensorFlow Lite models. Please check out TensorFlow Lite documentation to learn more.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
ratings = tfds.load("movielens/100k-ratings", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"]
})
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
movie_titles = ratings.batch(1_000_000).map(lambda x: x["movie_title"])
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
class RankingModel(tf.keras.Model):
def __init__(self):
super().__init__()
embedding_dimension = 32
# Compute embeddings for users.
self.user_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
# Compute embeddings for movies.
self.movie_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
# Compute predictions.
self.ratings = tf.keras.Sequential([
# Learn multiple dense layers.
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(64, activation="relu"),
# Make rating predictions in the final layer.
tf.keras.layers.Dense(1)
])
def call(self, inputs):
user_id, movie_title = inputs
user_embedding = self.user_embeddings(user_id)
movie_embedding = self.movie_embeddings(movie_title)
return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))
RankingModel()((["42"], ["One Flew Over the Cuckoo's Nest (1975)"]))
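# The call above returns a tf.Tensor of shape [1, 1]: a single (still untrained, hence
# essentially arbitrary) predicted rating for user "42" and the given movie title.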
task = tfrs.tasks.Ranking(
loss = tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
class MovielensModel(tfrs.models.Model):
def __init__(self):
super().__init__()
self.ranking_model: tf.keras.Model = RankingModel()
self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
loss = tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
def call(self, features: Dict[str, tf.Tensor]) -> tf.Tensor:
return self.ranking_model(
(features["user_id"], features["movie_title"]))
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
labels = features.pop("user_rating")
rating_predictions = self(features)
# The task computes the loss and the metrics.
return self.task(labels=labels, predictions=rating_predictions)
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
model.fit(cached_train, epochs=3)
model.evaluate(cached_test, return_dict=True)
test_ratings = {}
test_movie_titles = ["M*A*S*H (1970)", "Dances with Wolves (1990)", "Speed (1994)"]
for movie_title in test_movie_titles:
test_ratings[movie_title] = model({
"user_id": np.array(["42"]),
"movie_title": np.array([movie_title])
})
print("Ratings:")
for title, score in sorted(test_ratings.items(), key=lambda x: x[1], reverse=True):
print(f"{title}: {score}")
tf.saved_model.save(model, "export")
loaded = tf.saved_model.load("export")
loaded({"user_id": np.array(["42"]), "movie_title": ["Speed (1994)"]}).numpy()
converter = tf.lite.TFLiteConverter.from_saved_model("export")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model.
if input_details[0]["name"] == "serving_default_movie_title:0":
interpreter.set_tensor(input_details[0]["index"], np.array(["Speed (1994)"]))
interpreter.set_tensor(input_details[1]["index"], np.array(["42"]))
else:
interpreter.set_tensor(input_details[0]["index"], np.array(["42"]))
interpreter.set_tensor(input_details[1]["index"], np.array(["Speed (1994)"]))
interpreter.invoke()
rating = interpreter.get_tensor(output_details[0]['index'])
print(rating)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic settings
Step2: Loading Data
Step3: Create PyTorch DataLoader objects
Step4: Initialize Hidden Layer Inducing Points
Step5: Create The DSPPHiddenLayer Class
Step6: Create the DSPP Class
Step7: Train the Model
Step8: Make Predictions, compute RMSE and Test NLL
|
<ASSISTANT_TASK:>
Python Code:
import gpytorch
import torch
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean, LinearMean
from gpytorch.kernels import ScaleKernel, MaternKernel
from gpytorch.variational import VariationalStrategy, BatchDecoupledVariationalStrategy
from gpytorch.variational import MeanFieldVariationalDistribution
from gpytorch.models.deep_gps import DeepGP
from gpytorch.models.deep_gps.dspp import DSPPLayer, DSPP
import gpytorch.settings as settings
import os
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
batch_size = 1000 # Size of minibatch
milestones = [20, 150, 300] # Epochs at which we will lower the learning rate by a factor of 0.1
num_inducing_pts = 300 # Number of inducing points in each hidden layer
num_epochs = 400 # Number of epochs to train for
initial_lr = 0.01 # Initial learning rate
hidden_dim = 3 # Number of GPs (i.e., the width) in the hidden layer.
num_quadrature_sites = 8 # Number of quadrature sites (see paper for a description of this. 5-10 generally works well).
## Modified settings for smoke test purposes
num_epochs = num_epochs if not smoke_test else 1
import urllib.request
from scipy.io import loadmat
from math import floor
if not smoke_test and not os.path.isfile('../bike.mat'):
print('Downloading \'bike\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1pR1H9ee4U89C1y_uYe9qAypKsHs1EL5I', '../bike.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(1000, 3), torch.randn(1000)
else:
    data = torch.Tensor(loadmat('../bike.mat')['data'])
# Map features to [-1, 1]
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2.0 * (X / X.max(0)[0]) - 1.0
# Z-score labels
y = data[:, -1]
y -= y.mean()
y /= y.std()
shuffled_indices = torch.randperm(X.size(0))
X = X[shuffled_indices, :]
y = y[shuffled_indices]
train_n = int(floor(0.75 * X.size(0)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
from scipy.cluster.vq import kmeans2
# Use k-means to initialize inducing points (only helpful for the first layer)
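# k-means centres give a space-filling summary of the training inputs; this is a common way to
# initialize inducing locations and typically converges faster than using a random subset.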
inducing_points = (train_x[torch.randperm(min(1000 * 100, train_n))[0:num_inducing_pts], :])
inducing_points = inducing_points.clone().data.cpu().numpy()
inducing_points = torch.tensor(kmeans2(train_x.data.cpu().numpy(),
inducing_points, minit='matrix')[0])
if torch.cuda.is_available():
inducing_points = inducing_points.cuda()
class DSPPHiddenLayer(DSPPLayer):
def __init__(self, input_dims, output_dims, num_inducing=300, inducing_points=None, mean_type='constant', Q=8):
if inducing_points is not None and output_dims is not None and inducing_points.dim() == 2:
# The inducing points were passed in, but the shape doesn't match the number of GPs in this layer.
# Let's assume we wanted to use the same inducing point initialization for each GP in the layer,
# and expand the inducing points to match this.
inducing_points = inducing_points.unsqueeze(0).expand((output_dims,) + inducing_points.shape)
inducing_points = inducing_points.clone() + 0.01 * torch.randn_like(inducing_points)
if inducing_points is None:
# No inducing points were specified, let's just initialize them randomly.
if output_dims is None:
# An output_dims of None implies there is only one GP in this layer
# (e.g., the last layer for univariate regression).
inducing_points = torch.randn(num_inducing, input_dims)
else:
inducing_points = torch.randn(output_dims, num_inducing, input_dims)
else:
# Get the number of inducing points from the ones passed in.
num_inducing = inducing_points.size(-2)
# Let's use mean field / diagonal covariance structure.
variational_distribution = MeanFieldVariationalDistribution(
num_inducing_points=num_inducing,
batch_shape=torch.Size([output_dims]) if output_dims is not None else torch.Size([])
)
# Standard variational inference.
variational_strategy = VariationalStrategy(
self,
inducing_points,
variational_distribution,
learn_inducing_locations=True
)
batch_shape = torch.Size([]) if output_dims is None else torch.Size([output_dims])
super(DSPPHiddenLayer, self).__init__(variational_strategy, input_dims, output_dims, Q)
if mean_type == 'constant':
# We'll use a constant mean for the final output layer.
self.mean_module = ConstantMean(batch_shape=batch_shape)
elif mean_type == 'linear':
# As in Salimbeni et al. 2017, we find that using a linear mean for the hidden layer improves performance.
self.mean_module = LinearMean(input_dims, batch_shape=batch_shape)
self.covar_module = ScaleKernel(MaternKernel(batch_shape=batch_shape, ard_num_dims=input_dims),
batch_shape=batch_shape, ard_num_dims=None)
def forward(self, x, mean_input=None, **kwargs):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
class TwoLayerDSPP(DSPP):
def __init__(self, train_x_shape, inducing_points, num_inducing, hidden_dim=3, Q=3):
hidden_layer = DSPPHiddenLayer(
input_dims=train_x_shape[-1],
output_dims=hidden_dim,
mean_type='linear',
inducing_points=inducing_points,
Q=Q,
)
last_layer = DSPPHiddenLayer(
input_dims=hidden_layer.output_dims,
output_dims=None,
mean_type='constant',
inducing_points=None,
num_inducing=num_inducing,
Q=Q,
)
likelihood = GaussianLikelihood()
super().__init__(Q)
self.likelihood = likelihood
self.last_layer = last_layer
self.hidden_layer = hidden_layer
def forward(self, inputs, **kwargs):
hidden_rep1 = self.hidden_layer(inputs, **kwargs)
output = self.last_layer(hidden_rep1, **kwargs)
return output
def predict(self, loader):
with settings.fast_computations(log_prob=False, solves=False), torch.no_grad():
mus, variances, lls = [], [], []
for x_batch, y_batch in loader:
preds = self.likelihood(self(x_batch, mean_input=x_batch))
mus.append(preds.mean.cpu())
variances.append(preds.variance.cpu())
# Compute test log probability. The output of a DSPP is a weighted mixture of Q Gaussians,
# with the Q weights specified by self.quad_weight_grid. The below code computes the log probability of each
# test point under this mixture.
# Step 1: Get log marginal for each Gaussian in the output mixture.
base_batch_ll = self.likelihood.log_marginal(y_batch, self(x_batch))
# Step 2: Weight each log marginal by its quadrature weight in log space.
deep_batch_ll = self.quad_weights.unsqueeze(-1) + base_batch_ll
# Step 3: Take logsumexp over the mixture dimension, getting test log prob for each datapoint in the batch.
batch_log_prob = deep_batch_ll.logsumexp(dim=0)
lls.append(batch_log_prob.cpu())
return torch.cat(mus, dim=-1), torch.cat(variances, dim=-1), torch.cat(lls, dim=-1)
model = TwoLayerDSPP(
train_x.shape,
inducing_points,
num_inducing=num_inducing_pts,
hidden_dim=hidden_dim,
Q=num_quadrature_sites
)
if torch.cuda.is_available():
model.cuda()
model.train()
from gpytorch.mlls import DeepPredictiveLogLikelihood
adam = torch.optim.Adam([{'params': model.parameters()}], lr=initial_lr, betas=(0.9, 0.999))
sched = torch.optim.lr_scheduler.MultiStepLR(adam, milestones=milestones, gamma=0.1)
# The "beta" parameter here corresponds to \beta_{reg} from the paper, and represents a scaling factor on the KL divergence
# portion of the loss.
objective = DeepPredictiveLogLikelihood(model.likelihood, model, num_data=train_n, beta=0.05)
import tqdm
epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch")
for i in epochs_iter:
minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False)
for x_batch, y_batch in minibatch_iter:
adam.zero_grad()
output = model(x_batch)
loss = -objective(output, y_batch)
loss.backward()
adam.step()
sched.step()
model.eval()
means, vars, ll = model.predict(test_loader)
weights = model.quad_weights.unsqueeze(-1).exp().cpu()
# `means` currently contains the predictive output from each Gaussian in the mixture.
# To get the total mean output, we take a weighted sum of these means over the quadrature weights.
rmse = ((weights * means).sum(0) - test_y.cpu()).pow(2.0).mean().sqrt().item()
ll = ll.mean().item()
print('RMSE: ', rmse, 'Test NLL: ', -ll)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a Modin DataFrame
Step2: Generate a spreadsheet widget with the DataFrame
Step3: Displaying the Spreadsheet
Step4: Exporting Changes
Step5: SpreadsheetWidget API
Step6: Retrieving and Applying Transformation History
Step7: Additional Example
|
<ASSISTANT_TASK:>
Python Code:
# Please install the required packages using `pip install -r requirements.txt` in the current directory
# For all ways to install Modin see official documentation at:
# https://modin.readthedocs.io/en/latest/installation.html
import modin.pandas as pd
import modin.spreadsheet as mss
columns_names = [
"trip_id", "vendor_id", "pickup_datetime", "dropoff_datetime", "store_and_fwd_flag",
"rate_code_id", "pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude",
"passenger_count", "trip_distance", "fare_amount", "extra", "mta_tax", "tip_amount",
"tolls_amount", "ehail_fee", "improvement_surcharge", "total_amount", "payment_type",
"trip_type", "pickup", "dropoff", "cab_type", "precipitation", "snow_depth", "snowfall",
"max_temperature", "min_temperature", "average_wind_speed", "pickup_nyct2010_gid",
"pickup_ctlabel", "pickup_borocode", "pickup_boroname", "pickup_ct2010",
"pickup_boroct2010", "pickup_cdeligibil", "pickup_ntacode", "pickup_ntaname", "pickup_puma",
"dropoff_nyct2010_gid", "dropoff_ctlabel", "dropoff_borocode", "dropoff_boroname",
"dropoff_ct2010", "dropoff_boroct2010", "dropoff_cdeligibil", "dropoff_ntacode",
"dropoff_ntaname", "dropoff_puma",
]
parse_dates=["pickup_datetime", "dropoff_datetime"]
df = pd.read_csv('s3://modin-datasets/trips_data.csv', names=columns_names,
header=None, parse_dates=parse_dates)
df
spreadsheet = mss.from_dataframe(df)
spreadsheet
changed_df = mss.to_dataframe(spreadsheet)
changed_df
# Duplicates the `Reset Filters` button
spreadsheet.reset_filters()
# Duplicates the `Reset Sort` button
spreadsheet.reset_sort()
# Duplicates the `Clear History` button
spreadsheet.clear_history()
# Gets the modified DataFrame that matches the changes to the spreadsheet
# This is the same functionality as `mss.to_dataframe`
spreadsheet.get_changed_df()
spreadsheet.get_history()
another_df = df.copy()
spreadsheet.apply_history(another_df)
mss.from_dataframe(df, show_toolbar=False, grid_options={'forceFitColumns': False, 'editable': False, 'highlightSelectedCell': True})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Separate continuous, categorical and label column names
Step2: 2. Convert categorical columns to category dtypes
Step3: Optional
Step4: 3. Set the embedding sizes
Step5: 4. Create an array of categorical values
Step6: 5. Convert "cats" to a tensor
Step7: 6. Create an array of continuous values
Step8: 7. Convert "conts" to a tensor
Step9: 8. Create a label tensor
Step10: 9. Create train and test sets from <tt>cats</tt>, <tt>conts</tt>, and <tt>y</tt>
Step11: Define the model class
Step12: 10. Set the random seed
Step13: 11. Create a TabularModel instance
Step14: 12. Define the loss and optimization functions
Step15: Train the model
Step16: 13. Plot the Cross Entropy Loss against epochs
Step17: 14. Evaluate the test set
Step18: 15. Calculate the overall percent accuracy
Step19: BONUS
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
%matplotlib inline
df = pd.read_csv('../Data/income.csv')
print(len(df))
df.head()
df['label'].value_counts()
df.columns
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS:
print(f'cat_cols has {len(cat_cols)} columns')
print(f'cont_cols has {len(cont_cols)} columns')
print(f'y_col has {len(y_col)} column')
# DON'T WRITE HERE
# CODE HERE
# DON'T WRITE HERE
# THIS CELL IS OPTIONAL
df = shuffle(df, random_state=101)
df.reset_index(drop=True, inplace=True)
df.head()
# CODE HERE
# DON'T WRITE HERE
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
cats[:5]
# DON'T WRITE HERE
# CODE HERE
# DON'T WRITE HERE
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts[:5]
# DON'T WRITE HERE
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts.dtype
# DON'T WRITE HERE
# CODE HERE
# DON'T WRITE HERE
# CODE HERE
b = 30000 # suggested batch size
t = 5000 # suggested test size
# DON'T WRITE HERE
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
# Call the parent __init__
super().__init__()
# Set up the embedding, dropout, and batch normalization layer attributes
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
# Assign a variable to hold a list of layers
layerlist = []
# Assign a variable to store the number of embedding and continuous layers
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
# Iterate through the passed-in "layers" parameter (ie, [200,100]) to build a list of layers
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
# Convert the list of layers into an attribute
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
# Extract embedding values from the incoming categorical data
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
# Perform an initial dropout on the embeddings
x = self.emb_drop(x)
# Normalize the incoming continuous data
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
# Set up model layers
x = self.layers(x)
return x
# CODE HERE
# DON'T WRITE HERE
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
model
# DON'T WRITE HERE
# CODE HERE
# DON'T WRITE HERE
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = criterion(y_pred, y_train)
losses.append(loss)
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
# CODE HERE
# DON'T WRITE HERE
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
print(f'CE Loss: {loss:.8f}')
# TO EVALUATE THE TEST SET
# CODE HERE
# DON'T WRITE HERE
# WRITE YOUR CODE HERE:
# RUN YOUR CODE HERE:
# DON'T WRITE HERE
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the initial data and see what predictors are available
Step2: Observe the summary statistics for the variables in our dataset
Step3: Show summary stats for the continuous variables
Step4: Develop a crash frequency model
Step5: Now estimate the regression model and output the summary
Step6: We will now compute the value of the safety performance function (spf) on a data set also from I-90
Step7: Implementation of the Empirical Bayes Method
Step8: Calculation of Accident Reduction Potential
Step9: Confidence and Prediction Intervals for Negative Binomial Regression Model
Step10: Calculate the values of the Poisson mean
Step11: Calculate confidence intervals for the Poisson means (mu)
Step12: Calculate prediction intervals for the Poisson parameters (m)
Step13: Calculate prediction intervals for the predicted responses (y)
Step14: Plotting of the Intervals
|
<ASSISTANT_TASK:>
Python Code:
from crash_modeling_tools import *
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
crash_data = pd.read_csv('../data/crash_modeling_tools_demo_data/crash_data_final_90.csv')
crash_data = crash_data.dropna()
crash_data.head()
show_summary_stats(crash_data,[0,9])
show_summary_stats(crash_data)
offset_term = np.log(crash_data['seg_lng'] * 3)
# need to cast log_avg_aadt to float since statsmodels thinks (incorrectly) that it is categorical
crash_data['log_aadt'] = crash_data.log_avg_aadt.astype(np.float)
mod_nb = smf.glm('tot_acc_ct~log_aadt+lanewid+avg_grad+C(curve)+C(surf_typ)',data=crash_data,offset=offset_term,
family=sm.families.NegativeBinomial()).fit()
mod_nb.summary()
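# Illustrative aside (not in the original notebook): for a log-link count model such as this
# negative binomial GLM, exponentiating a coefficient gives the multiplicative change in
# expected crash frequency per one-unit increase in that predictor.
rate_ratios = np.exp(mod_nb.params)
print(rate_ratios)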
data_eb = pd.read_csv('../data/crash_modeling_tools_demo_data/crash_data_eb.csv')
data_eb = data_eb.dropna()
data_eb['log_aadt'] = data_eb.log_avg_aadt.astype(np.float)
compute_spf(mod_nb,data_eb)
eb_safety_estimates = estimate_empirical_bayes(mod_nb,data_eb,data_eb['seg_lng'],data_eb['tot_acc_ct'])
eb_safety_estimates.head()
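# Minimal sketch of the Empirical Bayes idea (an illustration, not the crash_modeling_tools
# implementation): the EB estimate shrinks the observed count y toward the SPF prediction mu,
# with a weight controlled by the negative binomial overdispersion phi (variance = mu + mu**2/phi).
def eb_sketch(mu, y, phi):
    w = 1.0 / (1.0 + mu / phi)  # weight given to the SPF prediction
    return w * mu + (1.0 - w) * y
print(eb_sketch(mu=2.0, y=5.0, phi=1.5))  # hypothetical numbers, for intuition only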
arp = calc_accid_reduc_potential(mod_nb,data_eb,data_eb['seg_lng'],data_eb['tot_acc_ct'])
arp.head()
data_design = pd.read_csv('../data/crash_modeling_tools_demo_data/data_design.csv')
var_eta_hat = calc_var_eta_hat(mod_nb,data_design)
mu_hat = calc_mu_hat_nb(mod_nb,data_design)
mu_hat
ci_mu_nb = calc_ci_mu_nb(mu_hat,var_eta_hat)
ci_mu_nb.head()
pi_m_nb = calc_pi_m_nb(mod_nb,mu_hat,var_eta_hat)
pi_m_nb.head()
pi_y_nb = calc_pi_y_nb(mod_nb,mu_hat,var_eta_hat)
pi_y_nb.head()
% matplotlib inline
aadt_range = np.arange(9700,148400,100)
plot_and_save_nb_cis_and_pis(data_design,mod_nb,mu_hat,var_eta_hat,aadt_range,'AADT')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deck responds to the len() function
Step2: Reading specific cards from the deck is provided by the getitem method
Step3: Get a random item from a sequence
Step4: Deck supports slicing
Step5: Deck is iterable
Step6: Deck can also be iterated in reverse
Step7: in operator works because deck is iterable
Step8: Sorting
|
<ASSISTANT_TASK:>
Python Code:
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck:
ranks = [str(n) for n in range(2, 11)] + list('JQKA')
suits = 'spades diamonds clubs hearts'.split()
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position):
return self._cards[position]
beer_card = Card('7', 'diamonds')
beer_card
deck = FrenchDeck()
len(deck)
deck[0]
deck[-1]
from random import choice
choice(deck)
choice(deck)
choice(deck)
deck[:3]
for idx, card in enumerate(deck):
print(idx, ' -> ', card)
deck[12::13]
for card in deck:
print(card)
for card in reversed(deck):
print(card)
Card('Q', 'hearts') in deck
Card('A', 'beasts') in deck
suit_values = dict(spades=3, hearts=2, diamonds=1, clubs=0)
suit_values
def spades_high(card:Card):
rank_value = FrenchDeck.ranks.index(card.rank)
return rank_value * len(suit_values) + suit_values[card.suit]
spades_high(Card('2', 'clubs'))
spades_high(Card('A', 'spades'))
for card in sorted(deck, key=spades_high):
print(card)
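# Illustrative extra (not from the original example): the same key function also works with the
# built-ins max() and min().
print(max(deck, key=spades_high))  # Card(rank='A', suit='spades')
print(min(deck, key=spades_high))  # Card(rank='2', suit='clubs')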
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run notebook ssvm.ipynb.
Step2: Load trained RankSVM parameters and prediction results
Step3: Compute evaluation metrics
Step4: Evaluate RankSVM predictions
Step5: SSVM prediction using RankSVM weights
Step6: Evaluate SSVM predictions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import os, pickle, random
import pandas as pd
import numpy as np
import cvxopt
random.seed(1234554321)
np.random.seed(123456789)
cvxopt.base.setseed(123456789)
%run 'ssvm.ipynb'
fname = os.path.join(data_dir, 'rank-Glas.pkl')
rank_dict = pickle.load(open(fname, 'rb')) # a dict: query -> {'PRED': trajectory, 'C': ranksvm-c, 'W': model_params}
len(rank_dict)
def evaluation(predictions):
F1_all = []; pF1_all = []; tau_all = []
for key in sorted(predictions.keys()):
F1, pF1, tau = evaluate(predictions[key]['PRED'], TRAJ_GROUP_DICT[key])
F1_all.append(F1); pF1_all.append(pF1); tau_all.append(tau)
F1_mean = np.mean(F1_all); pF1_mean = np.mean(pF1_all); tau_mean = np.mean(tau_all)
print('F1 (%.3f, %.3f), pairsF1 (%.3f, %.3f), Tau (%.3f, %.3f)' % \
(F1_mean, np.std(F1_all)/np.sqrt(len(F1_all)), \
pF1_mean, np.std(pF1_all)/np.sqrt(len(pF1_all)), \
tau_mean, np.std(tau_all)/np.sqrt(len(tau_all))))
return F1_mean, pF1_mean, tau_mean
evaluation(rank_dict)
n_edge_features = 5
predictions = dict()
cnt = 1
queries = sorted(rank_dict.keys())
for q in queries:
ps, L = q
# compute feature scaling parameters
trajid_set = set(trajid_set_all) - TRAJ_GROUP_DICT[q]
poi_set = set()
for tid in trajid_set:
if len(traj_dict[tid]) >= 2:
poi_set = poi_set | set(traj_dict[tid])
poi_list = sorted(poi_set)
poi_id_dict, poi_id_rdict = dict(), dict()
for idx, poi in enumerate(poi_list):
poi_id_dict[poi] = idx
poi_id_rdict[idx] = poi
n_states = len(poi_list)
poi_info = calc_poi_info(sorted(trajid_set), traj_all, poi_all)
traj_list = [traj_dict[k] for k in sorted(trajid_set) if len(traj_dict[k]) >= 2]
node_features_list = Parallel(n_jobs=N_JOBS)\
(delayed(calc_node_features)\
(tr[0], len(tr), poi_list, poi_info.copy(), poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST) for tr in traj_list)
#edge_features = calc_edge_features(list(trajid_set), poi_list, traj_dict, poi_info.copy())
fdim = node_features_list[0].shape
X_node_all = np.vstack(node_features_list)
#scaler = MaxAbsScaler(copy=False)
scaler = MinMaxScaler(feature_range=(-1,1), copy=False)
scaler.fit(X_node_all)
# features scaling
X_node_test = calc_node_features(ps, L, poi_list, poi_info, poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST)
X_node_test = scaler.transform(X_node_test) # feature scaling
# inference
W = rank_dict[q]['W']
unary_params = np.tile(W, (n_states, 1))
pw_params = np.zeros((n_states, n_states, n_edge_features))
unary_features = X_node_test
#pw_features = edge_features.copy()
pw_features = np.zeros(pw_params.shape)
y_pred = do_inference_listViterbi(poi_id_dict[ps],L,len(poi_list),unary_params,pw_params,unary_features,pw_features)
#y_pred = do_inference_viterbi(poi_id_dict[ps], L,len(poi_list),unary_params,pw_params,unary_features,pw_features)
predictions[q] = {'PRED': [poi_id_rdict[p] for p in y_pred]}
print(cnt, rank_dict[q]['PRED'], '->', predictions[q]['PRED']); cnt += 1
evaluation(predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: II) MEASUREMENTS
Step2: Baro
Step3: GPS
Step4: GPS velocity
Step5: Acceleration
Step6: III) PROBLEM FORMULATION
Step7: Initial uncertainty $P_0$
Step8: Dynamic matrix $A$
Step9: Disturbance Control Matrix $B$
Step10: Measurement Matrix $H$
Step11: Measurement noise covariance $R$
Step12: Process noise covariance $Q$
Step13: Identity Matrix
Step14: Input
Step15: V) TEST
Step16: VI) PLOT
|
<ASSISTANT_TASK:>
Python Code:
m = 10000 # timesteps
dt = 1/ 250.0 # update loop at 250Hz
t = np.arange(m) * dt
freq = 0.1 # Hz
amplitude = 0.5 # meter
alt_true = 405 + amplitude * np.cos(2 * np.pi * freq * t)
height_true = 5 + amplitude * np.cos(2 * np.pi * freq * t)
vel_true = - amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc_true = - amplitude * (2 * np.pi * freq)**2 * np.cos(2 * np.pi * freq * t)
plt.plot(t, height_true)
plt.plot(t, vel_true)
plt.plot(t, acc_true)
plt.legend(['elevation', 'velocity', 'acceleration'], loc='best')
plt.xlabel('time')
sonar_sampling_period = 1 / 10.0 # sonar reading at 10Hz
# Sonar noise
sigma_sonar_true = 0.05 # in meters
meas_sonar = height_true[::int(sonar_sampling_period/dt)] + sigma_sonar_true * np.random.randn(int(m // (sonar_sampling_period/dt)))
t_meas_sonar = t[::int(sonar_sampling_period/dt)]
plt.plot(t_meas_sonar, meas_sonar, 'or')
plt.plot(t, height_true)
plt.legend(['Sonar measure', 'Elevation (true)'])
plt.title("Sonar measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
baro_sampling_period = 1 / 10.0 # baro reading at 10Hz
# Baro noise
sigma_baro_true = 2.0 # in meters
meas_baro = alt_true[::int(baro_sampling_period/dt)] + sigma_baro_true * np.random.randn(int(m // (baro_sampling_period/dt)))
t_meas_baro = t[::int(baro_sampling_period/dt)]
plt.plot(t_meas_baro, meas_baro, 'or')
plt.plot(t, alt_true)
plt.title("Baro measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
gps_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gps_true = 5.0 # in meters
meas_gps = alt_true[::int(gps_sampling_period/dt)] + sigma_gps_true * np.random.randn(int(m // (gps_sampling_period/dt)))
t_meas_gps = t[::int(gps_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gps, 'or')
plt.plot(t, alt_true)
plt.title("GPS measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
gpsvel_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gpsvel_true = 10.0 # in meters/s
meas_gpsvel = vel_true[::int(gps_sampling_period/dt)] + sigma_gpsvel_true * np.random.randn(int(m // (gps_sampling_period/dt)))
t_meas_gps = t[::int(gps_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gpsvel, 'or')
plt.plot(t, vel_true)
plt.title("GPS velocity measurement")
plt.xlabel('time (s)')
plt.ylabel('vel (m/s)')
sigma_acc_true = 0.2 # in m.s^-2
acc_bias = 1.5
meas_acc = acc_true + sigma_acc_true * np.random.randn(m) + acc_bias
plt.plot(t, meas_acc, '.')
plt.plot(t, acc_true)
plt.title("Accelerometer measurement")
plt.xlabel('time (s)')
plt.ylabel('acc ($m.s^{-2}$)')
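# State vector used by the filter (as implied by the measurement matrices and plots below):
#   x[0] = altitude above sea level (corrected by baro and GPS)
#   x[1] = height above ground (corrected by sonar)
#   x[2] = vertical velocity (corrected by GPS velocity)
#   x[3] = accelerometer bias (estimated, never measured directly)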
x = np.matrix([0.0, 0.0, 0.0, 0.0]).T
print(x, x.shape)
P = np.diag([100.0, 100.0, 100.0, 100.0])
print(P, P.shape)
dt = 1 / 250.0 # Time step between filter steps (update loop at 250Hz)
A = np.matrix([[1.0, 0.0, dt, 0.5*dt**2],
[0.0, 1.0, dt, 0.5*dt**2],
[0.0, 0.0, 1.0, dt ],
[0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
B = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt ],
[0.0]])
print(B, B.shape)
H_sonar = np.matrix([[0.0, 1.0, 0.0, 0.0]])
print(H_sonar, H_sonar.shape)
H_baro = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_baro, H_baro.shape)
H_gps = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_gps, H_gps.shape)
H_gpsvel = np.matrix([[0.0, 0.0, 1.0, 0.0]])
print(H_gpsvel, H_gpsvel.shape)
# sonar
sigma_sonar = sigma_sonar_true # sonar noise
R_sonar = np.matrix([[sigma_sonar**2]])
print(R_sonar, R_sonar.shape)
# baro
sigma_baro = sigma_baro_true # sonar noise
R_baro = np.matrix([[sigma_baro**2]])
print(R_baro, R_baro.shape)
# gps
sigma_gps = sigma_gps_true # sonar noise
R_gps = np.matrix([[sigma_gps**2]])
print(R_gps, R_gps.shape)
# gpsvel
sigma_gpsvel = sigma_gpsvel_true # sonar noise
R_gpsvel = np.matrix([[sigma_gpsvel**2]])
print(R_gpsvel, R_gpsvel.shape)
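# Note: each R above is a 1x1 matrix because every sensor gets its own separate measurement
# update (sequential fusion) instead of being stacked into a single measurement vector.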
from sympy import Symbol, Matrix, latex
from sympy.interactive import printing
printing.init_printing()
dts = Symbol('\Delta t')
s1 = Symbol('\sigma_1') # drift of accelerometer bias
Qs = Matrix([[0.5*dts**2], [0.5*dts**2], [dts], [1.0]])
Qs*Qs.T*s1**2
sigma_acc_drift = 0.0001
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[1.0]])
Q = G*G.T*sigma_acc_drift**2
print(Q, Q.shape)
I = np.eye(4)
print(I, I.shape)
u = meas_acc
print(u, u.shape)
# Re init state
# State
x[0] = 300.0
x[1] = 5.0
x[2] = 0.0
x[3] = 0.0
# Estimate covariance
P[0,0] = 100.0
P[1,1] = 100.0
P[2,2] = 100.0
P[3,3] = 100.0
# Preallocation for Plotting
# estimate
zt = []
ht = []
dzt= []
zetat=[]
# covariance
Pz = []
Ph = []
Pdz= []
Pzeta=[]
# kalman gain
Kz = []
Kh = []
Kdz= []
Kzeta=[]
for filterstep in range(m):
# ========================
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x + B*u[filterstep]
# Project the error covariance ahead
P = A*P*A.T + Q
# ===============================
# Measurement Update (Correction)
# ===============================
# Sonar (only at the beginning, ex take off)
if filterstep%25 == 0 and (filterstep <2000 or filterstep>9000):
# Compute the Kalman Gain
S_sonar = H_sonar*P*H_sonar.T + R_sonar
K_sonar = (P*H_sonar.T) * np.linalg.pinv(S_sonar)
# Update the estimate via z
Z_sonar = meas_sonar[filterstep//25]
y_sonar = Z_sonar - (H_sonar*x) # Innovation or Residual
x = x + (K_sonar*y_sonar)
# Update the error covariance
P = (I - (K_sonar*H_sonar))*P
# Baro
if filterstep%25 == 0:
# Compute the Kalman Gain
S_baro = H_baro*P*H_baro.T + R_baro
K_baro = (P*H_baro.T) * np.linalg.pinv(S_baro)
# Update the estimate via z
Z_baro = meas_baro[filterstep//25]
y_baro = Z_baro - (H_baro*x) # Innovation or Residual
x = x + (K_baro*y_baro)
# Update the error covariance
P = (I - (K_baro*H_baro))*P
# GPS
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gps = H_gps*P*H_gps.T + R_gps
K_gps = (P*H_gps.T) * np.linalg.pinv(S_gps)
# Update the estimate via z
Z_gps = meas_gps[filterstep//250]
y_gps = Z_gps - (H_gps*x) # Innovation or Residual
x = x + (K_gps*y_gps)
# Update the error covariance
P = (I - (K_gps*H_gps))*P
# GPSvel
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gpsvel = H_gpsvel*P*H_gpsvel.T + R_gpsvel
K_gpsvel = (P*H_gpsvel.T) * np.linalg.pinv(S_gpsvel)
# Update the estimate via z
Z_gpsvel = meas_gpsvel[filterstep//250]
y_gpsvel = Z_gpsvel - (H_gpsvel*x) # Innovation or Residual
x = x + (K_gpsvel*y_gpsvel)
# Update the error covariance
P = (I - (K_gpsvel*H_gpsvel))*P
# ========================
# Save states for Plotting
# ========================
zt.append(float(x[0]))
ht.append(float(x[1]))
dzt.append(float(x[2]))
zetat.append(float(x[3]))
Pz.append(float(P[0,0]))
Ph.append(float(P[1,1]))
Pdz.append(float(P[2,2]))
Pzeta.append(float(P[3,3]))
# Kz.append(float(K[0,0]))
# Kdz.append(float(K[1,0]))
# Kzeta.append(float(K[2,0]))
plt.figure(figsize=(17,15))
plt.subplot(321)
plt.plot(t, zt, color='b')
plt.fill_between(t, np.array(zt) - 10* np.array(Pz), np.array(zt) + 10*np.array(Pz), alpha=0.2, color='b')
plt.plot(t, alt_true, 'g')
plt.plot(t_meas_baro, meas_baro, '.r')
plt.plot(t_meas_gps, meas_gps, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([405 - 50 * amplitude, 405 + 30 * amplitude])
plt.legend(['estimate', 'true altitude', 'baro reading', 'gps reading', 'sonar switched off/on'], loc='lower right')
plt.title('Altitude')
plt.subplot(322)
plt.plot(t, ht, color='b')
plt.fill_between(t, np.array(ht) - 10* np.array(Ph), np.array(ht) + 10*np.array(Ph), alpha=0.2, color='b')
plt.plot(t, height_true, 'g')
plt.plot(t_meas_sonar, meas_sonar, '.r')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
# plt.ylim([5 - 1.5 * amplitude, 5 + 1.5 * amplitude])
plt.ylim([5 - 10 * amplitude, 5 + 10 * amplitude])
plt.legend(['estimate', 'true height above ground', 'sonar reading', 'sonar switched off/on'])
plt.title('Height')
plt.subplot(323)
plt.plot(t, dzt, color='b')
plt.fill_between(t, np.array(dzt) - 10* np.array(Pdz), np.array(dzt) + 10*np.array(Pdz), alpha=0.2, color='b')
plt.plot(t, vel_true, 'g')
plt.plot(t_meas_gps, meas_gpsvel, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([0 - 10.0 * amplitude, + 10.0 * amplitude])
plt.legend(['estimate', 'true velocity', 'gps_vel reading', 'sonar switched off/on'])
plt.title('Velocity')
plt.subplot(324)
plt.plot(t, zetat, color='b')
plt.fill_between(t, np.array(zetat) - 10* np.array(Pzeta), np.array(zetat) + 10*np.array(Pzeta), alpha=0.2, color='b')
plt.plot(t, -acc_bias * np.ones_like(t), 'g')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.ylim([-2.0, 1.0])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Acc bias')
plt.subplot(325)
plt.plot(t, Pz)
plt.plot(t, Ph)
plt.plot(t, Pdz)
plt.ylim([0, 1.0])
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.legend(['Altitude', 'Height', 'Velocity', 'sonar switched off/on'])
plt.title('Incertitudes')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Now let's take random samples and look at the probability distribution of the sample mean. As usual, we will use simulation to get an empirical approximation to this distribution.
Step3: Let us simulate the mean of a random sample of 100 delays, then of 400 delays, and finally of 625 delays. We will perform 1000 repetitions of each of these processes.
Step4: You can see the Central Limit Theorem in action – the histograms of the sample means are roughly normal, even though the histogram of the delays themselves is far from normal.
Step5: Take a look at the SDs in the sample mean histograms above. In all three of them, the SD of the population of delays is about 40 minutes, because all the samples were taken from the same population.
Step6: The values in the second and third columns are very close. If we plot each of those columns with the sample size on the horizontal axis, the two graphs are essentially indistinguishable.
|
<ASSISTANT_TASK:>
Python Code:
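# Imports assumed from the original notebook's setup cell (not shown in this excerpt):
import numpy as np
from datascience import Table, make_array
import nbinteract as nbi
import ipywidgets as widgets
from ipywidgets import fixed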
united = Table.read_table('http://inferentialthinking.com/notebooks/united_summer2015.csv')
delay = united.select('Delay')
pop_mean = np.mean(delay.column('Delay'))
pop_mean
delay_opts = {
'xlabel': 'Delay (minute)',
'ylabel': 'Percent per minute',
'xlim': (-20, 200),
'ylim': (0, 0.037),
'bins': 22,
}
nbi.hist(united.column('Delay'), options=delay_opts)
# Empirical distribution of random sample means
def simulate_sample_mean(table, label, sample_size, repetitions=1000):
means = make_array()
for i in range(repetitions):
new_sample = table.sample(sample_size)
new_sample_mean = np.mean(new_sample.column(label))
means = np.append(means, new_sample_mean)
# Print all relevant quantities
print("Sample size: ", sample_size)
print("Population mean:", np.mean(table.column(label)))
print("Average of sample means: ", np.mean(means))
print("Population SD:", np.std(table.column(label)))
print("SD of sample means:", np.std(means))
return means
means_opts = {
'xlabel': 'Sample Means',
'ylabel': 'Percent per unit',
'xlim': (5, 35),
'ylim': (0, 0.25),
'bins': 30,
}
nbi.hist(simulate_sample_mean, table=fixed(delay), label=fixed('Delay'),
sample_size=widgets.ToggleButtons(options=[100, 400, 625]),
options=means_opts)
pop_sd = np.std(delay.column('Delay'))
pop_sd
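# `sd_comparison` is used below but never defined in this excerpt. A minimal
# sketch (an assumption about the original notebook) that compares the SD of
# simulated sample means against pop_sd / sqrt(n) over a range of sample sizes:
sample_sizes = np.arange(100, 951, 50)
sd_of_sample_means = make_array()
for n in sample_sizes:
    means = make_array()
    for _ in np.arange(500):
        means = np.append(means, np.mean(delay.sample(int(n)).column('Delay')))
    sd_of_sample_means = np.append(sd_of_sample_means, np.std(means))
sd_comparison = Table().with_columns(
    'Sample Size n', sample_sizes,
    'SD of 500 Sample Means', sd_of_sample_means,
    'pop_sd / sqrt(n)', pop_sd / np.sqrt(sample_sizes)
)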
sd_comparison
sd_comparison.plot('Sample Size n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's include some code that performs a sublevelset filtration by some scalar function on the vertices of a triangle mesh.
Step3: Let's also define a function which will plot a particular scalar function on XY and XZ slices of the mesh
Step4: Experiment 1
Step5: Now let's load in all of the meshes and sort them so that contiguous groups of 10 meshes are the same pose (by default they are sorted by subject).
Step6: Finally, we compute the 0D sublevelset filtration on all of the shapes, followed by a Wasserstein distance computation between all pairs to examine how different shapes cluster together. We also display the result of 3D multidimensional scaling using the matrix of all pairs of Wasserstein distances.
Step7: Experiment 2
Step8: Let's now load in a few of the nonrigid meshes and compute the sublevelset function of their heat kernel signatures
Step9: Finally, we plot the results
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib notebook
import scipy.io as sio
from scipy import sparse
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import sys
sys.path.append("pyhks")
from HKS import *
from GeomUtils import *
from ripser import ripser
from persim import plot_diagrams, wasserstein
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings('ignore')
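# 0-dimensional sublevelset (lower-star) filtration of a scalar function on a
# triangle mesh: each edge is born at the larger of its two vertex values and
# each vertex at its own value; ripser is then run on the resulting sparse matrix.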
def do0DSublevelsetFiltrationMesh(VPos, ITris, fn):
x = fn(VPos, ITris)
N = VPos.shape[0]
# Add edges between adjacent points in the mesh
I, J = getEdges(VPos, ITris)
V = np.maximum(x[I], x[J])
# Add vertex birth times along the diagonal of the distance matrix
I = np.concatenate((I, np.arange(N)))
J = np.concatenate((J, np.arange(N)))
V = np.concatenate((V, x))
#Create the sparse distance matrix
D = sparse.coo_matrix((V, (I, J)), shape=(N, N)).tocsr()
return ripser(D, distance_matrix=True, maxdim=0)['dgms'][0]
def plotPCfn(VPos, fn, cmap = 'afmhot'):
    """plot an XY slice of a mesh with the scalar function used in a
    sublevelset filtration"""
x = fn - np.min(fn)
x = x/np.max(x)
c = plt.get_cmap(cmap)
C = c(np.array(np.round(x*255.0), dtype=np.int64))
plt.scatter(VPos[:, 0], VPos[:, 1], 10, c=C)
plt.axis('equal')
ax = plt.gca()
ax.set_facecolor((0.3, 0.3, 0.3))
subjectNum = 1
poseNum = 9
i = subjectNum*10 + poseNum
fn = lambda VPos, ITris: VPos[:, 1] #Return the y coordinate as a function
(VPos, _, ITris) = loadOffFile("shapes/tr_reg_%.03d.off"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
plt.figure(figsize=(10, 4))
plt.subplot(131)
plotPCfn(VPos, x, cmap = 'afmhot')
plt.title("Subject %i Pose %i"%(subjectNum, poseNum))
plt.subplot(132)
plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot')
plt.subplot(133)
plot_diagrams([I])
plt.show()
meshes = []
for poseNum in range(10):
for subjectNum in range(10):
i = subjectNum*10 + poseNum
VPos, _, ITris = loadOffFile("shapes/tr_reg_%.03d.off"%i)
meshes.append((VPos, ITris))
dgms = []
N = len(meshes)
print("Computing persistence diagrams...")
for i, (VPos, ITris) in enumerate(meshes):
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
I = I[np.isfinite(I[:, 1]), :]
dgms.append(I)
# Compute Wasserstein distances in order of pose
DWass = np.zeros((N, N))
for i in range(N):
if i%10 == 0:
print("Comparing pose %i..."%(i/10))
for j in range(i+1, N):
DWass[i, j] = wasserstein(dgms[i], dgms[j])
DWass = DWass + DWass.T
# Re-sort by class
# Now do MDS and PCA, respectively
mds = MDS(n_components=3, dissimilarity='precomputed')
mds.fit_transform(DWass)
XWass = mds.embedding_
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(DWass, cmap = 'afmhot', interpolation = 'none')
plt.title("Wasserstein")
ax1 = plt.gca()
ax2 = plt.subplot(122, projection='3d')
ax2.set_title("Wasserstein By Pose")
for i in range(10):
X = XWass[i*10:(i+1)*10, :]
ax2.scatter(X[:, 0], X[:, 1], X[:, 2])
Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist()
Js = (-2*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist()
ax1.scatter(Is, Js, 10)
plt.show()
classNum = 0
articulationNum = 1
classes = ['ant', 'hand', 'human', 'octopus', 'pliers', 'snake', 'shark', 'bear', 'chair']
i = classNum*10 + articulationNum
fn = lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30)
(VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
plt.figure(figsize=(8, 8))
plt.subplot(221)
plotPCfn(VPos, x, cmap = 'afmhot')
plt.title("Class %i Articulation %i"%(classNum, articulationNum))
plt.subplot(222)
plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot')
plt.subplot(223)
plotPCfn(VPos[:, [0, 2, 1]], x, cmap = 'afmhot')
plt.subplot(224)
plot_diagrams([I])
plt.show()
N = 90
meshesNonrigid = []
for i in range(N):
(VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i)
meshesNonrigid.append((VPos, ITris))
dgmsNonrigid = []
N = len(meshesNonrigid)
print("Computing persistence diagrams...")
for i, (VPos, ITris) in enumerate(meshesNonrigid):
if i%10 == 0:
print("Finished first %i meshes"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30))
I = I[np.isfinite(I[:, 1]), :]
dgmsNonrigid.append(I)
# Compute Wasserstein distances
print("Computing Wasserstein distances...")
DWassNonrigid = np.zeros((N, N))
for i in range(N):
if i%10 == 0:
print("Finished first %i distances"%i)
for j in range(i+1, N):
DWassNonrigid[i, j] = wasserstein(dgmsNonrigid[i], dgmsNonrigid[j])
DWassNonrigid = DWassNonrigid + DWassNonrigid.T
# Now do MDS and PCA, respectively
mds = MDS(n_components=3, dissimilarity='precomputed')
mds.fit_transform(DWassNonrigid)
XWassNonrigid = mds.embedding_
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(DWassNonrigid, cmap = 'afmhot', interpolation = 'none')
ax1 = plt.gca()
plt.xticks(5+10*np.arange(10), classes, rotation='vertical')
plt.yticks(5+10*np.arange(10), classes)
plt.title("Wasserstein Distances")
ax2 = plt.subplot(122, projection='3d')
ax2.set_title("3D MDS")
for i in range(9):
X = XWassNonrigid[i*10:(i+1)*10, :]
ax2.scatter(X[:, 0], X[:, 1], X[:, 2])
Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist()
Js = (91*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist()
ax1.scatter(Is, Js, 10)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classification on imbalanced data
Step2: Data processing and exploration
Step3: Examine the class label imbalance
Step4: This shows the small fraction of positive samples.
Step5: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
Step6: Normalize the input features using the sklearn StandardScaler.
Step7: Caution
Step8: Define the model and metrics
Step9: Understanding useful metrics
Step10: Test run the model
Step11: Optional
Step12: The correct bias to set can be derived from
Step13: Set that as the initial bias, and the model will give much more reasonable initial guesses.
Step14: With this initialization the initial loss should be approximately
Step15: This initial loss is about 50 times less than it would have been with naive initialization.
Step16: Confirm that the bias fix helps
Step17: The above figure makes it clear
Step18: Check training history
Step19: Note
Step20: Evaluate your model on the test dataset and display the results for the metrics you created above
Step21: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Step22: Plot the AUPRC
Step23: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Step24: Train a model with class weights
Step25: Check training history
Step26: Evaluate metrics
Step27: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
Step28: Plot the AUPRC
Step29: Oversampling
Step30: Using NumPy
Step31: Using tf.data
Step32: Each dataset provides (feature, label) pairs
Step33: Merge the two together using tf.data.Dataset.sample_from_datasets
Step34: To use this dataset, you'll need the number of steps per epoch.
Step35: Train on the oversampled data
Step36: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
Step37: Re-train
Step38: Re-check training history
Step39: Evaluate metrics
Step40: Plot the ROC
Step41: Plot the AUPRC
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(x=pos_df['V5'], y=pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(x=neg_df['V5'], y=neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
EPOCHS = 100
BATCH_SIZE = 2048
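# Stop training once the validation precision-recall AUC (the 'prc' metric
# defined above) has not improved for 10 epochs, and restore the best weights.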
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_prc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_bias = np.log([pos/neg])
initial_bias
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train ' + label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val ' + label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
def plot_metrics(history):
metrics = ['loss', 'prc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend();
plot_metrics(baseline_history)
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
def plot_prc(name, labels, predictions, **kwargs):
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
plt.plot(precision, recall, label=name, linewidth=2, **kwargs)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
plot_metrics(weighted_history)
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
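# Oversample the minority (positive) class by drawing positive examples with
# replacement until there are as many of them as there are negatives.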
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
resampled_ds = tf.data.Dataset.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
plot_metrics(resampled_history)
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
plot_metrics(resampled_history)
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: matplotlib
Step2: matplotlib also conveniently has the ability to plot multiple things on the
Step3: Question 0
Step4: Dataset
Step5: Question 1
Step6: seaborn
Step7: Question 3
Step8: Notice that seaborn will fit a curve to the histogram of the data. Fancy!
Step9: Question 6
Step10: Question 8
Step11: Question 10
Step12: Want to learn more?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
# These lines load the tests.
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab03.ok')
# Set up (x, y) pairs from 0 to 2*pi
xs = np.linspace(0, 2 * np.pi, 300)
ys = np.cos(xs)
# plt.plot takes in x-values and y-values and plots them as a line
plt.plot(xs, ys)
plt.plot(xs, ys)
plt.plot(xs, np.sin(xs))
# Here's the starting code from last time. Edit / Add code to create the plot above.
plt.plot(xs, ys)
plt.plot(xs, np.sin(xs))
bike_trips = pd.read_csv('bikeshare.csv')
# Here we'll do some pandas datetime parsing so that the dteday column
# contains datetime objects.
bike_trips['dteday'] += ':' + bike_trips['hr'].astype(str)
bike_trips['dteday'] = pd.to_datetime(bike_trips['dteday'], format="%Y-%m-%d:%H")
bike_trips = bike_trips.drop(['yr', 'mnth', 'hr'], axis=1)
bike_trips.head()
# This plot shows the temperature at each data point
bike_trips.plot.line(x='dteday', y='temp')
# Stop here! Discuss why this plot is shaped like this with your partner.
...
...
...
...
# In your plot, you'll notice that your points are larger than ours. That's
# fine. If you'd like them to be smaller, you can add scatter_kws={'s': 6}
# to your lmplot call. That tells the underlying matplotlib scatter function
# to change the size of the points.
...
# Note that the legend for workingday isn't super helpful. 0 in this case
# means "not a working day" and 1 means "working day". Try fixing the legend
# to be more descriptive.
...
i_definitely_finished = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We are given training images for each of the cervix types. Let's first count them for each class.
Step2: Image types
Step3: Now, let's read the files for each type to get an idea of what the images look like.
Step4: Additional images
Step5: All images
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from skimage.io import imread, imshow
import cv2
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
from subprocess import check_output
print(check_output(["ls", "../input/train"]).decode("utf8"))
from glob import glob
basepath = '../input/train/'
all_cervix_images = []
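# Walk each cervix-type folder and collect every image path with its type label.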
for path in sorted(glob(basepath + "*")):
cervix_type = path.split("/")[-1]
cervix_images = sorted(glob(basepath + cervix_type + "/*"))
all_cervix_images = all_cervix_images + cervix_images
all_cervix_images = pd.DataFrame({'imagepath': all_cervix_images})
all_cervix_images['filetype'] = all_cervix_images.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
all_cervix_images['type'] = all_cervix_images.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
all_cervix_images.head()
print('We have a total of {} images in the whole dataset'.format(all_cervix_images.shape[0]))
type_aggregation = all_cervix_images.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images[all_cervix_images['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
print(check_output(["ls", "../input/additional"]).decode("utf8"))
basepath = '../input/additional/'
all_cervix_images_a = []
for path in sorted(glob(basepath + "*")):
cervix_type = path.split("/")[-1]
cervix_images = sorted(glob(basepath + cervix_type + "/*"))
all_cervix_images_a = all_cervix_images_a + cervix_images
all_cervix_images_a = pd.DataFrame({'imagepath': all_cervix_images_a})
all_cervix_images_a['filetype'] = all_cervix_images_a.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
all_cervix_images_a['type'] = all_cervix_images_a.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
all_cervix_images_a.head()
print('We have a total of {} images in the whole dataset'.format(all_cervix_images_a.shape[0]))
type_aggregation = all_cervix_images_a.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images_a.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images_a['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images_a[all_cervix_images_a['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
all_cervix_images_ = pd.concat( [all_cervix_images, all_cervix_images_a], join='outer' )
#all_cervix_images_ = all_cervix_images.append(all_cervix_images_a)
#all_cervix_images_a.merge(all_cervix_images,how='left')
#all_cervix_images_ = pd.DataFrame({'imagepath': all_cervix_images_})
#all_cervix_images_['filetype'] = all_cervix_images_.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
#all_cervix_images_['type'] = all_cervix_images_.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
#all_cervix_images_.head()
print(all_cervix_images_)
print('We have a total of {} images in the whole dataset'.format(all_cervix_images_.shape[0]))
type_aggregation = all_cervix_images_.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images_.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images_['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images_[all_cervix_images_['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Clickable Surface
Step2: Design our own texture
Step3: Lines
Step5: Parametric Functions
Step6: Indexed Geometries
Step7: Buffer Geometries
|
<ASSISTANT_TASK:>
Python Code:
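# Imports assumed for this notebook (its setup cell is not shown in this excerpt).
# `height_texture`, used further below, is a helper from the original pythreejs
# examples and is assumed to be importable or defined elsewhere in the notebook.
from pythreejs import *
from IPython.display import display
from ipywidgets import HTML, Output, VBox
from traitlets import link
import numpy as np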
ball = Mesh(geometry=SphereGeometry(radius=1),
material=MeshLambertMaterial(color='red'),
position=[2, 1, 0])
c = PerspectiveCamera(position=[0, 5, 5], up=[0, 1, 0],
children=[DirectionalLight(color='white', position=[3, 5, 1], intensity=0.5)])
scene = Scene(children=[ball, c, AmbientLight(color='#777777')])
renderer = Renderer(camera=c,
scene=scene,
controls=[OrbitControls(controlling=c)])
display(renderer)
ball.geometry.radius = 0.5
import time, math
ball.material.color = '#4400dd'
for i in range(1, 150, 2):
ball.geometry.radius = i / 100.
ball.position = [math.cos(i / 10.), math.sin(i / 50.), i / 100.]
time.sleep(.05)
# Generate surface data:
view_width = 600
view_height = 400
nx, ny = (20, 20)
xmax=1
x = np.linspace(-xmax, xmax, nx)
y = np.linspace(-xmax, xmax, ny)
xx, yy = np.meshgrid(x, y)
z = xx ** 2 - yy ** 2
#z[6,1] = float('nan')
# Generate scene objects from data:
surf_g = SurfaceGeometry(z=list(z[::-1].flat),
width=2 * xmax,
height=2 * xmax,
width_segments=nx - 1,
height_segments=ny - 1)
surf = Mesh(geometry=surf_g,
material=MeshLambertMaterial(map=height_texture(z[::-1], 'YlGnBu_r')))
surfgrid = SurfaceGrid(geometry=surf_g, material=LineBasicMaterial(color='black'),
position=[0, 0, 1e-2]) # Avoid overlap by lifting grid slightly
# Set up picking bojects:
hover_point = Mesh(geometry=SphereGeometry(radius=0.05),
material=MeshLambertMaterial(color='hotpink'))
click_picker = Picker(controlling=surf, event='dblclick')
hover_picker = Picker(controlling=surf, event='mousemove')
# Set up scene:
key_light = DirectionalLight(color='white', position=[3, 5, 1], intensity=0.4)
c = PerspectiveCamera(position=[0, 3, 3], up=[0, 0, 1], aspect=view_width / view_height,
children=[key_light])
scene = Scene(children=[surf, c, surfgrid, hover_point, AmbientLight(intensity=0.8)])
renderer = Renderer(camera=c, scene=scene,
width=view_width, height=view_height,
controls=[OrbitControls(controlling=c), click_picker, hover_picker])
# Set up picking responses:
# Add a new marker when double-clicking:
out = Output()
def f(change):
value = change['new']
with out:
print('Clicked on %s' % (value,))
point = Mesh(geometry=SphereGeometry(radius=0.05),
material=MeshLambertMaterial(color='red'),
position=value)
scene.add(point)
click_picker.observe(f, names=['point'])
# Have marker follow picker point:
link((hover_point, 'position'), (hover_picker, 'point'))
# Show picker point coordinates as a label:
h = HTML()
def g(change):
h.value = 'Green point at (%.3f, %.3f, %.3f)' % tuple(change['new'])
g({'new': hover_point.position})
hover_picker.observe(g, names=['point'])
display(VBox([h, renderer, out]))
surf_g.z = list((-z[::-1]).flat)
surf.material.map = height_texture(-z[::-1])
import numpy as np
from scipy import ndimage
import matplotlib
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
jet = matplotlib.cm.get_cmap('jet')
np.random.seed(int(1)) # start random number generator
n = int(5) # starting points
size = int(32) # size of image
im = np.zeros((size,size)) # create zero image
points = size*np.random.random((2, n**2)) # locations of seed values
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = size # seed high values
im = ndimage.gaussian_filter(im, sigma=size/(float(4)*n)) # smooth high values into surrounding areas
im *= 1/np.max(im)# rescale to be in the range [0,1]
rgba_im = img_as_ubyte(jet(im)) # convert the values to rgba image using the jet colormap
t = DataTexture(data=rgba_im, format='RGBAFormat', width=size, height=size)
geometry = SphereGeometry(radius=1, widthSegments=16, heightSegments=10)#TorusKnotGeometry(radius=2, radialSegments=200)
material = MeshLambertMaterial(map=t)
myobject = Mesh(geometry=geometry, material=material)
c = PerspectiveCamera(position=[0, 3, 3], fov=40,
children=[DirectionalLight(color='#ffffff', position=[3, 5, 1], intensity=0.5)])
scene = Scene(children=[myobject, c, AmbientLight(color='#777777')])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)], width=400, height=400)
display(renderer)
# On windows, linewidth of the material has no effect
size = 4
linesgeom = Geometry(vertices=[[0, 0, 0],
[size, 0, 0],
[0, 0, 0],
[0, size, 0],
[0, 0, 0],
[0, 0, size]],
colors = ['red', 'red', 'green', 'green', 'white', 'orange'])
lines = Line(geometry=linesgeom,
material=LineBasicMaterial(linewidth=5, vertexColors='VertexColors'),
type='LinePieces',
)
scene = Scene(children=[
lines,
DirectionalLight(color='#ccaabb', position=[0,10,0]),
AmbientLight(color='#cccccc'),
])
c = PerspectiveCamera(position=[10, 10, 10])
renderer = Renderer(camera=c, background='black', background_opacity=1, scene=scene, controls=[OrbitControls(controlling=c)],
width=400, height=400)
display(renderer)
f = """
function f(origu, origv) {
    // scale u and v to the ranges I want: [0, 2*pi]
    var u = 2*Math.PI*origu;
    var v = 2*Math.PI*origv;
    var x = Math.sin(u);
    var y = Math.cos(v);
    var z = Math.cos(u+v);
    return new THREE.Vector3(x, y, z);
}
"""
surf_g = ParametricGeometry(func=f, slices=16, stacks=16);
surf = Mesh(geometry=surf_g, material=MeshLambertMaterial(color='green', side='FrontSide'))
surf2 = Mesh(geometry=surf_g, material=MeshLambertMaterial(color='yellow', side='BackSide'))
c = PerspectiveCamera(position=[5, 5, 3], up=[0, 0, 1],
children=[DirectionalLight(color='white',
position=[3, 5, 1],
intensity=0.6)])
scene = Scene(children=[surf, surf2, c, AmbientLight(intensity=0.5)])
renderer = Renderer(camera=c, scene=scene, controls=[OrbitControls(controlling=c)], width=400, height=400)
display(renderer)
from pythreejs import *
from IPython.display import display
vertices = [
[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]
]
faces = [
[0, 1, 3],
[0, 3, 2],
[0, 2, 4],
[2, 6, 4],
[0, 4, 1],
[1, 4, 5],
[2, 3, 6],
[3, 7, 6],
[1, 5, 3],
[3, 5, 7],
[4, 6, 5],
[5, 6, 7]
]
vertexcolors = ['#000000', '#0000ff', '#00ff00', '#ff0000',
'#00ffff', '#ff00ff', '#ffff00', '#ffffff']
# Map the vertex colors into the 'color' slot of the faces
faces = [f + [None, [vertexcolors[i] for i in f]] for f in faces]
# Create the geometry:
cubeGeometry = Geometry(vertices=vertices,
faces=faces,
colors=vertexcolors)
# Calculate normals per face, for nice crisp edges:
cubeGeometry.exec_three_obj_method('computeFaceNormals')
# Create a mesh. Note that the material need to be told to use the vertex colors.
myobjectCube = Mesh(
geometry=cubeGeometry,
material=MeshLambertMaterial(vertexColors='VertexColors'),
position=[-0.5, -0.5, -0.5], # Center the cube
)
# Set up a scene and render it:
cCube = PerspectiveCamera(position=[3, 3, 3], fov=20,
children=[DirectionalLight(color='#ffffff', position=[-3, 5, 1], intensity=0.5)])
sceneCube = Scene(children=[myobjectCube, cCube, AmbientLight(color='#dddddd')])
rendererCube = Renderer(camera=cCube, background='black', background_opacity=1,
scene=sceneCube, controls=[OrbitControls(controlling=cCube)])
display(rendererCube)
from pythreejs import *
import numpy as np
from IPython.display import display
vertices = np.asarray([
[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]
], dtype='float32')
faces = np.asarray([
[0, 1, 3],
[0, 3, 2],
[0, 2, 4],
[2, 6, 4],
[0, 4, 1],
[1, 4, 5],
[2, 3, 6],
[3, 7, 6],
[1, 5, 3],
[3, 5, 7],
[4, 6, 5],
[5, 6, 7]
], dtype='uint16').ravel() # We need to flatten index array
vertexcolors = np.asarray([(0,0,0), (0,0,1), (0,1,0), (1,0,0),
(0,1,1), (1,0,1), (1,1,0), (1,1,1)], dtype='float32')
cubeGeometry = BufferGeometry(attributes=dict(
position=BufferAttribute(vertices, normalized=False),
index=BufferAttribute(faces, normalized=False),
color=BufferAttribute(vertexcolors),
))
myobjectCube = Mesh(
geometry=cubeGeometry,
material=MeshLambertMaterial(vertexColors='VertexColors'),
position=[-0.5, -0.5, -0.5] # Center the cube
)
cCube = PerspectiveCamera(
position=[3, 3, 3], fov=20,
children=[DirectionalLight(color='#ffffff', position=[-3, 5, 1], intensity=0.5)])
sceneCube = Scene(children=[myobjectCube, cCube, AmbientLight(color='#dddddd')])
rendererCube = Renderer(camera=cCube, background='black', background_opacity=1,
scene = sceneCube, controls=[OrbitControls(controlling=cCube)])
display(rendererCube)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: Looking at some relationships
|
<ASSISTANT_TASK:>
Python Code:
# Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Read in data that will be used for the calculations.
# The data needs to be in the same directory(folder) as the program
# Using pandas read_csv method, we can create a data frame
data = pd.read_csv("./data/elements.csv")
# If you're not using a Binder link, you can get the data with this instead:
#data = pd.read_csv("http://php.scripts.psu.edu/djh300/cmpsc221/pt-data1.csv")"
# displays the first several rows of the data set
data.head()
# the names of all the columns in the dataset
data.columns
ax = data.plot('Atomic Number', 'Atomic Radius (pm)', title="Atomic Radius vs. Atomic Number", legend=False)
ax.set(xlabel="x label", ylabel="y label")
data.plot('Atomic Number', 'Mass')
data[['Name', 'Year Discovered']].sort_values(by='Year Discovered')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Profile array copy via dask threaded scheduler
Step3: NumPy arrays
Step4: Zarr arrays (in-memory)
Step5: Without the dask lock, we get better CPU utilisation.
Step6: Bcolz carrays (in-memory)
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.insert(0, '..')
import zarr
print('zarr', zarr.__version__)
from zarr import blosc
import numpy as np
import h5py
import bcolz
# don't let bcolz use multiple threads internally, we want to
# see whether dask can make good use of multiple CPUs
bcolz.set_nthreads(1)
import multiprocessing
import dask
import dask.array as da
from dask.diagnostics import Profiler, ResourceProfiler, CacheProfiler
from dask.diagnostics.profile_visualize import visualize
from cachey import nbytes
import bokeh
from bokeh.io import output_notebook
output_notebook()
import tempfile
import operator
from functools import reduce
from zarr.util import human_readable_size
def h5fmem(**kwargs):
    """Convenience function to create an in-memory HDF5 file."""
# need a file name even tho nothing is ever written
fn = tempfile.mktemp()
# file creation args
kwargs['mode'] = 'w'
kwargs['driver'] = 'core'
kwargs['backing_store'] = False
# open HDF5 file
h5f = h5py.File(fn, **kwargs)
return h5f
def h5d_diagnostics(d):
    """Print some diagnostics on an HDF5 dataset."""
print(d)
nbytes = reduce(operator.mul, d.shape) * d.dtype.itemsize
cbytes = d._id.get_storage_size()
if cbytes > 0:
ratio = nbytes / cbytes
else:
ratio = np.inf
r = ' compression: %s' % d.compression
r += '; compression_opts: %s' % d.compression_opts
r += '; shuffle: %s' % d.shuffle
r += '\n nbytes: %s' % human_readable_size(nbytes)
r += '; nbytes_stored: %s' % human_readable_size(cbytes)
r += '; ratio: %.1f' % ratio
r += '; chunks: %s' % str(d.chunks)
print(r)
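# Copy `src` into `dst` through dask.array with the given chunking, and visualize
# the task-stream and resource profiles collected during the copy.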
def profile_dask_copy(src, dst, chunks, num_workers=multiprocessing.cpu_count(), dt=0.1, lock=True):
dsrc = da.from_array(src, chunks=chunks)
with Profiler() as prof, ResourceProfiler(dt=dt) as rprof:
da.store(dsrc, dst, num_workers=num_workers, lock=lock)
visualize([prof, rprof], min_border_top=60, min_border_bottom=60)
# a1 = np.arange(400000000, dtype='i4')
a1 = np.random.normal(2000, 1000, size=200000000).astype('u2')
a1
human_readable_size(a1.nbytes)
a2 = np.empty_like(a1)
chunks = 2**20, # 4M
%time a2[:] = a1
profile_dask_copy(a1, a2, chunks, lock=True, dt=.01)
profile_dask_copy(a1, a2, chunks, lock=False, dt=.01)
z1 = zarr.array(a1, chunks=chunks, compression='blosc',
compression_opts=dict(cname='lz4', clevel=1, shuffle=2))
z1
z2 = zarr.empty_like(z1)
z2
profile_dask_copy(z1, z2, chunks, lock=True, dt=.02)
profile_dask_copy(z1, z2, chunks, lock=False, dt=0.02)
# for comparison, using blosc internal threads
%timeit -n3 -r5 z2[:] = z1
%prun z2[:] = z1
h5f = h5fmem()
h5f
h1 = h5f.create_dataset('h1', data=a1, chunks=chunks, compression='lzf', shuffle=True)
h5d_diagnostics(h1)
h2 = h5f.create_dataset('h2', shape=h1.shape, chunks=h1.chunks,
compression=h1.compression, compression_opts=h1.compression_opts,
shuffle=h1.shuffle)
h5d_diagnostics(h2)
profile_dask_copy(h1, h2, chunks, lock=True, dt=0.1)
profile_dask_copy(h1, h2, chunks, lock=False, dt=0.1)
c1 = bcolz.carray(a1, chunklen=chunks[0],
cparams=bcolz.cparams(cname='lz4', clevel=1, shuffle=2))
c1
c2 = bcolz.zeros(a1.shape, chunklen=chunks[0], dtype=a1.dtype,
cparams=bcolz.cparams(cname='lz4', clevel=1, shuffle=2))
c2
profile_dask_copy(c1, c2, chunks, lock=True, dt=0.05)
# not sure it's safe to use bcolz without a lock, but what the heck...
profile_dask_copy(c1, c2, chunks, lock=False, dt=0.05)
# for comparison
%timeit -n3 -r5 c2[:] = c1
# for comparison
%timeit -n3 -r5 c1.copy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Access the API
Step2: 3. Present results
Step3: 4. Get more info
Step4: Exercise
|
<ASSISTANT_TASK:>
Python Code:
!pip install omdb
# Import the library.
import omdb
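# Note: the OMDb API now requires an API key. If your environment needs one (an
# assumption about your setup), register at omdbapi.com and set it as a default
# request parameter, e.g.:
# omdb.set_default('apikey', 'YOUR_API_KEY')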
# Search for movies.
movies = omdb.search("Westworld")
movies
# Since "movies" is a list, we can loop through it.
for movie in movies:
print("Title: " + movie["title"])
print("Type: " + movie["type"])
print("Year: " + movie["year"])
print("Link: http://imdb.com/title/" + movie["imdb_id"])
print()
# Lets pick the first movie (remember, lists always start at 0).
movie = movies[0]
print(movie["title"])
print(movie["year"])
# Instead of searching (and get a list of movies), we can specify the movie we want (and get a single movie).
movie = omdb.title("Westworld", year="2016")
movie
# Present some info about this movie.
print(movie["title"])
print(movie["year"])
print(movie["country"])
print("Rating: " + movie["imdb_rating"])
print(movie["plot"])
# Modify this.
movie = omdb.title("Westworld", year="2016")
print(movie["title"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numeric widgets
Step2: Sliders can also be displayed vertically.
Step3: FloatProgress
Step4: BoundedFloatText
Step5: FloatText
Step6: Boolean widgets
Step7: Checkbox
Step8: Selection widgets
Step9: The following is also valid
Step10: RadioButtons
Step11: Select
Step12: ToggleButtons
Step13: SelectMultiple
Step14: String widgets
Step15: Textarea
Step16: Latex
Step17: HTML
Step18: Button
|
<ASSISTANT_TASK:>
Python Code:
from IPython.html import widgets
[n for n in dir(widgets) if not n.endswith('Widget') and n[0] == n[0].upper() and not n[0] == '_']
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test',
orientation='vertical',
)
widgets.FloatProgress(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Loading:',
)
widgets.BoundedFloatText(
value=7.5,
min=5.0,
max=10.0,
description='Text:',
)
widgets.FloatText(
value=7.5,
description='Any:',
)
widgets.ToggleButton(
description='Click me',
value=False,
)
widgets.Checkbox(
description='Check me',
value=True,
)
from IPython.display import display
w = widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
)
display(w)
w.value
w = widgets.Dropdown(
options={'One': 1, 'Two': 2, 'Three': 3},
value=2,
description='Number:',
)
display(w)
w.value
widgets.RadioButtons(
description='Pizza topping:',
options=['pepperoni', 'pineapple', 'anchovies'],
)
widgets.Select(
description='OS:',
options=['Linux', 'Windows', 'OSX'],
)
widgets.ToggleButtons(
description='Speed:',
options=['Slow', 'Regular', 'Fast'],
)
w = widgets.SelectMultiple(
description="Fruits",
options=['Apples', 'Oranges', 'Pears']
)
display(w)
w.value
widgets.Text(
description='String:',
value='Hello World',
)
widgets.Textarea(
description='String:',
value='Hello World',
)
widgets.Latex(
value="$$\\frac{n!}{k!(n-k)!} = \\binom{n}{k}$$",
)
widgets.HTML(
value="Hello <b>World</b>"
)
widgets.Button(description='Click me')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following box is useless if you're not using a notebook - it just enables the online notebook drawing stuff.
|
<ASSISTANT_TASK:>
Python Code:
# these imports let you use opencv
import cv2 #opencv itself
import common #some useful opencv functions
import video # some video stuff
import numpy as np # matrix manipulations
#the following are to do with this interactive notebook code
%matplotlib inline
from matplotlib import pyplot as plt # this lets you draw inline pictures in the notebooks
import pylab # this allows you to control figure size
pylab.rcParams['figure.figsize'] = (10.0, 8.0) # this controls figure size in the notebook
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a', 'c'],
'key2': ['one', 'two', 'gee', 'two', 'three', 'two']})
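# For each key1 group, count how many key2 values end with the letter 'e'.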
def g(df):
return df.groupby('key1')['key2'].apply(lambda x: x.str.endswith('e').sum()).reset_index(name='count')
result = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This series records the number of hardcover book sales at a retail store over 30 days. Notice that we have a single column of observations Hardcover with a time index Date.
Step2: Linear regression with the time dummy produces the model
Step3: Time-step features let you model time dependence. A series is time dependent if its values can be predicted from the time they occured. In the Hardcover Sales series, we can predict that sales later in the month are generally higher than sales earlier in the month.
Step4: Linear regression with a lag feature produces the model
Step5: You can see from the lag plot that sales on one day (Hardcover) are correlated with sales from the previous day (Lag_1). When you see a relationship like this, you know a lag feature will be useful.
Step6: Time-step feature
Step7: The procedure for fitting a linear regression model follows the standard steps for scikit-learn.
Step8: The model actually created is (approximately)
Step9: Lag feature
Step10: When creating lag features, we need to decide what to do with the missing values produced. Filling them in is one option, maybe with 0.0 or "backfilling" with the first known value. Instead, we'll just drop the missing values, making sure to also drop values in the target from corresponding dates.
Step11: The lag plot shows us how well we were able to fit the relationship between the number of vehicles one day and the number the previous day.
Step12: What does this prediction from a lag feature mean about how well we can predict the series across time? The following time plot shows us how our forecasts now respond to the behavior of the series in the recent past.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import pandas as pd
df = pd.read_csv(
"../input/ts-course-data/book_sales.csv",
index_col='Date',
parse_dates=['Date'],
).drop('Paperback', axis=1)
df.head()
#$HIDE_INPUT$
import numpy as np
df['Time'] = np.arange(len(df.index))
df.head()
#$HIDE_INPUT$
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("seaborn-whitegrid")
plt.rc(
"figure",
autolayout=True,
figsize=(11, 4),
titlesize=18,
titleweight='bold',
)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=16,
titlepad=10,
)
%config InlineBackend.figure_format = 'retina'
fig, ax = plt.subplots()
ax.plot('Time', 'Hardcover', data=df, color='0.75')
ax = sns.regplot(x='Time', y='Hardcover', data=df, ci=None, scatter_kws=dict(color='0.25'))
ax.set_title('Time Plot of Hardcover Sales');
#$HIDE_INPUT$
df['Lag_1'] = df['Hardcover'].shift(1)
df = df.reindex(columns=['Hardcover', 'Lag_1'])
df.head()
#$HIDE_INPUT$
fig, ax = plt.subplots()
ax = sns.regplot(x='Lag_1', y='Hardcover', data=df, ci=None, scatter_kws=dict(color='0.25'))
ax.set_aspect('equal')
ax.set_title('Lag Plot of Hardcover Sales');
#$HIDE_INPUT$
from pathlib import Path
from warnings import simplefilter
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
simplefilter("ignore") # ignore warnings to clean up output cells
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True, figsize=(11, 4))
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
plot_params = dict(
color="0.75",
style=".-",
markeredgecolor="0.25",
markerfacecolor="0.25",
legend=False,
)
%config InlineBackend.figure_format = 'retina'
# Load Tunnel Traffic dataset
data_dir = Path("../input/ts-course-data")
tunnel = pd.read_csv(data_dir / "tunnel.csv", parse_dates=["Day"])
# Create a time series in Pandas by setting the index to a date
# column. We parsed "Day" as a date type by using `parse_dates` when
# loading the data.
tunnel = tunnel.set_index("Day")
# By default, Pandas creates a `DatetimeIndex` with dtype `Timestamp`
# (equivalent to `np.datetime64`), representing a time series as a
# sequence of measurements taken at single moments. A `PeriodIndex`,
# on the other hand, represents a time series as a sequence of
# quantities accumulated over periods of time. Periods are often
# easier to work with, so that's what we'll use in this course.
tunnel = tunnel.to_period()
tunnel.head()
df = tunnel.copy()
df['Time'] = np.arange(len(tunnel.index))
df.head()
from sklearn.linear_model import LinearRegression
# Training data
X = df.loc[:, ['Time']] # features
y = df.loc[:, 'NumVehicles'] # target
# Train the model
model = LinearRegression()
model.fit(X, y)
# Store the fitted values as a time series with the same time index as
# the training data
y_pred = pd.Series(model.predict(X), index=X.index)
#$HIDE_INPUT$
ax = y.plot(**plot_params)
ax = y_pred.plot(ax=ax, linewidth=3)
ax.set_title('Time Plot of Tunnel Traffic');
df['Lag_1'] = df['NumVehicles'].shift(1)
df.head()
from sklearn.linear_model import LinearRegression
X = df.loc[:, ['Lag_1']]
X.dropna(inplace=True) # drop missing values in the feature set
y = df.loc[:, 'NumVehicles'] # create the target
y, X = y.align(X, join='inner') # drop corresponding values in target
model = LinearRegression()
model.fit(X, y)
y_pred = pd.Series(model.predict(X), index=X.index)
#$HIDE_INPUT$
fig, ax = plt.subplots()
ax.plot(X['Lag_1'], y, '.', color='0.25')
ax.plot(X['Lag_1'], y_pred)
ax.set_aspect('equal')
ax.set_ylabel('NumVehicles')
ax.set_xlabel('Lag_1')
ax.set_title('Lag Plot of Tunnel Traffic');
#$HIDE_INPUT$
ax = y.plot(**plot_params)
ax = y_pred.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
|
<ASSISTANT_TASK:>
Python Code:
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for left-visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
# Use 'MEG 2343' as seed
seed_ch = 'MEG 2343'
picks_ch_names = [raw.ch_names[i] for i in picks]
# Create seed-target indices for connectivity computation
seed = picks_ch_names.index(seed_ch)
targets = np.arange(len(picks))
indices = seed_target_indices(seed, targets)
# Define wavelet frequencies and number of cycles
cwt_frequencies = np.arange(7, 30, 2)
cwt_n_cycles = cwt_frequencies / 7.
# Run the connectivity analysis using 2 parallel jobs
sfreq = raw.info['sfreq'] # the sampling frequency
con, freqs, times, _, _ = spectral_connectivity(
epochs, indices=indices,
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,
cwt_frequencies=cwt_frequencies, cwt_n_cycles=cwt_n_cycles, n_jobs=1)
# Mark the seed channel with a value of 1.0, so we can see it in the plot
con[np.where(indices[1] == seed)] = 1.0
# Show topography of connectivity from seed
title = 'WPLI2 - Visual - Seed %s' % seed_ch
layout = mne.find_layout(epochs.info, 'meg') # use full layout
tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))
tfr.plot_topo(fig_facecolor='w', font_color='k', border='k')
<END_TASK>
|