Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob) processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data) train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,\ sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model)
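For intuition (not part of the notebook): both `tf.contrib.layers.embed_sequence` and `tf.nn.embedding_lookup` reduce to row indexing into a trainable matrix. A NumPy sketch with made-up sizes:

```python
import numpy as np

# Hypothetical sizes, not the notebook's hyperparameters.
vocab_size, embed_dim = 5, 3

# An embedding matrix maps each token id to a dense row vector.
embeddings = np.arange(vocab_size * embed_dim, dtype=np.float32).reshape(vocab_size, embed_dim)

# A batch of token ids; embedding lookup is just row indexing.
token_ids = np.array([[0, 2], [4, 1]])
embedded = embeddings[token_ids]  # shape (batch, seq_len, embed_dim)

print(embedded.shape)  # (2, 2, 3)
```

In TensorFlow the matrix is a `tf.Variable` so the rows are updated by backprop; the lookup itself is the same gather operation.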
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability
# Number of Epochs epochs = 20 # Batch Size batch_size = 512 # RNN Size rnn_size = 512 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 512 decoding_embedding_size = 512 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.6
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target_batch, [(0,0),(0,max_seq - target_batch.shape[1]), (0,0)], 'constant') if max_seq - batch_train_logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved')
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ words = sentence.split(" ") word_ids = [] for word in words: word = word.lower() if word in vocab_to_int: word_id = vocab_to_int[word] else: word_id = vocab_to_int['<UNK>'] word_ids.append(word_id) return word_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq)
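A minimal stand-alone sketch of the same logic, with a toy vocabulary and a hypothetical function name:

```python
def sentence_to_seq_sketch(sentence, vocab_to_int):
    # Lowercase each word before the lookup, falling back to the <UNK> id.
    word_ids = []
    for word in sentence.split(" "):
        word = word.lower()
        word_ids.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
    return word_ids

toy_vocab = {'<UNK>': 0, 'hello': 1, 'world': 2}
print(sentence_to_seq_sketch('Hello strange World', toy_vocab))  # [1, 0, 2]
```

Note the lowercasing must happen before the membership test; looking up the original-cased word would send capitalized in-vocabulary words to `<UNK>`.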
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Load the data The previous ipython notebook already applied the following feature preprocessing 1. City dropped because it has too many categories 2. DOB used to derive an Age field, then the original field dropped 3. EMI_Loan_Submitted_Missing created as a missing-value indicator for EMI_Loan_Submitted, then EMI_Loan_Submitted dropped 4. EmployerName dropped 5. Existing_EMI missing values filled with the mean 6. Interest_Rate_Missing handled like EMI_Loan_Submitted 7. Lead_Creation_Date dropped 8. Loan_Amount_Applied, Loan_Tenure_Applied filled with the mean 9. Loan_Amount_Submitted_Missing handled like EMI_Loan_Submitted 10. Loan_Tenure_Submitted_Missing handled like EMI_Loan_Submitted 11. LoggedIn, Salary_Account dropped 12. Processing_Fee_Missing handled like EMI_Loan_Submitted 13. Source - top 2 kept as is and all others combined into a different category 14. Numerical transformations and One-Hot encoding
train = pd.read_csv('train_modified.csv') test = pd.read_csv('test_modified.csv') train.shape, test.shape target='Disbursed' IDcol = 'ID' train['Disbursed'].value_counts()
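The preprocessing described above (missing-value indicator columns such as EMI_Loan_Submitted_Missing, mean-filling Existing_EMI) can be sketched on a toy frame. The column names mirror the real ones but the values are made up, and the indicator polarity (1 = missing) is an assumption:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for two of the real columns; values are illustrative.
df = pd.DataFrame({'Existing_EMI': [100.0, np.nan, 300.0],
                   'EMI_Loan_Submitted': [np.nan, 2.0, 5.0]})

# Missing-value indicator (assumed polarity: 1 where the value was missing),
# after which the original column is dropped.
df['EMI_Loan_Submitted_Missing'] = df['EMI_Loan_Submitted'].isnull().astype(int)
df = df.drop('EMI_Loan_Submitted', axis=1)

# Mean imputation for Existing_EMI.
df['Existing_EMI'] = df['Existing_EMI'].fillna(df['Existing_EMI'].mean())
```

The indicator lets the model learn from missingness itself even though the raw column is gone.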
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Modeling and cross-validation Write one large function that does the following 1. fit the model 2. compute training accuracy 3. compute training-set AUC 4. update n_estimators via xgboost cross-validation 5. plot feature importances
#test_results = pd.read_csv('test_results.csv') def modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50): if useTrainCV: xgb_param = alg.get_xgb_params() xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values) xgtest = xgb.DMatrix(dtest[predictors].values) cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds, early_stopping_rounds=early_stopping_rounds, show_progress=False) alg.set_params(n_estimators=cvresult.shape[0]) # Fit the model alg.fit(dtrain[predictors], dtrain['Disbursed'],eval_metric='auc') # Predict on the training set dtrain_predictions = alg.predict(dtrain[predictors]) dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1] # Print some results for the current model print "\nModel report" print "Accuracy : %.4g" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions) print "AUC score (training set): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob) feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False) feat_imp.plot(kind='bar', title='Feature Importances') plt.ylabel('Feature Importance Score')
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Step 1 - Find the best number of estimators for a high learning rate
predictors = [x for x in train.columns if x not in [target, IDcol]] xgb1 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb1, train, test, predictors) # Grid search for the best max_depth and min_child_weight param_test1 = { 'max_depth':range(3,10,2), 'min_child_weight':range(1,6,2) } gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27), param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch1.fit(train[predictors],train[target]) gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_ # Narrow the search for max_depth and min_child_weight param_test2 = { 'max_depth':[4,5,6], 'min_child_weight':[4,5,6] } gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5, min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch2.fit(train[predictors],train[target]) gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_ # Cross-validate higher values of min_child_weight param_test2b = { 'min_child_weight':[6,8,10,12] } gsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4, min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch2b.fit(train[predictors],train[target]) gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_ # Grid search for a suitable gamma param_test3 = { 'gamma':[i/10.0 for i in range(0,5)] } gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch3.fit(train[predictors],train[target]) gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_ predictors = [x for x in train.columns if x not in [target, IDcol]] xgb2 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb2, train, test, predictors)
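At its core, GridSearchCV exhaustively scans the cartesian product of the parameter grid and keeps the best-scoring combination. A tiny stand-alone sketch with a made-up score function (real usage would score via cross-validation, as above):

```python
from itertools import product

def toy_grid_search(param_grid, score_fn):
    # Exhaustive scan of the grid's cartesian product, keeping the best score.
    names = sorted(param_grid)
    best_params, best_score = None, float('-inf')
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# A made-up score peaking at max_depth=5, min_child_weight=6.
score = lambda p: -abs(p['max_depth'] - 5) - abs(p['min_child_weight'] - 6)
grid = {'max_depth': range(3, 10, 2), 'min_child_weight': range(1, 8)}
best, _ = toy_grid_search(grid, score)
print(best)  # {'max_depth': 5, 'min_child_weight': 6}
```

This is why the notebook searches coarse ranges first and then refines around the winner: the cost grows multiplicatively with each parameter's grid size.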
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Tune subsample and colsample_bytree
# Grid search for the best subsample and colsample_bytree param_test4 = { 'subsample':[i/10.0 for i in range(6,10)], 'colsample_bytree':[i/10.0 for i in range(6,10)] } gsearch4 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch4.fit(train[predictors],train[target]) gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Tune subsample and colsample_bytree on a finer grid
# Same search as above, on a finer grid param_test5 = { 'subsample':[i/100.0 for i in range(75,90,5)], 'colsample_bytree':[i/100.0 for i in range(75,90,5)] } gsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch5.fit(train[predictors],train[target]) gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Cross-validate the regularization parameter
# Grid search for the best reg_alpha param_test6 = { 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100] } gsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4, min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch6.fit(train[predictors],train[target]) gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_ # Grid search reg_alpha again over a narrower set of values param_test7 = { 'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05] } gsearch7 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4, min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test7, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch7.fit(train[predictors],train[target]) gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_ xgb3 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_alpha=0.005, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb3, train, test, predictors) xgb4 = XGBClassifier( learning_rate =0.01, n_estimators=5000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_alpha=0.005, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb4, train, test, predictors)
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
thushear/MLInAction
apache-2.0
Analyze image stats
import matplotlib from numpy.random import randn import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter %matplotlib inline def to_percent(y, position): # Ignore the passed in position. This has the effect of scaling the default # tick locations. s = str(100 * y) # The percent symbol needs escaping in latex if matplotlib.rcParams['text.usetex'] is True: return s + r'$\%$' else: return s + '%'
notebook/winter2017_004.images_data.ipynb
svebk/qpr-winter-2017
mit
Images distribution
def get_ad_images(ad_id, ads_images_dict, url_sha1_dict, verbose=False): images_url_list = ads_images_dict[ad_id] images_sha1s = [] for image_url in images_url_list: if image_url is None or not image_url: continue try: images_sha1s.append(url_sha1_dict[image_url.strip()].strip()) except: if verbose: print 'Cannot find sha1 for: {}.'.format(image_url) return images_sha1s # Analyze distribution of images in ads_images_dict images_count = [] for ad_id in ads_images_dict: images_count.append(len(get_ad_images(ad_id, ads_images_dict, url_sha1_dict))) def print_stats(np_img_count): print np.min(np_img_count), np.mean(np_img_count), np.max(np_img_count) # Normed histogram seems to be broken, # using weights as suggested in http://stackoverflow.com/questions/5498008/pylab-histdata-normed-1-normalization-seems-to-work-incorrect weights = np.ones_like(np_img_count)/float(len(np_img_count)) res = plt.hist(np_img_count, bins=100, weights=weights) print np.sum(res[0]) # Create the formatter using the function to_percent. This multiplies all the # default labels by 100, making them all percentages formatter = FuncFormatter(to_percent) # Set the formatter plt.gca().yaxis.set_major_formatter(formatter) plt.show() print_stats(np.asarray(images_count))
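The weights workaround above normalizes the histogram so the bar heights sum to 1; a quick NumPy check on synthetic counts:

```python
import numpy as np

# Synthetic per-ad image counts, standing in for images_count.
counts = np.random.randint(1, 20, size=500)

# Each sample contributes 1/N, so the histogram heights sum to 1 --
# the workaround for the broken normed=1 mentioned in the cell above.
weights = np.ones_like(counts, dtype=float) / len(counts)
hist, _ = np.histogram(counts, bins=100, weights=weights)
```

With `normed=1` (or `density=True`) the heights integrate to 1 over the bin widths instead, which is why the plotted bars did not read as fractions of the data.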
notebook/winter2017_004.images_data.ipynb
svebk/qpr-winter-2017
mit
Faces distribution
def get_faces_images(images_sha1s, faces_dict): faces_out = {} for sha1 in images_sha1s: img_notfound = False try: tmp_faces = faces_dict[sha1] except: img_notfound = True if img_notfound or tmp_faces['count']==0: faces_out[sha1] = [] continue bboxes = [] for face in tmp_faces['detections']: bbox = [float(x) for x in tmp_faces['detections'][face]['bbox'].split(',')] bbox.append(float(tmp_faces['detections'][face]['score'])) bboxes.append(bbox) faces_out[sha1] = bboxes return faces_out def show_faces(faces, images_dir): from matplotlib.pyplot import imshow from IPython.display import display import numpy as np %matplotlib inline imgs = [] for face in faces: if faces[face]: img = open_image(face, images_dir) draw_face_bbox(img, faces[face]) imgs.append(img) if not imgs: print 'No face images' display(*imgs) # get all face images from each ad faces_in_images_percent = [] for ad_id in ads_images_dict: images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict) faces_images = get_faces_images(images_sha1s, faces_dict) if len(faces_images)==0: continue nb_faces = 0 for face in faces_images: if faces_images[face]: nb_faces += 1 faces_in_images_percent.append(float(nb_faces)/len(faces_images)) np_faces_in_images_percent = np.asarray(faces_in_images_percent) print_stats(np_faces_in_images_percent) no_faces = np.where(np_faces_in_images_percent==0.0) print no_faces[0].shape print np_faces_in_images_percent.shape percent_noface = float(no_faces[0].shape[0])/np_faces_in_images_percent.shape[0] print 1-percent_noface # get all face scores from each ad faces_scores = [] all_faces = [] for ad_id in ads_images_dict: images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict) faces_images = get_faces_images(images_sha1s, faces_dict) if len(faces_images)==0: continue nb_faces = 0 for face in faces_images: if faces_images[face]: for one_face in faces_images[face]: all_faces.append([face, one_face]) faces_scores.append(float(one_face[4])) np_faces_scores = np.asarray(faces_scores) print_stats(np_faces_scores) low_scores_faces = np.where(np_faces_scores<0.90)[0] print float(len(low_scores_faces))/len(np_faces_scores) very_low_scores_faces = np.where(np_faces_scores<0.80)[0] print float(len(very_low_scores_faces))/len(np_faces_scores) print len(np_faces_scores) nb_faces_to_show = 10 np.random.shuffle(very_low_scores_faces) faces_to_show = [all_faces[x] for x in very_low_scores_faces[:nb_faces_to_show]] print faces_to_show for face_id, face in faces_to_show: print face_id, face face_dict = {} face_dict[face_id] = [face] show_faces(face_dict, images_dir)
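Each detection above stores its bbox as a comma-separated `"x1,y1,x2,y2"` string plus a score; a stand-alone sketch of the parsing step, with made-up values:

```python
def parse_detection(det):
    # det mirrors one entry of faces_dict[sha1]['detections'][face]:
    # a comma-separated "x1,y1,x2,y2" bbox string plus a score field.
    bbox = [float(x) for x in det['bbox'].split(',')]
    bbox.append(float(det['score']))
    return bbox

det = {'bbox': '10,20,110,140', 'score': '0.93'}
print(parse_detection(det))  # [10.0, 20.0, 110.0, 140.0, 0.93]
```

The score is appended as a fifth element, which is why the drawing code checks `len(bbox)==5` before printing it.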
notebook/winter2017_004.images_data.ipynb
svebk/qpr-winter-2017
mit
Show images and faces of one ad
def get_fnt(img, txt): from PIL import ImageFont # portion of image width you want text width to be img_fraction = 0.20 fontsize = 2 font = ImageFont.truetype("arial.ttf", fontsize) while font.getsize(txt)[0] < img_fraction*img.size[0]: # iterate until the text size is just larger than the criteria fontsize += 1 font = ImageFont.truetype("arial.ttf", fontsize) return font, font.getsize(txt)[0] def draw_face_bbox(img, bboxes, width=4): from PIL import ImageDraw import numpy as np draw = ImageDraw.Draw(img) for bbox in bboxes: for i in range(width): rect_start = (int(np.round(bbox[0] + width/2 - i)), int(np.round(bbox[1] + width/2 - i))) rect_end = (int(np.round(bbox[2] - width/2 + i)), int(np.round(bbox[3] - width/2 + i))) draw.rectangle((rect_start, rect_end), outline=(0, 255, 0)) # print score? if len(bbox)==5: score = str(bbox[4]) fnt, text_size = get_fnt(img, score[:5]) draw.text((np.round((bbox[0]+bbox[2])/2-text_size/2),np.round(bbox[1])), score[:5], font=fnt, fill=(255,255,255,64)) def open_image(sha1, images_dir): from PIL import Image img = Image.open(os.path.join(images_dir, sha1[:3], sha1)) return img #face images of ad '84FC37A4E38F7DE2B9FCAAB902332ED60A344B8DF90893A5A8BE3FC1139FCD5A' are blurred but detected # image '20893a926fbf50d1a5994f70ec64dbf33dd67e2a' highly pixelated # male strippers '20E4597A6DA11BC07BB7578FFFCE07027F885AF02265FD663C0911D2699E0A79' all_ads_id = range(len(ads_images_dict.keys())) import numpy as np np.random.shuffle(all_ads_id) ad_id = ads_images_dict.keys()[all_ads_id[0]] print ad_id images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict) print images_sha1s faces = get_faces_images(images_sha1s, faces_dict) print faces show_faces(faces, images_dir)
notebook/winter2017_004.images_data.ipynb
svebk/qpr-winter-2017
mit
Dataset: "Some time-series"
def gimme_one_random_number(): return nd.random_uniform(low=0, high=1, shape=(1,1)).asnumpy()[0][0] def create_one_time_series(seq_length=10): freq = (gimme_one_random_number()*0.5) + 0.1 # 0.1 to 0.6 ampl = gimme_one_random_number() + 0.5 # 0.5 to 1.5 x = np.sin(np.arange(0, seq_length) * freq) * ampl return x def create_batch_time_series(seq_length=10, num_samples=4): column_labels = ['t'+str(i) for i in range(0, seq_length)] df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose() df.columns = column_labels df.index = ['s'+str(0)] for i in range(1, num_samples): more_df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose() more_df.columns = column_labels more_df.index = ['s'+str(i)] df = pd.concat([df, more_df], axis=0) return df # returns a dataframe of shape (num_samples, seq_length) # Create some time-series # uncomment below to force predictible random numbers # mx.random.seed(1) if CREATE_DATA_SETS: data_train = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TRAINING) data_test = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TESTING) # Write data to csv data_train.to_csv("../data/timeseries/train.csv") data_test.to_csv("../data/timeseries/test.csv") else: data_train = pd.read_csv("../data/timeseries/train.csv", index_col=0) data_test = pd.read_csv("../data/timeseries/test.csv", index_col=0)
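`create_one_time_series` draws a frequency in [0.1, 0.6) and an amplitude in [0.5, 1.5); the same construction with NumPy's RNG (a sketch, not the notebook's mxnet version):

```python
import numpy as np

def create_one_time_series_np(seq_length=10, rng=np.random):
    # Same construction as create_one_time_series, with NumPy's RNG:
    # frequency in [0.1, 0.6), amplitude in [0.5, 1.5).
    freq = rng.uniform(0, 1) * 0.5 + 0.1
    ampl = rng.uniform(0, 1) + 0.5
    return np.sin(np.arange(seq_length) * freq) * ampl

x = create_one_time_series_np(seq_length=50)
```

Since |sin| ≤ 1 and the amplitude is below 1.5, every sample is bounded in magnitude by 1.5, which keeps the targets in a range the tanh-based LSTM outputs can reach.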
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Check the data real quick
# num_sampling_points = min(SEQ_LENGTH, 400) # (data_train.sample(4).transpose().iloc[range(0, SEQ_LENGTH, SEQ_LENGTH//num_sampling_points)]).plot()
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Preparing the data for training
# print(data_train.loc[:,data_train.columns[:-1]]) # inputs # print(data_train.loc[:,data_train.columns[1:]]) # outputs (i.e. inputs shift by +1) batch_size = 64 batch_size_test = 1 seq_length = 16 num_batches_train = data_train.shape[0] // batch_size num_batches_test = data_test.shape[0] // batch_size_test num_features = 1 # we do 1D time series for now, this is like vocab_size = 1 for characters # inputs are from t0 to t_seq_length - 1. because the last point is kept for the output ("label") of the penultimate point data_train_inputs = data_train.loc[:,data_train.columns[:-1]] data_train_labels = data_train.loc[:,data_train.columns[1:]] data_test_inputs = data_test.loc[:,data_test.columns[:-1]] data_test_labels = data_test.loc[:,data_test.columns[1:]] train_data_inputs = nd.array(data_train_inputs.values).reshape((num_batches_train, batch_size, seq_length, num_features)) train_data_labels = nd.array(data_train_labels.values).reshape((num_batches_train, batch_size, seq_length, num_features)) test_data_inputs = nd.array(data_test_inputs.values).reshape((num_batches_test, batch_size_test, seq_length, num_features)) test_data_labels = nd.array(data_test_labels.values).reshape((num_batches_test, batch_size_test, seq_length, num_features)) train_data_inputs = nd.swapaxes(train_data_inputs, 1, 2) train_data_labels = nd.swapaxes(train_data_labels, 1, 2) test_data_inputs = nd.swapaxes(test_data_inputs, 1, 2) test_data_labels = nd.swapaxes(test_data_labels, 1, 2) print('num_samples_training={0} | num_batches_train={1} | batch_size={2} | seq_length={3}'.format(NUM_SAMPLES_TRAINING, num_batches_train, batch_size, seq_length)) print('train_data_inputs shape: ', train_data_inputs.shape) print('train_data_labels shape: ', train_data_labels.shape) # print(data_train_inputs.values) # print(train_data_inputs[0]) # see what one batch looks like
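The reshape/swapaxes step can be verified on dummy data with toy sizes: after swapping axes 1 and 2, axis 1 of each batch indexes time steps rather than samples, which is the layout the training loop iterates over:

```python
import numpy as np

num_batches, batch_size, seq_length, num_features = 3, 4, 5, 1
data = np.zeros((num_batches * batch_size, seq_length))  # (samples, time steps)

# Reshape into batches, then swap the batch and time axes so that iterating
# over axis 1 yields one time step across the whole batch.
batched = data.reshape(num_batches, batch_size, seq_length, num_features)
batched = np.swapaxes(batched, 1, 2)

print(batched.shape)  # (3, 5, 4, 1)
```

This matches the `nd.swapaxes(..., 1, 2)` calls above: each element of `batched` is one batch of shape (seq_length, batch_size, num_features).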
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Long short-term memory (LSTM) RNNs An LSTM block has mechanisms to enable "memorizing" information for an extended number of time steps. We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps: $\newcommand{\xb}{\mathbf{x}} \newcommand{\RR}{\mathbb{R}}$ $$g_t = \text{tanh}(X_t W_{xg} + h_{t-1} W_{hg} + b_g),$$ $$i_t = \sigma(X_t W_{xi} + h_{t-1} W_{hi} + b_i),$$ $$f_t = \sigma(X_t W_{xf} + h_{t-1} W_{hf} + b_f),$$ $$o_t = \sigma(X_t W_{xo} + h_{t-1} W_{ho} + b_o),$$ $$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$ $$h_t = o_t \odot \text{tanh}(c_t),$$ where $\odot$ is an element-wise multiplication operator, and for all $\xb = [x_1, x_2, \ldots, x_k]^\top \in \RR^k$ the two activation functions: $$\sigma(\xb) = \left[\frac{1}{1+\exp(-x_1)}, \ldots, \frac{1}{1+\exp(-x_k)}\right]^\top,$$ $$\text{tanh}(\xb) = \left[\frac{1-\exp(-2x_1)}{1+\exp(-2x_1)}, \ldots, \frac{1-\exp(-2x_k)}{1+\exp(-2x_k)}\right]^\top.$$ In the transformations above, the memory cell $c_t$ stores the "long-term" memory in the vector form. In other words, the information accumulatively captured and encoded until time step $t$ is stored in $c_t$ and is only passed along the same layer over different time steps. Given the inputs $c_t$ and $h_t$, the input gate $i_t$ and forget gate $f_t$ will help the memory cell to decide how to overwrite or keep the memory information. The output gate $o_t$ further lets the LSTM block decide how to retrieve the memory information to generate the current state $h_t$ that is passed to both the next layer of the current time step and the next time step of the current layer. Such decisions are made using the hidden-layer parameters $W$ and $b$ with different subscripts: these parameters will be inferred during the training phase by gluon. Allocate parameters
num_inputs = num_features # for a 1D time series, this is just a scalar equal to 1.0 num_outputs = num_features # same comment num_hidden_units = [8, 8] # num of hidden units in each hidden LSTM layer num_hidden_layers = len(num_hidden_units) # num of hidden LSTM layers num_units_layers = [num_features] + num_hidden_units ######################## # Weights connecting the inputs to the hidden layer ######################## Wxg, Wxi, Wxf, Wxo, Whg, Whi, Whf, Who, bg, bi, bf, bo = {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {} for i_layer in range(1, num_hidden_layers+1): num_inputs = num_units_layers[i_layer-1] num_hidden_units = num_units_layers[i_layer] Wxg[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxi[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxf[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxo[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 ######################## # Recurrent weights connecting the hidden layer across time steps ######################## Whg[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Whi[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Whf[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Who[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 ######################## # Bias vector for hidden layer ######################## bg[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bi[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bf[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bo[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 ######################## # Weights to the output nodes ######################## Why = nd.random_normal(shape=(num_units_layers[-1], num_outputs), ctx=ctx) * .01 by = nd.random_normal(shape=num_outputs, ctx=ctx) * .01
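The six LSTM equations above can be transcribed directly into a NumPy single-step sketch (toy sizes, random weights; a sketch for intuition, not the notebook's mxnet implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(X, h, c, W, b):
    # One LSTM time step, transcribing the six equations above.
    # W['xg'] etc. are input weights, W['hg'] etc. recurrent weights.
    g = np.tanh(X @ W['xg'] + h @ W['hg'] + b['g'])
    i = sigmoid(X @ W['xi'] + h @ W['hi'] + b['i'])
    f = sigmoid(X @ W['xf'] + h @ W['hf'] + b['f'])
    o = sigmoid(X @ W['xo'] + h @ W['ho'] + b['o'])
    c = f * c + i * g          # memory cell update
    h = o * np.tanh(c)         # hidden state output
    return h, c

rng = np.random.RandomState(0)
n_in, n_hid, batch = 1, 8, 4
W = {k: rng.randn(n_in, n_hid) * 0.01 for k in ('xg', 'xi', 'xf', 'xo')}
W.update({k: rng.randn(n_hid, n_hid) * 0.01 for k in ('hg', 'hi', 'hf', 'ho')})
b = {k: np.zeros(n_hid) for k in ('g', 'i', 'f', 'o')}
h = c = np.zeros((batch, n_hid))
h, c = lstm_step(rng.randn(batch, n_in), h, c, W, b)
```

Because the output gate is a sigmoid and the cell passes through tanh, every entry of `h` lies strictly inside (-1, 1).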
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Attach the gradients
params = [] for i_layer in range(1, num_hidden_layers+1): params += [Wxg[i_layer], Wxi[i_layer], Wxf[i_layer], Wxo[i_layer], Whg[i_layer], Whi[i_layer], Whf[i_layer], Who[i_layer], bg[i_layer], bi[i_layer], bf[i_layer], bo[i_layer]] params += [Why, by] # add the output layer for param in params: param.attach_grad()
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Softmax Activation
def softmax(y_linear, temperature=1.0): lin = (y_linear-nd.max(y_linear)) / temperature exp = nd.exp(lin) partition = nd.sum(exp, axis=0, exclude=True).reshape((-1,1)) return exp / partition
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Cross-entropy loss function
def cross_entropy(yhat, y): return - nd.mean(nd.sum(y * nd.log(yhat), axis=0, exclude=True)) # note: this is the mean per-sample Euclidean norm of the error, not the textbook RMSE def rmse(yhat, y): return nd.mean(nd.sqrt(nd.sum(nd.power(y - yhat, 2), axis=0, exclude=True)))
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Averaging the loss over the sequence
def average_ce_loss(outputs, labels): assert(len(outputs) == len(labels)) total_loss = 0. for (output, label) in zip(outputs,labels): total_loss = total_loss + cross_entropy(output, label) return total_loss / len(outputs) def average_rmse_loss(outputs, labels): assert(len(outputs) == len(labels)) total_loss = 0. for (output, label) in zip(outputs,labels): total_loss = total_loss + rmse(output, label) return total_loss / len(outputs)
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Optimizer
def SGD(params, learning_rate): for param in params: param[:] = param - learning_rate * param.grad def adam(params, learning_rate, M , R, index_adam_call, beta1, beta2, eps): k = -1 for param in params: k += 1 M[k] = beta1 * M[k] + (1. - beta1) * param.grad R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2 # bias correction: since we initialized M & R to zeros, they're biased toward zero on the first few iterations m_k_hat = M[k] / (1. - beta1**(index_adam_call)) r_k_hat = R[k] / (1. - beta2**(index_adam_call)) if((np.isnan(M[k].asnumpy())).any() or (np.isnan(R[k].asnumpy())).any()): raise RuntimeError('NaN in Adam moment estimates') param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps) return params, M, R
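One bias-corrected Adam step from the cell above can be restated in NumPy on a single parameter (a sketch; on the first step the update is approximately lr · sign(grad), since m_hat/√r_hat reduces to grad/|grad|):

```python
import numpy as np

def adam_step(param, grad, m, r, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update with the same bias correction as above (t starts at 1).
    m = beta1 * m + (1.0 - beta1) * grad
    r = beta2 * r + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)   # undo the zero-initialization bias
    r_hat = r / (1.0 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(r_hat) + eps)
    return param, m, r

p = np.array([1.0])
m = r = np.zeros(1)
p, m, r = adam_step(p, np.array([4.0]), m, r, t=1)  # step of ~lr regardless of |grad|
```

Without the `1 - beta**t` correction, the first updates would be scaled down by roughly `1 - beta1`, which is exactly the bias the code above compensates for.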
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Define the model
def single_lstm_unit_calcs(X, c, Wxg, h, Whg, bg, Wxi, Whi, bi,
                           Wxf, Whf, bf, Wxo, Who, bo):
    g = nd.tanh(nd.dot(X, Wxg) + nd.dot(h, Whg) + bg)
    i = nd.sigmoid(nd.dot(X, Wxi) + nd.dot(h, Whi) + bi)
    f = nd.sigmoid(nd.dot(X, Wxf) + nd.dot(h, Whf) + bf)
    o = nd.sigmoid(nd.dot(X, Wxo) + nd.dot(h, Who) + bo)
    c = f * c + i * g
    h = o * nd.tanh(c)
    return c, h

def deep_lstm_rnn(inputs, h, c, temperature=1.0):
    """
    h: dict of nd.arrays; each key is the index of a hidden layer (from 1 up).
    Index 0, if any, is the input layer.
    """
    outputs = []
    # inputs is one batch of sequences, of shape (number_of_seq, seq_length, features_dim);
    # the last dim is 1 for a single time series, vocab_size for characters,
    # n for n different time series
    for X in inputs:
        # X is a batch for one time stamp: e.g. with 37 sequences per batch,
        # the first X holds the first value of each of the 37 sequences.
        # Each iteration over X therefore advances one time stamp, in batches
        # of different sequences.
        h[0] = X  # the first hidden layer takes the input X as input
        for i_layer in range(1, num_hidden_layers + 1):
            # each LSTM unit now has the 2 following inputs:
            #  i) h_t from the previous layer (equivalent to the input X for a non-deep LSTM net)
            # ii) h_{t-1} from the current layer (same as for non-deep LSTM nets)
            c[i_layer], h[i_layer] = single_lstm_unit_calcs(
                h[i_layer - 1], c[i_layer],
                Wxg[i_layer], h[i_layer], Whg[i_layer], bg[i_layer],
                Wxi[i_layer], Whi[i_layer], bi[i_layer],
                Wxf[i_layer], Whf[i_layer], bf[i_layer],
                Wxo[i_layer], Who[i_layer], bo[i_layer])
        yhat_linear = nd.dot(h[num_hidden_layers], Why) + by
        # yhat is a batch of several values of the same time stamp: the predicted
        # sequence overlaps most of the input sequence, plus one new point.
        # We can't use a 1.0-bounded activation (softmax/sigmoid/tanh) since
        # amplitudes can be greater than 1.0, so the output stays linear.
        yhat = yhat_linear
        outputs.append(yhat)
    # outputs has the same shape as inputs, i.e. a list of batches of data points
    return (outputs, h, c)
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Test and visualize predictions
def test_prediction(one_input_seq, one_label_seq, temperature=1.0):
    # set the initial state of the hidden representation (h_0) to the zero vector
    # (some better initialization needed??)
    h, c = {}, {}
    for i_layer in range(1, num_hidden_layers + 1):
        h[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)
        c[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)
    outputs, h, c = deep_lstm_rnn(one_input_seq, h, c, temperature=temperature)
    loss = rmse(outputs[-1][0], one_label_seq)
    return outputs[-1][0].asnumpy()[-1], one_label_seq.asnumpy()[-1], loss.asnumpy()[-1], outputs, one_label_seq

def check_prediction(index):
    o, label, loss, outputs, labels = test_prediction(test_data_inputs[index],
                                                      test_data_labels[index],
                                                      temperature=1.0)
    prediction = round(o, 3)
    true_label = round(label, 3)
    outputs = [float(i.asnumpy().flatten()) for i in outputs]
    true_labels = list(test_data_labels[index].asnumpy().flatten())
    df = pd.DataFrame([outputs, true_labels]).transpose()
    df.columns = ['predicted', 'true']
    rel_error = round(100. * (prediction / true_label - 1.0), 2)
    return df

epochs = 48  # at some point NaNs appear in Adam's M, R matrices; TODO investigate why
moving_loss = 0.
learning_rate = 0.001  # 0.1 works for a [8, 8] net after about 70 epochs of 32-sized batches

# Adam optimizer state
beta1 = .9
beta2 = .999
index_adam_call = 0
# M & R arrays keep track of the momenta in the Adam optimizer;
# params is a list that contains all ndarrays of parameters
M = {k: nd.zeros_like(v) for k, v in enumerate(params)}
R = {k: nd.zeros_like(v) for k, v in enumerate(params)}

df_moving_loss = pd.DataFrame(columns=['Loss', 'Error'])
df_moving_loss.index.name = 'Epoch'

# needed to update plots on the fly
%matplotlib notebook
fig, axes_fig1 = plt.subplots(1, 1, figsize=(6, 3))
fig2, axes_fig2 = plt.subplots(1, 1, figsize=(6, 3))

for e in range(epochs):
    # attenuate the learning rate by a factor of 2 every 80 epochs
    if (e + 1) % 80 == 0:
        learning_rate = learning_rate / 2.0  # TODO check if it's OK to adjust learning_rate with Adam
    h, c = {}, {}
    for i_layer in range(1, num_hidden_layers + 1):
        h[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)
        c[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)
    for i in range(num_batches_train):
        data_one_hot = train_data_inputs[i]
        label_one_hot = train_data_labels[i]
        with autograd.record():
            outputs, h, c = deep_lstm_rnn(data_one_hot, h, c)
            loss = average_rmse_loss(outputs, label_one_hot)
            loss.backward()
        index_adam_call += 1  # needed for bias correction in the Adam optimizer
        params, M, R = adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, 1e-8)
        # keep a moving average of the losses
        if (i == 0) and (e == 0):
            moving_loss = nd.mean(loss).asscalar()
        else:
            moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
    df_moving_loss.loc[e] = round(moving_loss, 4)

    # predictions and plots
    data_prediction_df = check_prediction(index=e)
    axes_fig1.clear()
    data_prediction_df.plot(ax=axes_fig1)
    fig.canvas.draw()
    prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1], 3)
    true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3)
    rel_error = round(100. * np.abs(prediction / true_label - 1.0), 2)
    print("Epoch = {0} | Loss = {1} | Prediction = {2} True = {3} Error = {4}".format(
        e, moving_loss, prediction, true_label, rel_error))
    axes_fig2.clear()
    if e == 0:
        moving_rel_error = rel_error
    else:
        moving_rel_error = .9 * moving_rel_error + .1 * rel_error
    df_moving_loss.loc[e, ['Error']] = moving_rel_error
    axes_loss_plot = df_moving_loss.plot(ax=axes_fig2, secondary_y='Loss', color=['r', 'b'])
    axes_loss_plot.right_ax.grid(False)
    fig2.canvas.draw()

%matplotlib inline
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Initial point method The point method of determining enthalpy of adsorption is the simplest method. It just returns the first measured point in the enthalpy curve. Depending on the data, the first point method may or may not be representative of the actual value.
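The mechanics are simple enough to sketch without pyGAPS (the loading/enthalpy arrays below are hypothetical, not taken from the example isotherms):

```python
import numpy as np

# Hypothetical measured enthalpy curve: enthalpy (kJ/mol) vs. loading (mmol/g)
loading = np.array([0.05, 0.12, 0.30, 0.55, 0.80])
enthalpy = np.array([28.4, 27.1, 25.6, 24.0, 22.9])

# The point method simply takes the enthalpy at the lowest measured loading
# as the estimate of the initial (zero-coverage) enthalpy of adsorption.
initial_enthalpy = enthalpy[np.argmin(loading)]
print(initial_enthalpy)  # 28.4
```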
import matplotlib.pyplot as plt

# Initial point method
isotherm = next(i for i in isotherms_calorimetry if i.material == 'HKUST-1(Cu)')
res = pgc.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)
plt.show()

isotherm = next(i for i in isotherms_calorimetry if i.material == 'Takeda 5A')
res = pgc.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)
plt.show()
docs/examples/initial_enthalpy.ipynb
pauliacomi/pyGAPS
mit
Compound model method This method attempts to model the enthalpy curve by the superposition of several contributions. It is slower, as it runs a constrained minimisation algorithm with several initial starting guesses, then selects the optimal one.
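The multi-start idea behind this method can be sketched in plain numpy (toy objective, analytic gradient, and step size are all assumed for illustration; this is not pyGAPS's actual model):

```python
import numpy as np

# Toy objective with several local minima in [-4, 4]
def objective(x):
    return np.sin(3 * x) + 0.1 * x ** 2

def grad(x):  # analytic gradient of the toy objective
    return 3 * np.cos(3 * x) + 0.2 * x

def local_minimize(x0, lr=0.01, n_iter=2000):
    # simple gradient descent standing in for the constrained minimiser
    x = x0
    for _ in range(n_iter):
        x -= lr * grad(x)
    return x

# Multi-start strategy: run the same local optimiser from several initial
# guesses and keep the best result
starts = [-3.0, -1.0, 0.5, 2.0]
candidates = [local_minimize(s) for s in starts]
best = min(candidates, key=objective)
print(best, objective(best))
```

Each start may converge to a different local minimum; selecting the lowest candidate is what makes the procedure robust (and slow).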
# Modelling method
isotherm = next(i for i in isotherms_calorimetry if i.material == 'HKUST-1(Cu)')
res = pgc.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)
plt.show()

isotherm = next(i for i in isotherms_calorimetry if i.material == 'Takeda 5A')
res = pgc.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)
plt.show()
docs/examples/initial_enthalpy.ipynb
pauliacomi/pyGAPS
mit
Now we must define the function that we want to minimize
def distances(x, y):
    '''Return the distances between a candidate location and the demand locations.
    Input: two 2D numpy arrays
    Output: distances between locations'''
    x_rp = np.repeat(x, x_n.shape[0], 0).reshape(-1, 1)
    y_rp = np.repeat(y, x_n.shape[0], 0).reshape(-1, 1)
    dist_x = (x_rp - x_n[:, :1])**2
    dist_y = (y_rp - x_n[:, 1:2])**2
    return np.sqrt(dist_x + dist_y).reshape((-1, 1))

def cost_function(x_0):
    '''Calculate the total transport cost for a depot/distribution center
    located at x_0.
    Input: 2D numpy array
    Output: total cost'''
    x = np.array([[x_0[0, 0]]])
    y = np.array([[x_0[0, 1]]])
    dist = distances(x, y)
    dist_costo = quantities * costs * dist
    return np.sum(dist_costo)
Center of Gravity with JAX.ipynb
jomavera/Work
mit
With the defined function we can calculate the gradient with JAX
gradient_funcion = jit(grad(cost_function))  # jit (just-in-time) compilation makes evaluating the gradient faster
Center of Gravity with JAX.ipynb
jomavera/Work
mit
Now let's define the procedure to apply gradient descent or Newton's method
def optimize(funtion_opt, grad_fun, x_0, method, n_iter):
    '''Input:
        funtion_opt: function to minimize
        grad_fun: gradient of the function to minimize
        x_0: initial 2D coordinates of the depot/distribution center
        method: minimization method to use
        n_iter: number of iterations of the method
    --------------
    Output:
        xs: list of x coordinates for each iteration
        ys: list of y coordinates for each iteration
        fs: list of costs for each iteration'''
    # create empty lists to fill with iteration values
    xs = []
    ys = []
    fs = []
    # add the initial location
    xs.append(x_0[0, 0])
    ys.append(x_0[0, 1])
    fs.append(cost_function(x_0))
    for i in range(n_iter):
        if method == 'newton':
            loss_val = funtion_opt(x_0)
            loss_vec = np.array([[loss_val, loss_val]])
            x_0 -= 0.005 * loss_vec / grad_fun(x_0)
        elif method == 'grad_desc':
            step = 0.0001 * grad_fun(x_0)
            x_0 -= step
        xs.append(x_0[0, 0])
        ys.append(x_0[0, 1])
        fs.append(cost_function(x_0))
    return xs, ys, fs
Center of Gravity with JAX.ipynb
jomavera/Work
mit
Let's minimize with gradient descent
# Initial location of the depot/distribution center
x0 = np.array([[4.0, -84.0]])
print("Initial Cost: {:0.2f}".format(cost_function(x0)))
xs, ys, fs = optimize(cost_function, gradient_funcion, x0, 'grad_desc', 100)
print("Final Cost: {:0.2f}".format(fs[-1]))
Center of Gravity with JAX.ipynb
jomavera/Work
mit
Now let's plot the trajectory of the optimization procedure.
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt

# We must modify how we feed the input to the cost function
# to plot it over a grid of x and y coordinates
def cost_function_2(x, y):
    dist = distances(x, y)
    dist_costo = quantities * costs * dist
    return np.sum(dist_costo)

FIGSIZE = (9, 7)
xs = np.array(xs).reshape(-1,)
ys = np.array(ys).reshape(-1,)
fs = np.array(fs)
X, Y = np2.meshgrid(np2.linspace(-5., 5., 50), np2.linspace(-84., -74., 50))
func_vec = np2.vectorize(cost_function_2)
f = func_vec(X, Y)
indices = (slice(None, None, 4), slice(None, None, 4))

fig = plt.figure(figsize=FIGSIZE)
ax = plt.axes(projection='3d', azim=10, elev=10)
ax.plot_surface(X, Y, f, shade=True, linewidth=2, antialiased=True, alpha=0.5)
ax.plot3D(xs, ys, fs, color='black', lw=4)
Center of Gravity with JAX.ipynb
jomavera/Work
mit
Define a year as a "Superman year" whose films feature more Superman characters than Batman. How many years in film history have been Superman years?
c = cast
c = c[(c.character == 'Superman') | (c.character == 'Batman')]
c = c.groupby(['year', 'character']).size()
c = c.unstack()
c = c.fillna(0)
c.head()

d = c.Superman - c.Batman
print('Superman years:')
print(len(d[d > 0.0]))
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
How many years have been "Batman years", with more Batman characters than Superman characters?
d = c.Superman - c.Batman
print('Batman years:')
print(len(d[d < 0.0]))
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Plot the number of actor roles each year and the number of actress roles each year over the history of film.
c = cast
c = c.groupby(['year', 'type']).size()
c = c.unstack()
c = c.fillna(0)
c.plot()
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Plot the number of actor roles each year and the number of actress roles each year, but this time as a kind='area' plot.
c.plot(kind='area')
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Plot the difference between the number of actor roles each year and the number of actress roles each year over the history of film.
c = cast
c = c.groupby(['year', 'type']).size()
c = c.unstack('type')
(c.actor - c.actress).plot()
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Plot the fraction of roles that have been 'actor' roles each year in the history of film.
(c.actor / (c.actor + c.actress)).plot(ylim=[0, 1])
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Plot the fraction of supporting (n=2) roles that have been 'actor' roles each year in the history of film.
c = cast[cast["n"] == 2]
c = c.groupby(['year', 'type']).size()
c = c.unstack('type')
(c.actor / (c.actor + c.actress)).plot(ylim=[0, 1])
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
Build a plot with a line for each rank n=1 through n=3, where the line shows what fraction of that rank's roles were 'actor' roles for each year in the history of film.
c = cast
c = c[c.n <= 3]
c = c.groupby(['year', 'type', 'n']).size()
c = c.unstack('type')
r = c.actor / (c.actor + c.actress)
r = r.unstack('n')
r.plot(ylim=[0, 1])
Exercises-4.ipynb
climberwb/pycon-pandas-tutorial
mit
By doing this we get a few variables initialized. First, a symmetric transition count matrix, $\mathbf{N}$, where we see that the most frequent transitions are those within metastable states (corresponding to the terms in the diagonal $N_{ii}$). Non-diagonal transitions are much less frequent (i.e. $N_{ij}<<N_{ii}$ for all $i\neq j$). Then we get the transition matrix $\mathbf{T}$, whose diagonal elements are close to 1, as corresponds to a system with high metastability (i.e. high probability of the system remaining where it was). We can also construct a rate matrix, $\mathbf{K}$. From it we obtain eigenvalues ($\lambda_i$) and corresponding eigenvectors ($\Psi_i$). The latter allow for estimating equilibrium probabilities (note that $U$ and $F$ have the largest populations). The eigenvalues are sorted by value, with the first eigenvalue ($\lambda_0$) being zero, as corresponds to a system with a unique stationary distribution. All other eigenvalues are negative, and they are characteristic of a two-state like system as there is a considerable time-scale separation between the slowest mode ($\lambda_1$, corresponding to a relaxation time of $\tau_1=-1/\lambda_1$) and the other two ($\lambda_2$ and $\lambda_3$), as shown below.
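The relation $\tau_i=-1/\lambda_i$ is easy to verify on a minimal rate matrix (the two-state rates below are assumed for illustration, not taken from the four-state model):

```python
import numpy as np

# Hypothetical 2-state rate matrix K (columns sum to zero), with assumed
# rates a = k(0 -> 1) and b = k(1 -> 0)
a, b = 0.2, 0.5
K = np.array([[-a,  b],
              [ a, -b]])

# eigenvalues sorted in decreasing order: 0 first, then -(a + b)
evals = np.sort(np.linalg.eigvals(K).real)[::-1]
tau_1 = -1.0 / evals[1]   # relaxation time of the slow mode
print(evals, tau_1)       # tau_1 = 1/(a + b)
```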
fig, ax = plt.subplots()
ax.bar([0.5, 1.5, 2.5], -1. / bhs.evals[1:], width=1)
ax.set_xlabel(r'Eigenvalue', fontsize=16)
ax.set_ylabel(r'$\tau_i$', fontsize=18)
ax.set_xlim([0, 4])
plt.show()
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
Committors and fluxes Next we calculate the committors and fluxes for this four state model. For this we define two end states, so that we estimate the flux between folded ($F$) and unfolded ($U$). The values of the committor or $p_{fold}$ are defined to be 1 and 0 for $U$ and $F$, respectively, and using the Berezhkovskii-Hummer-Szabo (BHS) method we calculate the committors for the rest of the states.
bhs.run_commit()
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
We also obtain the flux matrix, $\mathbf{J}$, containing local fluxes ($J_{ji}=J_{i\rightarrow j}$) for the different edges in the network. The signs represent the direction of the transition: positive for those fluxes going from low to high $p_{fold}$ and negative for those going from high to low $p_{fold}$. For example, for intermediate $I_1$ (second column) we see that the transitions to $I_2$ and $F$ have a positive flux (i.e. flux goes from low to high $p_{fold}$). A property of flux conservation that must be fulfilled is that the flux into one state is the same as the flux out of that state, $J_j=\sum_{p_{fold}(i)<p_{fold}(j)}J_{i\rightarrow j}=\sum_{p_{fold}(i)>p_{fold}(j)}J_{j\rightarrow i}$. We check for this property for states $I_1$ and $I_2$.
print " j   J_j(<-)    J_j(->)"
print " -  --------   --------"
for i in [1, 2]:
    print "%2i %10.4e %10.4e" % (i,
        np.sum([bhs.J[i, x] for x in range(4) if bhs.pfold[x] < bhs.pfold[i]]),
        np.sum([bhs.J[x, i] for x in range(4) if bhs.pfold[x] > bhs.pfold[i]]))
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
Paths through the network Another important bit in transition path theory is the possibility of identifying paths through the network. The advantage of a simple case like the one we are looking at is that we can enumerate all those paths and check how much flux each of them carry. For example, the contribution of one given path $U\rightarrow I_1\rightarrow I_2\rightarrow F$ to the total flux is given by $J_{U\rightarrow I_1\rightarrow I_2\rightarrow F}=J_{U \rightarrow I_1}(J_{I_1 \rightarrow I_2}/J_{I_1})(J_{I_2 \rightarrow F}/J_{I_2})$. In the BHS paper, simple rules are defined for calculating the length of a given edge in the network. These rules are implemented in the gen_path_lengths function.
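The product rule for a single path's flux contribution can be checked with hypothetical numbers (the edge and node fluxes below are made up for illustration, not taken from this network):

```python
# Hypothetical local fluxes along U -> I1 -> I2 -> F, plus the total flux
# J_I1, J_I2 entering/leaving each intermediate (flux conservation: in = out)
J_U_I1, J_I1_I2, J_I2_F = 4.0e-4, 1.5e-4, 1.0e-4   # assumed edge fluxes
J_I1, J_I2 = 5.0e-4, 2.5e-4                        # assumed node fluxes

# Contribution of this path: enter I1, take the I1 -> I2 branch with
# probability J_I1_I2 / J_I1, then the I2 -> F branch with J_I2_F / J_I2
J_path = J_U_I1 * (J_I1_I2 / J_I1) * (J_I2_F / J_I2)
print(J_path)  # ~4.8e-05
```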
import tpt_functions

Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold,
                                              bhs.sum_flux, [3], [0])
JpathG = nx.DiGraph(Jpath.transpose())
print Jnode
print Jpath
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
We can exhaustively enumerate the paths and check whether the fluxes add up to the total flux.
tot_flux = 0
paths = {}
k = 0
for path in nx.all_simple_paths(JpathG, 0, 3):
    paths[k] = {}
    paths[k]['path'] = path
    f = bhs.J[path[1], path[0]]
    print "%2i -> %2i: %10.4e " % (path[0], path[1], bhs.J[path[1], path[0]])
    for i in range(2, len(path)):
        print "%2i -> %2i: %10.4e %10.4e" % (path[i-1], path[i],
                                             bhs.J[path[i], path[i-1]], Jnode[path[i-1]])
        f *= bhs.J[path[i], path[i-1]] / Jnode[path[i-1]]
    tot_flux += f
    paths[k]['flux'] = f
    print " J(path) = %10.4e" % f
    print
    k += 1
print " Cumulative flux: %10.4e" % tot_flux
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
So indeed the cumulative flux is equal to the total flux we estimated before. Below we print the paths sorted by the flux they carry.
sorted_paths = sorted(paths.items(), key=operator.itemgetter(1))
sorted_paths.reverse()
k = 1
for path in sorted_paths:
    print k, ':', path[1]['path'], ':', 'flux = %g' % path[1]['flux']
    k += 1
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
Highest flux paths One of the great things of using TPT is that it allows for visualizing the highest flux paths. In general we cannot just enumerate all the paths, so we resort to Dijkstra's algorithm to find the highest flux path. The problem with this is that the algorithm does not find the second highest flux path. So once identified, we must remove the flux from one path, so that the next highest flux path can be found by the algorithm. An algorithm for doing this was elegantly proposed by Metzner, Schütte and Vanden Eijnden. Now we implement it for the model system.
while True:
    Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold,
                                                  bhs.sum_flux, [3], [0])
    # generate nx graph from matrix
    JpathG = nx.DiGraph(Jpath.transpose())
    # find shortest path
    try:
        path = nx.dijkstra_path(JpathG, 0, 3)
        pathlength = nx.dijkstra_path_length(JpathG, 0, 3)
        print " shortest path:", path, pathlength
    except nx.NetworkXNoPath:
        print " No path for %g -> %g\n Stopping here" % (0, 3)
        break
    # calculate contribution to flux
    f = bhs.J[path[1], path[0]]
    print "%2i -> %2i: %10.4e " % (path[0], path[1], bhs.J[path[1], path[0]])
    path_fluxes = [f]
    for j in range(2, len(path)):
        i = j - 1
        print "%2i -> %2i: %10.4e %10.4e" % (path[i], path[j],
                                             bhs.J[path[j], path[i]],
                                             bhs.J[path[j], path[i]] / Jnode[path[i]])
        f *= bhs.J[path[j], path[i]] / Jnode[path[i]]
        path_fluxes.append(bhs.J[path[j], path[i]])
    # find bottleneck
    ib = np.argmin(path_fluxes)
    print "bottleneck: %2i -> %2i" % (path[ib], path[ib+1])
    # remove flux from edges
    for j in range(1, len(path)):
        i = j - 1
        bhs.J[path[j], path[i]] -= f
    # numerically there may be some leftover flux in the bottleneck
    bhs.J[path[ib+1], path[ib]] = 0.
    bhs.sum_flux -= f
    print ' flux from path ', path, ': %10.4e' % f
    print ' fluxes', path_fluxes
    print ' leftover flux: %10.4e\n' % bhs.sum_flux
example/fourstate/fourstate_tpt.ipynb
daviddesancho/BestMSM
gpl-2.0
Under the hood: Inferring chlorophyll distribution

~~Grid approximation: computing probability everywhere~~
<font color='red'>Magical MCMC: Dealing with computational complexity</font>
Probabilistic Programming with PyMC3: Industrial grade MCMC

Back to Contents <a id="MCMC"></a>

Magical MCMC: Dealing with computational complexity

Grid approximation:
- useful for understanding the mechanics of Bayesian computation
- computationally intensive
- impractical and often intractable for large data sets or high-dimension models

MCMC allows sampling <u>where it probabilistically matters</u>:
- compute the current probability given the location in parameter space
- propose a jump to a new location in parameter space
- compute the new probability at the proposed location
- jump to the new location if $\frac{new\ probability}{current\ probability}>1$
- otherwise, draw $\gamma$ uniformly from $[0, 1]$ and jump if $\frac{new\ probability}{current\ probability}>\gamma$
- otherwise stay in the current location
def mcmc(data, μ_0=0.5, n_samples=1000):
    print(f'{data.size} data points')
    data = data.reshape(1, -1)
    # set priors
    σ = 0.75  # keep σ fixed for simplicity
    trace_μ = np.nan * np.ones(n_samples)  # trace: where the sampler has been
    trace_μ[0] = μ_0  # start with a first guess
    for i in range(1, n_samples):
        proposed_μ = norm.rvs(loc=trace_μ[i-1], scale=0.1, size=1)
        prop_par_dict = dict(μ=proposed_μ, σ=σ)
        curr_par_dict = dict(μ=trace_μ[i-1], σ=σ)
        log_prob_prop = get_log_lik(data, prop_par_dict) + get_log_prior(prop_par_dict)
        log_prob_curr = get_log_lik(data, curr_par_dict) + get_log_prior(curr_par_dict)
        ratio = np.exp(log_prob_prop - log_prob_curr)
        if ratio > 1:
            # accept proposal
            trace_μ[i] = proposed_μ
        else:
            # evaluate low-probability proposal
            if uniform.rvs(size=1, loc=0, scale=1) > ratio:
                # reject proposal
                trace_μ[i] = trace_μ[i-1]
            else:
                # accept proposal
                trace_μ[i] = proposed_μ
    return trace_μ

def get_log_lik(data, param_dict):
    return np.sum(norm.logpdf(data, loc=param_dict['μ'], scale=param_dict['σ']), axis=1)

def get_log_prior(par_dict, loc=1, scale=1):
    return norm.logpdf(par_dict['μ'], loc=loc, scale=scale)
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
Timing MCMC
%%time
mcmc_n_samples = 2000
trace1 = mcmc(data=df_data_s.chl_l.values, n_samples=mcmc_n_samples)

f, ax = pl.subplots(nrows=2, figsize=(8, 8))
ax[0].plot(np.arange(mcmc_n_samples), trace1, marker='.', ls=':', color='k')
ax[0].set_title('trace of μ, 500 data points')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(trace1, ax=ax[1], label='mcmc', color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
df_μ = df_grid_3.groupby(['μ']).sum().drop('σ', axis=1)[['post_prob']].reset_index()
ax2 = ax[1].twinx()
df_μ.plot(x='μ', y='post_prob', ax=ax2, color='k', label='grid')
ax2.set_ylim(bottom=0)
ax2.legend(loc='upper right')
f.tight_layout()
f.savefig('./figJar/Presentation/mcmc_1.svg')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<img src='./resources/mcmc_1.svg?modified="1"'>
%%time
samples = 2000
trace2 = mcmc(data=df_data.chl_l.values, n_samples=samples)

f, ax = pl.subplots(nrows=2, figsize=(8, 8))
ax[0].plot(np.arange(samples), trace2, marker='.', ls=':', color='k')
ax[0].set_title(f'trace of μ, {df_data.chl_l.size} data points')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(trace2, ax=ax[1], label='mcmc', color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
f.tight_layout()
f.savefig('./figJar/Presentation/mcmc_2.svg')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<img src='./figJar/Presentation/mcmc_2.svg?modified=2'>
f, ax = pl.subplots(ncols=2, figsize=(12, 5))
ax[0].stem(pm.autocorr(trace1[1500:]))
ax[1].stem(pm.autocorr(trace2[1500:]))
ax[0].set_title(f'{df_data_s.chl_l.size} data points')
ax[1].set_title(f'{df_data.chl_l.size} data points')
f.suptitle('trace autocorrelation', fontsize=19)
f.savefig('./figJar/Presentation/grid8.svg')

f, ax = pl.subplots(nrows=2, figsize=(8, 8))
thinned_trace = np.random.choice(trace2[100:], size=200, replace=False)
ax[0].plot(np.arange(200), thinned_trace, marker='.', ls=':', color='k')
ax[0].set_title('thinned trace of μ')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(thinned_trace, ax=ax[1], label='mcmc', color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
f.tight_layout()
f.savefig('./figJar/Presentation/grid9.svg')

f, ax = pl.subplots()
ax.stem(pm.autocorr(thinned_trace[:20]))
f.savefig('./figJar/Presentation/stem2.svg', dpi=300, format='svg')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
What's going on?

Highly autocorrelated trace: <br>
$\rightarrow$ inadequate parameter space exploration<br>
$\rightarrow$ poor convergence...

Metropolis MCMC<br>
$\rightarrow$ easy to implement + memory efficient<br>
$\rightarrow$ inefficient parameter space exploration<br>
$\rightarrow$ better MCMC sampler?

Hamiltonian Monte Carlo (HMC)
- greatly improved convergence
- well-mixed traces are a signature and an easy diagnostic
- HMC does require a lot of tuning, not practical for the inexperienced applied statistician or scientist

No-U-Turn Sampler (NUTS)
- HMC that automates most tuning steps
- scales well to complex problems with many parameters (1000's)
- implemented in popular libraries

Probabilistic modeling for the beginner
<font color='red'>Under the hood: Inferring chlorophyll distribution</font>
~~Grid approximation: computing probability everywhere~~
~~MCMC: how it works~~
<font color='red'>Probabilistic Programming with PyMC3: Industrial grade MCMC </font>

Back to Contents <a id='PyMC3'></a>

<u>Probabilistic Programming with PyMC3</u>
- relatively simple syntax
- easily used in conjunction with mainstream python scientific data structures<br>
$\rightarrow$ numpy arrays<br>
$\rightarrow$ pandas dataframes
- models of reasonable complexity span ~10-20 lines.
with pm.Model() as m1:
    μ_ = pm.Normal('μ', mu=1, sd=1)
    σ = pm.Uniform('σ', lower=0, upper=2)
    lkl = pm.Normal('likelihood', mu=μ_, sd=σ, observed=df_data.chl_l.dropna())

graph_m1 = pm.model_to_graphviz(m1)
graph_m1.format = 'svg'
graph_m1.render('./figJar/Presentation/graph_m1')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<center> <img src="./resources/graph_m1.svg"/> </center>
with m1:
    trace_m1 = pm.sample(2000, tune=1000, chains=4)
pm.traceplot(trace_m1)
ar.plot_posterior(trace_m1, kind='hist', round_to=2)
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
Back to Contents <a id='Reg'></a> <u><font color='purple'>Tutorial Overview:</font></u> Probabilistic modeling for the beginner<br> $\rightarrow$~~The basics~~<br> $\rightarrow$~~Starting easy: inferring chlorophyll~~<br> <font color='red'>$\rightarrow$Regression: adding a predictor to estimate chlorophyll</font> Back to Contents <a id='DataPrep'></a> Regression: Adding a predictor to estimate chlorophyll <font color=red>Data preparation</font> Writing a regression model in PyMC3 Are my priors making sense? Model fitting Flavors of uncertainty Linear regression takes the form $$ y = \alpha + \beta x $$ where $$\ \ \ \ \ y = log_{10}(chl)$$ and $$x = log_{10}\left(\frac{Gr}{MxBl}\right)$$
df_data.head().T
df_data['Gr-MxBl'] = -1 * df_data['MxBl-Gr']
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
Regression coefficients are easier to interpret with a centered predictor:<br><br> $$x_c = x - \bar{x}$$
df_data['Gr-MxBl_c'] = df_data['Gr-MxBl'] - df_data['Gr-MxBl'].mean()
df_data[['Gr-MxBl_c', 'chl_l']].info()
x_c = df_data.dropna()['Gr-MxBl_c'].values
y = df_data.dropna().chl_l.values
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
$$ y = \alpha + \beta x_c$$<br> $\rightarrow \alpha=y$ when $x=\bar{x}$<br> $\rightarrow \beta=\Delta y$ when $x$ increases by one unit
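A quick way to see why centering makes the intercept interpretable: with a centered predictor, the least-squares intercept equals the sample mean of $y$. A minimal sketch on synthetic data (the coefficients and noise level below are arbitrary, not fitted to the chlorophyll data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.3 + 1.2 * x + rng.normal(scale=0.1, size=200)

x_c = x - x.mean()                   # centered predictor
beta, alpha = np.polyfit(x_c, y, 1)  # slope, intercept
# With x centered, the OLS intercept is the mean of y:
print(np.isclose(alpha, y.mean()))   # True
```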
g3 = sb.PairGrid(df_data.loc[:, ['Gr-MxBl_c', 'chl_l']], height=3, diag_sharey=False)
g3.map_diag(sb.kdeplot, color='k')
g3.map_offdiag(sb.scatterplot, color='k')
make_lower_triangle(g3)
f = pl.gcf()
axs = f.get_axes()
xlabel = r'$log_{10}\left(\frac{Rrs_{green}}{max(Rrs_{blue})}\right), centered$'
ylabel = r'$log_{10}(chl)$'
axs[0].set_xlabel(xlabel)
axs[2].set_xlabel(xlabel)
axs[2].set_ylabel(ylabel)
axs[3].set_xlabel(ylabel)
f.tight_layout()
f.savefig('./figJar/Presentation/pairwise_1.png')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
Back to Contents <a id='RegPyMC3'></a> Regression: Adding a predictor to estimate chlorophyll ~~Data preparation~~ <font color=red>Writing a regression model in PyMC3</font> Are my priors making sense? Model fitting Flavors of uncertainty
with pm.Model() as m_vague_prior:
    # priors
    σ = pm.Uniform('σ', lower=0, upper=2)
    α = pm.Normal('α', mu=0, sd=1)
    β = pm.Normal('β', mu=0, sd=1)
    # deterministic model
    μ = α + β * x_c
    # likelihood
    chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<center> <img src="./resources/m_vague_graph.svg"/> </center> Back to Contents <a id='PriorCheck'></a> Regression: Adding a predictor to estimate chlorophyll ~~Data preparation~~ ~~Writing a regression model in PyMC3~~ <font color=red>Are my priors making sense?</font> Model fitting Flavors of uncertainty
vague_priors = pm.sample_prior_predictive(samples=500, model=m_vague_prior, vars=['α', 'β'])
x_dummy = np.linspace(-1.5, 1.5, num=50).reshape(-1, 1)
α_prior_vague = vague_priors['α'].reshape(1, -1)
β_prior_vague = vague_priors['β'].reshape(1, -1)
chl_l_prior_μ_vague = α_prior_vague + β_prior_vague * x_dummy

f, ax = pl.subplots(figsize=(6, 5))
ax.plot(x_dummy, chl_l_prior_μ_vague, color='k', alpha=0.1)
ax.set_xlabel(r'$log_{10}\left(\frac{green}{max(blue)}\right)$, centered')
ax.set_ylabel('$log_{10}(chl)$')
ax.set_title('Vague priors')
ax.set_ylim(-3.5, 3.5)
f.tight_layout(pad=1)
f.savefig('./figJar/Presentation/prior_checks_1.png')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<center> <img src='./figJar/Presentation/prior_checks_1.png?modified=3' width=65%> </center>
with pm.Model() as m_informative_prior:
    α = pm.Normal('α', mu=0, sd=0.2)
    β = pm.Normal('β', mu=0, sd=0.5)
    σ = pm.Uniform('σ', lower=0, upper=2)
    μ = α + β * x_c
    chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)

prior_info = pm.sample_prior_predictive(model=m_informative_prior, vars=['α', 'β'])
α_prior_info = prior_info['α'].reshape(1, -1)
β_prior_info = prior_info['β'].reshape(1, -1)
chl_l_prior_info = α_prior_info + β_prior_info * x_dummy

f, ax = pl.subplots(figsize=(6, 5))
ax.plot(x_dummy, chl_l_prior_info, color='k', alpha=0.1)
ax.set_xlabel(r'$log_{10}\left(\frac{green}{max(blue)}\right)$, centered')
ax.set_ylabel('$log_{10}(chl)$')
ax.set_title('Weakly informative priors')
ax.set_ylim(-3.5, 3.5)
f.tight_layout(pad=1)
f.savefig('./figJar/Presentation/prior_checks_2.png')
posts/a-bayesian-tutorial-in-python-part-II.ipynb
madHatter106/DataScienceCorner
mit
<table> <tr> <td> <img src='./resources/prior_checks_1.png?modif=1' /> </td> <td> <img src='./resources/prior_checks_2.png?modif=2' /> </td> </tr> </table> Back to Contents <a id='Mining'></a> Regression: Adding a predictor to estimate chlorophyll ~~Data preparation~~ ~~Writing a regression model in PyMC3~~ ~~Are my priors making sense?~~ <font color=red>Model fitting</font> Flavors of uncertainty
with m_vague_prior:
    trace_vague = pm.sample(2000, tune=1000, chains=4)
with m_informative_prior:
    trace_inf = pm.sample(2000, tune=1000, chains=4)

f, axs = pl.subplots(ncols=2, nrows=2, figsize=(12, 7))
ar.plot_posterior(trace_vague, var_names=['α', 'β'], round_to=2,
                  ax=axs[0, :], kind='hist')
ar.plot_posterior(trace_inf, var_names=['α', 'β'], round_to=2,
                  ax=axs[1, :], kind='hist', color='brown')
axs[0, 0].tick_params(rotation=20)
axs[0, 0].text(-0.137, 430, 'vague priors', fontdict={'fontsize': 15})
axs[1, 0].tick_params(rotation=20)
axs[1, 0].text(-0.137, 430, 'informative priors', fontdict={'fontsize': 15})
f.tight_layout()
f.savefig('./figJar/Presentation/reg_posteriors.svg')
<center> <img src='./resources/reg_posteriors.svg'/> </center> Back to Contents <a id='UNC'></a> Regression: Adding a predictor to estimate chlorophyll ~~Data preparation~~ ~~Writing a regression model in PyMC3~~ ~~Are my priors making sense?~~ ~~Data review and model fitting~~ <font color=red>Flavors of uncertainty</font> Two types of uncertainties: 1. model uncertainty 2. prediction uncertainty
α_posterior = trace_inf.get_values('α').reshape(1, -1)
β_posterior = trace_inf.get_values('β').reshape(1, -1)
σ_posterior = trace_inf.get_values('σ').reshape(1, -1)
model uncertainty: uncertainty around the model mean
μ_posterior = α_posterior + β_posterior * x_dummy

pl.plot(x_dummy, μ_posterior[:, ::16], color='k', alpha=0.1)
pl.plot(x_dummy, μ_posterior[:, 1], color='k', label='model mean')
pl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs')
pl.legend()
pl.ylim(-2.5, 2.5)
pl.xlim(-1, 1)
pl.xlabel(r'$log_{10}\left(\frac{Gr}{max(Blue)}\right)$')
pl.ylabel(r'$log_{10}(chlorophyll)$')
f = pl.gcf()
f.savefig('./figJar/Presentation/mu_posterior.svg')
<center> <img src='./resources/mu_posterior.svg'> </center> prediction uncertainty: posterior predictive checks
ppc = norm.rvs(loc=μ_posterior, scale=σ_posterior)
ci_94_perc = pm.hpd(ppc.T, alpha=0.06)

pl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs')
pl.plot(x_dummy, ppc.mean(axis=1), color='k', label='mean prediction')
pl.fill_between(x_dummy.flatten(), ci_94_perc[:, 0], ci_94_perc[:, 1],
                alpha=0.5, color='k',
                label='94% credibility interval:\n94% chance that prediction\nwill be in here!')
pl.xlim(-1, 1)
pl.ylim(-2.5, 2.5)
pl.legend(fontsize=12, loc='upper left')
f = pl.gcf()
f.savefig('./figJar/Presentation/ppc.svg')
Global variables are shared between cells. Try executing the cell below:
y = 2 * x
print(y)
jupyter-notebook-tutorial.ipynb
cs231n/cs231n.github.io
mit
Keyboard Shortcuts There are a few keyboard shortcuts you should be aware of to make your notebook experience more pleasant. To escape editing of a cell, press esc. Escaping a Markdown cell won't render it, so make sure to execute it if you wish to render the markdown. Notice how the highlight color switches back to blue when you have escaped a cell. You can navigate between cells by pressing your arrow keys. Executing a cell automatically shifts the cell cursor down 1 cell if one exists, or creates a new cell below the current one if none exist. To place a cell below the current one, press b. To place a cell above the current one, press a. To delete a cell, press dd. To convert a cell to Markdown, press m. Note you have to be in esc mode. To convert it back to Code, press y. Note you have to be in esc mode. Get familiar with these keyboard shortcuts, they really help! You can restart a notebook and clear all cells by clicking Kernel -> Restart & Clear Output. If you don't want to clear cell outputs, just hit Kernel -> Restart. By convention, Jupyter notebooks are expected to be run from top to bottom. Failing to execute some cells or executing cells out of order can result in errors. After restarting the notebook, try running the y = 2 * x cell 2 cells above and observe what happens. After you have modified a Jupyter notebook for one of the assignments by modifying or executing some of its cells, remember to save your changes! You can save with the Command/Control + s shortcut or by clicking File -> Save and Checkpoint. This has only been a brief introduction to Jupyter notebooks, but it should be enough to get you up and running on the assignments for this course. Python Tutorial Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing. 
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing. Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html). In this tutorial, we will cover: Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting Matplotlib: Plotting, Subplots, Images IPython: Creating notebooks, Typical workflows A Brief Note on Python Versions As of January 1, 2020, Python has officially dropped support for python2. We'll be using Python 3.7 for this iteration of the course. You should have activated your cs231n virtual environment created in the Setup Instructions before calling jupyter notebook. If that is the case, the cell below should print out a major version of 3.7.
!python --version
Basics of Python Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 8, 10, 1, 2, 1]))
Basic data types Numbers Integers and floats work as you would expect from other languages:
x = 3
print(x, type(x))

print(x + 1)   # Addition
print(x - 1)   # Subtraction
print(x * 2)   # Multiplication
print(x ** 2)  # Exponentiation

x += 1
print(x)
x *= 2
print(x)

y = 2.5
print(type(y))
print(y, y + 1, y * 2, y ** 2)
Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators. Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation. Booleans Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&amp;&amp;, ||, etc.):
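Since complex numbers and arbitrary-precision integers are mentioned but not demonstrated above, here is a small sketch (not part of the original tutorial):

```python
# Complex numbers are built in; the imaginary unit is written j.
c = 2 + 3j
print(c.real, c.imag)  # real and imaginary parts, as floats
print(abs(3 + 4j))     # magnitude: 5.0

# Python 3 ints have arbitrary precision, so large powers never overflow:
print(2 ** 100)
```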
t, f = True, False
print(type(t))
Now let's look at the operations:
print(t and f)  # Logical AND
print(t or f)   # Logical OR
print(not t)    # Logical NOT
print(t != f)   # Logical XOR
Strings
hello = 'hello'   # String literals can use single quotes
world = "world"   # or double quotes; it does not matter
print(hello, len(hello))

hw = hello + ' ' + world  # String concatenation
print(hw)

hw12 = '{} {} {}'.format(hello, world, 12)  # String formatting
print(hw12)
String objects have a bunch of useful methods; for example:
s = "hello"
print(s.capitalize())           # Capitalize a string
print(s.upper())                # Convert a string to uppercase; prints "HELLO"
print(s.rjust(7))               # Right-justify a string, padding with spaces
print(s.center(7))              # Center a string, padding with spaces
print(s.replace('l', '(ell)'))  # Replace all instances of one substring with another
print(' world '.strip())        # Strip leading and trailing whitespace
You can find a list of all string methods in the documentation. Containers Python includes several built-in container types: lists, dictionaries, sets, and tuples. Lists A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
xs = [3, 1, 2]    # Create a list
print(xs, xs[2])
print(xs[-1])     # Negative indices count from the end of the list; prints "2"

xs[2] = 'foo'     # Lists can contain elements of different types
print(xs)

xs.append('bar')  # Add a new element to the end of the list
print(xs)

x = xs.pop()      # Remove and return the last element of the list
print(x, xs)
As usual, you can find all the gory details about lists in the documentation. Slicing In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
nums = list(range(5))  # range is a built-in function that creates a list of integers
print(nums)            # Prints "[0, 1, 2, 3, 4]"
print(nums[2:4])       # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print(nums[2:])        # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print(nums[:2])        # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print(nums[:])         # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print(nums[:-1])       # Slice indices can be negative; prints "[0, 1, 2, 3]"

nums[2:4] = [8, 9]     # Assign a new sublist to a slice
print(nums)            # Prints "[0, 1, 8, 9, 4]"
Loops You can loop over the elements of a list like this:
animals = ['cat', 'dog', 'monkey']
for animal in animals:
    print(animal)
If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
    print('#{}: {}'.format(idx + 1, animal))
List comprehensions When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
    squares.append(x ** 2)
print(squares)
You can make this code simpler using a list comprehension:
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print(squares)
List comprehensions can also contain conditions:
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print(even_squares)
Dictionaries A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:
d = {'cat': 'cute', 'dog': 'furry'}  # Create a new dictionary with some data
print(d['cat'])       # Get an entry from a dictionary; prints "cute"
print('cat' in d)     # Check if a dictionary has a given key; prints "True"

d['fish'] = 'wet'     # Set an entry in a dictionary
print(d['fish'])      # Prints "wet"

# print(d['monkey'])  # KeyError: 'monkey' is not a key of d, so this would halt the cell
print(d.get('monkey', 'N/A'))  # Get an element with a default; prints "N/A"
print(d.get('fish', 'N/A'))    # Get an element with a default; prints "wet"

del d['fish']         # Remove an element from a dictionary
print(d.get('fish', 'N/A'))    # "fish" is no longer a key; prints "N/A"
You can find all you need to know about dictionaries in the documentation. It is easy to iterate over the keys and values in a dictionary:
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
    print('A {} has {} legs'.format(animal, legs))
Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print(even_num_to_square)
Sets A set is an unordered collection of distinct elements. As a simple example, consider the following:
animals = {'cat', 'dog'}
print('cat' in animals)   # Check if an element is in a set; prints "True"
print('fish' in animals)  # Prints "False"

animals.add('fish')       # Add an element to a set
print('fish' in animals)
print(len(animals))       # Number of elements in a set

animals.add('cat')        # Adding an element that is already in the set does nothing
print(len(animals))

animals.remove('cat')     # Remove an element from a set
print(len(animals))
Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
    print('#{}: {}'.format(idx + 1, animal))
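When a deterministic visiting order is needed, one option is to sort the set before iterating; a small sketch (not part of the original tutorial):

```python
animals = {'cat', 'dog', 'fish'}
# sorted() returns a list in alphabetical order, regardless of set internals
for idx, animal in enumerate(sorted(animals)):
    print('#{}: {}'.format(idx + 1, animal))
```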
Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:
from math import sqrt
print({int(sqrt(x)) for x in range(30)})
Tuples A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:
d = {(x, x + 1): x for x in range(10)}  # Create a dictionary with tuple keys
t = (5, 6)        # Create a tuple
print(type(t))
print(d[t])
print(d[(1, 2)])

# t[0] = 1  # Tuples are immutable; this would raise a TypeError
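Tuples can also serve as set elements, as mentioned above; a small sketch (not part of the original tutorial):

```python
points = {(0, 0), (1, 2), (0, 0)}  # the duplicate tuple collapses to one element
print(len(points))                 # 2
print((1, 2) in points)            # True

# A list, being mutable, is unhashable and cannot go into a set:
try:
    bad = {[1, 2]}
except TypeError as e:
    print(e)
```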
Functions Python functions are defined using the def keyword. For example:
def sign(x):
    if x > 0:
        return 'positive'
    elif x < 0:
        return 'negative'
    else:
        return 'zero'

for x in [-1, 0, 1]:
    print(sign(x))
We will often define functions to take optional keyword arguments, like this:
def hello(name, loud=False):
    if loud:
        print('HELLO, {}'.format(name.upper()))
    else:
        print('Hello, {}!'.format(name))

hello('Bob')
hello('Fred', loud=True)
Classes The syntax for defining classes in Python is straightforward:
class Greeter:

    # Constructor
    def __init__(self, name):
        self.name = name  # Create an instance variable

    # Instance method
    def greet(self, loud=False):
        if loud:
            print('HELLO, {}'.format(self.name.upper()))
        else:
            print('Hello, {}!'.format(self.name))

g = Greeter('Fred')  # Construct an instance of the Greeter class
g.greet()            # Call an instance method; prints "Hello, Fred!"
g.greet(loud=True)   # Call an instance method; prints "HELLO, FRED"
Numpy Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy. To use Numpy, we first need to import the numpy package:
import numpy as np
Arrays A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension. We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
a = np.array([1, 2, 3])  # Create a rank 1 array
print(type(a), a.shape, a[0], a[1], a[2])
a[0] = 5                 # Change an element of the array
print(a)

b = np.array([[1, 2, 3], [4, 5, 6]])  # Create a rank 2 array
print(b)
print(b.shape)
print(b[0, 0], b[0, 1], b[1, 0])
Numpy also provides many functions to create arrays:
a = np.zeros((2, 2))    # Create an array of all zeros
print(a)

b = np.ones((1, 2))     # Create an array of all ones
print(b)

c = np.full((2, 2), 7)  # Create a constant array
print(c)

d = np.eye(2)           # Create a 2x2 identity matrix
print(d)

e = np.random.random((2, 2))  # Create an array filled with random values
print(e)
Array indexing Numpy offers several ways to index into arrays. Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
import numpy as np

# Create the following rank 2 array with shape (3, 4)
# [[ 1  2  3  4]
#  [ 5  6  7  8]
#  [ 9 10 11 12]]
a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
#  [6 7]]
b = a[:2, 1:3]
print(b)
A slice of an array is a view into the same data, so modifying it will modify the original array.
print(a[0, 1])
b[0, 0] = 77  # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1])
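When this aliasing is unwanted, `.copy()` gives an independent array instead of a view; a small sketch (not part of the original tutorial):

```python
import numpy as np

a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
b = a[:2, 1:3].copy()  # an independent copy, not a view into a
b[0, 0] = 77           # modifying the copy...
print(a[0, 1])         # ...leaves the original untouched: still 2
```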
You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(a)
Two ways of accessing the data in the middle row of the array. Mixing integer indexing with slices yields an array of lower rank, while using only slices yields an array of the same rank as the original array:
row_r1 = a[1, :]    # Rank 1 view of the second row of a
row_r2 = a[1:2, :]  # Rank 2 view of the second row of a
row_r3 = a[[1], :]  # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)
print(row_r2, row_r2.shape)
print(row_r3, row_r3.shape)

# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape)
print()
print(col_r2, col_r2.shape)
Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
a = np.array([[1, 2], [3, 4], [5, 6]])

# An example of integer array indexing.
# The returned array will have shape (3,)
print(a[[0, 1, 2], [0, 1, 0]])

# The above example of integer array indexing is equivalent to this:
print(np.array([a[0, 0], a[1, 1], a[2, 0]]))

# When using integer array indexing, you can reuse the same
# element from the source array:
print(a[[0, 0], [1, 1]])

# Equivalent to the previous integer array indexing example
print(np.array([a[0, 1], a[0, 1]]))
One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
# Create a new array from which we will select elements
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
print(a)

# Create an array of indices
b = np.array([0, 2, 0, 1])

# Select one element from each row of a using the indices in b
print(a[np.arange(4), b])  # Prints "[ 1  6  7 11]"

# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a)
Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])

bool_idx = (a > 2)  # Find the elements of a that are bigger than 2;
                    # this returns a numpy array of Booleans of the same
                    # shape as a, where each slot of bool_idx tells
                    # whether that element of a is > 2.
print(bool_idx)

# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print(a[bool_idx])

# We can do all of the above in a single concise statement:
print(a[a > 2])
For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation. Datatypes Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
x = np.array([1, 2])                  # Let numpy choose the datatype
y = np.array([1.0, 2.0])              # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64)  # Force a particular datatype
print(x.dtype, y.dtype, z.dtype)
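Besides specifying the datatype at creation, an existing array can be cast afterwards with `astype`; a small sketch (not part of the original tutorial):

```python
import numpy as np

x = np.array([1, 2], dtype=np.int64)
f = x.astype(np.float64)  # cast to float; returns a new array
print(f.dtype)            # float64
print(f / 3)              # division now yields floats
```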